Daniel Stufflebeam's Contribution to Programme Evaluation
Stephanie Burrows, University of the Witwatersrand
This paper attempts to provide a holistic view of Daniel Stufflebeam's contribution to the field of evaluation. His contributions have been significant, encompassing not only the development of the CIPP model, but also his continual efforts to improve evaluation methods and to enhance communication throughout the field.
INTRODUCTION

In order to assess Daniel Stufflebeam's contribution to programme evaluation, it is necessary to examine the evaluation field as a whole. This requires an understanding of what 'evaluation' means and what its aims are, so that one can judge whether Stufflebeam's contribution has been significant or valuable in terms of these conceptualizations. It also calls for an examination of the background from which Stufflebeam's ideas arose. Stufflebeam's principal contribution to educational evaluation has been his decision-facilitating CIPP (Context, Input, Process, Product) model, which is thus the main focus of this paper. However, the aim of this paper is not to describe this model, or other contributions made by Stufflebeam, but rather to assess their value for evaluation. I shall begin by providing a brief description of the history from which the CIPP model arose.
HISTORICAL DEVELOPMENT OF THE FIELD OF EVALUATION

In the 1950's and early 1960's there was a call in the United States for large-scale curriculum development projects to be funded by federal capital. As a result of the National Defense Education Act of 1958, a number of national curriculum development projects were established and funds were made available to evaluate these efforts. When the 'War on Poverty' was launched in 1965, in an attempt to equalize and upgrade health, educational, and social services for all citizens, large amounts of money were poured into these social development programmes, which raised the concern that much of it might be wasted if appropriate accountability requirements were not imposed (Madaus, Stufflebeam, & Scriven, 1983). In the field of education, the Elementary and Secondary Education Act of 1965 was amended to include specific evaluation requirements. This meant that each school district receiving funds under its terms had to evaluate annually, using appropriate standardized test data, the extent to which objectives had been reached. Evaluators were forced to 'shift their concern for educational evaluation from the realm of theory and supposition into the realm of practice and implementation' (Madaus et al., 1983, p.13). Evaluators approached the problem using traditional techniques and methodologies: survey techniques, standardized tests, and the type of criterion-referenced tests developed on the objectives model (Potter, 1994). However, the existing tools and strategies employed by evaluators proved largely inappropriate. Stufflebeam (1983) points out that, firstly, no common set of objectives could have been responsive to the varied developmental levels and needs of the students in these schools. Secondly, the existing standardized tests were not geared to the language and functional levels of disadvantaged students. He realized that there should be an emphasis on 'making the methodology fit the needs of society, its institutions, and its citizens, rather than vice versa' (Madaus et al., 1983, p.18). Lastly, the Tylerian Evaluation Rationale (Tyler, 1942) would only produce reports at the end of each year, which was not a very useful evaluative feedback technique.

Through careful examination of exactly where the limitations of the traditional models lay, Stufflebeam recognized that evaluators needed a broader definition of evaluation than one reserved for determining whether objectives had been achieved. An advantage of Stufflebeam's contributions was his development of new models based on the problems in previous approaches; that is, he worked with the current situation rather than imposing a completely new method that had no roots. This is important both to avoid repeating the same mistakes and to build on past successes. It was through a process of trial and error that he gradually developed the CIPP approach. Developed in the late 1960's, it was unlike the predominant views that focused on objectives, testing, and experimental designs. The 1960's can be considered the time when evaluation methodology expanded and evaluation became a field of study in its own right, and the 1970's as the time when evaluation came of age as a profession. Many of the models developed in the late 1960's and early 1970's were developed in reaction to the limitations of traditional evaluation.
Critiques of prevailing evaluation methodology in the 1960's focused on the need for a broadening of methodology for pragmatic reasons, while the 1970's witnessed an increasing commitment to naturalistic and contextual approaches to evaluation, in preference to traditional positivistic designs (Hamilton, Jenkins, King, MacDonald, & Parlett, 1977; Parlett & Hamilton, 1976; Potter, 1994). As an alternative to the Tylerian approach, Stufflebeam suggested that evaluation be redefined as 'the process of providing useful information for decision making' (Stufflebeam, 1983, p.120), because he felt that the best way to aid the management and improvement of programmes would be to provide school administrators, project directors, and school staff with information they could use to decide on and bring about changes in the programmes.
THE HOLISTIC APPROACH OF THE CIPP MODEL

Stufflebeam (1971) recognized the need for evaluation to be more holistic in its approach. He suggested that 'there were different aspects of programme planning, design and implementation to which evaluators needed to be sensitive, and that there were different types of evaluation necessary to these different aspects' (Potter, 1994, p.11). His CIPP model expressed the need to evaluate goals, look at inputs, examine the implementation and delivery of services, and measure the intended and unintended outcomes of the programme. It also emphasized the need to make judgements about the merit and worth of the object being evaluated. The CIPP model has also been used for accountability purposes, since it 'provides a record-keeping framework that facilitates public review of educational needs, objectives, plans, activities, and outcomes. School administrators and school boards have found this approach useful in meeting public demands for information' (Worthen & Sanders, 1987, p.83). It likewise represents a rationale for assisting educators to be accountable for the decisions that they have made in the course of implementing a programme.

Berk (1981) notes that since Tyler's original definition, other clarifications of evaluation have been diverse and numerous. Yet a critical survey of these definitions indicates that there is a single common thread running through all of them: evaluation is the process of providing information for decision making. This concept of evaluation is expressed most clearly in the definition by Stufflebeam et al. (1971), that evaluation is 'the process of delineating, obtaining, and providing useful information for judging decision alternatives' (p.36). Its application in other evaluation approaches suggests the great impact that Stufflebeam's model has had on evaluation as a developing field. Nevo (1983) argues that any indication of the judgmental character of evaluation may invoke anxiety in potential evaluees and resistance among opponents of evaluation. Although he admits that a nonjudgmental definition would not reflect the true qualities of evaluation, he suggests that a preferable way to develop positive attitudes towards evaluation might be to emphasize its constructive functions in the realm of education. A positive public opinion of evaluation is important for cooperation, especially in areas of funding. The CIPP model achieves this through its view that 'the most important purpose of evaluation is not to prove but to improve' (Stufflebeam, 1983). This maxim seems to underlie most of Stufflebeam's thinking about evaluation. His entire approach emphasizes the need to improve the techniques and methods used in evaluation, as well as to encourage more communication among evaluators, on the one hand, and between evaluators and decision-makers, on the other. He has also continually striven to improve his own contributions, for example by updating the CIPP model (Stufflebeam, 1983).
THE DECISION-MAKER AS THE JUDGE OF MERIT
Decision-facilitation models of evaluation may overlap with goal-attainment models (e.g. Tyler), judgmental models emphasizing extrinsic criteria (e.g. Scriven), and judgmental models emphasizing intrinsic criteria, but the important difference is that decision-facilitation evaluators are less willing to assess personally the worth of educational phenomena. The CIPP model is intended to serve its audience - the decision-makers - so that the concerns, the informational needs, and the criteria for effectiveness of the decision-maker guide the direction of the study. 'By attending directly to the informational needs of people who are to use the evaluation, this approach addressed one of the biggest criticisms of evaluation in the 1960's: that it did not provide useful information' (Worthen & Sanders, 1987, p.84). Parlett and Hamilton (1976) point to at least three separate but related groups of decision-makers to whom the evaluator addresses his or her report: (1) the programme's participants; (2) the programme's sponsors, supervisory committee, or educational board; and (3) interested outsiders (such as other researchers or curriculum planners). Each group will use the evaluation report for assistance in making different decisions. The evaluator cannot accommodate the interests of all these groups, since there is bound to be a great deal of disagreement between them. The evaluator does not make decisions, and cannot do so except as a representative or agent of one of the interest groups. In this case the evaluator accepts the more limited role and quite different task of the 'service' researcher. The final determination of merit is thus in the decision-maker's province, not that of the evaluator (Popham, 1975).

However, as Potter (1994) points out, evaluations are normally about the work people do and the way in which they do it. This results in a context that is value-laden, political, and tension-filled. 'The implication is that evaluators cannot be neutral to the context of the values of the people in particular programmes, and the context introduced by their own value systems' (ibid., p.8). Stufflebeam and Webster (1983) are aware that the evaluator's perception of an evaluation may differ considerably from that of the client and audience. They therefore suggest that, from the beginning of the study, evaluators must be attentive to their own agendas for an evaluation as well as those held by the client and audience. The evaluator should also advise all the parties involved of possible conflicts in the purposes for doing the evaluation and should negotiate a common understanding at the outset. The success of the evaluation rests on the quality of teamwork between evaluators and decision-makers throughout the evaluation. Interaction is necessary at the beginning to establish evaluation needs, and at the end to communicate findings. Stufflebeam is not alone in his emphasis on accurate communication: Datta (1981) recognizes that attention to probable findings, and to how they may best be communicated, can influence major features of design, measures, and study conduct. A main limitation of the collaboration between the evaluator and decision-maker is that it introduces opportunities for biasing evaluation results (Stufflebeam & Webster, 1983). Though Scriven criticized the CIPP model, saying that it had too much bias toward the concerns and the values of the educational establishment, it was popular in the U.S. Office of Education for several years.
Stake (1983) feels that it gradually fell 'into disfavor not because it was a bad model, but partly because managers were unable or unwilling to examine their own operations as part of the evaluation. Actually, no evaluation model could have succeeded. A major obstacle was a federal directive, which said that no federal office could spend its funds to evaluate its own work; that could only be done by an office higher up' (p.289).
DEVELOPMENT OF QUALITY CONTROL MEASURES

METAEVALUATION: With the proliferation of research that occurs in the field of evaluation today, it is essential that particular standards of evaluation are maintained. In response to this need, 'metaevaluation'1 has been developed as a form of quality control and as a means of limiting bias. Scriven (1969) introduced the term metaevaluation and applied the underlying concept to the assessment of a plan for evaluating educational products. The Phi Delta Kappa National Study Committee on Evaluation (Stufflebeam et al., 1971) and Stufflebeam (1976), among others, proposed criteria for judging evaluations. According to Stufflebeam, metaevaluation is valuable in assisting evaluators to identify possible problems in primary evaluations2 and in helping the public realistically assess the strengths and weaknesses of those evaluations, thereby ensuring that evaluators perform at their best (Berk, 1981). Good evaluations require that they themselves are evaluated. Evaluations should be checked for problems such as bias, technical error, administrative difficulties, excessive costs, and misuse. For example, since evaluators in many cases receive higher rewards for a favourable report than for an unfavourable one, many may, consciously or unconsciously, choose to focus on aspects where the programme was successful (Stake, 1983). Metaevaluation is thus in both the public's and the profession's best interests, given the large amounts of money and time spent on evaluations.

Stufflebeam has been involved in both formative metaevaluation3 (e.g. Stufflebeam et al., 1971) and summative metaevaluation4 (e.g. Stufflebeam, 1974). In this latter work, Stufflebeam made extensive use of checklists to evaluate evaluation plans and operations, and argued that the writing of formal evaluation contracts helps to ensure the viability and quality of evaluation projects. Stufflebeam believed that metaevaluation should be both a form of communication and a technical, data-gathering process. He has contributed greatly to evaluation through his writings because he has attempted to record his findings as accurately as possible, thereby increasing communication with others in his field. Stufflebeam and Webster (1983) attempt to evaluate and assess various alternative approaches to evaluation, which they believe is important for 'the operation and scientific advancement of evaluation. Operationally, a critical review of alternatives can help evaluators to consider and assess optional frameworks which they can use to plan and conduct their studies. Scientifically, such a review can help evaluation researchers to identify issues, assumptions, and hypotheses that should be assessed' (pp.23-24). The purpose of studying alternative approaches to evaluation is to ascertain their strengths and weaknesses and to acquire direction for devising better approaches. Stufflebeam realizes that there can be problems with metaevaluation. Whereas metaevaluation is seen as a method of exposing weaknesses in the primary evaluation, metaevaluations themselves are prone to error. They may even yield faulty data and thereby confound the problems of the primary evaluation, while using up valuable resources and creating confusion (Stufflebeam, 1981). While metaevaluations are limited in this way, Stufflebeam argues that they can still provide valuable services.
Primary evaluations will continue to be carried out and all evaluations are biased, so it is important for metaevaluations to assist evaluators and the public to consider relevant questions about the adequacy of primary evaluations and to control their bias. Stufflebeam says that 'raising such questions is sufficient reason to justify metaevaluations even if they cannot give unequivocal answers to these questions' (Stufflebeam, 1981, p.153). This quote is indicative of Stufflebeam's continual attempts to develop new techniques to improve methods of evaluating objects in spite of the many obstacles that constantly present themselves. He does so by exploring different avenues and objectively presenting both the advantages and disadvantages of various approaches. Stufflebeam saw the importance of 'critical perspectives on the field of evaluation as a whole; on particular evaluation approaches, techniques and tools; and on particular studies' (Stufflebeam, 1981, p.149).
THE STANDARDS: The Standards for Evaluations of Educational Programs, Projects, and Materials, produced in 1980 by the Joint Committee chaired by Stufflebeam, were an attempt to deal with these kinds of issues. Such standards are essential for evaluating evaluations and for establishing a foundation for quality control and accountability (Berk, 1981). The 30 standards were devised to help judge and guide evaluations. They pertain to four attributes of an evaluation: utility, feasibility, propriety, and accuracy. 'Utility' standards reflect the general consensus that emerged in the late 1960's concerning the need for programme evaluations that are responsive to the needs of their clients; 'feasibility' standards reflect the realization that evaluations need to be cost-effective and actually possible in politically charged settings; 'propriety' standards consider ethical issues, constitutional concerns, and litigation regarding such matters as the rights of human subjects and freedom of information; and 'accuracy' standards emerged from those that have long been accepted for judging the technical merit of information. In accordance with these Standards, the CIPP model can be placed 'within what appears to be an emerging consensus about what constitutes good and beneficial evaluation' (Stufflebeam, 1983).
The Standards constitute a worthy metaevaluation tool. Stufflebeam suggests that metaevaluation should become part of the common practice of educational evaluators, and he provides recommendations on ways of developing and using metaevaluation, such as through research, organization development, and training. To help in the application of the Standards, the Joint Committee has provided a Functional Table of Contents, which lists those standards judged to be most useful for various tasks; a standard Citation Form, to allow evaluators and metaevaluators to report systematically their judgements of how well a given evaluation has met the standards, and thereby to increase communication about the application of the Standards as well as understanding of the state of the art of evaluation; and a general discussion of problems and procedures that pertain to application. The format of the standards includes lists of pitfalls5, pertinent guidelines6, and possible conflicts with other standards. Their form was intended to be as straightforward as possible so as to enhance communication among evaluators, and the Standards as a whole were intended to improve the quality of professional practice. Only application, research, and revision will reveal their true worth (Stufflebeam & Madaus, 1983).
THE IMPROVEMENT FUNCTION OF EVALUATION
In a similar way, primary evaluation is intended to improve educational programmes themselves. Stufflebeam stresses its improvement function:
Evaluation is also a concomitant of improvement. We cannot make our programs better unless we know where they are weak and strong and unless we become aware of better means. We cannot be sure that our goals are worthy unless we can match them to the needs of the people they are intended to serve. We cannot plan effectively if we are unaware of options and their relative merits; and we cannot convince our constituents that we have done good work and deserve continued support unless we can show them evidence that we have done what we promised and produced beneficial results. For these and other reasons, public servants must subject their work to competent evaluation, which must help them sort out the good from the bad and point the way to needed improvements (Stufflebeam, 1983, p.140).

The CIPP model has been useful in guiding educators in programme planning, operation, and review, as well as in programme improvement (Worthen & Sanders, 1987). Bryk and Light (1981) examine the developments in the evaluation field in the 1970's. It could be argued that Stufflebeam's contribution to the field is responsive to, or at least reflective of, a number of these changes. For example, they note that programme evaluation has expanded in scope and function, resulting in an increased complexity of the design task, so that evaluation design must blend a variety of considerations. By highlighting different levels of decisions and decision-makers, the CIPP model clarifies who will use the evaluation results, how they will use them, and what aspect(s) of the system they are making decisions about (Worthen & Sanders, 1987). Evaluation design is a continual and complex task. The main advantage of decision-facilitation models is that they encourage educators to use evaluation continuously and systematically in their efforts to plan and implement programmes that meet educational needs (Stufflebeam & Webster, 1983). Feedback mechanisms allow the evaluation to keep up with the ongoing and changing realities and needs of educational programmes. Educational evaluation is 'concerned with educational development, and with the development of the curriculum as intrinsic to the educational enterprise. As with curriculum, educational evaluation is concerned with context, content and process, and is a field of inquiry which focuses on issues relevant to the vested interests of different stakeholders in education' (Potter, 1994, p.1). This implies that the process of evaluation design requires appropriate research methods to be linked with particular questions of interest in a particular context (Bryk & Light, 1981). Stufflebeam asserted that context was an important factor influencing educational programmes, and that evaluation of context should be an important component of evaluation. Evaluation designs should also recognize the relevant practical constraints. Stufflebeam's contributions are therefore in accordance with this definition.

THE NEED FOR CONTINUAL COMMUNICATION AMONG EVALUATORS

In all his writings, Stufflebeam strongly emphasizes the need for continual communication among evaluators. At the time of the emergence of the CIPP model, when there was a strong call for new theories and methods of evaluation as well as for new training programmes in evaluation, evaluators were unsure what identity they really had, since there was no professional organization devoted to evaluation as a field, nor any journals or published papers through which to share information (Madaus et al., 1983).
Although evaluations were implemented and the various models and methodological procedures were tested at local, state, and national levels, it was only in 1975 that a journal devoted to evaluation research, Studies in Educational Evaluation, dealing exclusively with the issues and problems of educational evaluation, emerged (Berk, 1981). This allowed information to be recorded and shared. However, Stufflebeam (Madaus et al., 1983) points out that professional development in evaluation has produced mixed results. For example, although there was increased communication and understanding, the development of new techniques, and a search for new, appropriate methods, the actual practice of evaluation changed very little in general. The 1980's could be characterized as a time of immense division in the field around methodological and ideological issues (Potter, 1995)7. By the mid-1980's it was generally accepted that experimental and quasi-experimental designs were limited and that broad-based and holistic methodologies were more valuable. Yet evaluation practices at the state and local levels suggested that Worthen and Sanders's observations of 1973 (cited in Berk, 1981, p.4) still held true in 1981: 'Despite the newly developed evaluation strategies, ... the methodology of evaluation [remains] fuzzy in the minds of most evaluators.... Perhaps the major reason [is] that the useful information on evaluation plans and techniques [is] badly fragmented and [appears] in a variety of sources, some of which [are] fugitive materials'. Potter (1995)8 draws attention to the growing sense among evaluators that they must deal with numerous values and concerns. Madaus et al. (1983) suggest the need for increased education of evaluators in terms of the availability, reporting, and development of new techniques. Berk (1981) saw a need for 'scrupulous and perceptive planning at all stages, from design to communication of results for decision making. [Also] criteria and standards by which the quality of evaluations can be judged are sorely needed' (p.146). The CIPP model has aimed to meet the former need, while the development of the Standards has attempted to rectify the latter problem. The CIPP evaluation model has been widely used, and represents in the literature the first full-blown evaluation framework designed to guide evaluators undertaking evaluations for decision-making purposes (Popham, 1975).
CONCLUSION

There is still a need for more research to be undertaken in the field of evaluation. Stufflebeam (Stufflebeam & Webster, 1983) states that no one type of study is best for all evaluations of educational programmes; the choice depends upon the situational characteristics. House (1980) explains that 'models themselves are idealizations of evaluation approaches. An actual evaluation is shaped by many different contingencies: thus it may take many shapes even when it begins conceptually as a particular type. A model is an ideal type, in other words' (p.21). Stufflebeam's contribution to evaluation has been most valuable. Besides the CIPP model, he has attempted to extend and improve the field through numerous writings. He has also worked closely with others in the profession; for example, he attempted to relate the CIPP model to Stake's Countenance Model. In this way he broadened the field and aided communication.
NOTES

1. 'Metaevaluation': the assessment of the worth and merit of an evaluation.
2. 'Primary evaluation': an evaluation that is the subject of metaevaluation.
3. 'Formative metaevaluation': a metaevaluation that is intended to guide a primary evaluation.
4. 'Summative metaevaluation': a study that judges the worth and merit of an evaluation.
5. 'Pitfalls': mistakes that are commonly made by persons who are unaware of the importance of a given principle of sound evaluation.
6. 'Guidelines': procedural suggestions intended to help evaluators meet evaluation standards.
7. This idea is taken from Charles Potter: Lecture notes for Psychology Honours (1995).
8. Ibid. (1995).
APPENDIX A

STUFFLEBEAM'S CIPP MODEL
THE CIPP DEFINITION OF EVALUATION
Evaluation is the process of delineating, obtaining, and providing useful information for judging decision alternatives.
This means that evaluation:
- assists in decision-making; it should therefore provide useful information for decision-makers.
- is a cyclic and continuing process; it should therefore be systematic.
- includes three main steps:
(1) delineating - using specification, definition, and explication to focus the information required by decision-makers;
(2) obtaining - using measurement and statistics to collect, organize, and analyze information;
(3) providing - synthesizing information so that it will best serve the needs of the evaluation.
- involves judgement. Decision-facilitation evaluators tend to see their task as collecting and presenting the information needed by someone else who will determine worth; the ultimate value judgement must be made by the decision-maker, not by the evaluator.
- involves choosing one alternative over another, by determining their relative values.
Decisions for evaluation can occur in different settings:
Homeostatic decisions are those that involve the maintenance of the normal balance of an educational system.
Incremental decisions aim at continuous improvement of a program through developmental activities.
Neomobilistic decisions denote innovative activity to solve significant problems.
Metamorphic decisions indicate utopian activity aimed to produce complete changes in a system. Stufflebeam believes this setting to be only theoretical since it is rarely found in real-life education.
As well as the decision settings, it is necessary to have a typology of decisions in order to formulate a model capable of serving decision-making.
Planning Decisions- to determine objectives.
Structuring Decisions - to design instructional methods.
Implementing Decisions - to use, monitor, and improve these methods.
Recycling Decisions - to judge and react to the outcomes resulting from the methods.
TYPES OF EVALUATION
For each different type of decision, a corresponding type of evaluation - context, input, process, product, respectively - is proposed.
Context evaluation seeks to identify the strengths, weaknesses, needs, and opportunities of some project and to offer direction for improvement. It also examines whether existing goals correspond to the needs of the persons being served. The aim of this evaluation is to lead to a decision about whether to introduce change into a programme (Stufflebeam, 1983). Context evaluation is usually concluded when a specific set of objectives is identified for which an instructional programme can be developed (Popham, 1975). Context evaluation is primarily descriptive and comparative: describing the present state of an object and comparing the present, probable, and possible outcomes (ibid.).
Input evaluation provides information concerning how to use resources to achieve objectives (Popham, 1975). It needs to identify and assess present and possible relevant approaches, searching the environment for barriers and potentially available resources. The aim is to help the clients consider alternatives in terms of their needs and environmental factors, and to select and design an appropriate procedure (Stufflebeam, 1983).
Process evaluation becomes necessary once an instructional project is underway. It monitors project operations so that defects in the procedural design are identified or predicted, and guidance for alterations in the plan is provided. This evaluation allows a full account of the procedure to be recorded. It requires regular feedback meetings between the process evaluator and the project personnel (Stufflebeam et al., 1971; Popham, 1975; Stufflebeam, 1983).
Product evaluation attempts to measure, interpret, and judge the achievements of a programme (Stufflebeam, 1983). This outcome information is related to the objectives of the programme, and the extent to which the programme has met the needs of those it was intended to serve is ascertained by making comparisons between expectations and actual results (Popham, 1975; Stufflebeam, 1983). The product evaluation helps decision-makers decide whether to continue, modify, or end the programme.
Once a type of evaluation has been selected (context, input, process, or product), the evaluator needs to develop a design to implement the evaluation (Stufflebeam, 1968). The goals and procedures should be developed in advance, but they should be flexible enough to allow for periodic revision, since design is a process, not a product. The basic structure for context, input, process, and product evaluation designs is the same. These plans involve a large number of decisions on how to conduct the evaluation and what instruments will be used (ibid.). Procedures must be formulated to indicate how information will be collected, organized, analyzed, and reported. Finally, an overall plan for executing the evaluation design must be provided (Stufflebeam et al., 1971).
REFERENCES

Berk, R.A. (1981). Introduction. In R.A. Berk (Ed.). Educational Evaluation: The state of the art. London: Johns Hopkins University Press.
Bryk, A.S. & Light, R.J. (1981). Designing Evaluations. In R.A. Berk (Ed.). Educational Evaluation: The state of the art. London: Johns Hopkins University Press.
Datta, L. (1981). Communicating Evaluation Results for Policy Decision Making. In R.A. Berk (Ed.). Educational Evaluation: The state of the art. London: Johns Hopkins University Press.
Hamilton, D., Jenkins, D., King, C., MacDonald, B. & Parlett, M. (Eds.). (1977). Beyond the Numbers Game. London: MacMillan.
House, E.R. (1980). Evaluating with Validity. Beverley Hills: Sage.
Madaus, G.F., Stufflebeam, D.L., & Scriven, M.S. (1983). Program Evaluation: A historical overview. In G.F. Madaus, D.L. Stufflebeam, & M.S. Scriven (Eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer Nijhoff.
Parlett, M. & Hamilton, D. (1976). Evaluation as illumination: A new approach to the study of innovatory programs. In G.V. Glass (Ed.). Evaluation Studies Review Annual, Vol. 1. Beverly Hills: Sage.
Popham, W.J. (1975). Educational Evaluation. Englewood Cliffs: Prentice Hall.
Potter, C. (1994). Evaluation, Knowledge, and Human Interests. Johannesburg: University of the Witwatersrand.
Stake, R.E. (1983). Program Evaluation: Particularly Responsive Evaluation. In G.F. Madaus, M.S. Scriven, & D.L. Stufflebeam (Eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer Nijhoff.
Stufflebeam, D.L. (1968). Evaluation as enlightenment for decision-making. Columbus, Ohio: Evaluation Center, Ohio State University.
Stufflebeam, D.L. (1981). Metaevaluation: Concepts, Standards, and Uses. In R.A. Berk (Ed.). Educational Evaluation: The state of the art. London: Johns Hopkins University Press.
Stufflebeam, D.L. (1983). The CIPP Model for Program Evaluation. In G.F. Madaus, M. Scriven, & D.L. Stufflebeam (Eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer Nijhoff.
Stufflebeam, D.L., Foley, W.J., Gephart, W.J., Guba, E.G., Hammond, R.L., Merriman, H.O., & Provus, M.M. (1971). Educational Evaluation and Decision Making. Itasca, Ill.: F.E. Peacock.
Stufflebeam, D.L. & Madaus, G.F. (1983). The Standards for Evaluation of Educational Programs, Projects, and Materials: A description and summary. In G.F. Madaus, M.S. Scriven, & D.L. Stufflebeam (Eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer Nijhoff.
Stufflebeam, D.L. & Webster, W.J. (1983). Alternative Approaches to Evaluation. In G.F. Madaus, M. Scriven & D.L. Stufflebeam (Eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer Nijhoff.
Worthen, B.R. & Sanders, J.R. (1973). Educational Evaluation: Theory and Practice. Belmont: Wadsworth.
Worthen, B.R. & Sanders, J.R. (1987). Educational Evaluation: Alternative Approaches and Practical Guidelines. New York: Longman.