96 Journal of Science Ho Chi Minh City Open University – No. 2(14) 2015 – June/2015 
EVALUATION USE AND INFLUENCE – 
A REVIEW OF RELATED LITERATURE 
Ha Minh Tri 
Ho Chi Minh City Open University 
Email: tri.hm@ou.edu.vn 
(Received: 22/03/2015; Revised: 15/05/2015; Accepted: 19/05/2015) 
ABSTRACT 
This paper reviews research on the use of evaluation and evaluation influence. The 
literature review located 36 publications that met minimum standards. It examines different 
definitions of evaluation use and influence provided by different evaluation researchers and 
theorists and offers a taxonomy of use and influence. Evaluation influence as a next generation 
term is proposed as an alternative to the concept of evaluation use due to its limitations in 
meaning, coverage, and mechanisms. In addition, the paper describes the evolution of the 
evaluation influence construct as well as the theory of evaluation influence. The review of this 
paper offers the theoretical framework for research related to evaluation use and influence. 
Keywords: evaluation influence, evaluation use, literature review, research on evaluation, 
theory of evaluation influence 
1. Introduction 
The past 50 years have seen advances in 
the field of evaluation. The primary goal of 
evaluation is social betterment (Mark and 
Henry, 2004). One of the purposes of 
evaluation is to fulfil the objective of 
accountability, especially in the public sector. 
The use of evaluation has been of interest to 
evaluators and funders of evaluation work 
since the beginnings of the evaluation 
profession (Preskill and Torres, 2000). 
Criticism from the United States congressional 
members in the late 1960s regarding the lack 
of use of evaluation results in decision making 
stimulated evaluation researchers to seek a 
better understanding of the full range of 
evaluation use (Preskill and Torres, 2000). 
Since the beginning of the 2000s, scholars have 
proposed the term "evaluation influence" in 
place of "evaluation use" to broaden the scope 
of what is to be understood as evaluation use 
(Kirkhart, 2000; Henry and Mark, 2003; Mark 
and Henry, 2004). How evaluations are used, 
and how evaluation influences social 
development, can affect the way the public 
sector spends its resources. This paper 
provides a review of the literature related to 
evaluation use and influence. 
This paper is structured as follows. The 
second section presents the method used for 
the literature review. Different scholars have 
proposed different definitions on evaluation 
use and influence, and the third section 
discusses these different definitions. The 
fourth section explores different types of use 
and influence. The fifth section highlights the 
evolution of evaluation influence and presents 
the theory of evaluation influence that offers a 
framework for the study of evaluation 
influence. The paper concludes with some 
remarks. 
2. Method for the review 
Searches of articles and book chapters 
were conducted for the terms “evaluation use”, 
“evaluation utilisation”, “use of performance 
information” and “evaluation influence” 
mainly in ISI Web of Science. The findings 
were narrowed down to evaluation-related and 
performance information related journals, 
including American Journal of Evaluation, 
Evaluation, Evaluation Review, Evaluation 
Practice, Evaluation and Program Planning, 
New Directions for Evaluation, Public 
Administration Review, Public Performance 
and Management Review, and Public 
Administration. 
The searches returned over 135 journal 
articles and book chapters. After scanning 
publication titles and abstracts, irrelevant 
publications were removed. A closer review 
was conducted to see whether the publications 
met either of these criteria: (1) Focus on 
programme or policy evaluation, (2) Empirical 
research study, (3) Published journal article, or 
book, and (4) Inclusion of “evaluation 
utilisation”, “evaluation use”, “evaluation 
influence”, or “use of performance 
information” as at least one of the variables 
under study. 
The process continued with an abstract 
review, identifying 36 publications that were 
applicable for a full-text review, applying the 
above-mentioned four criteria. This process 
produced a set of articles which formed a basis 
for the analysis. Many of the empirical 
research studies identified for the review were 
conducted in education, health, and social 
services. Empirical studies that specifically 
focus on evaluation influence exist but are 
limited in number. 
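As a rough illustration, the screening logic described above can be sketched in code. The publication records and their criteria flags below are hypothetical examples, not the actual dataset screened in the review.

```python
# Illustrative sketch of the screening workflow described above.
# The records and flags are hypothetical; the actual review screened
# over 135 hits down to 36 publications by hand.
from dataclasses import dataclass


@dataclass
class Publication:
    title: str
    evaluation_focus: bool   # (1) programme or policy evaluation
    empirical: bool          # (2) empirical research study
    published: bool          # (3) published journal article or book
    has_use_variable: bool   # (4) use/influence as a studied variable


def meets_criteria(p: Publication) -> bool:
    """A record is retained only if all four screening criteria hold."""
    return all([p.evaluation_focus, p.empirical, p.published,
                p.has_use_variable])


hits = [
    Publication("Study A", True, True, True, True),
    Publication("Study B", True, False, True, True),  # not empirical
]
included = [p.title for p in hits if meets_criteria(p)]
print(included)  # prints ['Study A']
```

A full-text review would then be applied only to the records that survive this filter.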
3. Definitions of evaluation use and 
influence 
Extensive research on evaluation use has 
been carried out since the 1970s, whereas 
research on the topic of evaluation influence is 
more recent and dates back to the 2000s. 
Numerous scholars have defined evaluation 
use and influence. According to Rich (in 
Weiss, 1977: 200) the term “use” refers to 
“information entering into the policy making 
process.” If use is exercised, there is a 
potential of influencing a decision; and if 
information is used, it is influencing policy 
decisions (Rich in Weiss, 1977). Agarwala-
Rogers (1977: 328) defines utilisation as "the 
process by which research results are produced 
to answer practitioner needs, and 
communicated to practitioners for their use.” 
Similarly, Caplan (in Weiss, 1977: 353) 
defines utilisation as “efforts on the part of the 
decision maker to put policy-relevant social 
science information into use.” The above 
definitions do not put an emphasis on change 
as a requisite of evaluation use but rather focus 
on the process. This is in contrast with a 
number of other authors who emphasize 
change. 
In their review of 65 empirical studies in 
education, mental health, and social services, 
Cousins and Leithwood (1986) indicate that 
there are two conventional definitions of 
evaluation use or utilisation, including: (1) use 
as support for discrete decisions, and (2) use as 
education or enlightenment for decision 
makers (e.g. influencing perceptions of current 
and ideal programme structure). They pointed 
out that evaluation use has also been described 
in a more basic manner, comprising the 
psychological processing of evaluation results 
without necessarily informing decisions or 
changing thinking or actions (Cousins and 
Leithwood, 1986). 
Johnson et al. (2009: 378), in their 
review of 41 empirical studies of evaluation 
use from 1986 to 2009 using Cousins and 
Leithwood’s 1986 framework, define 
evaluation use or utilisation “as the application 
of evaluation processes, products, or findings 
to produce an effect.” 
King and Pechman (in Patton, 1997: 82) 
define use as “intentional and serious 
consideration of evaluation information by an 
individual with the potential to act on it.” In 
his comment upon King and Pechman’s 
definition of use, Patton (1997: 82) highlights 
that evaluation is only one input among many 
in the “taking of an action or making a 
decision.” Patton further assures that it is 
reasonable to consider that an evaluation has 
been used if it has been “seriously considered 
and the findings are genuinely taken into 
account” (Patton, 1997: 82). Such a definition 
makes sense when evaluators are “trying to 
study use after the fact, and sort out relative 
influences” (Patton, 1997: 82). 
Since the beginning of the 2000s, some 
scholars have attempted to expand the concept 
of evaluation use to a broader construct called 
“evaluation influence” (Henry and Mark, 
2003; Kirkhart, 2000; Mark and Henry, 2004). 
According to Alkin (in Mathison 2005: 436) 
evaluation influence refers to the “impact on 
an external programme, which may or may not 
be related to the programme being evaluated 
or to the impact of the evaluation at some 
future time.” Mark (2011: 113) contends that 
“evaluation influence explicitly includes both 
changes that take place at the location and 
general time frame of the evaluation and 
changes that take place elsewhere and later.” 
Kirkhart (2000: 7) thus characterises 
evaluation influence as "intangible or 
indirect", unlike evaluation use, which she 
considers to be more "tangible and direct." 
Alkin and Taut (2003: 9) point out that while 
an evaluator's actions can do much to increase 
evaluation use, they can do little to increase 
influence, since influence is by definition 
"unintended" and "outside the domain of the 
evaluator to affect such possible evaluation 
influences." A further distinction between 
evaluation use and evaluation influence 
concerns awareness: the "awareness of 
evaluation's intended and unintended impacts 
of use" stands in contrast to the "unawareness 
and unintentionality of evaluation's influence" 
(Alkin and Taut, 2003: 10). 
Mark (2011: 111) adds to the distinction 
between “use” and “influence” that use is 
more restricted to “local effects of 
evaluation”, and that it implies “a kind of 
intentionality and awareness”, but that 
evaluation can have “important consequences 
that are removed from the location of the 
evaluation” for which he prefers the notion of 
“influence.” 
From the above review, it can be 
summarised that there are different 
perspectives in defining evaluation use and 
influence. Early definitions of use were narrow 
and more process oriented. Later definitions of 
use were broader and identified change as a 
core aspect in the definition. The term 
“influence” has been proposed as a broader 
alternative to use. 
4. A taxonomy of use and influence 
One of the fundamental themes of 
research on utilisation in the late 1970s and 
early 1980s was the exploration and 
conceptualisation of types of use (Preskill, 
1991). Researchers have identified three broad 
types of use that can be distinguished by their 
purposes: instrumental, conceptual, and 
persuasive (Leviton and Hughes, 1981). Over 
time, other types of use were identified as 
process use and imposed use (Patton, 1997; 
Preskill et al., 2003; Weiss et al., 2005). 
Instrumental use dominates studies on 
evaluation use (Alkin et al., 1979). In this 
manner, evaluation results are expected to 
”affect decision making or problem solving 
purposes” (Rich in Weiss, 1977: 200). 
Instrumental use represents the traditional or 
“mainstream” type of use (Preskill, 1991: 5). 
This type of use suggests that the evaluation 
findings are put into “direct, concrete, and 
observable use” (Preskill, 1991: 5). 
Conceptual use, or enlightenment, as 
Weiss (1977) termed it, refers to “influencing 
a policymaker’s thinking about an issue 
without putting information to any specific, 
documentable use” (Rich in Weiss, 1977: 
200). Rossi et al. (2004: 411) put conceptual 
use as “the use of evaluations to influence 
thinking about issues in a general way.” In this 
definition, evaluation results or findings are 
not expected to directly result in any action or 
decision. According to Mark (2011: 108) 
conceptual use refers to “changed or new 
understandings or new ways of thinking.” In 
conceptual use, information does not lead to 
any immediate action but influences the user’s 
thinking over time (Leviton and Hughes, 1981; 
Preskill, 1991). Clearly, conceptual use of 
evaluation results for general enlightenment 
demands much less of the users than 
instrumental use does. 
In addition, scholars have proposed 
“persuasive use” as another type of evaluation 
use (Leviton and Hughes, 1981). It involves 
“drawing on evaluation evidence in an attempt 
to convince others to support a political 
position or to defend such a position in attack” 
(Leviton and Hughes, 1981: 528), or refers to 
“enlisting of evaluations results in efforts 
either to support or to refute political 
positions” (Rossi et al., 2004: 411). In this 
manner, evaluation results can be used to 
influence or convince others in terms of 
providing evidence. Weiss (in Leviton and 
Hughes, 1981: 530) argues that “using 
research to delay decisions, to allow policy 
makers to appear concerned about a problem, 
or to jockey a political position are not 
considered instances of use.” In addition to 
instrumental, conceptual, and persuasive uses 
discussed above, process use has been 
proposed as an alternative type of evaluation 
use (Greene, 1988; Patton, 1997; Preskill et 
al., 2003). Patton (1997: 90) refers to process 
use as “individual changes in thinking and 
behaviour, and programme or organisational 
changes in procedures and cultures, that occur 
among those involved in evaluation as a result 
of learning that occurs during the evaluation 
process.” The author suggests four primary 
process uses: (1) enhancing shared 
understandings, (2) supporting and reinforcing 
programme interventions, (3) increasing 
engagement, self-determination, and 
ownership, and (4) programme or 
organisational development (Patton, 1997: 91). 
By proposing these, the author means that: 
firstly, the evaluation helps clarify expected 
outcomes and the ways in which efforts can be 
directed towards accomplishing them. 
Secondly, the evaluation can be integrated into 
programme processes to reinforce and enhance 
programme interventions. Thirdly, by 
participating in and being exposed to the 
evaluation process, participants have 
opportunities to engage and to exercise self-
determination and ownership of evaluation 
results. Finally, the evaluation process helps 
stimulate change in organisations by engaging 
participants in real settings; in this way, it 
helps them to think empirically and make 
sensible decisions (Patton, 1997). 
In addition to instrumental, conceptual, 
persuasive, and process use, Weiss et al. 
(2005: 16) also use the notion of “imposed 
use” to refer to a “type of use that comes about 
because of pressure from the outside.” 
Imposed use can be considered another kind of 
instrumental use, although it can also be 
understood as "incentives for using evaluation 
results" (Weiss et al., 2005: 26). Mark (2011: 110) 
states that imposed use occurs when “people 
are mandated to use the results of evaluation, 
or at least believe they are mandated.” 
The literature on evaluation use shows 
that the concept has evolved considerably 
(Preskill and Torres, 2000), while the literature 
on evaluation influence is still limited but 
growing. 
Mark and Henry (2004) propose a theory 
of influence, characterising change 
mechanisms of evaluation influence that are 
directly or indirectly affected and mediated by 
evaluation inputs, evaluation activities, the 
environment, and evaluation outputs on the 
pathway to social betterment. 
Mark and Henry (2004) identify change 
mechanisms of influence that can operate at 
different levels of analysis: individual, 
interpersonal, and collective. Table 1 
summarises these mechanisms. 

Table 1. Mechanisms through which evaluation may produce influence 

General influence processes 
- Individual: elaboration; heuristics; priming; skill acquisition 
- Interpersonal: justification; persuasion; minority-opinion influence 
- Collective: ritualism; legislative hearings; coalition formation; drafting legislation; standard setting; policy consideration 

Cognitive and affective processes 
- Individual: salience; opinion/attitude valence 
- Interpersonal: local descriptive norms 
- Collective: agenda setting; policy-oriented learning 

Motivational processes 
- Individual: personal goals and aspirations 
- Interpersonal: injunctive norms; social reward; exchange 
- Collective: structural incentives; market forces 

Behavioural processes 
- Individual: new skill performance; individual change in practice 
- Interpersonal: collaborative change in practice 
- Collective: programme continuation, cessation, or change; policy change; diffusion 

Source: Mark and Henry (2004: 41) 
Mark and Henry (2004: 43) view each of 
the entries in Table 1 as an “outcome of an 
evaluation; each can also be an underlying 
mechanism, leading in turn to some other 
outcome.” In other words, the entries or 
elements in Table 1 can play the dual roles of 
an outcome of evaluation and a mechanism 
that stimulates other outcomes and are referred 
to as “processes” (Mark and Henry, 2004: 43). 
For example, knowing that a reader elaborated 
on the findings of a public service delivery 
programme evaluation does not necessarily tell 
us whether any significant and important change 
occurred (Mark and Henry, 2004). General 
influence processes are of more interest as 
they may (or may not) help stimulate the 
outcomes of greater interest, that is, changes in 
beliefs, motivations and actions (Mark and 
Henry, 2004). 
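The dual role of these entries, each at once an outcome of evaluation and a mechanism that can trigger further outcomes, can be sketched as a simple chain. The particular pathway below is a hypothetical illustration, not one asserted by Mark and Henry.

```python
# Sketch of the dual outcome/mechanism idea from Mark and Henry (2004):
# each process can be an outcome of evaluation and also a mechanism
# leading to some other outcome. The edges here are hypothetical.
influence_chain = {
    "evaluation findings": ["elaboration"],   # individual process
    "elaboration": ["opinion valence"],       # cognitive/affective outcome
    "opinion valence": ["agenda setting"],    # collective outcome
    "agenda setting": ["policy change"],      # behavioural outcome
    "policy change": [],                      # end of this pathway
}


def pathway(start: str) -> list[str]:
    """Follow the chain from a starting process to its final outcome."""
    path = [start]
    while influence_chain.get(path[-1]):
        path.append(influence_chain[path[-1]][0])
    return path


print(" -> ".join(pathway("evaluation findings")))
# evaluation findings -> elaboration -> opinion valence
#   -> agenda setting -> policy change
```

Each intermediate node in such a chain is simultaneously an outcome of the step before it and the mechanism for the step after it, which is the sense in which Mark and Henry call the entries "processes".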
In addition, Mark (2006) proposed and 
tentatively labelled “relational consequences” 
as an additional category of evaluation 
consequences or processes to the Mark and 
Henry (2004) framework (Mark in Alkin, 
2013). According to Mark (in Alkin, 2013: 
151), the relational consequences comprise 
evaluators’ efforts to “modify not behaviour or 
attitude but aspects of ongoing relationships, 
structures, and organisational processes.” For 
example, it contains potential consequences 
such as individuals’ self-perception of their 
empowerment (Fetterman, 1996), the creation 
of a democratic forum for deliberation (House 
and Howe, 1999), and the facilitation of the 
learning organisation (Preskill and Torres, 
1998). 
In sum, a central taxonomy of evaluation 
use includes instrumental use, conceptual use, 
persuasive use, process use, and imposed use. 
It seems there is no universal definition of 
types of use. Instrumental use was among the 
first to be identified, and it dominates the 
evaluation literature. Conceptual use refers to 
changed or new ways of thinking. Persuasive 
use is the third type which involves 
interpersonal influence, persuading or 
convincing others to go along with 
implications of evaluation. The taxonomy of 
evaluation use further identifies process use 
and imposed use. Process use is not so much a 
distinct type of use as a different 
source of use, and it can take place at different 
points in time. Imposed use occurs when 
people are mandated to use the results of 
evaluation. As regards evaluation influence, it 
explicitly includes both changes that take place 
at the location and within the general time 
frame of the evaluation as well as changes that 
take place elsewhere and later. Relational 
consequences are proposed as an additional type 
of evaluation influence. These types of 
evaluation influence or processes may take place 
at individual, interpersonal, and collective 
levels. 
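The taxonomy summarised above can be sketched as a small enumeration. The example observations and their assignments below are hypothetical illustrations, not cases drawn from the reviewed studies.

```python
# A minimal sketch of the taxonomy of evaluation use discussed above.
# Descriptions paraphrase the review; the tagged observations are
# hypothetical illustrations.
from enum import Enum


class UseType(Enum):
    INSTRUMENTAL = "findings directly inform a decision or action"
    CONCEPTUAL = "findings change thinking without a specific action"
    PERSUASIVE = "findings are enlisted to support or refute a position"
    PROCESS = "taking part in the evaluation itself produces change"
    IMPOSED = "use occurs because it is mandated from outside"


# Hypothetical observations tagged with the type of use they exemplify.
observations = {
    "Programme budget reallocated after the report": UseType.INSTRUMENTAL,
    "Managers rethink what counts as success": UseType.CONCEPTUAL,
    "Findings quoted in a legislative hearing": UseType.PERSUASIVE,
}
for obs, use in observations.items():
    print(f"{use.name}: {obs}")
```

A classification like this makes explicit that the five categories are distinguished by purpose, as the review notes, rather than by the form of the evaluation evidence itself.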
5. Evolution of evaluation influence 
and theory of evaluation influence 
The literature on evaluation use has been 
considerable, and stable progress has been 
made to improve our understanding of 
evaluation use (Johnson, 1998). The evolution 
of evaluation use has been signified by an 
“increasing recognition of its multiple 
attributes” (Kirkhart 2000: 5). However, 
existing conceptualisations of use still have 
significant gaps and shortcomings, especially 
insufficient attention has been given to change 
processes and provisional outcomes (Henry 
and Mark, 2003). Describing the changes that 
occur as a result of an evaluation as 
“evaluation use” has limitations, and they are 
better described and understood if referred to 
as “evaluation influence” (Henry and Mark, 
2003; Kirkhart, 2000; Weiss et al., 2005). 
Henry and Mark (2003) have supported 
Kirkhart's idea regarding the need to 
reconceptualise evaluation use and the 
conception of influence. Their agenda is to 
move beyond use and build from Kirkhart’s 
model a theory of evaluation influence that 
includes multiple levels, pathways and 
mechanisms in an attempt to explain influence 
(Mark and Henry, 2004). Compared with 
evaluation use, empirical studies of evaluation 
influence are still limited and relatively little is 
known about how evaluation influence may 
impact on decision makers’ attitudes and 
actions (Mark and Henry, 2004). Concretely, 
Mark and Henry (2004) have proposed a 
preliminary theory of evaluation influence. 
This theory describes that evaluation influence 
is affected by various factors either directly or 
indirectly (Mark and Henry, 2004). These 
factors include evaluation inputs (including 
evaluation context and decision/policy 
setting), evaluation activities (including 
stakeholder selection and participation, 
evaluation planning and design, data collection 
and analysis, developing conclusions and 
recommendations, report generation, and 
information dissemination), evaluation 
knowledge (including responsiveness, 
credibility, sophistication, communication, and 
timeliness), and contingencies in the 
environment (competing processes, facilitating 
factors, and inhibiting conditions) (Mark and 
Henry, 2004). As shown in Table 1 in section 
4 above, Mark and Henry (2004: 43) also 
argue that each evaluation process or outcome 
can be an outcome of evaluation and can also 
be an underlying mechanism, leading to some 
other outcome. Thus, each individual process 
can be a "short-term, intermediate or long term 
evaluation outcome in the pathways to social 
betterment" (Mark and Henry, 2004: 43). 
Figure 1 presents the schematic theory of 
evaluation influence. 
[Figure 1: schematic theory of evaluation influence. Evaluation inputs (evaluation context: expertise, communication, instruction, time, resources, role flexibility; decision/policy setting: administrative support, micro politics, culture, information needs, impetus, skills) feed into evaluation activities (stakeholder selection and participation; evaluation planning and design; data collection and analysis; developing conclusions and recommendations; report generation; information dissemination). These produce evaluation "outputs" (knowledge attributes: responsiveness, credibility, sophistication, communication, timeliness; general mechanisms such as elaboration, heuristics, priming, salience, skill acquisition, persuasion, justification, minority-opinion influence, policy consideration, standard setting, policy discussion and deliberation, and coalition formation), which lead to intermediate and long-term outcomes (cognitive/affective, motivational, and behavioural) and, ultimately, social betterment. Contingencies in the environment (competing processes, facilitating factors, inhibiting factors) condition these pathways.] 
The proposed theory of evaluation 
influence by Mark and Henry (2004) is 
preliminary. Several studies have used this 
theory in an attempt to establish an empirical 
basis and evidence for the practice of 
evaluation influence, including those by Weiss 
et al. (2005), Christie (2007), and 
Gildemyn (2014). The first two 
studies reported that all three types of 
evaluation information (including large-scale 
evaluation study data, case study evaluation 
data, and anecdotes) “influence decision 
makers’ decisions” (Christie, 2007: 22), and 
“evaluation evidence travelled to influence 
decisions about D.A.R.E." (Weiss et al., 2005: 
27), where D.A.R.E. stands for the Drug Abuse 
Resistance Education programme. These two 
studies were both conducted in the US 
educational sector. Gildemyn's (2014) study 
concerns the influence of monitoring and 
evaluation by civil society organisations in the 
health sector in Ghana. 
Mark and Henry (2004) have also 
realised that there are still some limitations in 
their general framework (as presented in Table 
1, Section 4). The noteworthy limitations 
include: (1) the general framework is not a 
final product and could be tailored to specific 
contexts; (2) various complexities that impinge 
on evaluation influence processes have not 
been adequately addressed, although these 
complexities are partly represented by the 
"Decision/policy setting" box in Figure 1 
(Mark and Henry, 2004: 50). With regard to 
the first limitation, Mark and Henry indicate 
that future conceptual frameworks and 
empirical work may lead to modifications of 
this framework (Mark and Henry, 2004). As 
Note: * Selected elements from Cousins (2003). Categories in bold taken from Table 1. 
Figure 1. Schematic theory of evaluation influence 
Source: Mark and Henry (2004: 46). 
far as the second limitation is concerned, they 
indicate that the complexities are partly 
represented by the “Contingencies” box in 
Figure 1, and all change processes are 
contingent, i.e. they will operate in some 
circumstances and not others (Mark and 
Henry, 2004). They further state that by 
acknowledging such contingencies, evaluators 
may be more modest with respect to their 
aspirations for evaluation as a source of 
influence that may contribute to social 
betterment (Mark and Henry, 2004). 
6. Concluding remarks 
The literature review of evaluation use 
and influence offers a theoretical framework 
for studies related to evaluation use and 
influence. It presents the definitions of 
evaluation use and influence, types of use and 
influence, and theory of evaluation influence. 
Scholars have extensively discussed the 
definitions and types of evaluation use. 
Evaluation influence as a next generation term 
is proposed as an alternative to the concept of 
evaluation use due to its limitations in 
meaning, coverage, and mechanisms. This has 
opened a new area of debate between 
evaluation use and evaluation influence. 
Finally, conducting a study on evaluation use 
and influence may be challenging, as the 
effects of use and influence can appear in 
various contexts, at various times, and in 
various forms. 
REFERENCES 
Agarwala-Rogers, R. (1977). Why is evaluation research not utilised? In M. Guttentag (Ed.), 
Evaluation Studies Review Annual (Vol. 2), Beverly Hills: Sage. 
Alkin, M. C. (2013) Evaluation Roots: A Wider Perspective of Theorists' Views and Influences: 
Sage. 
Alkin, M. C., & Christie, C. A. (2004). An Evaluation Theory Tree. In M. C. Alkin (Ed.), 
Evaluation roots: tracing theorists' views and influences (pp. 12-65): Sage Publications. 
Alkin, M. C., Daillak, R., & White, P. (1979) Using evaluations: does evaluation make a 
difference?: Sage Publications. 
Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational 
Evaluation, 29(1), 1-12. 
Caplan, N. (1977). A Minimal Set of Conditions Necessary for the Utilisation of Social Science 
Knowledge in Policy Formulation at the National Level. In C. H. Weiss (Ed.), Using Social 
Research in Public Policy Making: Lexington Books. 
Christie, C. A. (2007). Reported Influence of Evaluation Data on Decision Makers’ Actions. 
American Journal of Evaluation 28(1), 8-25. 
Cousins, J. B., & Leithwood, K. A. (1986). Current Empirical Research on Evaluation 
Utilization. Review of Educational Research 56(3), 331-364. 
Fetterman, D. M. (1996). Foundations of empowerment evaluation: Sage. 
Greene, J. G. (1988). Stakeholder Participation and Utilisation in Program Evaluation. 
Evaluation Review 12(2), 91-116. 
Henry, G. T., & Mark, M. M. (2003). Beyond Use: Understanding Evaluation's Influence on 
Attitudes and Actions. American Journal of Evaluation, 24(3), 293-314. 
House, E., & Howe, K. R. (1999) Values in evaluation and social research: Sage. 
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). 
Research on Evaluation Use. American Journal of Evaluation, 30(3), 377-410. 
Kirkhart, K. E. (2000). Reconceptualising evaluation use: An integrated theory of influence. New 
Directions for Evaluation, 2000(88), 5-23. 
Kirkhart, K. E. (2011). Culture and influence in multisite evaluation. New Directions for 
Evaluation, 2011(129), 73-85. 
Ledermann, S. (2011). Exploring the Necessary Conditions for Evaluation Use in Program 
Change. American Journal of Evaluation, 32(2), 159-178. 
Leviton, L. C., & Hughes, E. F. X. (1981). Research On the Utilization of Evaluations. 
Evaluation Review, 5(4), 525-548. 
Mark, M. M. (2011). Toward better research on - and thinking about - evaluation influence, 
especially in multisite evaluations. New Directions for Evaluation, 2011(129), 107-119. 
Mark, M. M., & Henry, G. T. (2004). The Mechanisms and Outcomes of Evaluation Influence. 
Evaluation, 10(1), 35-57. 
Mathison, S. (2005). Encyclopaedia of evaluation: Sage. 
Patton, M. Q. (1997). Utilisation-focused evaluation: Sage Publications. 
Preskill, H. (1991). The cultural lens: Bringing utilization into focus. New Directions for 
Program Evaluation, 1991(49), 5-15. 
Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use 
TIG survey results. American Journal of Evaluation, 18, 209-225. 
Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions 
for Evaluation, 2000(88), 25-37. 
Preskill, H., Zuckerman, B., & Matthews, B. (2003). An Exploratory Study of Process Use: 
Findings and Implications for Future Research. American Journal of Evaluation, 24(4), 
423-442. 
Rich, R. F. (1977). Uses of Social Science Information by Federal Bureaucrats: Knowledge for 
Action versus Knowledge for Understanding. In C. H. Weiss (Ed.) Using Social Research 
in Public Policy Making: Lexington Books. 
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach (7 
ed.). Thousand Oaks, CA: Sage. 
Shulha, L. M., & Cousins, J. B. (1997). Evaluation Use: Theory, Research, and Practice Since 
1986. American Journal of Evaluation, 18(1), 195-208. 
Weiss, C. H. (1998). Evaluation: methods for studying programs and policies: Prentice Hall. 
Weiss, C. H., Murphy-Graham, E., & Birkeland, S. (2005). An Alternate Route to Policy 
Influence. American Journal of Evaluation, 26(1), 12-30. 

File đính kèm:

  • pdfevaluation_use_and_influence_a_review_of_related_literature.pdf