CultureWork
A Periodic Broadside
for Arts and Culture Workers
June 2004. Vol. 8, No. 4.
Institute for Community Arts Studies
Arts & Administration Program, University of Oregon                  ISSN 1541-938X



Outcome-based Evaluation: Practical and Theoretical Applications

Robert Voelker-Morris

Imagine you are a small business owner. You have created and sold a product, and now you want to see how successful that product is. By measuring the costs of production against the returned profit, you calculate the 'bottom line.' But what if you are a non-profit educational organization, or a funding agency directly answerable to a government entity? How do you calculate the bottom line if what you are measuring is not dependent on numbers?

An excellent starting point is to ask the following questions:

"How has my program made a difference?"
"How are the lives of the program participants better as a result of my program?"

These questions are the foundation upon which outcome-based evaluation (OBE) is built. As an evaluation process, OBE has two main sources of origin. One is the United Way's creation, in 1995, of a specific, codified evaluation process that streamlined reporting by funded organizations and allowed for a unified reporting system, cutting costs and time by combining many different evaluations into one. The other is the passage of the Government Performance and Results Act (GPRA) in 1993, which was likewise designed to streamline United States Government reporting on the use of federal funds.

In September of 2003, the University of Oregon Museum of Natural History (MNH) received an Institute of Museum and Library Services (IMLS) grant for a digital archive project. The award is a National Leadership Grant under which the museum will collaborate with the Media Services department of the University Libraries. The project entails creating DVD versions of nine multi-projector slide presentations highlighting selected Pacific Northwest cultural and natural history. The project also includes outreach programming: screenings at the museum, DVD distribution to all of the middle schools in the state of Oregon, and possible Web streaming. One component of the project is evaluating the product according to OBE standards.


Museum of Natural History: Don Hunter Archive Projects Page
IMLS Leadership Grant: Press Release

The GPRA lays the groundwork for the reasons an organization should utilize OBE. Its findings state that "waste and inefficiency in Federal programs" "undermine the confidence of the American people in the Government and reduces the Federal Government's ability to address adequately vital public needs"; that "Federal managers are seriously disadvantaged in their efforts to improve efficiency and effectiveness, because of insufficient articulation of program goals and inadequate information on program performance"; and that "congressional policymaking, spending decisions and program oversight are seriously handicapped by insufficient attention to program performance and results" (Office of Management and Budget: The Executive Office of the President, 1993, Section 2).

The GPRA's main purposes (1) are:

  • Improving the confidence of the American public in Federal Government accountability,
  • Initiating pilot programs for program performance reform,
  • Improving Federal program effectiveness and public accountability by focusing on results, service quality, and customer satisfaction,
  • Helping improve service delivery for Federal managers, and
  • Improving the Federal Government's internal management.

Around the same time, in 1995, the United Way formulated an evaluation process that focused no longer on the service providers but on the recipients of services. The United Way's version of OBE introduced the non-profit world to the main points of the model (though numerous variations are found in other versions, as we will explore later):

  • Inputs = resources dedicated to or consumed by a program,
  • Activities = how the inputs are used to fulfill the mission through the program,
  • Outputs = direct products of the program activities, measured as the work accomplished, and
  • Outcomes = benefits and/or changes in the targeted population of a program.
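
To make these four terms concrete, the sketch below maps the MNH digital archive project described above onto the model, using Python purely as a notation. The specific entries are illustrative assumptions drawn from the project description in this article, not from an actual evaluation plan.

```python
# Illustrative only: the United Way's four-part model applied, hypothetically,
# to the MNH digital archive project described in this article.
united_way_model = {
    "inputs": ["IMLS grant funds", "museum and Media Services staff",
               "nine multi-projector slide presentations"],
    "activities": ["digitize the slide shows to DVD",
                   "coordinate with the University Libraries",
                   "plan screenings and school distribution"],
    "outputs": ["nine DVDs produced", "screenings held at the museum",
                "DVDs distributed to Oregon middle schools"],
    "outcomes": ["students and visitors gain knowledge of Pacific Northwest "
                 "cultural and natural history"],
}

# Print each category with its hypothetical examples.
for category, examples in united_way_model.items():
    print(f"{category}: {'; '.join(examples)}")
```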

Traditionally, organizations identify evaluation processes through which to measure outputs. These outputs typically revolve around financial bottom lines, such as product numbers in relation to overhead costs. As such, non-profits often put themselves in the position of measuring financial data as an easy way to report results. Frumkin (2002) commented that "the problem that this creates in the nonprofit world is clear: From foundations and universities to hospitals and museums, nonprofit groups of all kinds, but particularly large institutions, are understandably led to focus on financial measures of performance because they are so much more concrete and robust than programmatic ones" (p. 1). It is these programmatic measures that are the focus of outcome-based evaluation.

What does this mean for museums and their programming? Let's return to the question: How would you calculate your bottom line if what you are measuring is not dependent on numbers? Stephen Weil, in the November/December 2003 issue of Museum News, argued that it is important to look at the positive and intended differences a museum makes in the lives of its targeted individuals and communities. This is an excellent description of outcomes and of why they are so important in measuring a program's success. Additionally, it brings us back to the outcome-based questions posed at the beginning of this article, which address the non-profit 'bottom-line' dilemma:

"How has my program made a difference?"
"How are the lives of the program participants better as result of my program?"

These two questions opened the IMLS-sponsored OBE workshop in Washington, DC, that representatives of the University of Oregon MNH and University Libraries attended in January 2004 (I was the museum representative). The workshop is required of grant recipients so that the IMLS can consolidate and streamline its reporting in accordance with the GPRA. Performance Results, Inc., presented the two-day workshop.

 

Performance Results, Inc is a 100 percent women-owned, organizational services and support firm. We provide management services, technical assistance, and training to government agencies and nonprofit, faith-based and community-based organizations in a variety of important management support areas. With strengths in qualitative and quantitative analyses, strategic planning, and business development, we offer a diverse range of services including program and policy evaluation and implementation, human service delivery system design and development, outcome based evaluation design and implementation, marketing, and project management. (Performance Results, Inc., 2003)

 


According to the Performance Results, Inc. model (referred to as a Logic Model), outcome-based evaluation can be broken down into the following categories:

 

  • Assumptions
  • Results
  • "Influencers"
  • Program Purpose
  • Inputs
  • Activities and Services
  • Outputs
  • Outcomes
  • Indicators
  • Data Sources
  • Applied to
  • Data Intervals
  • Goals


Let us look at each category in more detail:

1. Assumptions are the elements that define the way a program targets its audience. They answer the question "Who are we doing this for?" Assumptions can be drawn from sources as varied as one's own experiences, a program partner's experiences, and formal or informal research. One can build assumptions by looking back at past programming or ahead to future programming: for example, assuming that participants will carry forward skills learned in past programs, or that new skills will be introduced in future ones.

2. Results focus on the big picture; they build upon individual outcomes, the way photographs build into an album, to show what one is attempting to accomplish with a program.

3. The term "influencers" was created by Performance Results Inc. to identify the stakeholders of a program. "Influencers" are the individuals, agencies, funding organizations, competitors, community groups, professional affiliations, and others who influence the type of service provided and who can be targeted as participants. Additionally, these "influencers" determine what the desired outcomes are and the ways in which results are reported. An important question to ask is "How will people ("influencers"/stakeholders) use the outcome information you have gathered?" From this point one needs to break down each "influencer" into 1) what they want to know, and 2) how the results will be used.

4. Program purpose is the concise and specific statement of what the program does, whom it serves, and for what outcomes. A program purpose is driven by your assumptions and relates back to your organization's mission statement, and as such combines one's specific assumptions about a particular program and the ways the program reflects the organization's mission.

5. The various resources dedicated to one's program are the inputs. These can include staff, curriculum, materials, equipment (such as computers and other audio/visual), money, consultants, facilities, and other specifics such as a Web site.

6. Activities are the administrative elements that take place in the creation and implementation of your program. Services are the elements that directly involve the end-users. Examples of activities include recruitment, promotional design, coordination of staff and materials, and facilities reservations and/or upkeep. Service examples include workshops, classes, projects, and mentoring. A simple way of thinking of the differences is that activities are action verbs (recruit, coordinate, promote) and services are nouns (workshop, class, project, mentor).

7. Outputs are the direct products of the program. Usually measured quantitatively, outputs may be the number of participants served, materials developed, workshops or classes given, supplies consumed, or Web site hits. Outputs are important because they inform and directly support the outcomes, and they inform your assumptions (both present and future). Some "influencers" prefer receiving output data. One should balance being a "keeper" and a "thrower": you do not want to collect every single digit of data, but neither do you want to throw everything out. Decisions about what type of data collection fits your program evaluation are important to make at the beginning, and should be formatively assessed as the program develops.

8. Outcomes are the key element in this type of evaluation process. They are the "target audience's changed or improved skills, attitudes, knowledge, behaviors, status, or life condition brought about by experiencing a program" (Performance Results Inc., Slide 33). When creating outcomes, make certain to keep the audience first in priority, to write each outcome statement as a single thought, and to avoid terms that presuppose a baseline (such as stating any type of increase).

Examples:

"Students know the key principles of archival selection."
"Students can create basic archival records" (Performance Results Inc., Slide 34 ).

Outcomes can be broken down into categories (stages): 'immediate' (the audience knows something), 'intermediate' (the audience uses something), and 'long-term' (the audience adapts that something into its behavior). This can be simplified into awareness, utilization, and adaptation. Specific timeframes for each of these stages are dependent upon one's unique programming. In the grand scheme of things, these stages build upon each other to create a larger social-change impact.

9. The measurable conditions or behaviors that show how an outcome was achieved are indicators. Simply stated, they are the numbers (#) and/or percentages (%) of participants you want to reach at a defined level of criteria; when that level is reached, it equates with program success. An example might be "50% of the students who can identify 10 out of 30 key geological concepts." The # and/or % equals 50% of the students, and the defined level of criteria equals identifying 10 out of 30 key geological concepts (a code sketch following this list works through this arithmetic). Remember that indicators are what you hope to see or know about the outcome, and that they are observable evidence of accomplishments, changes, or gains.

10. Data sources are "the tools, documents, and locations for information that will show what happened to your target audience" (Performance Results Inc., Slide 39). Examples include pre- and post-test scores, program records (formative evaluation documents such as assessment reports), records from partner organizations, and observations (during interviews, focus groups, or the program itself). Many organizations look to surveys as the default data source, but many other sources are available.

11. The applied to is the target audience at which an indicator is aimed. It is important to decide whether you want to measure all the participants and materials or a specific subgroup, and which special characteristics of the target audience can be used to further clarify the group being measured.

12. The points in time when data are collected are the data intervals. Outcome information can be collected at specific intervals or at the end of an activity or phase. Data are usually collected at the beginning and end of a program for comparison.

13. "
Goals (targets) are the stated expectations for the performance of outcomes" (Performance Results Inc., Slide 53), the filling in of the number (#) and percentage (%) listed in the evaluation indicators. As noted in the example above, the goal of the stated indicator, "50% of the students who can identify 10 out of 30 key geological concepts" is 50% of students. In the end, goals meet the "influencer's" expectations and can be measured by your programs' past performance.
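
For readers who find it helpful to see the indicator and goal arithmetic in one place, here is a minimal sketch in Python, offered purely as an illustration and not as anything prescribed by IMLS or Performance Results, Inc. The OutcomeMeasure structure, the goal_met function, and all scores and thresholds below are invented for this example.

```python
# A minimal, illustrative sketch (not from the PRI workshop): recording one
# Logic Model outcome and checking whether its goal (target) has been met.
from dataclasses import dataclass

@dataclass
class OutcomeMeasure:
    outcome: str         # the changed skill, knowledge, attitude, or behavior
    indicator: str       # observable evidence, stated as a # and/or %
    data_source: str     # where the evidence will come from
    applied_to: str      # the target audience being measured
    data_interval: str   # when the data are collected
    goal_percent: float  # the stated expectation (the target)

def goal_met(measure: OutcomeMeasure, scores: list, required: int) -> bool:
    """True if the share of participants meeting the criterion reaches the goal."""
    if not scores:
        return False
    meeting = sum(1 for s in scores if s >= required)
    return 100.0 * meeting / len(scores) >= measure.goal_percent

# Hypothetical example: 50% of students identify 10 of 30 key geological concepts.
geology = OutcomeMeasure(
    outcome="Students know key geological concepts",
    indicator="% of students who can identify 10 of 30 key concepts",
    data_source="pre- and post-program test scores",
    applied_to="middle school students attending a screening",
    data_interval="end of each screening cycle",
    goal_percent=50.0,
)

post_test_scores = [12, 8, 15, 9, 11, 14, 7, 10]  # concepts identified per student
print(goal_met(geology, post_test_scores, required=10))  # True: 5 of 8 (62.5%) >= 50%
```

In practice the same check could be run at each data interval, comparing pre- and post-program scores rather than a single post-test.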

All of these categories lead up to your report of findings. Outcome-based evaluation reports should include:

1) Participant characteristics (who were served),

2) The inputs, activities and services, outputs, and outcomes (what was put into the program and what came out of it),

3) Elements requested by the "influencers" (how the program best reflected the money spent on it),

4) Comparisons to prior periods and programs (how the program advanced the organization's programming and mission), and

5) The interpretation of the data (what it all means).

As stated by Performance Results Inc., a report basically addresses three questions: "We wanted to do what?", "We did what?", and "So what?" (Slide 59).
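
As a final, purely illustrative sketch, the five report elements above can be grouped under those three questions. The grouping itself is an editorial assumption, not part of the PRI workshop materials.

```python
# Illustrative only: one possible grouping of the five report elements under
# PRI's three questions. The mapping is an assumption, not workshop guidance.
report_outline = {
    "We wanted to do what?": [
        "Participant characteristics (who were served)",
        "Inputs",
    ],
    "We did what?": [
        "Activities and services, outputs, and outcomes",
        "Elements requested by the 'influencers'",
    ],
    "So what?": [
        "Comparisons to prior periods and programs",
        "Interpretation of the data",
    ],
}

# Print the outline as a simple report skeleton.
for question, sections in report_outline.items():
    print(question)
    for section in sections:
        print(f"  - {section}")
```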

At the practical level these specific categories are important in building a strong evaluation report that articulates not only the outcomes but also the way one's mission operates within a program. Additionally, the theoretical aspect of OBE is important in the way it relates to such educational theories as multiple learning theory and constructivist educational theory.(2) Practical aspects of OBE should not be overlooked; indeed, practicality is the most important part. But the theoretical underpinnings matter because of their connections (intended or otherwise) to strains of thought and argument in the museum field.

Hein (1997) argued that educational theory can be divided into two generalized camps. The first is what he described as the maze approach to learning and instruction, in which one finds knowledge by following the correct path while other paths lead to dead ends. The second is the web, in which the learner spins their education together from various angles to create a holistic approach. It is interesting to note how the language Hein uses compares to that of outcome-based evaluation: "we generate a theory of education by embracing some view of what it is that people learn, as well as a position on how they learn" [underline in original] (p. 2). Other overlaps can be seen in psychological learning theory, which Hein divides into passive (absorption, transmission) and active (development, construction). Much of museum education has revolved around the web and active sides of learning (without discounting the more traditional maze and passive approaches), and this constructivist focus fits nicely into the process of a museum's outreach. Through education a museum can better serve its public, reaching visitors at whatever level each individual brings to the learning experience.

This leads us back quickly to the practical side of OBE: it is a short hop from the theoretical discussion of education and learning theory to articulating that discussion in the daily workings of a museum and its programming. This is where OBE enters. Weil (2003) put it this way: "in the museum, though, we must remain sensitive to those peculiarly unstructured and frequently unexpected aspects of the visit that can make it such a different and idiosyncratic experience" (p. 44). As such, everyone's learning experience is unique to that person. Outcome-based evaluation can then articulate "the full complexity of museum evaluation," which "requires multiplying those institutional agendas by the equally diverse personal agendas of museum visitors" (p. 52). Or, as Frumkin (2002) argued in his examination of why non-profits rely too heavily on financial performance (outputs) and not enough on program performance (outcomes), "bringing some parity to the availability and comparability of financial and program measures of performance represents an intellectual and practical task with mammoth potential rewards" (p. 3). Implementing the outcome-based evaluation format outlined above begins the process of achieving those "mammoth potential rewards."


1. Politically, one needs to take into account the public's perception of the Federal Government, and of non-profit organizations, during the early to mid-1990s. The Clinton Administration was elected in 1992 facing a Federal deficit of $290 billion, which created a heated public and political debate about government spending. In 1992 the Congressional Budget Office projected that the FY 2001 deficit would be $513 billion. The Clinton Administration made eliminating this deficit a high priority, and by 2000 the administration was able to claim "an expected surplus of $256 billion in FY 2001" (United States White House, 2000). This history combines with the current Bush Administration's favorable attitude toward museum and library funding through the IMLS. "President Bush and I (Laura Bush) are committed to strengthening America's libraries and museums. In his 2005 budget, the President has proposed a 14 percent increase for IMLS.... With this additional funding, IMLS can continue to support museums and libraries and a nation of lifelong learners. And supporting lifelong learning is the ultimate goal of museums and libraries today" (Institute of Museum and Library Services, 2004). The GPRA creates a forum for the continued articulation of the importance of accountable and balanced government spending alongside support for increased museum and library funding.

2. Multiple learning theory is based upon Howard Gardner's work on the seven intelligences (Linguistic, Musical, Logical-Mathematical, Spatial, Bodily-Kinesthetic, Interpersonal, and Intrapersonal) (Gardner, 1983; Gardner, 2003). Gardner (2003) describes the theory as the claim "that all human beings possess not just a single intelligence (often called 'g' for general intelligence). Rather, as a species we human beings are better described as having a set of relatively autonomous intelligences....No intelligence is in and of itself artistic or non-artistic; rather several intelligences can be put to aesthetic ends, if individuals so desire. No direct educational implications follow from this psychological theory; but if individuals differ in their intellectual profiles, it makes sense to take this fact into account in devising an educational system" (pp. 4-5).

Constructivist educational theory, as described by Hein (1995), "argues that both knowledge and the way it is obtained are dependent on the mind of the learner"; as such, "proponents of constructivism argue that learners construct knowledge as they learn; they don't simply add new facts to what is known, but constantly reorganize and create both understanding and the ability to learn as they interact with the world" (p. 3).



Acknowledgements

I would like to thank Claudia Horn of Performance Results Inc. (PRI), for reviewing this article to verify that PRI's organization and workshop information was presented appropriately. Julie Voelker-Morris, who took time out of her busy schedule to review various drafts, covered all the rest of the general editing duties. Thank you both.

References

Frumkin, P. (2002, May 30). Good performance is not measured by financial data alone. The Chronicle of Philanthropy. Retrieved January 14, 2004, from http://www.newamerica.net/index.cfm?sec=programs&pg=article&pubID=852&T2=Article

Gardner, H. (1983). Frames of Mind: The theory of multiple intelligences. New York: Basic Books. Basic Books Paperback, 1985. Tenth Anniversary Edition with new introduction, New York: Basic Books, 1993.

Gardner, H. (2003). Multiple intelligences after twenty years. Paper presented at the American Educational Research Association, Chicago, Illinois, April 21, 2003.

Hein, G.E. (1995). The constructivist museum. Retrieved February 16, 2004, from http://www.gem.org.uk/hein.html

Hein, G.E. (1997, July 19). The maze and the web: Implications of constructivist theory for visitor studies. Visitor Studies Association Keynote Speech. Birmingham, AL.

Institute of Museum and Library Services. (2004, January 23). Laura Bush presents national awards for museum and library service: Announces increase in president's FY 05 budget for IMLS. Press Release. Retrieved March 18, 2004, from http://www.imls.gov/whatsnew/current/012304.htm

Office of Management and Budget: The Executive Office of the President. (1993). Government performance and results act of 1993. Retrieved November 10, 2003, from http://www.whitehouse.gov/omb/mgmt-gpra/gplaw2m.html

Performance Results, Inc. (2003, April). Measuring results in an age of accountability. [Company brochure]. Retrieved March 17, 2004, from http://www.performance-results.net/resources/Performance Results.pdf

Performance Results, Inc. (n.d.). Measuring program outcomes: Outcome-based evaluation for national leadership grants. A training workshop sponsored by the Institute of Museum and Library Services.

United States White House. (2000, December 28). President Clinton: The United States on track to pay off the debt by end of the decade. Press Release. Retrieved February 3, 2004, from http://clinton4.nara.gov/WH/new/html/Fri_Dec_29_151111_2000.html

Weil, S.E. (2003, November/December). Beyond big and awesome: Outcome-based evaluation. Museum News. pp.40-45, 52-53.


Currently, Robert Voelker-Morris is the Project Coordinator for the IMLS grant project at the University of Oregon Museum of Natural and Cultural History. He received his Master's degree in Arts Management from the University of Oregon with a focus in Museum Studies and a Bachelor's degree in Art History from Oregon State University. Additionally, Robert works for the U of O Arts and Administration Program as an adjunct instructor in the visual and media literacy fields. When he is not working he is playing with his one-year-old son, Isaac.

Robert Voelker-Morris
Project Coordinator - Don Hunter Archive
Museum of Natural History
University of Oregon
rmorris1@darkwing.uoregon.edu
541-346-3987

This work is licensed under a Creative Commons License.



CultureWork is an electronic publication of the University of Oregon Institute for Community Arts Studies. Its mission is to provide timely workplace-oriented information on culture, the arts, education, and community. For previous issues of CultureWork, visit the Previous Issues page. Prospective authors and illustrators please see the Guidelines.

Opinions expressed by authors of CultureWork broadsides do not necessarily express those of the editors, the Institute for Community Arts Studies, or the University of Oregon.

Arts and Administration | The Institute for Community Arts Studies (I.C.A.S.)

©2004 The Institute for Community Arts Studies unless otherwise noted (see above Creative Commons license); all other publication rights revert to the author(s), illustrator(s), or artist(s) thereof.

Editor: Maria Finison                                        Advisor: Dr. Douglas Blandy

Comments to: mfinison@darkwing.uoregon.edu