Let us look at each category in more detail:
1. Assumptions are the elements that define the way a program targets its audience.
They answer the question "Who are we doing this for?" Assumptions can be drawn from
sources as varied as one's own experience, a program partner's experience, and formal
or informal research. One can build assumptions by looking back at the past or ahead
to the future; for example, skills participants learned in past programming may be
carried forward, or new skills may be introduced into future programming.
2. Goals focus on the big picture; they build upon the individual outcome
photographs to create an album of what one is attempting to accomplish
with a program.
3. The term "influencers"
was created by Performance Results Inc. to identify the stakeholders
of a program. "Influencers" are the individuals, agencies,
funding organizations, competitors, community groups, professional
affiliations, and others who influence the type of service provided
and who can be targeted as participants. Additionally, these "influencers"
determine what the desired outcomes are and the ways in which results
are reported. An important question to ask is "How will people
("influencers"/stakeholders) use the outcome information
you have gathered?" From this point one needs to break down
each "influencer" into 1) what they want to know, and
2) how the results will be used.
4. The program purpose is the concise and specific statement
of what the program does, whom it serves, and for what outcomes.
A program purpose is driven by your assumptions and relates back
to your organization's mission statement; as such, it combines one's
specific assumptions about a particular program with the ways the
program reflects the organization's mission.
5. The various resources dedicated
to one's program are the inputs.
These can include staff, curriculum, materials, equipment (such
as computers and audio/visual hardware), money, consultants, facilities,
and other specifics such as a Web site.
6. Activities are the administrative elements that take place in the creation
and implementation of your program. Services
are the elements that directly involve the end-users. Examples of
activities include recruitment, promotional design, coordination
of staff and materials, and facilities reservations and/or upkeep.
Service examples include workshops, classes, projects, and mentoring.
A simple way of thinking of the differences is that activities are
action verbs (recruit, coordinate, promote) and services are nouns
(workshop, class, project, mentor).
7. Outputs are the direct products of a program. Usually measured
quantitatively, outputs may be the numbers of participants served,
materials developed, workshops or classes given, supplies consumed,
or Web site hits. Outputs are important because they inform and
give direct support to the outcomes, and they inform your assumptions
(both present and future). Some "influencers" prefer receiving
output data. One should keep in mind the balance between being a "keeper"
and a "thrower": you do not want to collect every single digit of data,
but neither do you want to throw everything out. Decisions about what
type of data collection fits your program evaluation are important to
make at the beginning, and they should be formatively reassessed as the
program develops.
8. Outcomes are the key element in this type of evaluation process. They are
the "target audience's changed or improved skills, attitudes,
knowledge, behaviors, status, or life condition brought about by
experiencing a program" (Performance Results Inc., Slide 33).
When creating outcomes, make certain to keep the audience first
in priority, to write each outcome statement as a single thought,
and to avoid terms that suppose a baseline (such as stating any type
of increase or improvement). Examples include "Students know the key
principles of archival selection" and "Students can create basic
archival records" (Performance Results Inc.).
Outcomes can be broken down into
categories (stages) of 'immediate' (the audience knows something),
'intermediate' (the audience uses something), and 'long-term' (the
audience adapts it into their behavior). This can be
simplified into awareness, utilization, and adaptation. Specific
timeframes for each of these stages depend upon one's unique
programming. In the grand scheme of things, these categories (stages)
build upon each other to create a larger social change-impact outcome.
9. The measurable conditions or
behaviors that show how an outcome was achieved are indicators.
Simply stated, they are the numbers (#) and/or percentages (%) of
participants you want to reach at a defined level of criteria. When
that level of criteria is reached, it equates with program success.
An example would be "50% of the students can identify 10
out of 30 key geological concepts." The # and/or % equals 50%
of the students, and the defined level of criteria equals identifying
10 out of 30 key geological concepts. Keep in mind that indicators
are what you hope to see or know about the outcome, and that they
are observable evidence of accomplishments, changes, or gains.
(A worked sketch of this arithmetic appears after this list.)
sources are "the tools, documents,
and locations for information that will show what happened to your
target audience" (Performance Results Inc., Slide 39). Examples
can include pre- and post-test scores, program records (formative evaluation
documents such as assessment reports), records from partner organizations,
and observations (such as during interviews, focus groups, or the
program itself). Many organizations look to surveys as the default
data source, but there are many other sources available.
11. The "applied to" is the target audience at which the
indicator is aimed. It is important to decide if you want to measure
all the participants and materials or a specific subgroup, and what
special characteristics of the target audience can be utilized to
further clarify the group being measured.
12. The points in time when data
are collected are the data intervals.
Outcome information can be collected at specific intervals or at
the end of an activity or phase. Data is usually collected at the
program beginning and ending for comparisons.
13. "Goals (targets)
are the stated expectations for the performance of outcomes"
(Performance Results Inc., Slide 53), the filling in of the number
(#) and percentage (%) listed in the evaluation indicators. As noted
in the example above, the goal of the stated indicator, "50%
of the students can identify 10 out of 30 key geological concepts,"
is 50% of the students. In the end, goals meet the "influencers'"
expectations and can be measured against your program's past performance.
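
To make the arithmetic behind indicators, data intervals, and goals (targets) concrete, here is a minimal sketch in Python. Every name and number in it is invented for illustration (hypothetical student scores measured against the geological-concepts indicator above); a real program would substitute its own criteria and collected data.

    # Hypothetical check of the indicator "50% of the students can
    # identify 10 out of 30 key geological concepts" against data
    # collected at two data intervals (program beginning and end).

    CONCEPTS_NEEDED = 10  # the defined level of criteria
    GOAL_PERCENT = 50.0   # the goal (target): % of students to reach it

    def percent_meeting(scores, needed):
        """Percentage of participants at or above the criteria level."""
        return 100.0 * sum(1 for s in scores if s >= needed) / len(scores)

    # Invented scores: concepts identified by each student at each interval.
    pre_scores = [3, 5, 2, 8, 6, 4, 7, 5]
    post_scores = [11, 9, 12, 15, 10, 8, 14, 13]

    pre = percent_meeting(pre_scores, CONCEPTS_NEEDED)
    post = percent_meeting(post_scores, CONCEPTS_NEEDED)

    print(f"Beginning of program: {pre:.0f}% met the criteria")
    print(f"End of program: {post:.0f}% met the criteria")
    print("Goal met" if post >= GOAL_PERCENT else "Goal not met")

With these invented numbers, 0% of students meet the criteria at the beginning and 75% meet it at the end, so the 50% goal would be met; the beginning-of-program data provide the comparison baseline described under data intervals.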
All of these categories lead up to
your report of findings. Outcome-based evaluation reports should include:
1) The participant characteristics (who was served),
2) The inputs and outputs
(what was put into the program and what came out of it),
3) The elements requested by the "influencers"
(how did this program best reflect the money spent on it),
4) Comparisons to prior periods and programs (how did this program advance the
organization's programming and mission), and
5) The interpretation of the data
(what does it all mean).
As stated by Performance Results Inc., a report
basically states, "We wanted to do what?", "We did
what?", and "So what?" (Slide 59).
At the practical level these specific
categories are important in building a strong evaluation report that
articulates not only the outcomes, but also the way one's mission
operates within a program. Additionally, the theoretical aspect of
OBE is important for the way it relates to such educational
theories as multiple learning styles and constructivism educational
theory.(2) Practical aspects of OBE should not
be overlooked; indeed, practicality is the most important part. But
the theoretical underpinnings are important because of their connections
(intended or otherwise) to strains of thought and argument in the
museum education field.
Hein (1997) argued that educational
theory could be divided into two generalized camps. The first is what
he described as the maze approach to learning and instruction where
one finds knowledge by following the correct path, and other paths
lead to dead ends. The second is the web, where the learner spins their
education together from various angles to create a holistic approach.
It is interesting to note how the language utilized by Hein compares
to that of outcome-based evaluation, "we generate a theory of
education by embracing some view of what it is that people learn,
as well as a position on how they learn [underline in original]" (p. 2). Other overlaps can be seen in psychological learning theory,
which Hein divides into passive (absorption, transmission) and active
(development, construction). Much of museum education has revolved
around the web and active sides of learning (without discounting the
more traditional maze and passive approaches), and this constructivist
focus fits nicely into a museum's outreach. Through education a museum
can better serve its public, reaching visitors at the level each
individual brings to the learning experience.
This leads us back to the practical
side of OBE very quickly: it is a short hop from the theoretical
discussion about education and learning theory to articulating that
discussion in the daily workings of a museum and its programming.
This is where OBE enters. As Weil (2003) described it, "in the
museum, though, we must remain sensitive to those peculiarly unstructured
and frequently unexpected aspects of the visit that can make it such
a different and idiosyncratic experience" (p. 44). As such, everyone's
learning experience is unique to that person. Outcome-based evaluation
can then articulate "the full complexity of museum evaluation"
that "requires multiplying those institutional agendas by the
equally diverse personal agendas of museum visitors" (p. 52).
Or as Frumkin (2002) argued in his examination of why
non-profits rely too heavily on financial performance (outputs)
and not enough on program performance (outcomes), "bringing
some parity to the availability and comparability of financial and
program measures of performance represents an intellectual and practical
task with mammoth potential rewards" (p. 3). Implementing the
outcome-based evaluation format outlined above begins the process of achieving those "mammoth potential rewards."
Notes

1. To understand the context in which the GPRA was created,
one needs to take into account the public's perception of the Federal
Government, and other non-profit organizations, during the early to
mid 1990s. The Clinton Administration was elected in 1992 facing
a Federal deficit of $290 billion, which created a heated public and
political debate about government spending. In 1992 the Congressional
Budget Office projected that the FY 2001 deficit would reach $513 billion.
The Clinton Administration made eliminating this deficit a high priority,
and by 2000 the administration was able to claim "an expected
surplus of $256 billion in FY 2001" (United States White House,
2000). This has been combined with the current Bush Administration's
favorable attitude toward museum and library funding through the IMLS.
"President Bush and I (Laura Bush) are committed to strengthening
America's libraries and museums. In his 2005 budget, the President
has proposed a 14 percent increase for IMLS.... With this additional
funding, IMLS can continue to support museums and libraries and a
nation of lifelong learners. And supporting lifelong learning is the
ultimate goal of museums and libraries today" (Institute of Museum
and Library Services, 2004). The GPRA creates a forum for the continued
articulation of the importance of accountable and balanced government
spending, alongside support for increased museum and library funding.
2. Multiple learning styles theory is based upon Howard Gardner's work on the seven intelligences
(Linguistic Intelligence, Musical Intelligence, Logical-Mathematical
Intelligence, Spatial Intelligence, Bodily-Kinesthetic Intelligence,
Interpersonal Intelligence, and Intrapersonal Intelligence) (Gardner, 1983; Gardner, 2003), and is described by Gardner (2003) as the view "that
all human beings possess not just a single intelligence (often called
'g' for general intelligence). Rather, as a species we human beings
are better described as having a set of relatively autonomous intelligences....No
intelligence is in and of itself artistic or non-artistic; rather
several intelligences can be put to aesthetic ends, if individuals
so desire. No direct educational implications follow from this psychological
theory; but if individuals differ in their intellectual profiles,
it makes sense to take this fact into account in devising an educational
system" (pp. 4-5).
Constructivism educational theory, as described by Hein (1995), "argues
that both knowledge and the way it is obtained are dependent on the
mind of the learner" and as such "proponents of constructivism
argue that learners construct knowledge as they learn; they don't
simply add new facts to what is known, but constantly reorganize and
create both understanding and the ability to learn as they interact
with the world" (p. 3).