Cross-Cultural Issues in
Alaskan Education
Vol. I
AN EVALUATOR VIEWS NORTHERN
EDUCATION PROGRAMS
by
Kathryn A. Hecht, Ed.D.
Center for Northern Educational Research
University of Alaska, Fairbanks
(Ed. note: This paper was originally prepared
for presentation at the Northern Cross-Cultural Symposium, Fairbanks, Alaska,
November 1973, and has been revised for this volume.)
As a program planner
and evaluator working in Alaska these past three years, I think many of
my opinions and concerns in viewing educational programs
are the same as they would have been had I been employed elsewhere in
the country. However, my brief experience in the North, with its many
unique attributes, has also directed my attention toward concerns which probably
would not be considered priority evaluation issues elsewhere.
This paper will first address my general perspective and opinions concerning program evaluation, followed by comments specific to the Northern situation.
What is evaluation? One of the most popularly quoted definitions of evaluation is that of the Phi Delta Kappa National Study Committee on Evaluation (Stufflebeam and others, 1971, p. 40):
“Educational evaluation is a process of delineating, obtaining,
and providing useful information for judging decision alternatives.”
The Committee did not consider judging to be the evaluator's role. Other evaluators disagree; hence, the question of who has the judgmental role is often the key to disputes over defining evaluation. Discussions of other definitions can be found in Worthen and Sanders (1973) and Steele (1973), but further attention to this interesting debate over theory is not essential to the purpose of this paper.
Regardless of disputed definitions of evaluation, I make certain assumptions about what the term evaluation connotes. First, evaluation assumes a rational decision-making process in which decision-makers use the information available to them to make that process more workable. Here I am mainly considering evaluation in terms of its usefulness to the project in which it operates; its additional purpose of helping to clarify the project and its effectiveness to others, while important, is considered secondary.
Second, evaluation assumes some stability of program funds which would allow for planning and recycling. If this condition does not exist, evaluation data will be of limited use to the decision-makers and those administering the project. Questions of timing and funding certainly are central problems of operating programs under federal or state funding. Past legislative action demonstrates that neither time nor funding has been sufficiently taken into account. This is perhaps the main reason why evaluation has not previously proven very useful: adequate planning time was not given, much less funded. Consequently, programs with uncertain futures often had to be revised and resubmitted before data were available. This situation has had the unfortunate consequence of making evaluation appear to be an act done for someone else, to fulfill a requirement, without realizing the many benefits that could have accrued to the program itself.
Therefore, my third assumption is that evaluation can have usefulness for local program improvement and should not be thought of solely as a necessary exercise to satisfy an external funding source. Federally imposed evaluation can be considered a failure if judged in terms of providing data for local program improvement. Too often this failure is taken to mean that no evaluation is likely to be useful at the local program level. Historically, it is perhaps unfortunate that the impetus for most evaluation efforts came from the federal government, specifically the evaluation requirements of the 1965 Elementary and Secondary Education Act, and that the impetus continues to be tied to federal funding. Though the Office of Education and Congress can be commended for requiring programs to attempt to describe their effectiveness, they have lost their potential for leadership and example setting in this area. (Further description of evaluation developments and shortcomings at the federal level, related to ESEA 1965, can be found in Worthen and Sanders, 1973, and Hecht, 1973.)
Fourth, any evaluation must be conducted in an atmosphere which has a tolerance for program failure as well as for success. If one cannot accept the fact that some parts of a program may do better than others and some may not do well at all, then information relevant to making such judgments is not useful, and collecting data becomes wasted effort. Educators, from the federal to the local level, have failed to make this point clear in educating themselves and the public to realize that innovative quasi-experimental programs are just that: they carry no guarantee of being better than the current program until they are tried and proven. If evaluation is not used in this manner, every new program will be continued or dropped on some basis other than whether it is warranted. (For further discussion of the need to allow quasi-experimental programs to fail as well as succeed, see the excellent discussion in Campbell, 1969.)
Fifth, communication and cooperation among administrators, implementors, and evaluators are essential to adequate evaluation. Though this sounds elementary and perhaps unnecessary to mention, its absence is probably the most significant obstacle to evaluation. Evaluators are so accustomed to negative reactions from those who feel threatened by evaluative efforts that they often neglect to take precautions to alleviate such tensions. Everyone involved in a program needs and deserves a thorough understanding of what an evaluation is to be and how it will be used. Though some basic researchers set up blind experiments or temporarily use deceptive conditions with subjects, such tactics are not part of the repertoire of an evaluator.
All of the above assumptions rest on the fact that evaluation differentiates itself from more basic research in its usefulness to the programs in which it operates. While research strives for generalizable knowledge, applicable in every situation, evaluation strives to provide useful information for the improvement of a specific program. Programs conducted in a realistic educational setting, unlike research projects, are fluid in nature and are apt to incur unexpected pressures, directions, and changes along the way. Program evaluation therefore must be flexible, and need not adhere to a pre-set design or plan. Regardless of how technically good an evaluation may be considered, if it is not useful for program management, improvement, and the other requirements of those responsible for the program, it is not a successful evaluation: it has missed its primary purpose.
Given this brief look at what is meant by evaluation generally (at least to this author), let me attempt to relate the above to evaluation of programs in the North. What are the problems, why do we have them, and what can we do about them? First of all, I think many of you would agree that many of the programs with which we deal lack clear goal definitions. We may think we know intuitively what we seek to do; but if there is more than one person involved in the program, each may think he or she has an intuitive knowledge of what he or she intends to do, and without goal specifications there is no way of knowing whether there is any agreement among these intuitions, or any way to explain them to others. One might argue that anything might be better than what currently exists in our schools, but such attitudes are hardly professionally responsible. Without defined goals and objectives a program lacks not only an understanding of where it is going, but also any way of knowing if it gets there.
There is a quote from Alice in Wonderland which
applies here:
“Alice asks the Cheshire Cat, ‘Would you tell me please which
way I ought to go from here?’
‘That depends a good deal on where you want to get to,’ said
the Cat.
‘I don’t much care where--’ said Alice.
‘Then it doesn’t matter which way you go,’ said
the Cat.”
Even when goals are defined, the criteria by which to judge whether the goals are being achieved are often unclear. For instance, take a program whose goal is to instill a regard for Native Heritage in Native students. What criteria will one use to see if the goal has been met? What are the specific objectives of the program? What are students expected to do as a result of being in the program? Are they expected to learn a Native craft, talk more freely with their elders, demonstrate an improved self-image, or show some other such change by which one might judge whether the goal has been achieved?
Another factor which is seriously lacking in the North is research; not as part of programs themselves, but as the background upon which programs are assembled. Lacking are both the published educational research that would provide answers to many of the questions we now try to resolve by the use of new and different programs, and the willingness to consider what is available. For instance, in a Native Heritage program, improving self-image is often considered a rationale or goal. However, there is some research which suggests that minority children may already have a positive self-image (Fuchs and Havighurst, 1972; Martig and DeBlassie, 1973; Powers and others, 1971; Soares and Soares, 1969). New programs should have the benefit of research findings indicating whether they are on the right track and have a good chance of being successful.
In bilingual programs, for instance, there is only a smattering of research on the effects of bilingualism and bilingual training on various types of students. The whole area of cross-cultural education is one which in many ways lacks all three of the above: goals, criteria, and basic research, especially as applied to the Northern situation.
A third problem could be considered the lack of adequate evaluation methodology and techniques in general, and especially of techniques suited to the special conditions of Northern education and to such programs as the ones mentioned above. Evaluation is a new and developing field, and as such there is almost as much disagreement among its members as there is agreement on the "how to's." There is little ready-made training material available that could be used in the absence of full-time professional evaluation help. The large cities can afford research and evaluation departments, and many now have them, but the type of rural school system with which we most often interact can never be expected to gear up to that extent and, therefore, will require other means. The evaluation field needs to pull together to provide evaluation techniques and training that can be used in local school districts by the districts themselves, with little or only periodic outside help. Also, there is definitely a shortage of people knowledgeable in special evaluation techniques, such as those for bilingual programs and cross-cultural situations.
Another problem that confronts us in the North is that we sorely lack a communication and dissemination network. Though problems across Alaska, Canada, and our more far-flung neighbors in Greenland, Siberia, et cetera, have much in common, and often the programs we are attempting seem very similar, currently there are few effective mechanisms to share such information.
The fifth problem is a lack of manpower trained and available to handle evaluation, program planning, and improvement techniques. There are usually people who are willing to come North for a day, look around, write a report, and leave. This is hardly what is meant when we talk of useful evaluation.
What can be done? We have alluded to five "lacks" above, and some of the remedies to be suggested are fairly self-evident. The first is the need for training of people who are on the scene, who can be part of programs and provide the necessary evaluation support. Training must include evaluation in its broadest sense: the intricacies of planning, carrying out, and revising the program. Obviously, this requires both funding and commitment. The communication/dissemination network spoken of above is another area for action. The type of network being suggested requires a great deal of openness on the part of those people planning and innovating new programs: a willingness to share their experiences, their failures as well as their successes. Even though each program is different and has its own particular circumstances, certainly there is also a great deal of similarity.
There is another type of openness required: an openness to look at more general work done outside, though always keeping in mind the question of applicability. In the North, we have often dismissed work done outside as so clearly inapplicable as to be of no use, and we may be missing much that could be very useful. Though cultures are indeed different, dealing with a variety of youth may not necessarily be. Though language patterns are different, dealing with second languages may be another area of common problems with similar solutions. Very often what has been imposed from outside, especially in Alaska, has been inapplicable, or at least has not been adapted so that it was useful. But given the attempts to improve education for a variety of peoples throughout the country, we should be open to exchange and should use as much of this information as we can.
Also, we must remember that the road goes the other way. Alaska, as I see it, by its very isolation, sparse population, multi-culturalism, and other unique characteristics, is a place which could provide a proving ground for many new types of programs, approaches, innovations, et cetera, which could be shared far outside its borders. So often we think in terms of taking in, but seldom, it seems, do Alaskans think in terms of exporting their ideas. As increasing attention is focused on the North for economic reasons, education has an opportunity to display the best of what it is doing to the rest of the education community.
We need to draw upon our own resources, for evaluation and for education in general. On a per capita basis, the population of Alaska has a greater diversity of background and expertise than probably any other state, a resource which is seldom considered or tapped, perhaps because we have looked at education too narrowly in the past. Also, due to its newness in terms of statehood, the formation of regional school districts in rural areas, and recent interest in providing diversity of educational programs, Alaska could be considered ahead in that it has little past experience or tradition to overcome. There are no required curriculums or other barriers which innovators in many other states have had to battle before the real work could begin. In Northern areas in general there seems to be a willingness to seek new solutions and a recognition that new solutions indeed are needed.
BIBLIOGRAPHY
Campbell, D., “Reforms as Experiments,” American
Psychologist, 24:4, April, 1969.
Carroll, L., Alice’s Adventures in Wonderland, 1865, in The Complete Works of Lewis Carroll, The Modern Library, New York, 1936 (p. 71).
Fuchs, E. and Havighurst, R., To Live
on This Earth, Doubleday, New York,
1972.
Hecht, K., “Title I Federal Evaluation: The First Five Years,” Teachers
College Record, 75:1, September, 1973.
Martig, R. and DeBlassie, B., “Self-Concept Comparisons of Anglo and
Indian Children,” Journal
of American Indian Education, May, 1973.
Powers, J., and others, “A Research Note on the Self-Perception of Youth,” American Educational Research Journal, 8:4, November, 1971.
Soares, A. and Soares, L., “Self-Perception of Culturally Disadvantaged
Children,” American Educational
Research Journal, 6:1, January, 1969.
Steele, S., Contemporary Approaches
to Program Evaluation, Capitol Publications,
Washington,
D.C., 1973.
Stufflebeam, D., and others, Educational Evaluation and Decision Making, Peacock, Itasca, Ill., 1971.
Worthen, B., and Sanders, J., Educational
Evaluation: Theory and Practice, C.A.
Jones, Worthington,
Ohio, 1973.