Carmina Fallada Pouget and Anthony Pym (2000). Paper written for a publication connected with the Euroliterature project.
Abstract:
It is commonly argued that distance-education programmes are the future of higher education. This has led to considerable debate, both for and against. Some argue that it is not worth spending huge amounts of money on infrastructure, software, training and technological pedagogy. Reviewing the literature on these questions, we find three basic issues: the case for or against qualitative research, the ‘no significant difference’ debate, and criticisms of serious flaws in research design. It is concluded that more qualitative approaches are necessary and that the object of research should be not just the various media involved in ODL, but specific social sets of goals, tasks and strategies.
Keywords: Open and Distance Learning, no-significant-difference, educational research methodology, electronic tools.
Open and distance learning (ODL) takes place when teachers and students are separated by physical distance and technology is used to bridge the physical gap between them in relation to the third component, namely ‘learning’ (Sánchez-Mesa 1997: 11). Online courses reach wider audiences and give people the chance to study at home at any time. More important, their ‘openness’ is leading to what is called ‘internationalised education’, addressing not culturally and linguistically homogeneous populations but people with different social, cultural, and economic backgrounds. These are all good reasons for pursuing ODL.
However, it is too commonly assumed that the ODL learner is essentially the same person as the traditional face-to-face learner. This may not be the case. In a survey of 667 American working adults conducted by the Opinion Research Corporation, 50% of the respondents thought the great advantage of online courses was the chance to work from home. As Willis (1995: 1) points out, most distance-education students are older and have jobs and families. Their schedules do not allow them to attend classes on campus, so they look for other ways of taking courses that will qualify them for better jobs. Distance-education programmes thus provide a new interactive means of overcoming time and distance to reach these learners. For all these reasons, the users of ODL may quite probably be very different from the young students that populate our bricks-and-mortar university classrooms.
This difference in the learner population may be a key factor when teachers and administrators attempt to assess how effective their courses are (Knobloch 2000: 3). Do ODL courses really meet the students’ needs? Do they work better than face-to-face courses? If distance learning provides no significant advantage in learning achievement, why should capital be invested in technologies that support distance learning?
In order to answer these questions, research is being carried out to test whether students in a virtual classroom can learn as much as students in a traditional classroom. Distance educators are able to choose from a wide range of technological options, but they have to know which ones are best for a particular course. In order to ensure that the technology chosen enhances the quality of education rather than degrades it, it is crucial that there be some quantitative data on how well electronic tools fulfil the students’ needs. Further, it also seems advisable for instructors to develop new skills in instruction strategies, methods of teaching, teacher-student interaction, feedback, and evaluation (Knobloch 2000: 2).
This bundle of questions has given rise to a rich body of research and debate, mainly in the United States and almost always in online form. The following is a review of the literature under four headings: student distress, the ‘no difference’ argument, the ‘significant difference’ reply, and the ‘no more research’ issue. We will attempt to situate our own research projects with respect to each of these points.
Studies of student distress
The issue of student distress in ODL follows from the attention paid to ‘student-centred learning’ in the 1980s. An example of research in this vein is ‘Students’ Distress with a Web-based Distance Education Course’, carried out by Noriko Hara and Rob Kling in 1997 (available online in a 2000 rewrite), which subsequently became the object of some debate in the American context.
Hara and Kling’s study, originally conceived as ethnographic, involved qualitative research on a web-based course on educational technology designed for university students and taught by a recent graduate who had previously taken the same course as a student. The researchers’ original question was ‘how and how well do the students in this course manage their feelings of isolation in a virtual classroom in order to create the sense of a community of learning’. However, as the research proceeded it was found that isolation was not the major problem. Instead, ‘recurrent experiences of other types of distress such as frustration, anxiety and confusion seemed to be pervasive’. Direct observations and interviews with the students showed that their main problems were actually with computer technology, communication breakdowns, and uncertainty about what was expected of them. The teacher was found to be relatively enthusiastic but inexperienced; she tended to assume that the students could handle all the communication problems they encountered. In particular, it was found that email may not be the answer to all communication problems, since the lack of immediate feedback makes it difficult for many students to distinguish between an ironic and an angry teacher. The researchers concluded that more attention had to be paid to training teachers for ODL technology, and that serious work should be done on adjusting the various media to specific educational purposes. Much of the enthusiasm for ODL was, for these researchers, clearly premature.
Hara and Kling’s paper has given rise to various defences of ODL methodology (summarized in recent versions of their paper). It was argued, for example, that one case study cannot be representative, that the teacher involved was exceptionally naïve, and that there was no reason to extend the findings to the whole of ODL.
Our own research in this area does, however, give some credence to Hara and Kling’s position. In a tandem-email project involving Spanish and American learners, some students expressed uneasiness with what they saw as the external obligation to produce email messages (a reaction noted by Hara and Kling). Others were looking for extremely explicit instructions about how to use Blackboard websites and felt frustrated when they were left to discover things by themselves. Yet others were immediately concerned with the way ODL activities were to be assessed for academic credits, and were disappointed when there was no clear answer. We have thus had some students withdraw from our programme or simply ‘go very quiet’.
It can only be concluded that there is a real need for qualitative as well as quantitative research in this area, and that Hara and Kling’s focus on students’ distress should be treated with considerable respect.
The ‘no difference’ argument
Perhaps a more significant position in the literature is occupied by the ‘no difference’ debate, basically concerned with the idea that face-to-face learning and ODL achieve much the same results. Thomas L. Russell (1999) provides a compilation of 355 reports, summaries and papers on this general issue. Summaries of those texts are available online (see References below). For example, a 1928 doctoral dissertation on ‘Correspondence and Class Extension Work in Oklahoma’ found ‘no differences in test scores of college classroom and correspondence study students enrolled in the same subjects’ (cit. Russell 1999). The dates of the texts range from 1928 to 2000, and the very quantity of the research enables Russell to argue that, contrary to the standard research position (everyone concludes that further research is necessary), a significant body of ODL research has already been accumulated.
It is interesting to note that, when the papers are sorted according to whether they are for or against the ‘no significant difference’ position, only in nine of those years (all since 1991) has there been serious support for the argument that there is a ‘significant difference’.
The ‘significant difference’ argument
In some experiments carried out in the 1990s it was found that students learn better online and that there is indeed a significant difference between students who attend classes on campus and those who learn in virtual classrooms. These studies point out that students who attend classes online should first be given some training in how to use the technological tools. They argue that students in virtual classes tend to spend more time working with each other and that this collaboration leads to greater student-student interaction and better results (see, for example, Schutte 1996; Black 1997). They also suggest that some students feel less intimidated in online classes than in face-to-face situations because chat sessions or email exchanges allow relative ‘anonymity’ (McCollum 1997). For such reasons, one might legitimately expect ODL learning to be quite unlike traditional face-to-face experiences.
Both sides of this debate nevertheless seem limited in that they focus on the students’ results, without paying great attention to the kinds of issues raised by Hara and Kling. The arguments are thus mainly over quantitative criteria (what should be assessed? how should it be assessed?), rather than over qualitative in-depth case studies, which in any case require methodologies unsuited to direct comparisons.
A second obvious shortcoming in the ‘no significant difference’ debate is that there is no guarantee that we are dealing with comparable students in the first place. As has been pointed out above, the students who take ODL courses are often older, at work, or taking a degree for non-vocational reasons. This might be contrasted with the traditional university degree, which still retains many social virtues as the ‘world’s greatest youth camp’, as opposed to ODL as offering a highly developed ‘intellectual shopping mall’ (to sum up the general positions of O’Donnell 1998). For example, the Catalan Open University recently calculated that, on average, it would take each student some 15 years to complete an undergraduate degree... which can only mean that most of their ODL students are not actively studying for a degree. This in turn means that methodologies based on comparing the two teaching methods are fatally flawed from the outset, since very different student populations are involved.
The ‘no more research’ argument
While controversy over the effectiveness of ODL still continues, some have found significant cause to argue that the research carried out in this field is largely flawed and that the methodologies and conclusions are inadequate. In a 2000 paper on ‘Measuring Learning Effectiveness: A New Look at No-Significant-Difference Findings’, Joy and Garcia suggest that instructors should carefully interpret the results of the studies that compare different media, since fundamental design flaws abound. In particular, Joy and Garcia detect a common failure to control ‘time on task’ (how long students actually take to complete a task), and to constitute adequate experiment and control groups. Treatment periods (the time taken for the observations) were mostly too short, and it was frequently assumed that all instructors were equally able to teach in all media. In sum, many research projects failed to control major variables and were thus able to find whatever they set out to look for. Joy and Garcia further suggest that, given the range of complex variables involved, it may be wrong to ask which medium of instruction is the best. The more legitimate question would seem to be ‘what combination of instructional strategies and delivery media will best produce the desired learning outcome for the intended audience’.
Such a conclusion would seem to contradict the position of Russell, who basically argues that there is already a significant body of research in this field. Clearly, there can be no question of going back and starting from scratch. Yet it is equally clear that many wrong questions have been asked, and many proposed answers are thus misleading. This, in turn, should provide a strong argument for pursuing research on a revised footing.
Conclusion
From our brief survey of the copious literature available on ODL research, we retain the message that qualitative research is uniquely suited to the complexity of the many variables involved, whereas a purely quantitative focus on assessing student performance is likely to lead into cul-de-sac dialectics.
Our second conclusion is that much must be done to assess the impact of having different students and different instructors involved in ODL activities. Since these human variables are notoriously difficult to control, their diversity must be accepted before it can be countered. In many cases, a period of controlled prior training, for both instructors and learners, may be necessary before research as such can begin.
Third, and in the same vein, researchers must be prepared to look at packages of media and strategies, rather than at isolated ‘pure’ forms of ODL and its negation. This especially concerns focusing not just on the media but on the specific goals to be achieved for each task. To take a simple example, a brainstorming session might seem suited to chat exchanges, whereas an activity aimed at improving formal written language would be better suited to email or even pen-and-paper. In most cases, however, there is much collateral learning going on. The chat session can teach students a great deal about language use (usually more than about whatever content is involved), and the greater reflection time involved in email should improve the quality of the ideas exchanged and thus the conceptual focus of any brainstorming. Either way, it is the combination of medium and goal that must be assessed.
In sum, given the complexity of this domain and the many doubts raised by much previous investigation, the best research projects may be the actual teaching activities themselves, to be assessed interactively by the teachers and students involved. In questions of ODL, there can perhaps be little fundamental difference between doing it and studying it.
References
Hara, Noriko, & Kling, Rob (2000). ‘Students’ Distress with a Web-based Distance Education Course: An Ethnographic Study of Participants’ Experiences’. (http://www.slis.indiana.edu/CSI/wp00-01.html)
Joy, Ernest H. II, & Garcia, Federico (2000). ‘Measuring Learning Effectiveness: A New Look at No-Significant-Difference Findings’. Journal of Asynchronous Learning Networks. (http://www.aln.org/alnweb/journal/Vol4_issue/joygarcia.html)
Knobloch, Neil A. (2000). ‘Distance Learning: Is It Working?’. (http://telr.ohio-state.edu/conference/kickitup/knobloch.html)
McCollum, Kelly (1997). ‘Students Taught Online Outdo Those Taught in Class’. (http://www.teleeducation.nb.ca/media/0297/betteronline.html)
O’Donnell, James (1998). Avatars of the Word: From Papyrus to Cyberspace. Cambridge MA, London: Harvard University Press.
Russell, Thomas L. (1999). The ‘No Significant Difference Phenomenon’. Fifth edition. North Carolina State University Office of Instructional Telecommunications. (http://cuda.teleeducation.nb.ca/nosignificantdifference/)
Sánchez-Mesa, Domingo, ed. (1997). Crosscultural and Linguistic Perspectives on European Open and Distance Learning. Granada: Universidad de Granada.
Schutte, Jerald (1996). ‘The Intellectual Superhighway or Just Another Traffic Jam?’. (http://www.csun.edu/sociology/virexp.html)
Willis, Barry (1995). ‘Strategies for Learning at a Distance’. In Distance Education at a Glance: A Practical Guide. (http://www.uidaho.edu/evo/distglan.html)
Last update: 26 December 2000