Issues of Interface

Karen Swan (kswan@kent.edu)
Research Center for Educational Technology
Kent State University, Kent OH 44242
This paper was shortlisted for the 'Best paper award' at the
Third EDEN Research Workshop, Oldenburg, Germany, 2004

Abstract

Although online learning no longer entails the kinds of interface barriers it once did, recent research is making very clear that interactions with interfaces significantly affect other interactions in online courses. This paper reviews the current literature on online learning to see what it can tell us about the mediating effects of differing interfaces on the three kinds of interactions described by Michael Moore [1] - interactions with course content, interactions with instructors, and interactions with classmates. The results point to the need for further serious research in this area.

Keywords

online learning, interactions, interface issues

Introduction

In 1989, Michael Moore [1] identified three kinds of interactions that supported online learning — interaction with content, interaction with instructors, and interaction among peers — which have proved useful constructs for thinking about online learning up to the present. Not long thereafter, Hillman, Willis, and Gunawardena [2] noted that new and emergent technologies had, at least temporarily, created a fourth type of interaction, learner-interface interaction, which they defined as the interaction that takes place between a student and the technology used to mediate a particular distance education process. Interface thus refers to specific technologies, platforms, applications, and course templates students must use to interact with course content, instructors and classmates (Figure 1).

Ten years later, interfaces no longer represent the kinds of barriers to interaction they once did, but it is becoming increasingly clear that interactions with interfaces significantly afford and/or constrain the quality and quantity of the other three interactions [3]. Swan, Bowman, Vargas, Schweig and Holmes [4], for example, developed a user-response model of the ways in which people make sense of electronic texts, based on rich observations of students searching them for information. Their grounded research found that, unlike printed texts, which most readers interpret at a single level, users engage electronic texts at three levels, each of which affects meaning making: the content or page level, the design or website level, and the platform and browser level. These last two levels represent issues of interface. Students not only needed to navigate and make sense of each of these levels before they could process content, but how they interacted with platforms/browsers and the structure of particular websites affected the meanings they eventually developed from the content of those websites.

This paper will review educational research and explore issues concerning students' interaction with course interfaces and the ways in which these affect student learning. It will do so in terms of the mediating effects of course interfaces on the three types of interactions described by Moore [1]: interaction with content, interaction with instructors, and interaction among classmates. In so doing, it will provide good evidence of the need for significant new research on issues of interface.

Figure 1: Interaction with Interface Conceptualized
Swan, 2003

Interface Issues and Interaction with Content

Interaction with content refers to the learners' interaction with the knowledge, skills and attitudes being studied. In general, this has to do with the learners' interaction with the course materials. It is thus primarily concerned with course design factors. These, of course, include course interfaces. Measurement of online content learning has been undertaken in terms of performance (course grades, exams, written assignments, etc.) and perceptions of learning by students and faculty. Most of this research has involved comparisons of learning online with learning in traditional classrooms, and most of that has found no significant differences in learning outcomes between the two modes of learning [5, 6, 7].

Some of this research, however, has looked at specific interface issues. For example, pioneering research on online learning demonstrated that the structure [8], transparency [9], and communication potential [10] of course designs heavily influence students' learning. Swan, Shea, Fredericksen, Pickett, Pelz and Maher [11] examined the relationships between course design factors and students' perceived learning in 73 different online courses and found significant correlations between the clarity, consistency, and simplicity of course designs and students' perceived learning. These findings suggest both a constraint of asynchronous online environments and a way of ameliorating that constraint. Because real-time negotiation of meaning is impossible among instructors and students separated by space and time, clarity of meaning is more important in online classes. Consistent, transparent, and simple course structures add to such clarity, and they also ensure that learners only have to adapt to course structures once.

A growing focus in research on the effects of interface and interface design on online student learning involves the use of a variety of media to deliver course content. Researchers, designers and practitioners are beginning to ask what combinations of text, pictures, animations, audio and video best support student learning. Richard Mayer [12] has been studying these issues for the past fifteen years in experimental studies of students' understanding of how scientific systems work. In over 20 separate investigations, Mayer and his colleagues meticulously tested the multimedia conditions which resulted in the greatest transfer of learning from differing presentations of scientific explanations.

For example, they randomly assigned students to interact with two versions of a computer-based explanation of the phenomenon of lightning, one in which animations were accompanied by textual explanations and one in which the same animations were accompanied by audio narrations. Student performances on tests of their ability to transfer their understanding of lightning were compared between groups, and significant differences favoring animation with narration were found. Mayer made similar comparisons of differing combinations of media and variations in multimedia presentations and replicated his results multiple times. Findings from this work are summarized in Table 1, which shows both research results (research effect) and practical applications of the findings (design principle).
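To make the logic of such comparisons concrete, the sketch below runs an independent-samples t test on transfer scores from two groups. It is only an illustration of the kind of between-groups analysis involved; the scores, group sizes, and variable names are hypothetical placeholders, not Mayer's data or code.

```python
# Illustrative between-groups comparison of transfer-test scores, in the
# style of Mayer's animation+narration vs. animation+text experiments.
# All numbers below are hypothetical placeholders, not Mayer's data.
import math
import statistics
from scipy import stats

narration = [12, 15, 14, 16, 13, 15, 14, 17]  # animation + spoken narration
text      = [10, 11, 13, 9, 12, 11, 10, 12]   # animation + on-screen text

t, p = stats.ttest_ind(narration, text)       # independent-samples t test

# Cohen's d from the pooled standard deviation, as a rough effect size
n1, n2 = len(narration), len(text)
pooled_sd = math.sqrt(((n1 - 1) * statistics.stdev(narration) ** 2 +
                       (n2 - 1) * statistics.stdev(text) ** 2) / (n1 + n2 - 2))
d = (statistics.mean(narration) - statistics.mean(text)) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")  # p < .05 would favor narration
```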

Chi-Hui Lin [13] reports similar results from similar experimental studies of the effects of differing multimedia presentations on student learning of mathematical concepts. In particular, Lin found that students given animated representations of concepts outperformed those shown video. This work also provides an intriguing glimpse into interactions between interface design and students' epistemological beliefs, and into their effects on attitudes toward learning. As online courses and course development packages incorporate an ever greater variety of media, further research of this kind seems particularly useful and timely. In particular, research on the effects of differing media representations on different kinds of learning, for example support for collaborative learning or the learning of differing kinds of knowledge, might be particularly useful.

 
RESEARCH EFFECT → DESIGN PRINCIPLE (when designing multimedia, ...)

MODALITY: better transfer from animation and narration than from animation and text → present explanations of animations in spoken form.
CONTIGUITY: better transfer when narration and animation are presented simultaneously rather than sequentially → present narration and animation simultaneously.
MULTIMEDIA: better transfer from animation and narration than from narration alone → provide narration for animations.
PERSONALIZATION: better transfer when narration is conversational rather than formal → present narration in a conversational style.
COHERENCE: better transfer when irrelevant video, narration, and/or sounds are excluded → avoid extraneous video and audio.
REDUNDANCY: better transfer from animation and narration than from animation, narration and on-screen text → do not add text to presentations involving animations with narration.
PRETRAINING: better transfer when explanations of system components precede rather than follow a narrated animation → begin explanations with concise descriptions of system components.
SIGNALING: better transfer when different parts of a narration are signaled → include signaling that identifies the organization of the presentation.
PACING: better transfer when the pace of presentation is learner controlled → allow the learner to control the pace of the presentation.

Table 1: Effects and Principles of Multimedia Design
adapted from Mayer, 2001

Other research on the effects of interaction with online interfaces involves the design of particular interfaces. For example, Chang, Sung and Chiou [14] investigated the efficacy of a hierarchical hyper-concept map (HHCM) interface, compared with a simple hierarchical navigation system and a linear course presentation, for supporting junior high students' learning of computer concepts. Dependent measures included a test of computer hardware achievement (CHAT) and logs of the time students spent using online materials. The HHCM group scored significantly better than the linear group on performance measures, and took significantly less time reading the materials than students in either the linear group or the hierarchical navigation group. The authors thus maintained that students learned faster and slightly better from the HHCM interface.

Similarly, Gutl and Pivec [15] explored the efficacy of a Virtual Tutor (VT) application for scaffolding the problem solving of undergraduate computer science students. The VT combined capabilities for multimedia representation with an expert system to provide guided support for solving computer science design problems. The authors compared the problem solutions of students randomly assigned to work either with the VT or using traditional print resources. They found that all the VT students provided correct solutions to a transfer problem, whereas two of the students who worked with print materials provided incorrect solutions and two provided incomplete solutions. In addition, students working with the print materials experienced time problems, while students working with the VT did not. The authors argue that the results show that students solved problems better and faster using the Virtual Tutor.

The results of these studies of particular interfaces may suggest ways in which interfaces can be designed to better support student learning. It is also important to note that the interfaces advocated in both studies exploit the unique capabilities of computing environments. Further research of this sort will add to our knowledge of how we can better design course interfaces to support learning and is certainly indicated.

Interface Issues and Interaction with Instructors

A second type of interaction in online environments occurs between learners and their instructors. In any educational setting, the instructor serves as an expert who plans instruction to stimulate students' interests, motivates their participation in the learning process, and facilitates their learning. The relationship between instructor/student interactions and learning outcomes has been well documented in traditional classrooms. A similar relationship has been found online [11, 16, 17, 18]. Recent work by Shea [19], in particular, has demonstrated significant relationships between a variety of measures of teaching presence and perceived learning in a very large and diverse population.

Two studies that explored both positive and negative influences of interfaces on learning also deserve mention. Both compared instructor-provided feedback on assignments with web-based model-comparison feedback. Riccomini [20] investigated pre-service education students' application of behavior-analysis and instructional-analysis skills on criterion tasks after they either received instructor-delivered corrective feedback on a similar task or were directed to a web-based exemplary model that they then had to compare with their own solutions to the task. Riccomini used an experimental, counter-balanced design in which students were randomly assigned to groups that received one type of feedback on one of the tasks and the other type of feedback on the other. He found that students receiving instructor-delivered corrective feedback significantly outscored students using web-based model-comparison feedback on both tasks.

Researchers at Michigan State University [21] made a similar comparison of instructor-delivered and web-based assignment feedback. They compared the performances of undergraduate physics students using an instructor-supported discussion forum for help with assignments with the performances of students using a third-party website where assignment solutions were given. This is an interesting study because it examines learning from real-world, web-based applications. The Michigan State physics department created a program to generate individualized homework assignments. In response, former students created a web application that generated answers, with explanations, to those problems. The study compared the performance of students using this third-party site for help with their homework with the performance of students who took advantage of an instructor-supported discussion site where they could get help on their homework from graduate assistants (GAs). The researchers further distinguished between students who posted to the instructor-supported discussion and students who just read those discussions. Using correlational analyses, they examined the relationships between the use of each of the online homework support sites and students' grades on homework, quizzes, and midterm and final exams, with the effects of aptitude (operationalized as composite ACT scores) partialled out. They found positive correlations between posting to the sanctioned (instructor-supported) site and grades on homework, midterm, and final exams, and between visiting that site and grades on midterm and final exams. Interestingly, there was a negative correlation between just visiting the sanctioned site and homework grades. On the other hand, there was a positive correlation between using the third-party site and homework scores, but negative correlations between using that site and grades on quizzes, midterms, and final exams.
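The partialling step the researchers describe can be illustrated with the standard first-order partial correlation, r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)), which removes the shared influence of aptitude z from the correlation between site use x and grades y. The sketch below only illustrates that technique; the data and variable names are hypothetical, not the Michigan State dataset or code.

```python
# Minimal sketch of a first-order partial correlation: the association between
# site use (x) and exam grade (y) with aptitude (z, e.g. composite ACT score)
# partialled out. All data below are hypothetical placeholders.
import math
import statistics

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def partial_r(x, y, z):
    """Correlation of x and y with the linear influence of z removed."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

posts = [0, 2, 5, 1, 7, 3, 4, 6]           # postings to the moderated site
exam  = [55, 62, 78, 58, 85, 70, 72, 80]   # final exam scores
act   = [21, 24, 26, 22, 28, 25, 24, 27]   # composite ACT scores

print(f"partial r = {partial_r(posts, exam, act):.2f}")
```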

The results of this and the previous study indicate that web-based explanations of homework may not support conceptual learning without instructor interaction and feedback, at least with undergraduate populations. They thus may argue against certain kinds of automated interfaces. Further research in this area could prove fruitful.

Interface Issues and Interaction among Classmates

Socio-cognitive theories of learning maintain that all learning is social in nature and that knowledge is constructed through social interactions. Online education seems particularly well situated to support such social learning because of the unique nature of asynchronous course discussions. Many researchers have found that students perceive online discussion as more equitable and more democratic than traditional classroom discourse [22]. In addition, because it is asynchronous, online discussion affords participants the opportunity to reflect on their classmates' contributions while creating their own, and on their own writing before posting it. This tends to create a certain mindfulness and a culture of reflection in online courses [23, 24, 25].

A great deal has been written on online discourse. Some of this work looks anecdotally at media issues, examining, for example, the best uses of synchronous vs. asynchronous discussion [26]. This work seems to suggest important issues for serious further investigation. Most of the research, however, examines asynchronous online discussion and implicitly views it as uninfluenced by interface issues other than its asynchronous nature. Particularly compelling research contesting such implicit assumptions can be found in Jim Hewitt's [27, 28] studies of patterns of development in online discussions. Hewitt's work questions the implicit assumption many of us hold that discussion threads develop solely according to course requirements, students' needs and interests, and instructor facilitation.

In a large-scale analysis of the online discussions of 92 graduate students enrolled in five asynchronous online courses, Hewitt [27] examined patterns of interactivity in 673 multi-message threads. For example, in a four-message thread, he identified six possible patterns across time (Figure 2; a small enumeration sketch in code follows the list):

  A. a posting and three responses to that posting (depth of 2)
  B. a posting, two responses to it, and a response to the second response (depth of 3)
  C. a posting, a response, and two responses to that response (depth of 3)
  D. a posting, two responses to it, and a response to the first response before the second is posted (depth of 3)
  E. a posting, two responses to it, and a response to the first response after the second response was posted (depth of 3)
  F. a posting, a response to it, a response to the response, and a response to it (depth of 4)
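These patterns can be made concrete by encoding each thread as parent pointers and computing its depth. The following minimal Python sketch is an illustration, not code from Hewitt's study; the parent-pointer encoding and the A-F labels simply mirror the list above.

```python
# Hewitt's six growth patterns for a four-message thread, encoded as parent
# pointers: parent[i] is the index of the message that message i replies to,
# with messages indexed in posting order and message 0 the opening post.
PATTERNS = {
    "A": [None, 0, 0, 0],  # three direct responses to the opening post
    "B": [None, 0, 0, 2],  # third response answers the second response
    "C": [None, 0, 1, 1],  # two responses to the first response
    "D": [None, 0, 1, 0],  # reply to the first response arrives before the second response
    "E": [None, 0, 0, 1],  # reply to the first response arrives after the second response
    "F": [None, 0, 1, 2],  # each message answers the one before it (elongated)
}

def depth(parent):
    """Length of the longest reply chain, counted in messages from the opening post."""
    def chain(i):
        return 1 if parent[i] is None else 1 + chain(parent[i])
    return max(chain(i) for i in range(len(parent)))

for label, parent in PATTERNS.items():
    print(label, "depth =", depth(parent))  # A -> 2, B through E -> 3, F -> 4
```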

Figure 2: Frequencies of Patterns of Interaction in Four Message Threads
adapted from Hewitt, 2003

Hewitt examined the frequencies of these various patterns. All things being equal, one would expect to find an equal distribution of them. Instead, out of 342 four-message threads, Hewitt found that 134 followed the elongated pattern (F), whereas only 32 followed the truncated pattern (A), and a mere 23 the late-response pattern (E). The remaining patterns with a depth of three (B, C, D) each occurred 51 or 52 times. Hewitt attributes these disparities in the frequencies of patterns of interaction to students' habits of participation in online discussions, habits he maintains are encouraged by discussion interfaces that flag unread notes. Indeed, when he investigated user logs, he found that most students read messages before they posted messages (97.6%), read only messages flagged as unread (82%), and tended to respond to messages that were less than 48 hours old (80%).
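A goodness-of-fit test against a uniform distribution makes the size of this skew concrete. The sketch below is an illustration, not an analysis from Hewitt's paper; it treats the counts for patterns B, C, and D as 51 each, which is consistent with the reported total of 342 threads.

```python
# Goodness-of-fit check: are the six four-message patterns equally likely?
# Observed counts follow Hewitt's reported figures; B, C, and D are taken
# as 51 each, consistent with the reported total of 342 threads.
from scipy import stats

observed = [32, 51, 51, 51, 23, 134]      # patterns A..F
expected = [sum(observed) / 6] * 6        # 57 each under a uniform model

chi2, p = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.1f}, p = {p:.2e}")  # a tiny p rejects the uniform model
```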

In a follow-up study [28], Hewitt found that these patterns of interaction could be replicated using a Monte Carlo simulation based on nothing more than typical rates of reading and posting messages and a rule stating that only messages flagged as unread would be responded to. He thus concludes that patterns of interactivity in online discussion are governed as much by which notes are flagged unread at any particular time as by course requirements, students' needs and interests, or perhaps even instructor facilitation. This practice, clearly resulting from interfaces that flag messages as unread and display only a single message at a time, favors elongated threads and discussions he characterizes as growing, like forest fires, at the edges. The problem with this, he observes, is that potentially interesting and important threads are unintentionally abandoned, and unintentional changes in topic occur, resulting in disjointedness and discussions that are often peripheral to course content. This issue clearly deserves further investigation, especially investigation of alternative representations of online discussion.
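Hewitt reports the simulation's results rather than its code, but the gist of such a Monte Carlo is straightforward to sketch. The version below is a simplified illustration: the probability that a responder visits frequently enough that only the newest note is unread is an assumed placeholder, not a parameter from Hewitt's study.

```python
# Simplified Monte Carlo in the spirit of Hewitt's simulation: grow
# four-message threads in which responders reply only to "unread" notes.
import random
from collections import Counter

# Parent tuples (parents of messages 1-3) mapped to the pattern labels above.
PATTERN_LABELS = {(0, 0, 0): "A", (0, 0, 2): "B", (0, 1, 1): "C",
                  (0, 1, 0): "D", (0, 0, 1): "E", (0, 1, 2): "F"}

def grow_thread(p_prompt=0.7, rng=random):
    """Return the parent tuple of one simulated four-message thread.

    p_prompt is an assumed placeholder: the chance a responder visits so
    often that only the newest note is flagged unread.
    """
    parents = [0]                # message 1 always answers the opening post (message 0)
    for i in (2, 3):             # messages 2 and 3, in posting order
        if rng.random() < p_prompt:
            parents.append(i - 1)             # frequent visitor: replies to the newest note
        else:
            parents.append(rng.randrange(i))  # long-absent visitor: any earlier note may be unread
    return tuple(parents)

counts = Counter(PATTERN_LABELS[grow_thread()] for _ in range(10_000))
print(counts.most_common())  # F dominates while A and E are rare, echoing the observed skew
```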

Conclusions

Hewitt's work clearly demonstrates the impact course interfaces can have on learning online. There has long been a dispute in the field of educational technology over whether media matter, epitomized by the debate between Richard Clark [29] and Robert Kozma [30] in the late 1980s and 1990s. Clark argued that findings of significant differences between technology-based and traditional interventions resulted not from media effects but rather from better designed technology-based instruction. Media, he maintained, were like trucks: delivery vehicles and no more. What mattered, according to Clark, was the quality of instruction, not how it was delivered. Kozma challenged Clark's position. He conceded the importance of instructional design but argued that media mattered too. All media, Kozma argued, are particularly supportive of specific kinds of instruction and less supportive of others. Media afford and constrain different kinds of learning simply because they mediate; they necessarily stand in between instructional interactions. In online learning, the primary vehicle of that mediation is the course interface. The research reviewed in this paper quite clearly shows that interfaces matter. The notion surely deserves serious, specific, and rigorous investigation.

References

  1. MOORE, M.G. (1989) Three types of interaction, American Journal of Distance Education/3/2, 1-6.
  2. HILLMAN, D.C., WILLIS, D.J. & GUNAWARDENA, C.N. (1994) Learner-interface interaction in distance education: An extension of contemporary models and strategies for practitioners, The American Journal of Distance Education/8/2, 30-42.
  3. SWAN, K. (2003) Learning effectiveness: what the research tells us. In J. Bourne & J. C. Moore (Eds) Elements of Quality Online Education, Practice and Direction. Needham, MA: Sloan Center for Online Education, 13-45.
  4. SWAN, K., BOWMAN, J., VARGAS, J., SCHWEIG, S. & HOLMES, A. (1998/99) Reading the WWW: Making sense on the information superhighway. Journal of Educational Technology Systems/27/2, 95-104.
  5. RUSSELL, T.L. (1999) The no significant difference phenomenon, Montgomery, AL: IDEC. http://www.nosignificantdifference.org/
  6. BARRY, M. & RUNYAN, G. (1995) A review of distance-learning studies in the U.S. military, The American Journal of Distance Education/9/3, 37-47.
  7. HILTZ, S.R., ZHANG, Y. & TUROFF, M. (2002) Studies of effectiveness of learning networks, Elements of Quality Online Education: Volume 3 in the Sloan-C™ Series.
  8. ROMISZOWSKI, A.J. & CHENG, E. (1992) Hypertext's contribution to computer-mediated communication: in search of an instructional model, In Giardina, M. (Ed.) Interactive Multimedia Learning Environments. Berlin: Springer.
  9. EASTMOND, D.V. (1995) Alone but Together: Adult Distance Study through Computer Conferencing. Cresskill, NJ: Hampton Press.
  10. IRANI, T. (1998) Communication potential, information richness and attitude: A study of computer mediated communication in the ALN classroom, ALN Magazine/2/1.
  11. SWAN, K., SHEA, P., FREDERICKSEN, E., PICKETT, A., PELZ, W. & MAHER, G. (2000) Building knowledge building communities: consistency, contact and communication in the virtual classroom, Journal of Educational Computing Research/23/4, 389-413.
  12. MAYER, R.E. (2001) Multimedia Learning. New York: Cambridge University Press.
  13. LIN, C-H. (2002). Effects of computer graphics types and epistemological beliefs on students' learning of mathematical concepts. Journal of Educational Computing Research, 27, 3, 265-274.
  14. CHANG, K-E., SUNG, Y-T & CHIOU, S-K. (2002) Use of hierarchical hyper-concept maps in web-based courses, Journal of Educational Computing Research/27/4, 335-353.
  15. GUTL, C. & PIVEC, M. (2003) A multimedia knowledge module virtual tutor fosters interactive learning, Journal of Interactive Learning Research/14/2, 231-258.
  16. PICCIANO, A.G. (1998) Developing an asynchronous course model at a large, urban university, Journal of Asynchronous Learning Networks/2/1.
  17. JIANG, M. & TING, E. (2000) A study of factors influencing students' perceived learning in a web-based course environment, International Journal of Educational Telecommunications/6/4, 317-338.
  18. RICHARDSON, J.C. & SWAN, K. (2003) Examining social presence in online courses in relation to students' perceived learning and satisfaction, Journal of Asynchronous Learning Networks/7/1, 68-88.
  19. SHEA, P.J., PICKETT, A.M. & PELZ, W.E. (2003) A follow-up investigation of "teaching presence" in the SUNY Learning Network, Journal of Asynchronous Learning Networks/7/2, 61-80. http://www.sloan-c.org/publications/jaln/v7n2/v7n2_shea.asp
  20. RICCOMINI, P. (2002) The comparative effectiveness of two forms of feedback: web-based model comparison and instructor delivered feedback, Journal of Educational Computing Research/27/3, 213-228.
  21. KASHY, D.A., ALBERTELLI, G.H., BAUER, W., KASHY, E. & THOENNESSEN, M. (2003) Influence of non-moderated and moderated discussion sites on student success, Journal of Asynchronous Learning Networks/7/1, 31-36. http://www.sloan-c.org/publications/jaln/v7n1/v7n1_kashy.asp
  22. HARASIM, L. (1990) On-line Education: Perspectives on a New Environment. New York: Praeger.
  23. HILTZ, S.R. (1994) The Virtual Classroom: Learning without Limits via Computer Networks. Norwood, NJ: Ablex.
  24. POOLE, D.M. (2000) Student participation in a discussion-oriented online course: a case study, Journal of Research on Computing in Education/33/2, 162-177.
  25. GARRISON, D.R. (2003) Cognitive presence for effective asynchronous online learning: the role of reflective inquiry, self-direction and metacognition, In J. Bourne & J. C. Moore (Eds) Elements of Quality Online Education, Practice and Direction. Needham, MA: Sloan Center for Online Education, 47-58.
  26. TRAININGZONE (2001) Synchronous vs. Asynchronous Learning. IT-Analysis.Com. http://www.it-analysis.com/article.php?articleid=2236
  27. HEWITT, J. (2003) How habitual online practices affect the development of asynchronous discussion threads, Journal of Educational Computing Research/28/1, 31-45.
  28. HEWITT, J. (2003) Toward an understanding of how threads die in asynchronous computer conferences, Paper presented at the annual meeting of the American Educational Research Association, Chicago.
  29. CLARK, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53 (4), 445-459.
  30. KOZMA, R. B. (1991). Learning with media. Review of Educational Research, 61, 179-211.