Achieving Improved Quality and Validity: Reframing Research and Evaluation of Learning Technologies

Adrian Kirkwood, Linda Price, The Open University, United Kingdom

Abstract

A critical reading of research literature relating to teaching and learning with technology for open, distance and blended education reveals a number of shortcomings in how investigations are conceptualised, conducted and reported. Projects often lack clarity about the nature of the enhancement that technology is intended to bring about. Frequently there is no explicit discussion of assumptions and beliefs that underpin research studies and the approaches used to investigate the educational impact of technologies. This presentation summarises a number of the weaknesses identified in published studies and considers the implications. Some ways in which these limitations could be avoided through a more rigorous approach to undertaking research and evaluation studies are then outlined and discussed.

Keywords: Epistemological models; learning technology; research design; student learning; university teaching; validity.

Introduction

In recent years open and distance education (ODE) has increasingly been equated with digital learning technologies. Through the use of technology, universities in many countries now offer aspects of ODE, whether they are dedicated ODE institutions or campus-based. Although technology uptake has been considerable, it is reasonable to ask why research and evaluation studies of learning technologies have had so little impact on implementation decisions and teaching practices. Has research contributed to building a body of evidence that can inform and provide a firm foundation for subsequent developments in academic practice? Is evidence being generated and reported that can inform the future practices of university teachers and students? Innovation and change should be evidence-informed and we need to ensure that the research and evaluation of learning technology projects produces findings that can inform other practitioners and policy-makers.

While there are concerns about what types of evidence are considered during any implementation decisions (Price & Kirkwood, 2014), misgivings have also been expressed about the lack of a well-established body of evidence and about the quality and validity of many research and evaluation studies. Selwyn (2012) has described this area of scholarship as “notoriously sloppy” and “brimming over with lazily executed ‘investigations’ and standalone case studies, while also tolerating some highly questionable thinking” (p.213). In their literature review of studies on the use of technology in schools, Cox and Marshall (2007) identified many methodological limitations and uncertainties that “point to the need for a thorough, rigorous, and multifaceted approach to analysing the impact of [learning technologies] on students’ learning” (p.60). Clearly there is much room for improvements to be made in the conduct of research and evaluation studies relating to technology and education.

We have reviewed research literature, reports and case studies relating to learning technology innovations at university level and identified many problems with the ways in which studies were conceived and conducted. Consequently, it is difficult to generalise any findings about effectiveness. We identified issues relating to assumptions and beliefs underpinning research studies and the approaches used to investigate the impact of technologies (Kirkwood & Price, 2013a). Frequently, there was a lack of clarity about the nature of the enhancement that technology was intended to bring about and what impact technology would have upon the student learning experience (Kirkwood & Price, 2014). Furthermore, relatively few published accounts of learning technology innovations at university level exhibited a scholarly approach to teaching. Frequently, interventions appeared to be technology-driven rather than being undertaken in response to an identified teaching and/or learning concern (Kirkwood & Price, 2013b).

Here we examine some implications of the shortcomings we identified in published studies. We then suggest ways of avoiding these limitations through taking a more rigorous approach to conceptualising, designing, conducting and reporting research and evaluation studies relating to learning technologies.

How ‘fit for purpose’ are the research methods utilised?

Research methods are not value-free or neutral: they reflect epistemological positions that determine the scope of inquiries and findings. In other words, there are assumptions and limitations associated with all research methods and approaches and these are often implicit or unstated. In reviewing published accounts of research and evaluation studies relating to the use of technologies for education we have identified:

  • A lack of clarity and specificity about what outcomes were expected to be achieved and, therefore, what the focus of the research should have been;
  • Narrow or inappropriate conceptions of what constitutes ‘scientific’ experimentation;
  • Poorly conducted ‘scientific’ experimentation;
  • Insufficient attention to the underlying assumptions and models associated with any method of enquiry;
  • Unwarranted conclusions being drawn from research findings, often based upon inappropriate expectations.

Before discussing these shortcomings further we explore briefly what we mean by ‘rigour’ in such research.

What determines ‘rigour’ in educational research?

We are concerned that much of the published research on learning technologies has been undertaken without a rigorous approach. On the other hand, we are also troubled by the claims made by some researchers that only a highly constrained ‘scientific’ approach has any validity. A scientific enquiry involves the testing of hypotheses about why and/or how things happen. It is as much about framing the right questions as it is about adopting any particular approach or methodology. Testing is carried out by carefully collecting evidence that is both appropriate and sufficient to demonstrate whether or not the expected consequences of the hypothesis have happened. If not, the hypothesis must be rejected and a revised hypothesis subjected to scrutiny in a similar manner.

In recent years there has been considerable debate (particularly in the USA) about the extent to which educational research should be more experimental, ‘evidence-based’ and be directed towards informing policy-makers about ‘what works’. Ostensibly, the linking of research and policy-making for practice might seem fairly innocuous. However, it is necessary to examine the assumptions and theoretical positions that underlie the various claims in order to understand the nature of the controversy and debate.

Some people claim that generalisable results can only be obtained by the adoption of positivist experimental methods and approaches (Cook, 2002; Slavin, 2002; 2003; Torgerson & Torgerson, 2001). Randomised controlled experimentation, often found in medical research, is considered to be the ‘gold standard’ and proposed as the ideal to be emulated in educational research. It is claimed that research on the use of technology for teaching and learning should involve tightly controlled ‘comparative studies’ or other forms of experiment. A cumulative synthesis of results from many such studies can be developed through ‘systematic reviews’ and ‘meta analyses’ (e.g. Tamim et al., 2011). All studies of this kind require the adoption of a strict experimental approach, the use of quantitative data and statistical analysis techniques. They also relate only to certain types of educational innovation or intervention. Consequently, this narrow and prescriptive view of what constitutes ‘scientific’ research excludes consideration of any studies that do not meet strict criteria for inclusion. It also reflects just one view of what constitutes education, a highly contested concept.
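
To make concrete the kind of cumulative synthesis favoured by this position, the sketch below performs a minimal fixed-effect, inverse-variance meta-analysis of standardised effect sizes (Cohen's d) from three hypothetical comparative studies. The figures are invented purely for illustration and are not drawn from any of the studies cited; the sketch also makes plain why only studies reporting quantitative effect sizes can enter such a synthesis.

    # Illustrative sketch only: a fixed-effect, inverse-variance meta-analysis of
    # standardised mean differences (Cohen's d) from hypothetical comparative studies.
    # The (effect size, variance) pairs below are invented for illustration.
    import math

    studies = [(0.20, 0.04), (0.05, 0.02), (0.35, 0.06)]  # hypothetical (d, variance of d)

    weights = [1.0 / var for _, var in studies]           # inverse-variance weights
    pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))             # standard error of the pooled effect
    ci_low, ci_high = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se

    print(f"Pooled effect size d = {pooled_d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")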

Many educators and researchers contest that position for both practical and epistemological reasons (Biesta, 2007; Clegg, 2005; Howe, 2009; Reeves, 2011; Rowbottom & Aiston, 2006; Scriven, 2008; Simons, 2003). We cannot examine those criticisms in detail, but there are many problems to be explored by those aspiring to undertake rigorous experimental research in education. Questions should be asked, such as:

  • How similar are the educational and medical contexts? Is it appropriate to equate teaching and learning processes with the treatment of medical conditions?
  • How feasible and ethical is it to conduct randomised experiments within education contexts, particularly when (for example at university level) the number of participants tends to be fairly low?
  • Exactly what part of the educational process is being investigated when strictly controlled experiments can be conducted?

In respect of research on the use of learning technologies there are further contested aspects. For example, the applicability of the much-used ‘comparative study’ method, which so often leads to ‘no significant difference’ being the reported outcome, is open to question. Can that experimental method be an appropriate way to assess innovations aimed at transforming students’ learning (rather than maintaining the status quo in all respects other than the medium used) (Kirkwood, 2013)? Seeking a suitably rigorous ‘scientific’ approach, many researchers concentrate their attention on the wrong variables (e.g. instructional delivery modes) rather than on meaningful pedagogical dimensions (Reeves, 2011). Other research methods and approaches can be suitably rigorous (ibid.), without invoking narrow experimentation and technological determinism (Oliver, 2011).

Improving quality and validity

Better conceptualisation of the issues underpinning any study (i.e. the goals, aims and rationale of an innovation; the underlying assumptions about ‘teaching’, ‘learning’ and ‘enhancement’) is essential to improve the quality and validity of research. A better understanding can inform and influence the research approach adopted and the data collection methods involved. It will also clarify what interpretations of the findings are appropriate (or not) at the reporting stage. We suggest the following steps to improve the quality and validity of research.

1. Ascertain the aims and rationale of the e-Learning project

Why was a technology innovation initiated and implemented? What goals was it trying to achieve? These need to be understood before deciding on the most appropriate research approach and methods. Determine what precise form of enhancement is sought from this application of learning technology. For example, is the desired enhancement primarily concerned with issues such as:

  • increasing technology use?
  • catering for increased student numbers?
  • improving the circumstances or environment in which educational activities are undertaken?
  • improving teaching practices?
  • improving (quantitatively and/or qualitatively) student learning outcomes?

Researchers must consider how any enhancement will be achieved and demonstrated (e.g. greater use, increased time on task, improved student satisfaction with teaching, quantitative and/or qualitative improvements in learning). If the intended enhancement involves ‘improvements in learning’ how are these conceptualised and how will they be operationalised and demonstrated? These are discussed further in subsequent sections.

2. Determine the pedagogic purpose of the e-learning project

A recent critical review of published research and evaluation studies of actual technology interventions (Kirkwood & Price, 2014) found that the primary purpose of each project could be assigned to one of three categories:

  • Replicating existing teaching practices;
  • Supplementing existing teaching;
  • Transforming teaching and/or learning processes and outcomes.

Occasionally the stated outcomes expected of projects were inappropriate for the type of intervention being made. For example, projects that simply replicated existing teaching had unwarranted expectations about the transformation of student learning. Simply changing the delivery method does not alter the pedagogic function to any significant extent. A lecture remains a lecture (i.e. a primarily transmissive pedagogic method) whether it is delivered ‘live’ in a lecture-room, as a web-cast to be accessed synchronously and/or asynchronously or as an audio or video podcast accessed ‘on demand’.

3. Recognise that technologies and tools can be used for multiple educational purposes

Researchers and practitioners must recognise that most technologies/tools (such as blogs, forums, podcasts and wikis) are not associated with just a single ‘ideal’ role, but can function in a variety of ways for many different educational purposes. The manner in which a technology is used for a particular type of learning activity and anticipated outcomes will reflect the teacher’s epistemology and approach to teaching and learning (e.g. transmissive, constructivist, collaborative, etc.). Students’ use of a technology in that specific context can differ from that experienced in other contextual circumstances. It is insufficient to describe a technology innovation as being about students ‘using a wiki’ or ‘using a discussion forum’. The educational purpose and mode of deployment must also be specified and explored.

4. Determine what benefits are expected to be achieved from a technology intervention and for whom

Try to determine the origins of any learning technology project being investigated. Why was the innovation considered necessary? How was the pre-existing situation to be improved by the use of technology? It is essential to clarify not only the nature of the benefit(s) expected from any project, but also the anticipated beneficiaries. For example, the use of pre-prepared and quality-checked materials and resources available from an institutional VLE or LMS can benefit learners, teachers and institutional managers by ensuring that greater consistency and standardisation is achieved. Some other technology-based interventions seek to achieve novel outcomes, their primary aim being to enable learners to acquire and develop knowledge and skills that are difficult to achieve by other means. Research and evaluation studies of technology projects should ensure that (a) the full range of relevant benefits and beneficiaries is considered and (b) the methods and approaches used are appropriate. It would be insufficient, for example, for measures of satisfaction to be used to determine whether students’ learning had been improved (quantitatively or qualitatively) by a particular intervention. In much the same way, qualitative changes in students’ learning are unlikely to be demonstrated by using quantitative measures alone.

5. If some form of learning or teaching enhancement is expected, how is it conceptualised in relation to the processes and experiences of those involved?

Is learning enhancement conceived primarily in quantitative terms? For example, many studies make use of the scores or grades achieved by students on ‘before’ and ‘after’ tests, often devised specifically for an intervention. Others use the normal assessment requirements of a course, usually comparing the results of one ‘with technology’ cohort of students with another ‘without technology’ group. Such measures indicate that enhancement is conceived in quantitative terms: demonstration of enhancement requires determining whether the technology innovation is associated with more – or less – learning being achieved, through the proxy of test scores. (This, of course, assumes that all other variables are held constant, which can rarely be achieved unless strictly controlled experimental conditions are applied.)
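
As a concrete illustration of this quantitative conception of enhancement, the sketch below compares the test scores of a hypothetical ‘with technology’ cohort with those of a ‘without technology’ cohort using a two-sample t-test. The scores are invented and the SciPy library is assumed to be available; the point is that even a statistically significant difference would only demonstrate a difference in scores, not that the technology itself was responsible for it.

    # Illustrative sketch only: how a quantitatively conceived 'enhancement' is often
    # operationalised, by comparing the test scores of two hypothetical cohorts.
    # Scores are invented; in practice many other variables remain uncontrolled.
    from statistics import mean
    from scipy import stats  # assumes SciPy is available

    with_technology = [62, 71, 58, 77, 69, 74, 66, 80]     # hypothetical cohort scores
    without_technology = [60, 65, 59, 72, 63, 70, 61, 68]

    t_stat, p_value = stats.ttest_ind(with_technology, without_technology, equal_var=False)

    print(f"Mean with technology:    {mean(with_technology):.1f}")
    print(f"Mean without technology: {mean(without_technology):.1f}")
    print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
    # A 'significant' p-value here shows only a difference in scores, not that the
    # technology caused more (or better) learning, nor that other variables were constant.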

Alternatively, an innovation might be seeking to achieve outcomes that are more qualitative than quantitative. For example, designing students’ use of technology for the purpose of:

  • Developing and deepening knowledge and understanding, not simply in terms of knowing more (facts, principles, procedures, etc.), but of knowing differently (more elaborate conceptions and theoretical understanding, etc.);
  • Developing an understanding that knowledge is contested (legitimate differing perspectives) rather than absolute;
  • ‘Learning how to learn’, developing greater self-direction and the capacity – and aspiration – to continue learning throughout life;
  • Developing the capacity to participate in academic discourse and a community of practice related to their discipline or profession;
  • Developing a range of ‘generic’ or ‘life’ skills, e.g. critical thinking, coping with uncertainty, ability to communicate appropriately with different audiences, working effectively with other people, capacity for reflection upon practice, etc.

In such circumstances it is very unlikely that quantitative measures alone could determine whether or not the desired enhancement had been achieved. Some form of qualitative data collection is almost certainly necessary to demonstrate that the desired qualitative improvement had been brought about.

Whether improvements were conceived in quantitative or qualitative terms, it would never be sufficient to simply ask students whether they felt that their learning had been enhanced. Not only does this fail to demonstrate that any enhancement has been achieved, it also unreasonably assumes that each student questioned shares their teacher’s understanding of what that enhancement actually involves. For example, how can a single valid interpretation be deduced from aggregating students’ responses to the questionnaire item “Do you feel that your learning has been enhanced by the use of x”?

Further, for desired outcomes to be achieved the contextual circumstances must be appropriate. Most notably, the assessment methods and criteria must support those outcomes. The assessment for a course or module constitutes the de facto curriculum (Brown, 1997; Havnes, 2004; Rust, 2002; Sambell & McDowell, 1998). Assessment determines what learners do when studying: not only what they attend to (and what they ignore), but also how they go about learning (Kirkwood & Price, 2008). When students are expected to make use of tools such as wikis, blogs, podcasts, etc. within their normal studies, many will not bother to do so unless using the tool contributes in some way to the course assessment requirements. For this reason, intervention projects that focus on technology use that is not within the learners’ normal study context are highly likely to be unrepresentative and will usually produce over-optimistic findings.

6. Establish what evidence is necessary or appropriate to demonstrate the achievement of enhancement(s)

As already mentioned, the type(s) of evidence collected in any research or evaluation study must be appropriate not only for the overall purpose or pedagogic goal of an intervention (steps 1 and 2 above), but also for the anticipated benefits and beneficiaries (steps 3 and 4). Demonstrating improvements in learning, especially those of a qualitative nature, can be difficult and will usually require the use of several data collection methods.

Any research or evaluation study that aims to gather evidence of better student performance or learning improvement must ensure that relevant forms of data are obtained. Kirkpatrick’s four-level evaluation model (Kirkpatrick, 1994) proposes that the effectiveness of education/training is best evaluated at four progressively challenging levels – Reaction, Learning, Behaviour and Results – and stresses that research and evaluation should aim to attend to all four. Students’ reactions might indicate feelings of satisfaction or positive attitudes, but are never sufficient to determine what learners know or what they can do as a result of an intervention. ‘Learning gains’ can only be established by the gathering of appropriate evidence, for example by students demonstrating their understanding or their ability to perform desired tasks or actions.

If course assessment is to be used as one form of data collection for a project, it is vital to ensure that the assessment method(s) used is/are appropriate for the outcomes being sought by the intervention. For example, if a wiki or discussion forum is introduced to encourage students to work collaboratively, the associated course assessment will need to acknowledge and reward group working practices. If assessment remains wholly focused on the outputs of individual students, the ‘backwash effect’ of assessment (Watkins et al., 2005) will lead learners to revert to competitive rather than collaborative ways of working. In other words, the design of assessment is key to developing particular behaviours in students. So, if we want to change student experiences and learning outcomes, we need to change the assessment strategy and related activities accordingly. Research or evaluation studies need to consider such wider contextual factors that can impact on the outcomes of an innovation.

7. Ensure that the findings justify the conclusions drawn and that no unsubstantiated generalisations or recommendations are made

It is important that any conclusions or recommendations resulting from a research or evaluation study should be substantiated by the findings. In our literature review (Kirkwood & Price, 2014) we found many articles in which this was not the case. Favourable reactions from learners (particularly if they are only in response to a multiple-choice question) should not be presented as the sole source of evidence for learning improvement. In situations where technology has been used to supplement existing teaching, any enhanced performance associated with a project could simply result from the fact that learners had received additional teaching resources or had spent more time on study activities. Similarly, where teaching has been altered significantly to accommodate the use of technology, researchers must be aware that because changes have been made to several variables it is inappropriate to claim that just one element (i.e. technology) has been responsible for bringing about any change in outcomes.

Over-generalisation is also of concern. It cannot be assumed that findings from research undertaken in one particular educational context can necessarily be applied in any other context. Often accounts of research or evaluation studies provide insufficient details about the context, the design of learning activities, the precise use made of technology (most can be used for a variety of purposes), the expected outcomes and the means by which learners were assessed for readers to be able to determine the extent to which findings might be of value elsewhere (Thorpe, 2008). Contextual differences reflect a combination of factors that include, among others, the beliefs and practices of individual teachers, the characteristics of students, the mode of education involved and the ethos, norms and culture of particular departments and institutions (Lindblom-Ylänne et al., 2006). Often the critical importance of contextual variability is underestimated in relation to how teaching and learning with technology actually takes place.

8. Maintain an appropriate perspective: clearly differentiate the complexities of the ‘here and now’ from the idealised ‘potential’ of any new technology

Research and evaluation studies need to be open to forms of inquiry that are appropriate for the particular educational context and innovation being investigated. All aspects of the educational transaction need to be considered, not just the technology being utilised for teaching and learning. There are two major drawbacks when technology itself is taken as the focus of an investigation.

First, there is a tendency to consider the technology as the agent of any changes observed, rather than the agent being the design of teaching/learning activities and how use is made of the technology. A technology might seem to be highly effective in helping achieve the desired goals in one particular context where students with a certain set of characteristics undertook specific learning tasks. It does not follow that positive outcomes will necessarily arise when the same technology is used by different types of student when engaged with learning tasks of a dissimilar nature. The key is how teachers design learning activities appropriate for their students to enable them to achieve particular educational outcomes or goals. There are always dangers involved in trying to generalise from one specific context to another.

Second, it is always important to consider what innovative role any technology is playing. Is it providing a new means of delivering existing pedagogy (replicating or supplementing existing teaching), or does it contribute to new pedagogical approaches and changes in what and how students learn (transforming the learning experience)? If the former is the case, then it is essential to determine what is already known: the findings from relevant studies of delivery technologies should be considered. Often teachers and researchers are so enthralled by the potential of new technologies that their sense of perspective is impaired. Many investigations fail to take account of and build upon lessons learned from research into the use of educational media and technologies conducted over previous decades, much of which remains highly relevant.

Conclusions

We contend that research and evaluation studies of learning technologies should be conducted with greater rigour and validity. However, it is not a matter of simply following prescriptions about adopting specified research methods or approaches to achieve ‘scientific’ rigour. It is more about proceeding in a scholarly way, investigating the aims and goals of an intervention in order to pursue all relevant aspects of the educational situation and circumstances. It is essential that explicit consideration be given to the assumptions and epistemological models underpinning both the approach to teaching and learning being adopted and the anticipated research methods. The investigation, including any literature review to determine what is already known, should not be focused primarily on the specific technology being used, but on all relevant aspects of the educational context. All conclusions and recommendations must be supported by evidence and not exaggerated in their claims for applicability in other contexts.

Following the guidelines presented here should help research and evaluation studies achieve higher quality and validity, and produce results and conclusions that avoid many of the pitfalls and shortcomings that we – and many others – have identified. In this way, the potential for making valid judgements about impact can be realised.

References

  1. Biesta, G. (2007). Why “what works” won’t work: Evidence-based practice and the democratic deficit in educational research. In Educational Theory, 57(1), (pp. 1-22).
  2. Brown, G. (1997). Assessing Student Learning in Higher Education. London, Routledge.
  3. Clegg, S. (2005). Evidence-based practice in educational research: A critical realist critique of systematic review. In British Journal of Sociology of Education, 26, (pp. 415–428).
  4. Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. In Educational Evaluation and Policy Analysis, 24(3), (pp. 175-199).
  5. Cox, M. J. and Marshall, G. (2007). Effects of ICT: Do we know what we should know? In Education and Information Technologies, 12(2), (pp. 59-70).
  6. Havnes, A. (2004). Examination and learning: an activity-theoretical analysis of the relationship between assessment and educational practice. In Assessment & Evaluation in Higher Education, 29(2), (pp. 159-176).
  7. Howe, K. R. (2009). Positivist dogmas, rhetoric, and the education science question. In Educational Researcher, 38(6), (pp. 428-440).
  8. Kirkpatrick, D.L. (1994). Evaluating training programs. San Francisco: Berrett-Koehler Publishers.
  9. Kirkwood, A. (2013). ‘Media are “mere vehicles” – Under-substantiated claims from comparative studies. A review of Richard E. Clark (ed.) Learning from Media: Arguments, Analysis, and Evidence, 2nd Edition’. In Open Learning, 28(2), (pp. 153-163).
  10. Kirkwood, A. and Price, L. (2008). Assessment and student learning – a fundamental relationship and the role of information and communication technologies. In Open Learning, 23(1), (pp. 5-16).
  11. Kirkwood, A. and Price, L. (2013a). Examining some assumptions and limitations of research on the effects of emerging technologies for teaching and learning in higher education. In British Journal of Educational Technology, 44(4), (pp. 536-543).
  12. Kirkwood, A. and Price, L. (2013b). Missing: Evidence of a scholarly approach to teaching and learning with technology in higher education. In Teaching in Higher Education, 18(3), (pp. 327-337).
  13. Kirkwood, A. and Price, L. (2014). Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review. In Learning, Media and Technology. 39(1), (pp. 6-36).
  14. Lindblom-Ylänne, S.; Trigwell, K.; Nevgi, A.; Ashwin, P. (2006). How approaches to teaching are affected by discipline and teaching context. In Studies in Higher Education, 31(3), (pp. 285-298).
  15. Oliver, M. (2011). Technological determinism in educational technology research: some alternative ways of thinking about the relationship between learning and technology. In Journal of Computer Assisted Learning, 27(5), (pp. 373-384).
  16. Price, L. and Kirkwood, A. (2014). Using technology for teaching and learning in higher education: A critical review of the role of evidence in informing practice. In Higher Education Research & Development, 33(3), (pp. 549-564).
  17. Reeves, T. C. (2011). Can educational research be both rigorous and relevant? In Educational Designer, 1(4). Available from: http://www.educationaldesigner.org/ed/volume1/issue4/article13
  18. Rowbottom, D.P. and Aiston, S.J. (2006). The myth of ‘scientific method’ in contemporary educational research. In Journal of Philosophy of Education, 40(2), (pp. 137-156).
  19. Rust, C. (2002). The impact of assessment on student learning. In Active Learning in Higher Education, 3(2), (pp. 145-158).
  20. Sambell, K. and McDowell, L. (1998). The construction of the hidden curriculum: messages and meanings in the assessment of student learning. In Assessment and Evaluation in Higher Education, 23(4), (pp. 391-402).
  21. Scriven, M. (2008). A summative evaluation of RCT methodology: and an alternative approach to causal research. In Journal of MultiDisciplinary Education, 5(9), (pp. 11-24).
  22. Simons, H. (2003). Evidence-based practice: Panacea or over promise? In Research Papers in Education, 18(4), (pp. 303-311).
  23. Selwyn, N. (2012). Editorial: Ten suggestions for improving academic research in education and technology. In Learning, Media and Technology, 37(3), (pp. 213-219).
  24. Slavin, R.E. (2002). Evidence-based education policies: Transforming educational practices and research. In Educational Researcher, 31(7), (pp. 15-21).
  25. Slavin, R.E. (2003). A reader’s guide to scientifically based research. In Educational Leadership, 60(5), (pp. 12-17).
  26. Tamim, R.M.; Bernard, R.M.; Borokhovski, E.; Abrami, P.C. and Schmid, R.F. (2011). What forty years of research says about the impact of technology on learning: a second-order meta-analysis and validation study. In Review of Educational Research, 81(1), (pp. 4-28).
  27. Thorpe, M. (2008). Effective online interaction: Mapping course design to bridge from research to practice. In Australasian Journal of Educational Technology, 24(1), (pp. 57-72).
  28. Torgerson, C.J. and Torgerson, D.J. (2001). The need for randomised controlled trials in education research. In British Journal of Educational Studies, 49(3), (pp. 316-328).
  29. Watkins, D.; Dahlin, B. and Ekholm, M. (2005). Awareness of the backwash effect of assessment: A phenomenographic study of the views of Hong Kong and Swedish lecturers. In Instructional Science, 33(4), (pp. 283-309).
