Feedback on Academic Essay Writing through pre-Emptive Hints: Moving Towards “Advice for Action”

Denise Whitelock, Alison Twiner, John T. E. Richardson, The Open University, United Kingdom, Debora Field, Stephen Pulman, University of Oxford, United Kingdom

Abstract

This paper adopts an “advice for action” approach to feedback in educational practice: addressing how the provision of “hints” to participants before they write academic essays can support their understanding and performance in essay-writing tasks. We explored differences in performance by type of hint, and whether better performance transferred to subsequent essays. Fifty participants were recruited, consisting of eight men and 42 women aged 18 to 80. Participants were assigned in rotation to four groups, and asked to write two essays. Groups 1 and 3 received hints before Essay 1, whilst Groups 2 and 4 received hints before Essay 2. Groups 1 and 2 received essential hints; Groups 3 and 4 received helpful hints. Essays were marked against set criteria. The results showed that an “advice for action” approach to essay-writing, in the form of hints, can significantly improve writers’ marks. Specifically, higher marks were gained for the introduction, conclusion and use of evidence: critical components of “good” academic essays. As the hints given were content-free, this approach has the potential to benefit tutors and students immediately across subject domains and institutions, and is informing the development of a technical system that can offer formative feedback as students draft essays.

Keywords: assessment, essay writing, feedback, hints

Introduction

Feedback is a common feature of educational practice (e.g. Black & Wiliam, 1998), and one that has been widely researched but not necessarily implemented or understood to its full potential. This has led to a large amount of research attempting to define what feedback is, when it should be used, and how it could be made more beneficial for students and tutors. Beaumont, O’Doherty and Shannon (2011), for instance, identify the “fundamental aim of feedback practice, which is to progressively and explicitly develop students’ self-evaluative skills through engagement in the process” (p.683). From this we can see that feedback should aim not just to report back on finished work, but also to offer advice to self-motivated learners on where they can improve in future work.

This paper reports a study on the computerised provision of “hints” to participants on how to write academic essays, given before they begin writing. We consider how this pre-emptive feedback, or “feed-forward” (e.g. Hattie & Timperley, 2007; Price, Handley & Millar, 2011), can have a significant positive impact on participants’ work. The study was used to inform and reinforce feedback features being developed for a technical system that could provide an appropriate level of formative feedback on draft academic essays. The topic of the present paper is a response to our overall research question: how does the provision of hints affect the essay being written and essay writing in the future?

As Evans (2013) explained, “Even when ‘good’ feedback has been given, the gap between receiving and acting on feedback can be wide given the complexity of how students make sense of, use, and give feedback (Taras, 2003)” (p.94). Feedback therefore needs to be viewed by tutors and students as an ongoing activity within the cycle of course learning, which feeds into further learning, rather than as an add-on or end point of summative assessment: the aim is that feedback should be seen as “advice for action” (Whitelock, 2010). This is the concept that other researchers have referred to as “feed-forward” (Evans, 2013; Hattie & Timperley, 2007).

Hattie and Timperley (2007) elaborated on this re-framing of feedback as feedforward:

To be effective, feedback needs to be clear, purposeful, meaningful, and compatible with students’ prior knowledge and to provide logical connections. It also needs to prompt active information processing on the part of learners, have low task complexity, relate to specific and clear goals, and provide little threat to the person at the self level. (p.104)

Thus feedback must be presented in a way that participants can understand, and that they can interpret in terms of where improvements can be made in the future. Hattie and Timperley argued that feedback must be a follow-up to information given to learners, so that they are aware of task requirements before their work is judged against them:

It is important to note, however, that under particular circumstances, instruction is more effective than feedback. Feedback can only build on something; it is of little use when there is no initial learning or surface information. Feedback is what happens second, is one of the most powerful influences on learning, too rarely occurs, and needs to be more fully researched by qualitatively and quantitatively investigating how feedback works in the classroom and learning process. (p.104)

Therefore, feedback is a central part of the teaching and learning process, but one that must follow task instruction and be followed by space for reflection and scope to implement suggestions. In this regard, Narciss (2013) identified the functions of feedback as cognitive, metacognitive and motivational. Nelson and Schunn (2009) also claimed that feedback involved motivation, reinforcement and information. These collective functions of feedback may be particularly important for students who are returning to study after a period of time in employment, who may find it more difficult to understand and access Higher Education study discourses (Scott et al., 2011).

In terms of the purpose of feedback, Chickering and Gamson (1987) outlined seven principles of good practice for undergraduate education, of which the third was “encourages active learning”. Likewise, Nicol and Macfarlane-Dick (2006) stated that students should be urged to be proactive rather than reactive with regard to feedback, using it as a springboard for improvement rather than a stop point. Therefore, feedback or tutor input must do more than just identify misconceptions in students’ work. It must motivate learners to engage with the topic and the task, so that their work comes from and demonstrates understanding rather than just doing enough to get a mark. Pursuing this point, Nicol and Macfarlane-Dick concluded that too much focus on final marks could be demotivating for students and encourage effort to be placed just on passing and looking good rather than understanding the subject.

In a similar vein, Graesser and McNamara (2010) concluded that metacognition – awareness of one’s own knowledge, abilities and learning strategies for approaching a task (drawing on Quintana, Zhang and Krajcik’s (2005) definition) – was important for learning. This means that in practice students need to be supported to reflect on their current understanding of a topic, and on how they can best fulfil task requirements. Through this they can direct their learning and task activity more effectively, and judge for themselves whether they are on the right lines.

Following a sociocultural perspective, learning can be considered as a cultural process, using cultural tools. In this sense, metacognition includes an interpretation of cognition which is “distributed and mediated by the world in which we live through voices, books, papers, computers, rules and other cultural artefacts” (Baggetun & Wasson, 2006, p.453). With this in mind, as well as considering the task and type of feedback it is important to address the medium in which tasks are presented to students.

For some years now, many courses and universities have made increasing use of technology to support assignment delivery and submission, and as the medium for offering feedback. Learning has become radically more open and self-regulated, and has evolved considerably with innovative uses of new technology. As Steffens (2006) highlighted, “In parallel to the rising interest in self-regulation and self-regulated learning, the rapid development of the Information and Communication Technologies (ICT) has made it possible to develop highly sophisticated Technology-Enhanced Learning Environments (TELEs)” (p.353).

Computer-provided feedback and assessment have some way to go to catch up with these innovations, particularly where courses cater for large numbers of students. The ability to offer automated guidance and feedback to large numbers of students at the point of need could help to revolutionise the experience and performance of teaching and learning in higher education. This is particularly pertinent as many universities, including the institution where the study reported in this paper took place, increasingly cater for distance and round-the-clock learners, many of whom are out of practice at academic writing.

Chi et al. (2001) also asserted that “suggestive feedback” is helpful to learners: by highlighting an area that may be in need of work, it encourages students to reflect on their work without directly giving the answer. The need to avoid simply giving the right answer, and the potential for plagiarism, is particularly important within computer-based learning environments. This view is reinforced by Banyard, Underwood and Twiner (2006), who stated that “enhanced technologies provided enhanced opportunities for plagiarism” (p.484). In many instances, therefore, the use of technology makes plagiarism easier: more users have access to information that is portable (easy to “copy and paste” without attribution to a source), but if they do not understand what they find, or are not motivated to cite or process it, this access may not help them to use it appropriately. Thus students need guidance and support on how to make appropriate use of the sources of information they find – the cultural tools around them.

Within the study reported here, the hints given to participants prior to their essay writing consisted of general guidance on how to structure an academic essay. The hints provided were content-free, and so broadly appropriate to all academic writing in any subject, without extra strain or time demands for tutors. This has the advantage that they can be shared easily with large numbers, but the disadvantage that they are not tailored to learners’ current subject understanding and individual learning needs.

In other research, hints have been given as responsive prompts, when students have requested help with a certain task or problem (e.g. Aleven et al., 2010), rather than as broad supportive information before starting tasks. In the study by Aleven and colleagues, the researchers focused on “help-seeking behaviour”, distinguishing students who requested hints in order to gradually arrive at the answer from those who used hints to understand the question and how best to respond.

In work with secondary-school-aged pupils, Narciss (2013, 2014) reported randomised controlled trials on the automated provision of hints within short Maths tasks. In her research, hints were offered to pupils after errors had been made in a task, but prior to a further attempt at the same task. The hints were therefore pre-emptive, to support future performance and learning, but were also a direct response to an error. In doing this work Narciss recognised that there is little research, theoretical or empirical, on “automatic feedback adaptation”, which accords with our own interpretation of the existing literature. Given the nature of the tasks tested in Narciss’s studies, being in the Maths domain and specifically working with fractions, students’ responses were relatively easy to identify as correct or incorrect. As Narciss acknowledged, this is not the case in less-structured tasks such as essay writing, the context we address in our work, so the nature of feedback needed is significantly different.

In the study to be reported here, we uniquely offered broad macro-level guidance to participants on how to write a “good essay” before they wrote their essay, rather than focusing on the aspects that might identify their work as a “bad essay”. Participants each wrote two essays. For one essay they were given hints before writing. Half of the participants received “essential” hints before writing one of the essays (and no hints before writing the other). The other half received “nonessential” or “helpful” hints before writing one essay (again receiving no hints before writing the other essay). Participants’ performance was marked against set criteria. This enabled us to explore whether there was an effect of giving hints for the immediate essay, and also whether there was a lasting effect of this provision.

To explore this context, we investigated the following research questions:

  1. Is there a difference between participants’ performance due to giving or not giving hints?
  2. Is there a difference between participants’ performance due to the type of hint given?
  3. Is there transfer evident in participants’ performance due to the point at which hints are given?

Method

Participants

Fifty participants were recruited from a subject panel maintained by colleagues in the Department of Psychology consisting of people who were interested in participating in online psychology experiments. Some of them were current or former students of the University, but others were just members of the public with an interest in participating in psychological research. The 50 participants consisted of eight men and 42 women, who were aged between 18 and 80 with a mean age of 43.1 years.

Procedure

The participants were assigned in rotation to one of four groups. Each participant was asked to write two essays, and in each case they were allowed two weeks for the task. The first task was: “Write an essay on human perception of risk”. The second task was: “Write an essay on memory problems in old age”. Participants who produced both essays were rewarded with an honorarium of £40 in Amazon vouchers.
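
The paper does not specify the exact rotation mechanism. A minimal sketch, assuming participants were allocated cyclically in recruitment order (an assumption, not a detail from the study):

    def assign_in_rotation(participant_ids, n_groups=4):
        """Map each participant to a group number, cycling through 1..n_groups
        in recruitment order (assumed reading of 'assigned in rotation')."""
        return {pid: (i % n_groups) + 1 for i, pid in enumerate(participant_ids)}

    groups = assign_in_rotation(range(1, 51))  # 50 participants across Groups 1-4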

Groups 1 and 3 were provided with hints for Essay 1 but not for Essay 2. Groups 2 and 4 were provided with hints for Essay 2 but not for Essay 1. Groups 1 and 2 were provided with essential hints. Groups 3 and 4 were provided with helpful hints (see Table 1). Appendix A shows the essential and helpful hints. Otherwise, the participants were provided with no feedback on their essays.

Table 1:   Research design

 

           Group 1          Group 2          Group 3          Group 4
Essay 1    Essential hints  No hints         Helpful hints    No hints
Essay 2    No hints         Essential hints  No hints         Helpful hints

 

Two of the authors, both academic staff with considerable experience in teaching and assessment, marked the submitted essays using an agreed marking scheme (shown in Appendix B) and without reference to the groups to which participants had been assigned. If the difference between the two markers’ total marks was 20 percentage points or less, the essay was assigned the average of the two marks. Discrepancies of more than 20 percentage points were resolved by discussion between the markers.
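
This reconciliation rule is simple enough to express directly; a minimal sketch in Python (illustrative only, not the authors’ actual tooling):

    def reconcile_marks(mark_a: float, mark_b: float, threshold: float = 20.0):
        """Average the two markers' totals if they differ by no more than the
        threshold (20 percentage points); otherwise return None to signal that
        the discrepancy must be resolved by discussion between the markers."""
        if abs(mark_a - mark_b) <= threshold:
            return (mark_a + mark_b) / 2
        return None

    print(reconcile_marks(55, 65))  # 60.0 -> average of the two marks
    print(reconcile_marks(40, 70))  # None -> markers discuss the essay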

Data analysis

A mixed-design analysis of variance was carried out on the final marks that were awarded to participants who submitted two essays. This employed the within-subjects variables of hints (hints versus no hints) and marking criteria (1–10) and the between-subjects variables of hint type (essential versus helpful) and hint order (hints on Essay 1 versus hints on Essay 2). Post hoc tests were carried out to identify the marking criteria on which any significant changes in marks had arisen as a result of providing hints.
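
For readers wishing to reproduce this kind of analysis, the sketch below shows a simplified version using the pingouin library: one within-subjects and one between-subjects factor (the sub-design pingouin’s mixed_anova supports), not the full four-factor model described above. The data file and column names are assumptions, not the authors’ materials.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one row per participant per essay, with
    # columns "participant", "hints" ("hints"/"no hints"), "hint_type"
    # ("essential"/"helpful") and "mark" (the final mark out of 100).
    df = pd.read_csv("essay_marks_long.csv")

    # Mixed-design ANOVA on the hints (within) x hint type (between)
    # sub-design; np2 is partial eta squared.
    aov = pg.mixed_anova(data=df, dv="mark", within="hints",
                         subject="participant", between="hint_type",
                         effsize="np2")
    print(aov[["Source", "F", "p-unc", "np2"]])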

Values of partial η² (eta squared) were calculated as measures of effect size. These represent the proportion of variance in the dependent variable that is explained by each independent variable or interaction when the effects of other independent variables and interactions have been partialled out (see Richardson, 2011). Cohen (1988, pp.285–287) suggested that values of partial η² of 0.0099, 0.0588 and 0.1379 would constitute small, medium and large effects, respectively.
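
Partial η² follows directly from the ANOVA sums of squares; a minimal sketch of the calculation and of Cohen’s benchmarks quoted above:

    def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
        """Proportion of variance explained by an effect once other effects
        are partialled out: SS_effect / (SS_effect + SS_error)."""
        return ss_effect / (ss_effect + ss_error)

    def cohen_size(np2: float) -> str:
        """Label an effect against Cohen's (1988) partial eta squared cutoffs."""
        if np2 >= 0.1379:
            return "large"
        if np2 >= 0.0588:
            return "medium"
        if np2 >= 0.0099:
            return "small"
        return "negligible"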

Results

All 50 participants submitted Essay 1, although only 45 participants submitted Essay 2. The correlation coefficients between the marks initially awarded by the two markers were .81 for Essay 1 and .77 for Essay 2. In six cases, the discrepancy between the two markers was more than 20 percentage points, and these discrepancies were resolved by discussion between the markers. The mean final mark for Essay 1 was 56.9 (SD = 15.1), and the mean final mark for Essay 2 was 54.5 (SD = 15.9). Table 2 shows the mean marks awarded for essays with and without essential and helpful hints.

Table 2:   Mean marks with and without essential and helpful hints

                   n     No hints    Hints
Essential hints    23    54.8        56.5
Helpful hints      22    53.6        60.0
Overall            45    54.2        58.2

The main effect of hints was statistically significant using a directional test (equivalent to a one-tailed Student’s t test), F(1, 41) = 3.23, p = .04, partial η² = .07. Table 2 shows that on average essays written with hints received 4 percentage points more than essays written without hints. This constituted a “medium” effect based on Cohen’s (1988) benchmarks.

There was no significant effect of hint type, F(1, 41) = 0.08, p = .78, partial η² = .00, and no significant interaction between the effects of hints and hint type, F(1, 41) = 1.09, p = .30, partial η² = .03. Thus, there was no difference between the benefit of essential hints and that of helpful hints. In fact, Table 2 shows that, if anything, the benefit of helpful hints tended to be greater than the benefit of essential hints.

There was no significant effect of hint order, F(1, 41) = 1.24, p = .27, partial η² = .03, and no significant interaction between the effects of hints and hint order, F(1, 41) = 1.68, p = .20, partial η² = .04. In other words, there was no difference between the benefit of hints provided for Essay 1 and the benefit of hints provided for Essay 2. This in turn implies that there was no transfer of the effect of hints provided for Essay 1 on the writing of Essay 2.

There was no significant interaction between the effects of hint type and hint order, F(1, 41) = 0.94, p = .34, partial η² = .02, and no significant three-way interaction between the effects of hints, hint type and hint order, F(1, 41) = 0.09, p = .76, partial η² = .00.

The main effect of criteria was statistically significant, F(9, 369) = 20.86, p < .001, partial η² = .34, which is unsurprising since different numbers of marks were awarded against the ten criteria. However, there was a significant interaction between the effect of hints and the effect of criteria, F(9, 369) = 2.25, p = .02, partial η² = .05. Thus, the benefit of hints varied across the ten criteria. This too constituted a “medium” effect based on Cohen’s (1988) benchmarks.

Post hoc tests were carried out to identify where the increase in marks as a result of providing hints had arisen. Directional tests showed that there was a significant increase in marks on Criterion 1 (introduction) from 5.43 to 6.77 out of 10, F(1, 41) = 4.59, p = .02, partial η² = .10, a significant increase in marks on Criterion 2 (conclusion) from 6.10 to 7.43 out of 10, F(1, 41) = 12.50, p < .001, partial η² = .23, and a significant increase in marks on Criterion 4 (evidence) from 8.00 to 9.03 out of 20, F(1, 41) = 3.22, p = .04, partial η² = .07. These constituted medium or large effects on Cohen’s (1988) benchmarks. Otherwise, there were no significant differences between the marks awarded to essays written with and without hints.

Discussion and conclusions

In reviewing our findings and their implications we return to our overall research question: how does the provision of hints affect the essay being written and essay writing in the future? In this context hints are contained within the broader category of “feedback”, which has been widely researched and reviewed. We particularly draw on research regarding feedback that has a proactive and forward-looking agenda, viewing feedback as “advice for action” (Whitelock, 2010). From this we build on the view that feedback works best when given before submission of a piece of work, as “feed-forward” (Hattie & Timperley, 2007; Price, Handley & Millar, 2011).

Such feedback can be provided either before starting a task or during task activity, so that it can be utilised straight away (e.g. Butler & Winne, 1995; Hattie & Timperley, 2007; Nicol & Macfarlane-Dick, 2006). The aim here is that the advice can be incorporated by participants within subsequent actions, to bridge the gap between expectations or goals and performance. Such a conception corresponds well with notions of self-regulated learning and metacognition, requiring participants to set their own goals for learning, and to monitor their progress toward these goals (Quintana, Zhang & Krajcik, 2005).

As mentioned, one way that such pre-emptive feedback can be given is through the provision of hints. Hints can identify where goals need to be set and where participants may need to direct extra learning and research, and can enable participants to focus their monitoring on reducing this gap between goals and performance, through learning and understanding. Effective use of technology is one vital means available to participants for reducing this gap.

In responding to suggestions from previous research that feedback can have both positive and negative effects, it was important in the current study to observe and analyse experimentally the effects of providing hints for academic essay writing. Thus, we needed to assess rigorously whether hints had the potential to support participants in setting goals for their task and to offer guidance on how to work toward these goals, and in doing so to help them gain higher marks (drawing on Hattie & Timperley, 2007).

A crucial difference between the current study and previous research on the provision of hints is that we gave hints prior to essay writing, framed as positive aims rather than reports of error. In this approach there was no perception of participants having got it wrong before receiving input. This is in contrast to Aleven et al.’s (2010) work, where hints were provided as requested by students, with the aim of supporting reflection, but often as short responses to mistakes, omissions or misunderstandings, which potentially allowed students to progressively guess their way toward an answer. Within our study the hints were also content-free, making them relevant and potentially transferable to all contexts of academic writing. This design is advantageous for participants studying a range of subjects and modules, and also for tutors and courses resourcing a variety of subject and assignment areas.

Furthermore, our study incorporated an experimental trial of the effects of providing hints, both on the essay written immediately afterwards and in terms of transfer to subsequent essay writing. The effects of such provision have been queried and conjectured by many researchers, but had not previously been investigated and shown to be statistically significant in this way. This was a crucial addition within our research design. It enabled us to reach the confident conclusion that giving hints had a positive, significant effect on performance on the immediate essay being written, evidencing that a “feed-forward”, “advice for action” approach to feedback can indeed positively influence performance.

We also found that the benefit of hints applied regardless of the order of provision, and that there was no evidence that improved performance transferred to the second essay for those who received hints on the first. Further research is therefore needed to investigate how transfer of this higher performance to subsequent academic writing activities can be supported, to give “advice for action” greater longevity.

When considering where the higher marks were gained for essays written with hints, we did find significant differences across the individual marking criteria. Specifically, higher marks were recorded for the criteria concerning the introduction, conclusion and use of evidence. This is of vital importance in terms of the quality of academic writing: good essays require a strong beginning, a coherent middle and an end that brings the whole essay together, and the statistics reported here show that these elements significantly improved with the provision of hints.

Crucially, these results have fed into the development of a technical system to support the drafting of academic essays, as part of the same research project. As we found that providing pre-emptive hints supports the achievement of better marks overall, and specifically on the introduction and conclusion sections and the use of evidence, a system interface has been designed and trialled with different textual and visual representations that guide users to reflect on how connected and progressive the concepts raised in these sections are. The system is designed to be used on draft essays (and so before users’ work is submitted and graded), so that suggestions from system representations can be implemented in the current work. The system, like the hints provided in the study reported here, is also designed to be content-free, and so usable within any subject domain that requires the writing of academic essays. Analysis of system usage is the subject of another paper, but its design has been largely informed by the key empirical finding from this paper: that providing “advice for action” on how to write a good academic essay can significantly improve participants’ performance on the current task. This finding has broad implications for feedback practice and research, with the potential to benefit tutors and students across subjects and institutions.

References

  1. Aleven, V.; Roll, I.; McLaren, B.M.; Koedinger, K.R. (2010). Automated, unobtrusive, action-by-action assessment of self-regulation during learning with an intelligent tutoring system. In Educational Psychologist, 45(4), (pp. 224-233). doi:10.1080/00461520.2010.517740
  2. Baggetun, R. and Wasson, B. (2006). Self-regulated learning and open writing. In European Journal of Education, 41(3-4), (pp. 453-472). doi:10.1111/j.1465-3435.2006.00276.x
  3. Banyard, P.; Underwood, J. and Twiner, A. (2006). Do enhanced communication technologies inhibit or facilitate self-regulated learning? In European Journal of Education, 41(3-4), (pp. 473-489). doi:10.1111/j.1465-3435.2006.00277.x
  4. Beaumont, C.; O’Doherty, M. and Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? In Studies in Higher Education, 36(6), (pp. 671-687). doi:10.1080/03075071003731135
  5. Black, P. and Wiliam, D. (1998). Assessment and classroom learning. In Assessment in Education, 5(1), (pp. 7-74). doi:10.1080/0969595980050102
  6. Butler, D.L. and Winne, P.H. (1995). Feedback and self-regulated learning: A theoretical synthesis. In Review of Educational Research, 65(3), (pp. 245-281). doi:10.3102/00346543065003245
  7. Chi, M.T.H.; Siler, S.A.; Jeong, H.; Yamauchi, T.; Hausmann, R.G. (2001). Learning from human tutoring. In Cognitive Science, 25, (pp. 471-533). doi:10.1016/S0364-0213(01)00044-1
  8. Chickering, A.W. and Gamson, Z.F. (1987). Seven principles for good practice in undergraduate education. In American Association of Higher Education Bulletin, 39(7), (pp. 3-7). Available online at http://www.aahea.org/aahea/articles/sevenprinciples1987.htm
  9. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York: Academic Press.
  10. Evans, C. (2013). Making sense of assessment feedback in Higher Education. In Review of Educational Research, 83(1), (pp. 70-120). doi:10.3102/0034654312474350
  11. Graesser, A. and McNamara, D. (2010). Self-regulated learning in learning environments with pedagogical agents that interact in natural language. In Educational Psychologist, 45(4), (pp. 234-244). doi:10.1080/00461520.2010.515933
  12. Hattie, J. and Timperley, H. (2007). The power of feedback. In Review of Educational Research, 77(1), (pp. 81-112). doi:10.3102/003465430298487
  13. Narciss, S. (2013). Designing and evaluating tutoring feedback strategies for digital learning environments on the basis of the Interactive Tutoring Feedback Model. In Digital Education Review, 23, (pp. 7-26). Available online at http://revistes.ub.edu/index.php/der/article/view/11284
  14. Narciss, S.; Sosnovsky, S.; Schnaubert, L.; Andrès, E.; Eichelmann, A.; Goguadze, G.; Melis, E. (2014). Exploring feedback and student characteristics relevant for personalizing feedback strategies. In Computers & Education, 71, (pp. 56-76). doi:10.1016/j.compedu.2013.09.011
  15. Nelson, M.M. and Schunn, C.D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. In Instructional Science, 37(4), (pp. 375-401). doi:10.1007/s11251-008-9053-x
  16. Nicol, D.J. and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. In Studies in Higher Education, 31(2), (pp. 199-218). doi:10.1080/03075070600572090
  17. Price, M.; Handley, K. and Millar, J. (2011). Feedback: Focusing attention on engagement. In Studies in Higher Education, 36(8), (pp. 879-896). doi:10.1080/03075079.2010.483513
  18. Quintana, C.; Zhang, M. and Krajcik, J. (2005). A framework for supporting metacognitive aspects of online inquiry through software-based scaffolding. In Educational Psychologist, 40(4), (pp. 235-244). doi:10.1207/s15326985ep4004_5
  19. Richardson, J.T.E. (2011). Eta squared and partial eta squared as measures of effect size in educational research. In Educational Research Review, 6(2), (pp. 135-147). doi:10.1016/j.edurev.2010.12.001
  20. Scott, D.; Evans, D.; Walter, C.; Hughes, G.; Burke, P.J.; Stiasny, M.; Bentham, M.; Huttly, S. (2011). Facilitating transitions to masters-level learning: Improving formative assessment and feedback processes. Executive summary. Final extended report. London, UK: Institute of Education. Available online at http://www.jisctechdis.ac.uk/assets/Documents/ntfs/projects/Final_Report_v3_06-02-12.pdf
  21. Steffens, K. (2006). Self-regulated learning in Technology-Enhanced Learning Environments: Lessons from a European review. In European Journal of Education, 41(3-4), (pp. 353-380). doi:10.1111/j.1465-3435.2006.00271.x
  22. Taras, M. (2003). To feedback or not to feedback in student self-assessment. In Assessment and Evaluation in Higher Education, 28(1), (pp. 549-565). doi:10.1080/0260293032000120415
  23. Whitelock, D. (2010). Activating assessment for learning: Are we on the way with Web 2.0? In M.J.W. Lee & C. McLoughlin (eds.), Web 2.0-based-E-Learning: Applying social informatics for tertiary teaching, (pp. 319-342). Hershey, PA: IGI Global. doi:10.4018/978-1-60566-294-7.ch017

Appendix A

Six essential essay writing hints

  1. Read the question carefully and underline keywords in the question to focus on the main areas that you need to address for the essay.
  2. Make a plan for your essay. For example, create a list of salient points that will address the key points from hint number 1.
  3. Remember, an essay is telling a story. A good story has a beginning, middle and an end. These are also known as introduction, discussion points and conclusion. Ensure this structure is explicit in your answer.
  4. The introduction should set out a basis for your discussion/argument.
  5. The discussion section picks up on the introduction, elaborates upon it and provides evidence for the points mentioned within it.
  6. The conclusion should summarise the discussion points and end with a decisive stance towards the essay topic that you’ve been asked to write about.

Six helpful essay writing hints

  1. When you have written your first draft, pick out 10 words or phrases that you think are the most important ones in your essay. Do you think they convey the ideas you want to express in this essay?
  2. Topic sentences are those that give an outline of the contents of a paragraph. Do you have topic sentences to cue the reader into the major points you are trying to make in this essay?
  3. Read your draft and identify any supporting sentences. Their function is to cue the reader into details of one of the arguments in a paragraph.
  4. Ensure that your conclusion is a summary of the main argument of the essay. The conclusion may often have an opinion or a recommendation too.
  5. Check your word count. If you have too many words, see if any of the paragraphs in your essay discuss things that aren’t directly relevant to your assignment question. If so, delete them.
  6. Are any of the paragraphs in your essay longer than 7 sentences? If yes, consider carefully whether all the sentences are necessary for you to clearly make your point.
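
Hints 5 and 6 are mechanical enough to automate, in the spirit of the content-free system discussed in this paper. A hypothetical checker (the function and thresholds are illustrative, not the project’s actual system; the limits come from hint 5, hint 6 and marking criterion 6):

    import re

    def check_draft(text: str, max_words: int = 1000, max_sentences: int = 7):
        """Flag drafts that breach the word-count limit or contain
        paragraphs longer than seven sentences."""
        words = len(text.split())
        if words > max_words:
            print(f"Over the word count ({words} > {max_words}): look for "
                  "paragraphs not directly relevant to the question.")
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        for i, para in enumerate(paragraphs, start=1):
            # Crude sentence count: runs of ., ! or ? followed by space/end.
            sentences = len(re.findall(r"[.!?]+(?:\s|$)", para))
            if sentences > max_sentences:
                print(f"Paragraph {i} has {sentences} sentences: check that "
                      "each one is needed to make your point.")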

Appendix B

Marking criteria

Criterion                    Definition                                                             Maximum marks
1. Introduction              Introductory paragraph sets out argument.                              10
2. Conclusion                Concluding paragraph rounds off discussion.                            10
3. Argument                  Argument is clear and well followed through.                           10
4. Evidence                  Evidence for argument in main body of text.                            20
5. Paragraphs                All paragraphs seven sentences long or less.                           5
6. Within word count         Word count between 500 and 1000 words.                                 5
7. References                Two or three references: 5 marks; four or more references: 10 marks.   10
8. Definition                Provides a clear and explicit definition of risk or memory.            10
9. Written presentation      Extensive vocabulary, accurate grammar and spelling.                   10
10. Practical implications   Understanding of practical issues, innovative proposals.               10
Maximum total marks                                                                                 100

Acknowledgements

This work was supported by the UK Engineering and Physical Sciences Research Council (grant numbers EP/J005959/1 & EP/J005231/1).

 
