I asked students to write at least two comments on the course blog, dealing with the issues being discussed. I asked that the comments be 'meaningful': that they demonstrate a critical review of the information presented, reference to appropriate literature, and reference to midwifery practice experience. The assignment was to be completed by the end of the course, which ran for seven weeks.
The reason I attached assessment to the comments was to ensure that the students communicated with each other and got some sort of critical dialogue going. In my experience, motivating students to engage with each other through computer-mediated communication is difficult, especially in a learning management system such as Blackboard.
The reason I am feeling uncomfortable is that I am not at all sure that assessing blog comments is actually achieving anything - does 'forcing' students to comment facilitate learning? Will they write meaningful posts or just the 'bare minimum' in order to pass the assessment? How do you assess someone else's reflections? What makes it a good reflection, or not, as the case may be? If reflection is all about personal learning, what right have I to come along and put a mark to it?
On the other hand, if the students do not communicate with each other and start the networking process, the course will be extremely dry and flat. Indeed, one criticism of connectivism is that it favours self-directed learners (Stack, 2008), so people who need more structure and guidance are less likely to engage without some sort of carrot and stick.
Clearly there will always be students who recognise the value of networking and participation for their learning, but a number will only take part in activities that are aligned to assessment - I know this - I've done it myself. No doubt there are a number of reasons for this, not least time constraints, competing priorities with other assessments, and lack of interest in the content.
Assessing online activity is difficult because of students' uneven rates of engagement (Goodfellow, 2001, cited in MacDonald, 2002). And then there are the lurkers, the people who do not participate - how do you 'measure' their learning? Just because they do not participate doesn't mean they are not learning (MacDonald et al., 2003).
So what is the best way of going about assessing online activity and learning? Helcat (2008) writes "we must spend some time rethinking assessment in order to create assessment centered classrooms that foster learning rather than simply measure it".
There appears to be a lot of agreement that assessment needs to be integrated into an online course, especially formative assessment, because it produces valuable feedback for students as they progress through the course (Caplan & Graham, 2008). Feedback not only gives students an idea of how they're doing, but also acts as motivation to keep going (MacDonald, 2002). At the same time, teachers need to strike a balance between the drive to give instantaneous feedback and setting realistic time frames so that they still have a life of their own (Anderson, 2008).
Assessment should also be congruent with the activity it is based around. It's no good me setting an assessment that involves demonstrating how to deliver a breech baby when I have been teaching shoulder dystocia (MacDonald, 2002). However, networked learning can be chaotic and distributed, so the learner may not have learned what I think she should learn. Nevertheless, it is valuable learning for her because it meets her own particular needs. Jenny Mackness makes a similar point in her blog post 'Intervention in students' learning'. In this instance it would pay to be flexible in one's approach to assessment in order to capture that 'distracted' learning. And it may be more appropriate to use a reflective assessment framework as opposed to more rigid assessment criteria (Anderson, 2008).
Assessment of networked learning should be carried out in a way that reflects the way that knowledge is produced. By that I mean that assignments should encourage student participation (McLoughlin & Luca, 2001). MacDonald (2002) advises:
If students are to be given greater autonomy in their networked study, then assignments which encourage greater student participation may help them to develop a self directed approach. Networks can be employed to deliver enhanced versions of innovative assignments used in face to face situations. For example, electronic scrapbooks, online peer review and iterative assignment development.
Helcat (2008) suggests using a range of tools and strategies from researching and writing collaborative reports in Google documents and wikis, to reflecting in blogs, discussing in Ning and evaluating resources in social bookmarking sites such as Delicious. And then all the artifacts generated during this learning and assessment can be deposited in an ePortfolio.
Requiring students to participate by commenting for course marks appears to be common practice. But if you are going to use that strategy, you have to be clear to students about what exactly is required and how it is to be presented. A number of rubrics have been developed over the years (Anderson, 2008). I think it is really important to know what you are aiming to achieve when you develop a marking rubric. Is it just participation you want, which may mean anything from a supportive statement from one student to another or an exchange of resources, or do you want specific reflective statements or critical examination of the evidence/literature?
If you want to see how online learning is being assessed and get some ideas, have a look at a few open courses that are currently running:
- Connectivism and Connective Knowledge
- Facilitating online communities
- Designing for flexible learning practice
If you are a teacher using online courses, what assessment strategies work for you? What assessment rubrics or criteria do you use? If you are a student, how do you feel about assessment? What works for you?
Anderson, T. (2008). Teaching in an online learning context. In T. Anderson (Ed.), The theory and practice of online learning (pp. 245-263). Athabasca: Athabasca University Press. Retrieved 18 November, 2008, from http://www.aupress.ca/books/Terry_Anderson/anderson3.pdf
Caplan, D. & Graham, R. (2008). The development of online courses. In T. Anderson (Ed.), The theory and practice of online learning. Athabasca: Athabasca University Press. Retrieved 18 November, 2008, from http://www.aupress.ca/books/Terry_Anderson/caplan.pdf
MacDonald, J. (2002). Developing competent e-learners: the role of assessment. Learning Communities and Assessment Cultures Conference, University of Northumbria, 28-30 August 2002. Retrieved 18 November, 2008, from http://www.leeds.ac.uk/educol/documents/00002251.htm
MacDonald, J., Atkin, W., Daugherity, F., Fox, H., MacGillivray, A., Reeves-Lipscomb, D., & Uthailertaroon, P. (2003). Let's get more positive about the term 'lurker'. CPsquare Foundations of Communities of Practice. Retrieved 18 November, 2008, from http://www.groups-that-work.com/GTWedit/GTW/lurkerprojectcopworkshopspring03rev.pdf
McLoughlin, C. & Luca, J. (2001). Quality in online delivery: what does it mean for assessment in e-learning environments? Retrieved 18 November, 2008, from http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf
Helcat. (2008). Rethinking assessment. Retrieved 17 November, 2008, from http://helcat.org/wordpress/?p=76
Stack, R. (2008). Curriculum as Connectivism. Retrieved 17 November, 2008, from http://hent.blogspot.com/2006/06/curriculum-as-connectivism.html
Image: 'Nathan Setting The Tone For The Exam' by rileyroxx