Friday, November 19, 2010

How to evaluate the learning outcomes of the Virtual Birth Unit in Second Life?


I have just started looking at the Virtual Birth Unit (BU) and normal birth scenario with midwifery educators in the USA and UK. They want to use the BU with their midwifery students. I suggested we carry out a formal research project to evaluate how the BU impacts on the midwifery students' learning in relation to the learning outcomes of the courses they are taking.

We are now at the stage of deciding what outcomes we want the students to achieve and how we will evaluate them. Here are some questions that one of the educators I am working with came up with.
  • How do we assess outcomes without a comparison group?
  • What outcomes are reasonable to measure?
  • Do we measure pre and post simulation?
  • How can we best document the value of the simulation and debriefing?
  • How do we keep bias out when we are the ones involved in the research and teaching?
I'd love to hear from anyone who has used simulation to teach students, especially health students...in Second Life. Or anyone who has experience of evaluating the impact of a specific intervention on students' learning. What methodology do you think is appropriate to use? How would you advise we measure or assess how the Virtual Birth Unit has impacted on students' learning?


Image: Virtual birthing unit. SLENZ tour
http://www.flickr.com/photos/kerryank/3957372087/

8 comments:

Anne Marie Cunningham said...

I'm going to share this and look forward to responses. I am struggling with being a practitioner/researcher at the moment!
Thanks
AM

catkins_in_nz said...

You might find it useful to look at Design Research. My understanding is that it is used quite widely in education research for the evaluation of 'interventions'. There is some expertise in the VLENZ group via our University of Canterbury links, if you want to contact them.

Malcolm Lewis said...

How do we assess outcomes without a comparison group?

To show this method of teaching is as good as or better than other options, you will need a randomised control group for comparison with usual methods. Thus students need to opt into the research and then be allocated randomly to the Second Life group or the usual practice control group. Hard, but not impossible, to do in the real world.
What outcomes are reasonable to measure?
Seems to me there might be a mix of quantitative and qualitative outcomes such as knowledge, skills, attitudes, and self-efficacy. Look for established scales but also use qualitative methods. Text analysis of blogs, posts to e-groups and what is said in the training sessions may reveal differences in learning outcomes.

Do we measure pre and post simulation?
If you can.

How can we best document the value of the simulation and debriefing?
With a mix of quantitative and qualitative methods: graphs and words.

How do we keep bias out when we are the ones involved in the research and teaching?
You can’t keep the bias out. You will need independent evaluators.

***

Another point that seems important to me would be pre-surveying to see if the results are different for those who are new to Second Life and those who are old hands.

It could be that, in a few years, most students will be old hands in virtual environments. If learning is undercut by students learning the new tools, then it may not be a fair comparison.

Hope this helps.

Sarah Stewart said...

The Researcher’s Toolbox issue of the Journal of Virtual Worlds Research is online at http://jvwresearch.org. This issue, guest edited by Tom Boellstorff, Celia Pearce, Dmitri Williams, Thomas Malaby, Elizabeth Dean, and Tracy Tuten, presents peer reviewed papers on research methodologies and case studies of how the particular methods are being developed and used in virtual worlds research both in academia and industry.

Sarah Stewart said...

Here are a few thoughts I have had in response to Lisa's questions

* how do we assess outcomes without a comparison group?

I think you could say it is unethical to deny one group of students a learning experience. Besides, the BU is open, so you cannot stop students accessing it. I think you'd be better off comparing a 'before' and 'after' measurement...maybe you have a rubric that they fill out...or they do a self-evaluation about confidence, ability, skills, knowledge etc...

* What outcomes are reasonable to measure?

Depends what learning outcomes you want to achieve. Will this experience be embedded into a course or be an extra? Have a look at the self-evaluation the students are asked to do about each part of the scenario...maybe we can use that as a framework (the questions are based on the OSCE exams that are used at Otago Polytechnic, New Zealand): http://wikieducator.org/The_virtual_birthing_unit_project/Phone_call/Midwife%27s_self-assessment_form

* Do we measure pre and post simulation?

Probably some sort of self-evaluation...with some sort of scale?

* How do we keep bias out when we are the ones involved?

I think we can use mixed methods....a survey type evaluation with the students, and maybe a reflection from them. And a reflection from us. What about comparing exam results with previous years?

What do you think? Sarah

Sarah Stewart said...

Some thoughts from Susan:

Outcome thoughts.

The comparison may come from the three universities.

I would like to look at the advantage of students meeting in the online environment from different countries. I do feel it will enhance self awareness and assertiveness.

I believe in pre and post measurement, so it would be good to see if there have been any changes in these areas.

I think we will need an online survey to record the learners' findings from using it.

Could debriefing occur in Second Life?

Bias is difficult to control, but I do feel that if it is online and student-led then bias is reduced.
I think if we keep it small to start with, e.g. a pilot, that will help, or we could do it as an evaluation rather than research.

I am not sure how we would get through the ethics committee with an international project but I am willing to give it a go.

Sarah Stewart said...

More thoughts from Susan:

In answering the question on ethical issues, the key issue is, as always, informed consent. It is important to question how the students would feel and what harm would be involved. The main point would be to ensure students were comfortable using the environment. The potential benefits and problems should be discussed prior to the study, so I would agree with a pre and post survey.

I also agree with Sarah that it would not be possible to have a true comparison group, but what we could do is get some students to undertake the scenario and others to meet together in the meeting area. We could evaluate the success of both in terms of students' learning.

Using Second Life as a teaching method is brilliant for blended learning approaches, but it may also be an issue for us in sharing the results. The primary responsibility, I feel, is to the students, and when sharing the results we have to be honest in our conclusions and sensitive to the effects of the research on the students, so confidentiality must be preserved when we are thinking about the outcomes.

In Second Life it is possible for the researcher to be involved in communication and able to reveal personal aspects of themselves, so trust is important as part of the assurances of confidentiality. I think it can be argued that the researcher's job is to record and later analyse, not pass judgement on the students in the virtual environment, which is a safe area to make mistakes and learn from them. Integrity is therefore important methodologically, and this will apply to us as well as the students; we will have to acknowledge in the results where we have been involved. However, if we are looking at students' outcomes, we should be considering the value added of talking to overseas students if we manage to get them together.

Sarah Stewart said...

Hello Anne Marie, Catkins and Malcolm

Thanks for your comments and advice. I haven't got around to thanking you because I am still thinking about this issue. I will contact VLENZ, catkins...thanks for reminding me.

My main concern with comparison groups is that I do not think it is ethical to withhold a resource like this from students, although I think it is a good way of looking at results. I also do not think we can compare students from different universities because the teaching/learning contexts will be different. So I am still thinking about this....