Wednesday 10 October 2012

More interviewing. More thoughts.


Three more interviews today with participants from both the intervention and control groups. Haven't transcribed yet (obviously) but some preliminary thoughts:

  • Knowing they were going to be assessed afterwards was probably the main focus for all of them, and this strongly influenced how they used the VRE/plastic model.
  • They adopted subtly different approaches to using it, but in all cases the approach was systematic and focused on the forthcoming assessment of spatial knowledge. I will look at whether specific learning strategies inherent in these participants correlate with this assessment focus (I've sketched the sort of check I mean below, after these notes). That's one good thing about the Vermunt ILS: you can look at sub-scale scores!
  • The first thing they all did was use the model to strengthen their knowledge of anatomy (not necessarily spatial knowledge) based on the pre-intervention MCQ - i.e. what they thought they didn't know.
  • After identifying structures they seemed to use the model to see how structures related to each other spatially. This was done differently by different participants. One (control group) went systematically sup to inf, ant to post and medial to lateral, self-testing as they went. There was relatively little rotation of the model during this. One (intervention) rotated the model freely to see how structures related to each other. They commented that this was really helpful to them. This participant had a relatively low spatial ability score compared to the other interviewees today. I wonder if this is important in relation to how they manipulated the model? The final interviewee (intervention) did not rotate the model at all while learning spatial relationships. He commented that he had a clear 3D mental image of the brain and thus didn't need to rotate it. His spatial ability score was relatively high. Again this is potentially of interest.
  • The self-testing theme definitely seems to be emerging. They all commented that they consciously did this.
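To pin down what I mean by that sub-scale idea, this is roughly the kind of check I have in mind. The sub-scale names are loosely based on the ILS processing strategies, and every number here (including the 0-10 'assessment focus' rating itself) is invented purely for illustration - this is not real data, just a sketch assuming Python/scipy:

from scipy.stats import spearmanr

# Hypothetical ILS sub-scale scores for a handful of participants, plus an
# invented 0-10 rating of how 'assessment focused' their use of the model
# appeared in the interview. None of this is real data.
participants = {
    "P01": {"deep": 28, "stepwise": 35, "concrete": 22, "focus": 8},
    "P02": {"deep": 31, "stepwise": 24, "concrete": 30, "focus": 5},
    "P03": {"deep": 22, "stepwise": 38, "concrete": 18, "focus": 9},
    "P04": {"deep": 35, "stepwise": 20, "concrete": 27, "focus": 4},
}

focus = [p["focus"] for p in participants.values()]
for subscale in ("deep", "stepwise", "concrete"):
    scores = [p[subscale] for p in participants.values()]
    rho, p_value = spearmanr(scores, focus)  # rank correlation, given the tiny n
    print(f"{subscale:9s} rho={rho:+.2f} p={p_value:.2f}")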

Tuesday 9 October 2012

Thoughts on interviewing in the research

Have done a couple of the post-experience interviews now and have 3 more tomorrow (10/10/12). Was a bit uncertain about how these would go as it isn't something I feel particularly comfortable with. I'm definitely someone with more of a quantitative bent. So what are my initial thoughts? What follows here is all a bit 'stream of consciousness' but I suppose it's good to get these thoughts down now:
  • Videoing the participants' interaction with the model (control or intervention) is definitely a good idea. It avoids problems with recall, allows them to articulate their thought processes while watching themselves and provides a great basis for the early, unstructured part of the interview.
  • It might be possible to analyse and code the videos themselves. I'm not sure this is worthwhile but I'll definitely reflect a bit more on this.
  • Starting off by getting the participants to view the video and articulate what they were thinking and doing seems to have worked/is working well. It has already raised a couple of issues that I definitely didn't expect and may not have got without this approach - e.g. using self-testing and seeking feedback on knowledge, adopting a systematic approach to the self-directed tutorial knowing that they would be tested again afterwards.
  • With regard to this last point, I wonder if this is a positive or negative issue? Does the fact that they might be preparing themselves for 'a test' influence how they interpret and use the model? Might this be an example of some sort of 'Hawthorne effect' - i.e. the participants are adjusting how they use the model because they know they are involved in research and will be tested afterwards? Is it the 'post-test' that is foremost in their minds rather than learning about the spatial relationships of the anatomical structures? I'm not sure you can separate learning and assessment anyway, so maybe it isn't important.
  • Is there a danger that I will place too much emphasis on these unexpected issues at this early stage and identify them as important themes? I should test these out in subsequent interviews but maybe it is too early to do that. Purposive sampling at the moment. Theoretical sampling later?
  • Trying to get my head around Grounded Theory (GT) is difficult. The language used in relation to it seems unnecessarily complex and, in many cases, not intuitive at all. It is driving me nuts.
  • A discussion with a colleague about GT has helped somewhat and I have a clearer idea about how to go about coding the transcriptions and managing the data.
  • I need to start transcribing SOON while it is fresh in my mind. I intend to transcribe at least the first few myself to help with some immersion in the data but I know it is time consuming and mind numbingly boring so have been putting it off. I have a 'free' day this week and I WILL get on and do it. Honestly.
  • I'm not sure exactly how to 'link' the qualitative data with the quantitative data yet - e.g. emerging themes/categories with characteristics of the participants such as spatial ability/learning styles/baseline knowledge etc. I think I may start by mapping these on flip chart paper but I'm not sure (a rough sketch of the sort of structure I'm imagining is at the end of this entry). Will need to discuss further with my supervisor.
  • How many interviews??? At the moment it's early days and I'm pretty much interviewing all participants who volunteer, but as the numbers increase I will be able to monitor whether all the important participant characteristics are being included - e.g. good and poor spatial ability, those with very strong/divergent learning styles, male/female, range of ages etc. - and continue the purposive sampling until I start to see saturation of themes.
  • Wonder if it might be a good idea to use a focus group or two after this to test out emerging propositions?
Give me statistics any day of the week!
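Speaking of which - to make the earlier point about linking the qualitative and quantitative data a bit more concrete, this is roughly the sort of structure I'm picturing before I commit anything to flip chart paper. The theme names, participant IDs and scores are all invented for illustration; just a sketch assuming Python/pandas:

import pandas as pd

# Invented participant characteristics (spatial ability, baseline MCQ, group).
characteristics = pd.DataFrame(
    {
        "spatial_ability": [12, 25, 8],   # combined MRT/MCT score (made up)
        "baseline_mcq": [55, 70, 40],     # pre-intervention MCQ % (made up)
        "group": ["control", "intervention", "intervention"],
    },
    index=["P01", "P02", "P03"],
)

# Which participants each emerging theme has been coded in (made-up examples).
themes = {
    "self-testing": ["P01", "P02", "P03"],
    "free rotation of model": ["P02"],
    "assessment focus": ["P01", "P03"],
}

# One row per theme/participant pair, joined to the characteristics table so
# themes can be eyeballed against spatial ability, baseline knowledge etc.
links = pd.DataFrame(
    [(theme, pid) for theme, pids in themes.items() for pid in pids],
    columns=["theme", "participant"],
).join(characteristics, on="participant")
print(links)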

Tuesday 2 October 2012

Potential selection bias problems

Oh bugger. I've stumbled across a problem that I hadn't anticipated. One that will introduce selection bias if I'm not careful.

The baseline assessment of spatial ability is being done via 2 tests: the Vandenberg and Kuse Mental Rotation Test and the CEEB Mental Cutting Test. Both are reliable, valid and well suited to the study. Example problems from both tests are here:

The top one is an example from the MRT (A) battery. The bottom one is an example from the MCT.

I got a couple of large groups of participants to do these tests immediately after consent to participate. The rationale for asking them to do this was as follows:
  1. They need to be timed and invigilated.
  2. I need the data to feed into the minimisation algorithm before allocation (a rough sketch of what I mean is below, after this list).
  3. It saves a lot of time on the main data collection days that they attend.
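For the record (mostly for future me), this is very roughly what I mean by 'minimisation': allocate each new participant to whichever group keeps the chosen factors most balanced. The factor names, levels and tie-breaking rule here are placeholders rather than my actual allocation procedure - just a sketch in Python:

import random

GROUPS = ("control", "intervention")
FACTORS = ("spatial_ability_band", "sex")  # e.g. "low"/"high", "F"/"M" (placeholders)

# counts[group][factor][level] -> number already allocated with that level
counts = {g: {f: {} for f in FACTORS} for g in GROUPS}

def imbalance(group, participant):
    """Total imbalance across factors if `participant` were added to `group`."""
    total = 0
    for f in FACTORS:
        level = participant[f]
        hypothetical = []
        for g in GROUPS:
            n = counts[g][f].get(level, 0)
            hypothetical.append(n + 1 if g == group else n)
        total += max(hypothetical) - min(hypothetical)
    return total

def allocate(participant):
    scores = {g: imbalance(g, participant) for g in GROUPS}
    best = min(scores.values())
    # break ties (and soften determinism) with a random choice
    group = random.choice([g for g, s in scores.items() if s == best])
    for f in FACTORS:
        level = participant[f]
        counts[group][f][level] = counts[group][f].get(level, 0) + 1
    return group

print(allocate({"spatial_ability_band": "low", "sex": "F"}))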
Both tests are quite 'tough' and yesterday I noticed about 5 participants give up after a couple of minutes because they couldn't do them (easily?) and then decide not to participate in the research because of this. Clearly these are people who are particularly low on the spatial ability (SA) continuum and ones whom I would ideally like to include in the research. Through trying to be organised I have introduced some selection bias. I'm not sure that it would be ethical to follow these individuals up and encourage them to have another go and participate.

Possible solutions?
  • Do these tests after other aspects of data collection. This would mean using simple randomisation instead of minimisation (actually someone from the research centre wondered why I wasn't doing this anyway, suggesting that it would be perfectly acceptable), but I could still end up with attrition later in the study if participants give up on the SA tests later rather than sooner.
  • Explain to other prospective participants that the SA tests are quite tough and that low scores are not something to worry about. Essentially try to encourage them to give them a go and continue with participation.
  • Accept that some attrition will occur as a result of this and discuss the limitation in the thesis.
Not sure what to do yet.