Thomas’ defense of the study of artifacts really made sense to me this week. Typically, it is used as an alternative when “real” or “direct” research can’t be done, or as a preliminary step to set up the “real” research. Her retort is, “How direct is any method?” and I agree completely. Studying people wholly directly doesn’t make sense. We can look at people’s behaviour, and find out what they think they know or what they think they think, but why should these methods be any more direct than looking at what they create and what they leave behind? We look at everything through a number of lenses. Some of them are ours – our experiences, theories, desires. Some of them belong to the people we’re studying, and some to the society that we’re all stuck in. A person’s behaviour can be just as removed from what we’re trying to study as an artifact they leave behind.
I liked Knight’s analogy to history research, and how it’s always coloured by the prevailing theory of the day. There isn’t one truth; there are many truths, depending on how you look at it. I think there is value in finding all of the truths in something, and I like the idea of applying this to the social sciences as well. I think that especially when it comes to human behaviour, we are going to find so many more truths than we expected, because every participant or person we study brings their own truth to the table. Content analysis may be easier and less expensive than studies directly involving human participants, but that doesn’t make it any less legitimate or direct. It’s just another method of finding a particular truth.
First, I just have to mention that this week I am getting the same message from Luker about interviews as I am in my library advocacy class: don’t talk about what you want, tell them what they want to hear! I guess this is good advice in general when you want something from anyone…be it a larger budget or for someone to open up to you.
Luker’s approach is growing on me. I’ve always viewed interviewing in the canonical way, where you have to ask every interviewee the same questions in the same order. My eyes were opened to the joys (and frustrations) of ethnographic research in the workshop I took with Jenna Hartel – there, I had images of reams and reams of data floating about in my head, all unorganized but telling a really good story! In this class I am trying to marry these two ideas into something that makes sense to me as a person. In my undergrad I was trained in the typical psychological scientific method, but I also chose not to go into psychology for my master’s because I wasn’t really comfortable with the research style. Someday I’ll settle into a style that fits me perfectly!
Focus groups. Okay. The main thing that came to mind when I started reading Lunt’s article was Solomon Asch and his studies on conformity. Basically, he would put a participant in a room with a bunch of confederates and ask them all which of three comparison lines matched a standard line in length (and other questions of that sort). The confederates would start off answering correctly, then would begin all answering incorrectly – sometimes very obviously so. About a third of the participants’ answers shifted to match the group consensus, even when they knew the answer was wrong! Peer pressure can be very strong, so I was skeptical about the usefulness of getting a bunch of people together to say what they think. Out loud! In front of each other!
It’s clear to me now that I was thinking from a place of quantitative bias. I really enjoyed Lunt’s explanation of focus groups from a more ethnographic view, I guess. They got way more interesting and less problematic for me when I started thinking about focus groups not only as a way to gather information, but as a way to watch how people form their opinions. Being able to see how people react to something, and then how they decide what they think about it based on what other people are saying about it – that’s the cool part. When it comes to people’s opinions, it is impossible to avoid bias. Sure, you can ask individual people about something before they get a chance to discuss it with anyone else, but that’s not going to be very helpful, because few people form an opinion without discussing it first. What I think about different movies and TV shows is based on my conversations about them with my friends. What I think about various products is informed by what strangers are saying about them on amazon.com. It seems obvious now that incorporating this formation process into the study of what people think about whatever you want them to think about is really useful, and the insight gleaned about the factors people actually use to form their opinions is very important.
Luker put me a little bit at ease this week. Depending on the topic, I used to enjoy doing lit reviews – it’s interesting to try to scrape the edges of the research container. This may be easier in psychology (my background) than in sociology, and all this talk of info-glut was starting to make me panic! Pulling together all the studies that include the Stroop task, or all of the MRI studies of the hippocampus, for example, seems much less daunting than trawling the “literature” (in quotes because of how non-specific that word actually is) for bits and pieces that might have something to do with a research question that probably won’t actually be formulated until after many iterations, each including its own lit review…phew. Luckily, Luker has included some tips to help narrow the focus of my research; though truthfully, some of them I’ve already been doing forever – “Harvarding” sounds to me like “undergradding” in general, and some were just searching tips I’d be embarrassed not to know about in this program! It was nice to have some of my research habits validated, though. Getting started on my daisy will help to narrow things down as well. All of these exercises are helping me (and hopefully all of us) to slowly chip away at the monolith that is research methods, and I feel that by the time we get to the end of them, it won’t look quite as large as we thought it did!
To be honest, Luker’s take on research scares me a little bit. Throughout my undergraduate degree in psychology, all of my courses were steeped in the “canonical social science” methodology. The scientific method was held up as the be-all and end-all of knowledge discovery and creation, and I ate it up. I gravitated more towards the biology than the sociology, taking courses in neuroscience and behavioural pharmacology, surrounding myself with “real” scientists doing “real” scientific research. It wasn’t until I took a course in the history of psychology that I started to really question this view. We picked apart the portrayal of psychology “heroes” – the founders and trailblazers of the field – in our introductory textbooks. So many studies are uncritically presented as classics even though they would be considered junk by today’s methodological standards.
That class introduced me to what Luker touches on in chapter 1 – the pursuit of truth is important, even if it’s potentially impossible to find; also, because the definition of truth is so elusive, we can use whatever method we want to try to find it. Really, what’s so wrong about getting at the truth by telling a good story? Novelists and poets do it all the time!
Applying this to research scares me a little bit, because of what was drilled into me in my undergrad. My training was linear, though I am more comfortable than Luker with the non-linearity of the internet. This comparison intrigues me; I think it would be a lot of fun to do research in the same way that I surf the web – as long as I can silence the voice in the back of my head yelling, “this is not scientific!” In chapter 1, Luker says that the social world is changing, along with how we perceive it, yet our methods of studying it are set firmly in the past. I see myself sitting exactly there – with a new mindset of society and how we fit into it, but lacking the tools to explore it. Hopefully this book, and this course, can give all of us some new tools to answer our burning questions!