I found the article “The Challenge of Qualitative Content Analysis” very interesting (particularly once I noticed that it was published 60 years ago – although I’m not sure whether the fact that debates that were taking place when my parents were in primary school are still relevant is more fascinating or depressing). In it, Kracauer emphasizes something we’ve discussed throughout this course – the importance of self-awareness when conducting research. As he points out, quantitative content analysis can be extremely subjective when it comes to questions of how particular elements within a text are coded. The difference between quantitative and qualitative content analysis is that the subjectivity inherent in quantitative analysis may be subsumed and the resulting numerical data presented as objective truth (with qualitative content analysis, that subjectivity generally remains in the foreground). In order for results to be properly situated, it is important for researchers conducting quantitative content analysis to be aware of and honest about the subjective choices that have gone into the creation of their data.
Yesterday’s workshop made me think about the similarities between peer reviewing (which is new to me) and editing (which is how I’ve made my living for most of the last decade). Perhaps foremost is the importance of remembering that another human being will be reading your comments, and resisting the temptation to be overly critical or unkind. For example, author queries should always be phrased in the most diplomatic way possible – “I’m finding this part a bit confusing,” rather than “This makes no sense.” Another similarity that I found between peer review and editing is the fact that it’s not about you. It’s the author’s name that will appear on the finished product, and your work serves no one if you are trying to turn the manuscript into one you would have written, rather than helping to make it the best possible version of the manuscript that the author has written. To that end, it’s important to phrase recommendations as just that – recommendations – not imperatives. For example, authors generally respond much better to “Perhaps you might consider…” rather than “You should…” Ultimately, the decision of whether to make any changes is the author’s alone (that decision may have implications for whether the work will be published, but that question is separate from your work).
I wasn’t sure what to write about in my blog post this week because there were no assigned readings. Over the past week I’ve been thinking about the peer review. I’m not really sure how to do one, but today’s class helped give me an idea of how I should go about reviewing the article. I plan on peer reviewing “Privacy and Modern Advertising”. Since we are reviewing the research methods, I will want to find out more about surveys. I found a few articles on conducting online surveys and the methodology behind creating an effective online survey. I plan on reading these articles and keeping them in mind when I read “Privacy and Modern Advertising”.
Over the weekend, I had a chance to take a survey for the first time since starting this class, and I found myself analyzing it along the way. The timing was great because I’m planning to do my peer review assignment on the podcasting article with a survey methodology, so I used this experience as a kind of trial run at evaluating survey methods.
I took the survey to help out a friend who is doing a project as part of an MBA program. I think the survey was designed to gather market research; the student’s project involves evaluating a new business idea. The survey was brief (10 questions) and collected demographic information as well as reactions to the business idea. Here are a few of my observations about the survey:
- The survey design was descriptive – it gathered information about the respondents and their attitudes toward the business idea.
- The sampling method was based on convenience, not probability. I received an invitation to take the survey in an email blast (presumably, sent to everyone my friend knew who might be coerced into taking it!) and the link was also posted on Facebook.
- The multiple-choice options for responses to some of the questions were very specific, to the point where I felt that anonymity could be compromised – especially because the researchers sought out people they know to take the survey rather than a random sample. For example, some of the age ranges only cover 4 years.
- In contrast, the wording of the questions was at times vague and could have been made more specific, particularly considering the possible responses to some questions. For example, one of the questions asked about marital status and specifically allowed for responses corresponding to cohabitation, but the question regarding salary only asked respondents to “indicate your income level” without specifying whether this should be individual or household income. As a student, I (individually) fall into the lowest income bracket on the survey, but that skews the results because my household income is actually in a higher bracket due to my spouse’s income. Considering that the proposed business venture is an after-hours spa, and the intended market would likely include mothers who do not work but still come from higher-income households, I think the survey would have benefited from more specific wording in this question.
All in all, this was an interesting experience at a convenient time! It definitely helped me start thinking about issues of sampling, reliability, and much more…
Not sure if we were supposed to share this week, but I thought I’d post a link that I stumbled across.
A new study led by UofT researchers tried to determine what people’s display pictures say about their happiness with their relationship.
The news item I read about it said the researchers used the following methods:
In the first study, 115 people were asked about their shared Facebook photos in comparison to how they rated their satisfaction with personal relationships. The second study examined levels of relationship satisfaction among 148 people and tracked photos posted over the course of a year. The final study involved 108 couples keeping daily diaries which were compared with their online postings about their relationship.
In the article’s abstract, the conclusions were as follows:
we found that individuals who posted dyadic profile pictures on Facebook reported feeling more satisfied with their relationships and closer to their partners than individuals who did not. We also found that on days when people felt more satisfied in their relationship, they were more likely to share relationship-relevant information on Facebook.
After just reading these conclusions, I asked myself, “Isn’t that kind of obvious?” I was missing the answer to the now-infamous question, “So what?”
I think I gained some insight once I read the full report. The researchers describe how “it is possible that people who are less satisfied in their relationships would post dyadic profile pictures as a self-presentation strategy to appear happier in their relationships to other people,” but that their research seems to contradict that notion.
However, I’m still a little suspicious of the methods used in the study. The second part, where they measured participants’ initial relationship satisfaction and closeness and then studied and coded their profile pictures three times over a one-year period, seems especially problematic to me. First, I felt this approach kind of assumed that the couples’ levels of happiness would be constant, when in reality these things change daily for most people. In this part of the study, the researchers also tried to control for participants’ personalities and personal happiness, using equations and inventories to make sure their relationship satisfaction wasn’t just a result of their being “happy people” in general.
I guess with my lack of experience, I just don’t know how the methods in this section could really work. I fail to see how they can provide such a straightforward, “clean” picture of such an emotionally-fraught and “messy” topic.
Let me know what you think if you get a chance to look over the study!
See you in class,
Seems I forgot the itty-bitty detail of posting the blog entry I drafted late last week, but at least I got to discuss my ideas in relation to some of your blog posts today in class.
Then, wouldn’t you know it, just as I was about to publish my post tonight with a few highlights of my thoughts from this week, I accidentally hit “back” and lost the post in its entirety.
Here’s my third stab.
As I read Luker’s chapter this week, I stumbled upon a passage that almost made me break out in a cold sweat. In her discussion of interviews as an ethnographic research method, Luker says she’s okay with using “leading” questions as long as you’re aware of what you’re doing and you’ve built up a rapport with your interviewee that you feel would allow them to feel comfortable in telling you if your question was “way off.”
After four years of journalism school, I’ve come to see the “leading question” as a BIG “no-no.” We were taught that leading questions were evidence of unethical journalism and sloppy reporting — the kind of technique no self-respecting journalist would need to employ. This was linked to the ideals of “objectivity” and leaving your bias out of the interview we constantly discussed in my program.
Though I’ve never really believed in “objectivity” and feel that bias is something to be embraced and pointed out in research rather than swept under the rug, I just could not get on board with Luker on this one.
Is it OK to ask a leading question as long as you’re aware that this is merely a technique to get at the interviewee’s true feelings? Does fostering a respectful, comfortable relationship with your interviewee really mean they’ll respond to the leading question in the way you anticipate?
Maybe, but it’s definitely going to take some time before I try this tool on for size.
What do you guys think? Would you feel comfortable employing a leading question in a research interview? Why or why not?
I’d love to hear your thoughts!
P.S. Had a wonderful opportunity last week to hear the American researcher and professor Dr. Patti Lather give a talk on the transformations that have taken place in qualitative research in recent years and how to situate one’s research in “the afterward.” I encourage you guys to check out her website. I’m currently trying to track down an article of hers she mentioned on “the validity of tears”, which connects to our discussions about how “close” researchers need to be to their subjects for others to view their research as valid.
In his article, Stebbins talks about the “participant-as-observer”. For my research proposal, I’m proposing to use this method of research. One aspect that I find difficult is being a newcomer, which Stebbins addresses: “Members of the setting are unlikely to welcome or even tolerate in their midst for long anyone who threatens them. The participant-as-observer blunts this initial threat by striving to fit in as soon as possible”. In my situation it will be even more difficult to fit in because I want to look at people who critique videogames in forums, specifically at people who want to stop feminist critiques. Being a woman, it will be harder to become a participant in these discussions because I will already be an outsider, but on top of that I want to look at the environment under a microscope, so to speak, and shed light on the situation. Stebbins says that “To fit in means, among other things, to learn the values, lore, codes of behaviour, hopes and fears, costs and rewards, sense of involvement,… and the like of another social world”. I think that trying to fit in will be the most difficult part, because for some reason some gamers already feel threatened, and it is my hope that I will be able to build some rapport with some of these gamers to get a better understanding of their hopes and fears.