To name, or not to name…

I can’t help but feel the same way as mmbruno when thinking about how research pursuits are compromised in order to comply with research ethics guidelines, especially with regard to my research with children/young people. Last week, Dean Sharpe spoke about the allowances in the research protocol for justifying why you’d want to publish a participant’s name in your thesis/write-up. I’ve always wondered about this in terms of how children’s right to recognition for their participation might clash with our responsibilities as researchers to protect their identities. For my specific research project, revealing the identity of the participants wouldn’t really be that useful, but I could see a situation where, if I were using more participatory methods, like having the young people do and publish journalism online, I might want to give them individual credit for their work. In that situation I might come up against some conflict about whether or not that was ethically “OK”. This is certainly a topic for further exploration on my part!


Week 12 — Mini Online Research Assignment — Getting the “Gist” of daily emotion.

Prompt 1.

The social media platform “Grace in Small Things” (GiST) is a great example of an online environment tailored to emotional information seeking because it asks users to share emotional experiences from their daily/current life with others. It would be a particularly interesting site to use for this case study because it stands in direct opposition to large-scale commercialism, asking users to share the simple moments or experiences that make them grateful. However, it is interesting to note that some users, including myself, often end up finding daily grace in “small things” that are consumer products, and posts about these items may inspire public dialogue with other users about the products that can verge on promotional. (e.g. “Today I am grateful for waking up to fresh coffee straight from the Bodum press.”)

What it is: GiST is like Facebook and WordPress rolled into one. Its express purpose is to inspire gratefulness and positive thinking in an online community by asking users to share the good little things in life. Many users (as I have done) post a daily list of the things they are grateful for or that have impacted them that day. There is also an app now being developed for iOS and Android to complement the site.

Who it is tailored to: GiST has almost 2,000 users and is open to everyone, but I have never personally encountered a male user (although they do exist). The majority of users seem to be middle-aged, (upper- and lower-) middle-class women, but there are also younger women like myself who use it. I would assume that the site’s main user group is very similar to the major user group of Pinterest.

What emotional relations might exist: As a user, I have experienced only positive, supportive emotional relations on GiST. This also makes it an interesting case study because it goes against the notion, put forward by many observers of Internet culture, that the anonymity of the Internet fosters inherently negative and aggressive social interactions. Comments on the site are usually posted in the form of encouragement, as in “Good for you for making time for yourself today!” etc.

Why it would make an ideal case study for the larger project: see post introduction. Given the uniquely positive nature of this site, it would probably work best when used in conjunction with other case studies.

Potential methodological challenges to be addressed: because there are not many users of GiST compared to other online communities like Facebook, and because the nature of posts on GiST is usually so personal, it would perhaps be more difficult to protect participants’ anonymity upon publication, especially from other GiST users.

–Averie


Constructing “the field”

I’ve always looked at the concept of “the field” in research somewhat suspiciously. It is often propped up as this great treasure trove where all your answers lie, just waiting to be discovered. In reality, in my own journalistic research, I’ve constantly come up against the fact that we are always already in the field, or at least implicated in it, and that our positionality as such often makes “the field” a space that raises more questions than it provides answers.

So imagine my delight when I read the quote from Amit at the beginning of the Hine article this week, which proposed that,

“in a world of infinite interconnections and overlapping contexts, the ethnographic field cannot simply exist, awaiting discovery. It has to be laboriously constructed.” (2000)

I was really interested in Hine’s approach throughout her piece (although I’ll admit I didn’t feel I really grasped her subject matter when she described her own project at the end). In terms of doing Internet research, she seems to argue that since “the field” is one of infinite possibilities for exploration, it’s really up to the ethnographer to strike a balance between limiting his/her scope and staying flexible and open enough to allow it to be broadened when necessary. It seems like an easy enough concept, but I felt that Hine really showed how complex a struggle it is to stay that “open” and not fall into some kind of research “abyss”.

Though my research isn’t really Internet-based (as of yet), as Hine says, the ideas she explores in this paper can really be applied to any type of qualitative research. They’re comforting to me, as I often get wrapped up in whether my “bias” is intervening too much, or whether I’m limiting my scenario too much — am I being representative in my research design, etc.? I feel like Hine is saying that while it’s good to keep this in mind, we also have to remember that every space of inquiry is essentially a construction, and that it’s our ability to stray within and around the borders of that space that is really the key issue for comprehensive research.

–Averie


Bingo!

When I got my SSHRC proposal back a few weeks ago, Professor Grimes wrote SEE YIN RE CASE STUDIES in big letters at the top. Apparently, what my research involved was actually very similar to the case-comparison approach that Yin details in our reading for today; I’d just been calling it something else. (Luckily, I got around to reading the paper and was able to edit my program of study before the OISE departmental deadline — which was today!!!) I am now super curious about Yin’s book, which was mentioned in lecture today. My plan for my thesis research is to engage with young people in three Toronto classrooms — taking my “grandiose idea” about how young people define “news” in the digital age and making it into a manageable project. Like many others in the class, I was worried about whether my work would be criticized for “simply telling stories” — as there’s no possible way that my data will be generalizable to all Canadian youth. So I think Yin’s definitions and suggestions will really help me build up my proposal, showing that, though not generalizable, my results are still relevant and important.

I think the thing I liked most in Yin’s article for today was his suggestion that case studies could be written up in an open-ended Q&A format rather than like a term paper (60). I think this could be a really interesting and engaging way to present my research, and it could help me resist my constant temptation to “tell a proper story” instead of presenting “just the facts, ma’am”.

Basically, I feel like shouting “BINGO!” because this lecture has introduced me to a source that will probably be vital to constructing my longer research proposal. 🙂

–Averie


Content Analysis Qualms

Though I’m used to doing close textual analysis of various news media sources, I’ve never done anything close to coding or statistical analysis. I didn’t have a clue just how scientific content analysis can get! However, after this week’s readings, I’m not sure I really believe that content analysis needs to be all that “scientific” to be valid or sound research. As Kracauer puts it, “As currently practiced, quantitative analysis is more ‘impressionistic’ than its champions are inclined to admit” (636).

After reading Kracauer’s piece, I’m not overly convinced that we should even be looking at qualitative and quantitative content analysis as two different approaches. They seem to me to be more or less two points on the same spectrum.

Take the content analysis piece that I’m reviewing for our upcoming assignment, for example (the one on Cuban-American news coverage). It uses frequencies and analysis involving this little guy: χ² (something to do with a chi-square? If ANYONE knows ANYTHING about statistics, I could really use some help with that stuff). However, the piece also discusses how these frequencies might be interpreted, and what the varying coverage of Cuba and US-Cuba politics might mean in terms of the attitudes of the different papers’ readers. So is this a quantitative or a qualitative content analysis?
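For anyone else as lost as I am with the statistics, here is a minimal sketch of what a chi-square test of independence does with frequency counts. The newspaper names, coverage categories, and counts below are all invented placeholders for illustration — they are not the figures from the actual Cuban-American coverage study.

```python
# Toy chi-square test of independence (made-up counts, not the study's data).
from scipy.stats import chi2_contingency

# Rows: two hypothetical papers; columns: how often each used three (invented)
# framings in its Cuba coverage.
observed = [
    [34, 12, 9],   # "Paper A": politics, economy, human interest
    [18, 25, 14],  # "Paper B"
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A small p-value suggests the papers' framing frequencies differ more than
# chance alone would explain -- the kind of claim the quantitative side of the
# article makes before the qualitative interpretation begins.
```

(At least, that’s my rough understanding — corrections welcome!)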

Well, Kracauer does give a whole section of exploration to qualitative studies that rely on frequency counts (638), so maybe this is one of those?

My point is, I’m not 100% sure why it matters which camp a study falls into, as long as it’s well done. I’m guessing it’s probably best to let the research question decide whether a study should lean more towards the qualitative or the quantitative end of the spectrum.

I would love for someone who actually knows what they’re talking about to set me straight on all this.

Your thoughts?

–Averie


What Facebook reveals about our relationships — new study led by UofT researchers

Hi team,

Not sure if we were supposed to share this week, but I thought I’d post a link that I stumbled across.

A new study led by UofT researchers tried to determine what people’s display pictures say about how happy they are in their relationships.

The news item I read about it said the researchers used the following methods:

In the first study, 115 people were asked about their shared Facebook photos in comparison to how they rated their satisfaction with personal relationships. The second study examined levels of relationship satisfaction among 148 people and tracked photos posted over the course of a year. The final study involved 108 couples keeping daily diaries which were compared with their online postings about their relationship.

In the article’s abstract, the conclusions were as follows:

we found that individuals who posted dyadic profile pictures on Facebook reported feeling more satisfied with their relationships and closer to their partners than individuals who did not. We also found that on days when people felt more satisfied in their relationship, they were more likely to share relationship-relevant information on Facebook.

After just reading these conclusions, I asked myself, “Isn’t that kind of obvious?” I was missing the answer to the now-infamous question, “So what?”

I think I gained some insight once I read the full report. The researchers describe how “it is possible that people who are less satisfied in their relationships would post dyadic profile pictures as a self-presentation strategy to appear happier in their relationships to other people,” but note that their research seems to contradict that notion.

However, I’m still a little suspicious of the methods used in the study. The second part, where they measured participants’ initial relationship satisfaction and closeness and then studied and coded their profile pictures three times over a one-year period, seems especially problematic to me. First, I felt this kind of assumed that the couples’ levels of happiness would be constant, when in reality these things change daily for most people. In this part of the study the researchers also tried to control for participants’ personalities and personal happiness, using equations and inventories to make sure their relationship satisfaction wasn’t just a result of their being “happy people” in general.
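As far as I can tell, “controlling for” something works roughly like the toy sketch below — regressing the outcome on the variable of interest alongside the covariate you want to hold constant. The variable names and numbers are invented for illustration and are not the study’s actual model or data.

```python
# Toy illustration of controlling for a covariate (invented data, not the study's).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 148
happiness = rng.normal(size=n)                       # a general "happy person" trait
dyadic_pics = (happiness + rng.normal(size=n) > 0)   # whether someone posts couple photos
satisfaction = 0.5 * happiness + 0.3 * dyadic_pics + rng.normal(size=n)

# Regress satisfaction on the profile-picture variable AND the happiness trait.
# The coefficient on dyadic_pics then estimates its association with satisfaction
# "over and above" general happiness.
X = sm.add_constant(np.column_stack([dyadic_pics.astype(float), happiness]))
print(sm.OLS(satisfaction, X).fit().summary())
```

(Again, a rough sketch from a statistics novice — happy to be corrected.)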

I guess with my lack of experience, I just don’t know how the methods in this section could really work. I fail to see how they can provide such a straightforward, “clean” picture of such an emotionally fraught and “messy” topic.

Let me know what you think if you get a chance to look over the study!

See you in class,

Averie


Taking the “Lead”

Seems I forgot the itty-bitty detail of actually posting the blog entry I drafted late last week, but at least I got to discuss my ideas in relation to some of your blog posts in class today.

Then, wouldn’t you know it, just as I was about to publish my post tonight with a few highlights of my thoughts from this week, I accidentally hit “back” and lost the post in its entirety.

Here’s my third stab.

As I read Luker’s chapter this week, I stumbled upon a passage that almost made me break out in a cold sweat. In her discussion of interviews as an ethnographic research method, Luker says she’s okay with using “leading” questions as long as you’re aware of what you’re doing and you’ve built up enough rapport with your interviewee that they’d feel comfortable telling you if your question was “way off.”

After four years of journalism school, I’ve come to see the “leading question” as a BIG “no-no.” We were taught that leading questions were evidence of unethical journalism and sloppy reporting — the kind of technique no self-respecting journalist would need to employ. This was linked to the ideals of “objectivity” and leaving your bias out of the interview that we constantly discussed in my program.

Though I’ve never really believed in “objectivity” and feel that bias is something to be embraced and pointed out in research rather than swept under the rug, I just could not get on board with Luker on this one.

Is it OK to ask a leading question as long as you’re aware that it’s merely a technique to get at the interviewee’s true feelings? Does fostering a respectful, comfortable relationship with your interviewee really mean they’ll respond to a leading question in the way you anticipate?

Maybe, but it’s definitely going to take some time before I try this tool on for size.

What do you guys think? Would you feel comfortable employing a leading question in a research interview? Why or why not?

I’d love to hear your thoughts!

–Averie

P.S. Had a wonderful opportunity last week to hear the American researcher and professor Dr. Patti Lather give a talk on the transformations that have taken place in qualitative research in recent years and how to situate one’s research in “the afterward.” I encourage you guys to check out her website. I’m currently trying to track down an article of hers she mentioned on “the validity of tears”, which connects to our discussions about how “close” researchers need to be to their subjects for others to view their research as valid.