Compromising Ethics

When talking about research ethics, I cannot help but think about the flip side of the situation: how many research pursuits have been compromised in order to comply with research ethics guidelines? I imagine that there have been many instances where the parameters of one’s area of inquiry, research questions, and/or methodology have had to be reconceptualised to conform to what are considered acceptable practices. As such, the findings of such research may fall short of their ideal conclusions. While I am not opposed to adherence to ethical practices, and very much believe that they need to be in place to protect the best interests of participants, I am left asking whether, for certain kinds of experimental/controversial/cutting-edge research, safeguarding the parties involved has, perhaps, thwarted serious breakthroughs. On a more utilitarian/Machiavellian note, might potentially compromising a few be worthwhile if the benefits to the whole cannot otherwise be attained? This is a very scary line insofar as humanity is then viewed and treated as a means, which begs the larger question: to what end?


Mini-Research Assignment: Wikipedia, the People’s Encyclopedia

On a very basic level, Wikipedia is an online information infrastructure serving a dual functionality as an information resource from a user perspective (i.e., a collection of linked articles and images that can be consulted, referenced, and used for individual information needs), and as a collaborative interface where one can contribute information anonymously, or create a user account and join an online community. Wikipedia can be analyzed from a technical level as a repository of articles, discussion pages, coding and tags, user accounts, rules and guidelines, all with history pages of their evolution, time-stamped with each addition made. As Star (1999) suggests, one can examine the “hidden mechanisms subtending those processes […] digging to unearth the dramas inherent in system design creating, to restore narrative to what appears to be dead lists” (p. 377). As such, studying information infrastructure is a pursuit that attempts to uncover “embedded strangeness, a second-order one, that of the forgotten, the background, the frozen in place” (Star, 1999, p. 370).
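
(For the technically curious: this time-stamped substructure can be peered at directly. Below is a minimal sketch in Python–assuming the requests library and Wikipedia’s public MediaWiki API, with an arbitrary article title and User-Agent string standing in as examples–that pulls the most recent revisions of an article: the very “dead lists” Star would have us restore to narrative.)

    import requests

    # Minimal sketch: fetch the time-stamped revision history of a
    # Wikipedia article through the public MediaWiki API.
    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": "Information infrastructure",  # arbitrary example
        "rvprop": "timestamp|user|comment",
        "rvlimit": 10,
        "format": "json",
    }
    resp = requests.get(API, params=params,
                        headers={"User-Agent": "infrastructure-study/0.1"})
    resp.raise_for_status()

    # Revisions are nested under query -> pages -> <pageid>.
    for page in resp.json()["query"]["pages"].values():
        for rev in page.get("revisions", []):
            print(rev["timestamp"], rev.get("user", "(hidden)"),
                  "-", rev.get("comment", ""))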

A technique I found useful for analysing Wikipedia’s infrastructure is Star’s (1999) dimension of “transparency,” in which she states that “infrastructure is transparent to use, in the sense that it does not have to be reinvented each time or assembled for each task, but invisibly supports those tasks” (p. 381). As an online information resource, Wikipedia is consistent and straightforward to navigate from a user perspective, as one can access it like any other website. However, in order to actively contribute content, one must be aware of the policies and guidelines that govern contributions if one endeavours to have one’s edits last. One also requires a basic knowledge of wiki markup (and some HTML) in order to edit and format text. These different approaches to and uses of Wikipedia are transparent insofar as they are clear to anyone who seeks them out; however, whether they are intelligible, and to whom, is another story. The degree of transparency becomes clearer the more familiar one becomes with the infrastructure, which leads to another of Star’s (1999) dimensions: “learned as part of membership” (p. 381).

Star (1999) describes a property of “learned as part of membership” (p. 381) to be how “strangers and outsiders encounter infrastructure as a target object to be learned about, [while] [n]ew participants acquire a naturalized familiarity with its objects, as they become members” (p. 381). In Wikipedia, the more one immerses oneself within the collaborative community of knowledge production (i.e., [re]presentation), the more one becomes aware of the power structures embedded within it, structures that might not be as observable, or even of consequence, to the average user.

Another of Star’s (1999) dimensions of infrastructure is how infrastructure is “link[ed] with conventions of practice” (p. 381). As Star (1999) explains, “infrastructure both shapes and is shaped by the conventions of a community of practice” (p. 381). We can see this dimension manifest in Wikipedia’s infrastructure insofar as it, as an information resource, is indebted to the community’s collaborative efforts. My own engagement with Wikipedia has been revealing: anonymous contributions I have made have been, for the most part, heavily scrutinized and even deleted, whereas the same edits made under my user account have been left unquestioned by otherwise suspicious Wikipedians and bots. As such, the communal facet of Wikipedia is not something to be ignored, as it embodies a convention that privileges, or in extreme cases demands, some degree of ownership of, or accountability for, what is contributed. This suggests that the norm is, perhaps, to question, target, or even dismiss substantive contributions made anonymously rather than to focus efforts on ensuring the best information is made available.
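
(One could even observe this convention at scale rather than anecdotally: the same public API, in another hedged sketch below, can list recent changes made by logged-out editors–a starting point for checking how often anonymous contributions are scrutinized or reverted. Again, a sketch under stated assumptions, not a definitive method.)

    import requests

    # Minimal sketch: list recent Wikipedia edits made by logged-out
    # (anonymous/IP) editors via the public MediaWiki API.
    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcshow": "anon",                        # only logged-out edits
        "rcprop": "title|timestamp|user|comment",
        "rclimit": 20,
        "format": "json",
    }
    resp = requests.get(API, params=params,
                        headers={"User-Agent": "infrastructure-study/0.1"})
    resp.raise_for_status()

    for change in resp.json()["query"]["recentchanges"]:
        print(change["timestamp"], change["user"], change["title"])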

Beyond the elements of infrastructure discussed, I also found Star’s (1999) notion of infrastructure as relational quite useful. The very notion reminded me of dialectic in a Marxist sense, a useful tool for appreciating that different aspects of a situation take on particular meanings depending upon the relationships established between various elements. What was useful about the idea of infrastructure as relational was that it helped me to see more clearly the power relations within the Wikipedia enterprise and the resulting imbalances between contributors and the arbiters of given contributions. From the point of view of a contributor, my naïve assumptions about what counts as a useful/legitimate contribution were clarified in light of the authority brought to bear in making such decisions.


Blinding Ideological Insight

Echoing Marx, Žižek (2008) understands ideology to be the assumed and unquestioned background we take (for granted) as our starting point for any encounter/understanding with/of the world (p. 13). This blindness to our own assumptions is what may close us off to encounters with difference/otherness and reduce our awareness to binary oppositions. Echoing these thoughts, Star (1999) outlines in her ethnography the need to identify ideologies (or “master narratives”) when referring to the relations constituting infrastructure, lest we “freeze” their dynamism when “reading” its features (p. 384). To me, this is the grey area when it comes to conducting research in an area of interest or a personal community of practice–for oftentimes, our ideology (i.e., beliefs, assumptions, biases, etc.) forms the basis of understanding, which, ultimately, focuses a targeted lens on our projected area of inquiry. As such, this makes me question whether one can ever research anything in which one is either directly or indirectly involved, and to what extent; moreover, on a larger scale, can awareness of one’s ideology serve as the means by which one can venture to conduct reliable, valid research? Alternately, does one’s personal knowledge/relationship not add another qualitative layer, potentially creating a more robust dimensionality to one’s research? Are we not naive to pretend that we can attain objectivity, as we are all, in some way, shape, or form, influenced by, or beholden to, others’ or our own ideology?

Žižek, S. (2008). Violence: Six sideways reflections. New York: Picador.


Save Yourself a Headache – Tips on Survey Design

I’ll admit it: I’ve written many terrible surveys–not because my questions were poorly worded or had embedded bias (oh no, the questions themselves were great), but because of my failure to think ahead about HOW I was going to analyze the data collected. That is what made my surveys, well, a disaster. That said, I would like to draw your attention, dear reader, to some sage advice I found when I finally got fed up, bit the bullet, and did some research into why my surveys did not yield the answers I so desired. I give you Courage and Baxter’s (2005) chapter on “Surveys” in Understanding Your Users: A Practical Guide to User Requirements Methods, Tools, and Techniques. As its title suggests, this resource places particular attention on assessing user requirements and, admittedly, was a text I stumbled across in the Knowledge Media Design course that “munusami” discusses in an earlier blog post; nonetheless, its subsection, “Determine Now How You Will Analyze Your Data,” addresses exactly what I was doing wrong. As such, I will shamelessly copy-paste the passage, because I suspect that I am not alone in my survey amateurism:

Those who are new to survey methodologies have a tendency to wait until the data has been collected before they consider how it will be analyzed. An experienced usability professional will tell you that this is a big mistake. It can cost you valuable time after the data has been collected and you may even end up ignoring some of the data because you are not sure what to do with it or you do not have the time required to do the analysis that is necessary.

By thinking about the data before you distribute your survey, you can make sure that your survey contains the correct questions, and that you will be able to answer some key questions for yourself.

  • What kind of analysis do you plan to perform? Go through each question and determine what you will do with the data. Are there comparisons you intend on making? If yes, document them; this can impact the question format.
  • Are there questions you don’t know how to analyze? Perhaps you should remove them, or rethink them. Alternately, if you plan on keeping them, you will know that they require further research, or perhaps the assistance from another professional.
  • Will the analysis provide you with the answers you need? If not, perhaps you are missing questions.
  • Do you have the correct tools to analyze the data? If you plan to do data analysis beyond what a spreadsheet can normally handle (e.g., beyond means, standard deviations), you will need a statistical package like SPSS or SAS. If your company does not have access to such a tool and will need to purchase it, keep in mind that it may take time for a purchase order or requisition to go through. You don’t want to hold up your data analysis because you are waiting for your manager to give approval to purchase the software you need. In addition, if you are unfamiliar with the tool, you can spend the time you are waiting for the survey data to come in to learn how to use the tool.
  • How will the data be entered into the analysis tool? This question will help you budget your time. If it is to be entered manually, you will need to allot more time. If the data will be entered automatically via the web, you will need to write a script.

By asking yourself these questions early on, you will help ensure that the data analysis goes smoothly. This really should not take a lot of time, but by putting in the effort up front, you will know exactly what to expect when you get to [the analysis] stage of the process and you will avoid unnecessary headaches, lost data, or useless data. (pp. 333-334)
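
(To make their advice concrete, here is a minimal sketch in Python of what “determining now” might look like: a simple analysis plan that maps each survey question to its intended analysis before the survey ever goes out. The file and column names are hypothetical, and this is only one way to operationalize the advice.)

    import csv
    from statistics import mean, stdev

    # Minimal sketch: map each question to the analysis you intend to
    # run, so any question without a plan gets removed or rethought
    # before the survey launches. File/column names are hypothetical.
    ANALYSIS_PLAN = {
        "satisfaction_1to5": "descriptive",  # Likert item -> mean/sd
        "tasks_completed":   "descriptive",  # count -> mean/sd
        "open_feedback":     "thematic",     # free text -> manual coding
    }

    with open("survey_responses.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for question, plan in ANALYSIS_PLAN.items():
        if plan == "descriptive":
            values = [float(r[question]) for r in rows if r[question].strip()]
            if len(values) >= 2:  # stdev needs at least two data points
                print(f"{question}: n={len(values)}, mean={mean(values):.2f}, "
                      f"sd={stdev(values):.2f}")
        else:
            print(f"{question}: {plan} coding ({len(rows)} responses to review)")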

Now, go forth, survey away, and remember to ask yourself not only “how will the answer to this question inform our study objectives?” but also “how am I going to analyze the data collected?”


Advanced States of Play

To my chagrin (or, admittedly, strategic planning), I have yet to produce a research paper outside the comforts of the analytical, thesis-driven arguments familiar to the humanities. On a similar confessional note, this degree has forced me to use citation styles other than my beloved MLA, resulting in all kinds of anxiety sprung from having to employ the likes of APA, fearing all the while that I might be struck down by the plagiarism gods for having swapped a comma for a period, or committed some other inexcusable offense unbeknownst to me in foreign citation-style land. Alas, I digress; the point remains that, in wanting to follow the “library and information science path,” I find myself here in Research Methods, with an arm extended, inviting me to strap a pair of stilettos onto my two left feet and become a salsa-dancing social scientist. As expressed below by many of my blogging colleagues (bloggeagues?), I find this task of formulating a research question rather daunting. While I was preparing to wander out onto my balcony and call into the abyss of my abysmal research interests, “wherefore ar[en’]t thou, research question?”, I decided that it was too cold, returned to Luker, and stumbled across this passage (which I will, throwing caution and word limits to the wind, cite in its entirety):

“The truth is that you do in fact need a research question, that you should put it as a high priority on your ‘to do’ list, but you should ignore the taken-for-granted assumption that it comes first. Actually, the research question often reveals itself at the end, or close to the end, of the research (this is, after all, a voyage of discovery)—but you must never forget that you need it, or you will fall into the Damnation of the Ten Thousand Index Cards” (p. 61).

Exhale.

To do:

  1. Formulate persuasive research question
  2. Read
  3. Read
  4. Read
  5. Michelangelo is an alien


Persuasively Distributing Logic

In the first three chapters of her work, Salsa Dancing Through the Social Sciences, Luker has brought to the fore a pivotal question that has lurked in the back of my mind throughout my academic career: namely, the investment in the notion that things can be validated, proved, and/or argued as being True. Stemming from postmodernist theory, as we have discussed ad nauseam in ROCM, truth is a relative concept/construction that is reliant on, and shaped by, a[n individual] frame of reference. Luker queries the foundations upon which truth claims are built by investigating “canonical social scientists’” methodologies of predicting and quantifying the world (p. 18). She alternatively proposes that we seek out the “sweet spot”–one which necessitates skillful and nuanced footwork, dancing between “practice” and “metaphor” (p. 1), the “macro and the micro, the quantitative and the qualitative, the logic of discovery and the logic of verification” (p. 39)–suggesting that their mutual deployment can get you “as close to the ‘truth’ as possible” (p. 6).

Coming from a humanities background, I find Luker’s “sweet spot” theoretically seductive, and am curious as to whether those from other academic disciplines, such as the sciences, are not so easily wooed.