Sunday, November 3, 2013

That issue of rigor, and its implications

As I have been reading and viewing material about Mixed-Methods research, I see rigorous methodology emphasized over and over.  Apparently it is relatively common for people to add a half-baked qualitative component to an otherwise-normal quantitative study, call it "mixed-methods," and receive accolades.

It's obvious that this is not ideal--if what we are doing is science (that is, systematic investigation into the way things are), then we should strive for rigorous methodology, simply so that our results reflect reality as accurately as possible.  Knowing this, why do we have a problem with rigor?  Why is the warning against sloth one that needs to be addressed so vehemently?  A few ideas:


  1. Humans are naturally lazy.  Whether this is a character defect or a survival mechanism, humans tend to perform the least amount of effort to get the desired result.  Rigor in research is an effort-rich activity: in a moment of weariness, the undisciplined researcher could easily find corners to cut that make more difference than he reckons.
  2. Our motives are never as pure as we like to think.  Most people I know who get into science, do research, or what have you, like for people to think that they are doing so purely for the thrill of the chase--the rush of learning something new.  Yet, I would be very surprised if most are not at least influenced by the prestige attached to being a scientist, the money available from grants and employers, and the adulation of one's peers.  It is not necessarily bad to be motivated by these other, more base motives, but anytime a person lies to himself, he opens himself up for a fall.  It's only when we face our true motives that we can responsibly control them.
  3. The moment of crisis is a great clarifier.  This could be a subheading under point 2, in fact.  When a researcher has done years of work that his field considers important or ground-breaking, it is a great disappointment to analyze the results, only to find that the null hypothesis holds and there is no difference between groups.  Recently, in optometry, the AREDS2 study was released, which showed little-to-no decrease in the development of Age-Related Macular Degeneration when certain vitamin supplements were taken.  Many publications talked about the "disappointing" findings, seeming to express dismay that the results came back as they did.  I remember when I was doing my scientific work to get my MS, there was a scandal (I can't remember exactly who did this) in which a lab had falsified its data, in part to meet expectations about its results.  The point of these examples is that there is a tendency to want highly-anticipated results to come back in a certain direction.  I think that, as researchers, we must try to divorce ourselves from this desire, to protect ourselves from manipulating the data to suit our preferences.
