Wednesday, November 20, 2013

What do faculty think of online learning?

I've seen some resistance among faculty members to the adoption of new teaching techniques, particularly those that use Internet-based education to flip the classroom.  Some faculty simply seem resistant to change, while others have what appear, at least on first hearing, to be good reasons to oppose such techniques.

As I have studied flipped-classroom techniques, I have begun to understand how they can be used in ways and situations that are not obvious to the uninitiated.  So I became curious whether any research had been done to assess what faculty think about Internet-based education, and whether there are effective ways to persuade faculty to give it a try.

One such presentation was given in March of this year at The SoTL Commons, a conference on the Scholarship of Teaching and Learning in Savannah, Georgia; it can be found here.  It assesses faculty attitudes toward Internet-based education, actual faculty use of such techniques, and faculty judgments about which courses are and are not suitable for flipping.

The authors found that faculty tended to declare certain courses unsuitable for online education, mainly because a laboratory component or an assessment method was not conducive to that delivery modality, or because the faculty member considered student interaction too important.  Slight majorities of faculty held favorable attitudes toward flipped education.

I was surprised that course content was not reported as a major objection.  When I first considered this kind of instruction, I rejected it because of the large amount of content I had to cover.  Though I now understand how classroom flipping can effectively cover large quantities of material, at the time I considered that a major problem with flipped education.  Apparently my objection was an isolated one, if these findings can be extrapolated to the general population.

----------
Khalid, A., Stuzzman, B., Colebeck, D., Sweigert, J., Chin, C., & Daws, L. B. (2013, March). Flipped Classroom or Flipped Out? Professors Attitudes Towards Online Learning. Paper presented at the meeting of The SoTL Commons, Savannah, GA. Retrieved on November 22, 2013, from http://spsu.edu/rlc/includes/P10.pdf

Saturday, November 16, 2013

Games in Education

Like many people my age, I grew up playing board games, video games, and the like, and many formative moments of my youth revolved around video games, either in playing them or in attempting to design, create, or program them.  Learning how to program games taught me a lot about logic, math, and organization.

Strangely, for a field stereotypically considered to be populated by isolationist loners, games also taught me a lot about people, since I could sense the ideas and preconceptions of the game designers by playing their creations.

For all that I learned with video games as motivation, I also learned how horrible "educational games" can often be.  They are created with the best of intentions, often by educators who see the power that games have over their students and hope to harness that power for educational purposes.  Yet something goes wrong in the process, leaving an educational game that no student wants to play of their own volition.

The folks at Extra-Credits have a fine video blog in which they discuss issues in game design and development.  One such issue, covered in this video, is gamification in general and games in education specifically.  They hold that a great problem with educational games is the way they are created and used: as an extension of highly controlled, instructor-driven lessons.  They argue that gaming is based around play, and play ceases to be play when it is controlled and mandated.

Yet, play can be an amazing learning tool, as it tends to consume our free time, even when we are not actively playing.  When we are engaged with a game, we seek to improve at it, and may find ourselves considering its strategies throughout the day.

An effective educational game would be one that encourages students to seek and explore available knowledge, by creating a competitive or reward-based framework around which such behavior is reinforced.  Games in education work best when they trust that learners will curiously seek after knowledge if they are appropriately motivated.

Wednesday, November 6, 2013

TBL FTW?

This week's readings were reviews of common statistical techniques and thinking.  I was fortunate to take two quantitative statistics courses this past summer, so most of the reading was a nice review.  Thus, rather than discussing some related subject, I've decided to write a little about my experience composing a rough draft of my research paper on Team-Based Learning (TBL), which, incidentally, can be found here.

My colleague, Dr. John Mark Jackson, teaches Optics and Contact Lens classes at Southern College of Optometry and has been using Team-Based Learning techniques for several years now.  This stoked my curiosity about the (to me) novel technique, although not enough to make me look into it beyond a cursory examination (my own attempts at implementing elements of TBL had met with considerable resistance from students, which certainly contributed to my gun-shyness).

Having composed a paper on the subject, I can identify what I did wrong when I attempted TBL in the past.  Core principles of TBL are team dynamics, immediate feedback, and student accountability, and I was honoring none of them.  Instead, I was using a poor imitation, asking questions in class and having students report their prepared findings.  This only added to the burden on the students, who had my lectures and their own research to worry over.

I think I am ready to give TBL another go now that I am better prepared.  Research-wise, I would like to design a study worth adding to the literature, measuring my students' grades and attitudes before and after the change.  Rest assured that, this time, all my moves will be well grounded in the literature and in good study design.
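As a very rough illustration of what the before-and-after comparison might look like, here is a minimal sketch in Python of a paired analysis of exam scores.  The numbers are made-up placeholders, and a paired t-test is only one of several analyses a proper study design might call for.

# Minimal sketch (not from the original post) of a pre/post comparison
# of exam scores for the same students before and after adopting TBL.
# All scores below are hypothetical placeholders.
from scipy import stats

pre_tbl_scores = [72, 81, 65, 90, 78, 84, 70, 88]    # grades before the change
post_tbl_scores = [78, 85, 70, 92, 80, 83, 77, 91]   # grades after the change

# Paired t-test: did the mean score change for the same students?
t_stat, p_value = stats.ttest_rel(post_tbl_scores, pre_tbl_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Attitudes could be handled similarly with a survey administered before and after the change, though ordinal survey responses might call for a non-parametric test instead.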

Sunday, November 3, 2013

That issue of rigor, and its implications

As I have been reading and viewing information about mixed-methods research, I see rigorous methodology emphasized over and over.  Apparently it is relatively common for researchers to add a half-baked qualitative component to an otherwise ordinary quantitative study, call it "mixed-methods," and receive accolades.

It's obvious that this is not ideal--if what we are doing is science (that is, systematic investigation into the way things are), then we should strive for rigorous methodology, simply so that our results reflect reality as accurately as possible.  Knowing this, why do we have a problem with rigor?  Why is the warning against sloth one that needs to be repeated so vehemently?  A few ideas:


  1. Humans are naturally lazy.  Whether this is a character defect or a survival mechanism, humans tend to expend the least amount of effort needed to get the desired result.  Rigor in research is an effort-rich activity: in a moment of weariness, the undisciplined researcher can easily find corners to cut that make more difference than he reckons.
  2. Our motives are never as pure as we like to think.  Most people I know who get into science, do research, or what have you, like for people to think that they are doing so purely for the thrill of the chase--the rush of learning something new.  Yet I would be very surprised if most are not at least influenced by the prestige attached to being a scientist, the money available from grants and employers, and the adulation of one's peers.  It is not necessarily bad to be driven by these baser motives, but any time a person lies to himself, he opens himself up for a fall.  It is only when we face our true motives that we can responsibly control them.
  3. The moment of crisis is a great clarifier.  This could be a subheading under point 2, in fact.  When a researcher has done years of work that his field considers important or ground-breaking, it is a great disappointment to analyze the results only to find that the null hypothesis holds and there is no difference between groups.  Recently, in optometry, the AREDS2 study was released, which showed little to no decrease in the development of Age-Related Macular Degeneration when certain vitamin supplements were taken.  Many publications talked about the "disappointing" findings, seeming to express dismay that the results came out as they did.  I remember that when I was doing the scientific work for my MS, there was a scandal (I can't remember exactly who was involved) in which a lab had falsified its data, in part to meet expectations about its results.  The point of these examples is that there is a tendency to want highly anticipated results to come out in a certain direction.  I think that, as researchers, we must try to divorce ourselves from this desire, to protect ourselves from the temptation to manipulate the data to suit our preferences.