Wednesday, November 20, 2013

What do faculty think of online learning?

I've seen some resistance among faculty members to the implementation of new teaching techniques, particularly those that use Internet-based education to flip the classroom.  Some faculty seem simply averse to change, while others have what appear, at least on first presentation, to be good reasons to oppose such techniques.

As I have studied flipped-classroom techniques, I have begun to understand how they can be used in ways and situations that are not obvious to the uninitiated.  So I became curious whether any research had been done to assess what faculty think about Internet-based education, and whether there are techniques that can effectively convince faculty to give it a try.

This presentation was given in March of this year at The SoTL Commons, a conference on the Scholarship of Teaching and Learning in Savannah, Georgia.  It can be found here.  It assesses faculty attitudes toward Internet-based education, actual faculty use of such techniques, and faculty perceptions of which courses are suitable or unsuitable for flipping.

The presenters found that faculty tended to declare certain courses unsuitable for online education, mainly because a laboratory component or an assessment was not conducive to that delivery modality, or because the faculty member considered student interaction too important to sacrifice.  Slight majorities of faculty held favorable attitudes toward flipped education.

I was surprised that course content was not reported as a major objection.  When I first considered such instruction, I rejected it because of the large amount of content I had to cover.  Though I now understand how classroom flipping can effectively cover large quantities of material, at the time I considered that a major problem with flipped education.  Apparently my objection was an isolated one, if these findings can be extrapolated to the general population.

----------
Khalid, A., Stuzzman, B., Colebeck, D., Sweigert, J., Chin, C., & Daws, L. B. (2013, March). Flipped Classroom or Flipped Out? Professors' Attitudes Towards Online Learning. Paper presented at the meeting of The SoTL Commons, Savannah, GA. Retrieved November 22, 2013, from http://spsu.edu/rlc/includes/P10.pdf

Saturday, November 16, 2013

Games in Education

Like many of my age, I grew up playing board games, video games, and the like, and many formative moments of my youth centered on video games, whether playing them or attempting to design, create, or program them.  Learning how to program games taught me a lot about logic, math, and organization.

Strangely, for a field stereotypically considered to be populated with isolated loners, games taught me a lot about people, as I could sense the ideas and preconceptions of the designers by using their creations.

For all that I learned with video games as a motivation, I also learned how horrible "educational games" can often be.  They are created with the best of intentions, often by educators who see the power that games hold over their students and hope to harness that power for educational purposes.  Yet something goes wrong in the process, leaving an educational game that no student wants to play voluntarily.

The folks at Extra Credits have a fine video blog in which they discuss issues in game design and development.  One of these issues, gamification in general and games in education specifically, is covered in this video.  They hold that a great problem with educational games is the way they are created and used: as an adjunct to highly controlled, instructor-driven lessons.  They argue that gaming is based around play, and play ceases to be play when it is controlled and mandated.

Yet play can be an amazing learning tool, as it tends to occupy our free time, and our thoughts even when we are not actively playing.  When we are engaged with a game, we seek to improve at it, and may find ourselves considering its strategies throughout the day.

An effective educational game would be one that encourages students to seek out and explore available knowledge by creating a competitive or reward-based framework that reinforces such behavior.  Games in education work best when they trust that learners, appropriately motivated, will curiously seek after knowledge.
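To make the idea concrete for myself, here is a minimal sketch of such a reward-based framework, in Python.  It is only an illustration under my own assumptions: the badge names, thresholds, and topics are all invented, not drawn from any real system.

```python
# A toy reward framework that reinforces exploration of optional
# material.  All badge names and thresholds are invented.
class ExplorationTracker:
    BADGES = {3: "Curious", 7: "Explorer", 12: "Scholar"}

    def __init__(self):
        self.topics_explored = set()

    def explore(self, topic):
        """Credit a student for visiting a piece of optional
        material, and return every badge earned so far."""
        self.topics_explored.add(topic)
        return [name for threshold, name in self.BADGES.items()
                if len(self.topics_explored) >= threshold]

tracker = ExplorationTracker()
tracker.explore("corneal anatomy")
tracker.explore("tear film chemistry")
print(tracker.explore("accommodation"))  # ['Curious']
```

The design point worth noting is that the reward is tied to breadth of exploration, not to completing a mandated sequence, which is exactly the distinction the Extra Credits video draws.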

Wednesday, November 6, 2013

TBL FTW?

This week's readings were reviews of common statistical techniques and thinking.  I was fortunate to take two quantitative statistics courses this past summer, so most of the reading was a nice review.  Thus, rather than discussing some related subject, I've decided to write a little about my experience composing a rough draft of my research paper on Team-Based Learning (TBL), which, incidentally, can be found here.

My colleague, Dr. John Mark Jackson, teaches Optics and Contact Lens classes at Southern College of Optometry, and has been using Team-Based Learning techniques for several years now.  This stoked my curiosity about the (to me) novel technique, though not enough to make me look into it beyond a cursory examination (my own attempts at implementing elements of TBL had met with considerable resistance from the students, which certainly contributed to my gun-shyness).

Having composed a paper on the subject, I can identify what I did wrong when I attempted TBL in the past.  Core principles of TBL are team dynamics, immediate feedback, and student accountability, and I was honoring none of them.  Instead, I was using a poor imitation, asking classroom questions and having students report their prepared findings.  This only added to the burden of the students, who had my lectures and their own research to worry over.

I think I am ready to give TBL another go, now that I am better prepared.  Research-wise, I would like to design a study for the literature, measuring my students' grades and attitudes before and after the change.  Rest assured that, this time, all my moves will be well grounded in the literature and good study design.
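As a first thought on the analysis, here is a minimal sketch in Python of the pre/post grade comparison, assuming the same students are measured twice; comparing separate cohorts would call for an independent-samples test instead.  The grades are invented placeholders, not real data, and a real study would also need a power analysis and a validated attitude instrument.

```python
# A minimal sketch of a paired pre/post grade comparison.
# All numbers are hypothetical placeholders.
from scipy import stats

# Final-exam percentages for the same ten students, before and
# after a switch to TBL (paired observations).
pre_tbl  = [72, 65, 80, 74, 68, 77, 71, 83, 69, 75]
post_tbl = [78, 70, 82, 79, 73, 80, 77, 85, 72, 81]

# Each student appears in both lists, so a paired t-test is the
# appropriate comparison, not an independent-samples test.
t_stat, p_value = stats.ttest_rel(pre_tbl, post_tbl)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```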

Sunday, November 3, 2013

That issue of rigor, and its implications

As I have been reading and viewing material on mixed-methods research, I see rigorous methodology emphasized over and over.  Apparently it is relatively common for people to add a half-baked qualitative component to an otherwise ordinary quantitative study, call it "mixed-methods," and receive accolades.

It's obvious that this is not ideal--if what we are doing is science (that is, systematic investigation into the way things are), then we should strive for rigorous methodology, simply so that our results reflect reality as accurately as possible.  Knowing this, why do we have a problem with rigor?  Why must the warning against sloth be delivered so vehemently?  A few ideas:


  1. Humans are naturally lazy.  Whether this is a character defect or a survival mechanism, we tend to expend the least effort that will get the desired result.  Rigor in research is an effort-rich activity: in a moment of weariness, the undisciplined researcher can easily find corners to cut that make more difference than he reckons.
  2. Our motives are never as pure as we like to think.  Most people I know who get into science, do research, or what have you, like for people to think that they do so purely for the thrill of the chase--the rush of learning something new.  Yet I would be very surprised if most were not at least influenced by the prestige attached to being a scientist, the money available from grants and employers, and the adulation of one's peers.  It is not necessarily bad to be moved by these baser motives, but any time a person lies to himself, he opens himself up for a fall.  It's only when we face our true motives that we can responsibly control them.
  3. The moment of crisis is a great clarifier.  This could, in fact, be a subheading under point 2.  When a researcher has done years of work that his field considers important or ground-breaking, it is a great disappointment to analyze the results only to find that the null hypothesis holds and there is no difference between groups.  Recently, in optometry, the AREDS2 study was released, showing little to no decrease in the development of Age-Related Macular Degeneration when certain vitamin supplements were taken.  Many publications talked about the "disappointing" findings, seeming to express dismay that the results came out as they did.  And I remember a scandal from when I was doing the scientific work for my MS (I can't remember exactly who was involved), in which a lab had falsified its data, in part to meet the expected results.  The point of these examples is that there is a tendency to want highly-anticipated results to come out in a certain direction.  As researchers, we must try to divorce ourselves from this desire, to protect ourselves from manipulating the data to suit our preferences.

Monday, October 28, 2013

A trip to Seattle, and the American Academy of Optometry meeting

So, I was out of pocket a bit this last week, as I traveled to Seattle for the American Academy of Optometry meeting: for continuing education, industry meetings, and even to present a little research.  I had quite a few opportunities to review new research in optometric education, so my thoughts on that follow:

-There are some interesting things going on in higher education.  The Optometric Education section held a symposium on Blended Learning, in which four presenters discussed the general theory behind blending traditional lecture with online lectures, team-based activities, and assignments.  Dr. Linda Casser, who introduced the symposium, held that blended learning is a more innately constructivist method of teaching than traditional lecture or laboratory, which are predominantly behaviorist.  The other panelists described their experiences implementing Blended Learning in their classes.  One professor merely recorded his lectures and had his students watch them online.  Another recorded some lectures, gave others in person, and engaged the class with assignments in other situations.  All in all, the symposium did a good job of presenting how Blended Learning can be used in optometric education, and what its benefits are.

-A few of the papers presented studied the effect of gifts from pharmaceutical representatives on students' prescribing practices and attitudes.  They showed that gifts (both pure gifts and samples) do affect student and faculty behavior, and that many students feel they do not receive enough education on how to deal with representatives and how to prescribe rationally.

-Interestingly, a few of the paper presentations contained statements that evidenced study problems.  One presenter said that his findings "did not appear to have a statistically significant difference," though he apparently had not run the statistics to check.  Another admitted to a design problem: she wanted to measure behavior, but used a survey instrument inappropriate for that purpose.
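For what it's worth, the check that first presenter skipped is not onerous.  Here is a sketch in Python, with invented scores standing in for his two groups, of what actually running the statistics might look like:

```python
# A sketch of the skipped significance check: an independent-
# samples t-test on two groups' scores.  All numbers are invented.
from scipy import stats

group_a = [81, 74, 69, 88, 77, 72, 80, 75]
group_b = [79, 73, 70, 84, 76, 74, 78, 77]

# Welch's version (equal_var=False) avoids assuming the two
# groups share the same variance.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

"Did not appear significant" should mean a computed p-value at or above one's chosen alpha, not an eyeballed impression.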


Sunday, October 20, 2013

Historical Research

A quick entry today, as I find myself in the middle of a (working) vacation in the Pacific Northwest!  I reviewed Leedy chapter 7, on historical research, and found it an interesting subject.  For one who has seen only the resulting publications of such research projects, it is easy to assume that the researchers merely read a lot, then come up with a theory that matches their understanding.  After all, this is how we reach most of our conclusions naturally: with snap judgments, based on a multitude of factors.  That mechanism is well suited to help us survive in a dangerous world, but it may lead to misconceptions that are inexcusable in research.

The regimented nature of historical research is, therefore, unexpected and fascinating to me.  I like the idea of theorizing about and understanding the true nature of events and ideas through analysis of the available historical data.  Perhaps by taking a closer look at where our profession has been, we can avoid its mistakes and repeat its successes.

Monday, October 14, 2013

Stratification: qualitative research upon qualitative research

After reviewing my textbook chapters and the suggested resources in this unit, I was left with a newfound understanding of qualitative research and its specific characteristics.  Though the textbook chapters gave many examples of the different types of qualitative research, I wanted to test my understanding by reviewing a non-annotated study.  A review of online qualitative research links brought me here.

This study looks at two published instances in which Grounded Theory was used to analyze interviews conducted to evaluate the usefulness of instructional systems development tools.  That is, the study is a multiple case report of Grounded Theory studies.  It is interesting because it presents two different methods of qualitative research together, with the first evaluating the second.

The authors of the multiple case study spent some time defining the purposes of the two Grounded Theory studies, but since their research interest lay predominantly in the use of Grounded Theory for interview analysis, they give the studies themselves no more than a cursory review.  They do, however, devote a good amount of space to the similarities and differences between the two cases.  Grounded Theory was used to evaluate group-based interviews in the first study and individual interviews in the second.  In the first study, the purpose of the research was hidden from the participants; in the second, it was freely revealed.  The interviews were also analyzed differently: the first study used keyword analysis and a multiple-coding system, while the second qualitatively analyzed every sentence of the interviews and coded the results with a single-category system.
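To fix the distinction in my own mind, here is a toy illustration in Python of keyword analysis with multiple coding, the approach of the first study.  The codebook, keywords, and sentence are entirely invented, and real Grounded Theory coding is, of course, far more iterative and interpretive than a keyword match.

```python
# A toy keyword-coding pass over interview text.  The codebook is
# invented; one sentence may receive multiple codes.
CODEBOOK = {
    "usability": ["easy", "intuitive", "confusing"],
    "efficiency": ["fast", "slow", "time"],
}

def code_sentence(sentence):
    """Return every category whose keywords appear in the sentence
    (a multiple-coding scheme, unlike single-category coding)."""
    words = sentence.lower().split()
    return [category for category, keywords in CODEBOOK.items()
            if any(k in words for k in keywords)]

print(code_sentence("The tool was easy to use but slow to load"))
# ['usability', 'efficiency'] -- one sentence, two codes
```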

The researchers found that, though the methodologies of the two studies differed as described above, the use of Grounded Theory to analyze interviews was appropriate in both cases, supporting its use in analyzing other types of interviews.

This was a fascinating read: first, because it helped me understand Grounded Theory better; second, because it showed me how qualitative methods can be used to analyze other qualitative studies; and third, because it illustrated a specific use of Grounded Theory (i.e., interview analysis).  I look forward to learning more about these methods in the future.

---------------------------
Reference
Calloway, L. J., & Knapp, C. A. (1995). Using Grounded Theory to Interpret Interviews. Paper
     presented at the First AIS Americas Conference on Information Systems, Pittsburgh, PA. Retrieved
     from http://csis.pace.edu/~knapp/AIS95.htm

Monday, October 7, 2013

The Research Proposal

Chapter 5 of Leedy and Ormrod (2013) was familiar to me in many ways.  It reminded me of the process of writing my Master's thesis: the give-and-take with my adviser, the constant revisions, sending what I thought was a perfect draft for review and receiving it back with red ink dripping from the pages.

For a moment, it made me question why I've embarked on this journey again.  But the interest of the material, the desire to know more, the drive to improve both personally and professionally: these are the forces that push us through the process.

I have a tendency toward robust, complex prose, which is something I will have to fight as I compose research proposals.  As the chapter illustrated, an unclear or unfocused first sentence can predispose a reviewer against your proposal; I know this is true, because I have been that reviewer.  When reading a student's paper, I would often see a cliché or a waste of print as an introduction, and my initial thoughts were exactly as the chapter described: "Why are they wasting my time?", "Couldn't they have done without this?" and so on.

The emphasis on clear, flowing organization and on formatting detail was good to see.  Reading a confusing or unattractive paper can make a reviewer balk as well.  Ideally, we would strive to produce a research proposal that the reviewer reads easily, without distractions of form, grammar, or spelling, with all ideas supported and logically presented, so that the reader understands our message exactly as we intended it.

Of course, such perfect communication is unattainable, but the closer we can get to this ideal, the more honest and clear the research-proposal process can be.  Though I may desire, simply out of ego, for my research to be accepted for publication or support, the "better angels of my nature" truly wish that the best proposals get grants, that the most important research gets completed, and that the poorly thought-out or inconsequential gets filtered out.  It is incumbent upon us, as researchers, to strive to be in that first group: to write our research proposals without guile, truly attempting to represent our work accurately, acting in the confidence that we are backing an admirable process.

-----------------------
Works Cited
Leedy, P. D., & Ormrod, J. E. (2013). Practical Research: Planning and Design (10th ed.). Boston: Pearson.

Monday, September 30, 2013

Thoughts on research methodology

No article review today.  As directed, I shall instead discuss how my assigned reading this week led me to some realizations about research methodology.  A few points:

  • Though I was previously unaware of the proper names for them, I see now that I have been evaluating studies and claims against the absolutes of reliability and validity for years.  I have made a habit (almost a fetish) of looking for the story behind the story: the hidden flaw in the reasoning, or in the methodology, that makes a conclusion ill-founded.  Reading these chapters, however, makes me realize that no study instrument can hope to be perfectly valid and reliable.  All we can strive for is minimization of the error, since we can only view absolute truth indirectly (a sketch of one common reliability estimate follows this list).
  • What mechanisms are in place in qualitative methodology to prevent reading one's own biases into the theory?  One could argue that the developed theory will be quantitatively evaluated in future studies, and thus that biases will be removed from it in time, but the lag between those two events seems dangerous to me.  As an observer of culture and politics, I have seen first-hand the problems that arise when a proposed theory takes hold of the public consciousness, is accepted as fact, and is acted upon before its assertions can be quantitatively tested.  Money and effort are then spent on a seemingly accurate theory that rests entirely on the interpretation of one learned researcher.  Surely, then, it behooves the wise and honest researcher to limit his biases on the front end, creating the most objective theory possible.  So what is done to assure this?
  • While reading the chapter in Leedy and Ormrod (2013), I was struck by an unusual association: quantitative research : qualitative research :: traditional journalism : "New" journalism (à la Wolfe).  Perhaps not very insightful, but just a thought.
  • My personal activity (as stated above) has been qualitative in most informal matters.  Yet when it comes to research, I always look for quantitative data (as befits my scientific training).  Would I be better off if my personal activities became more regimented, and my professional ones more free-ranging?
  • Finally, my most robust response to these chapters was to the stringency that goes into proper qualitative research.  I had always held a poor view of such research, thinking it mere "soft science."  I begin to understand, however, that, done well, it can teach us much about the world around us.
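As promised above, here is a sketch of one common reliability estimate, Cronbach's alpha, which quantifies the internal consistency of a multi-item instrument.  The survey scores below are invented for illustration; the point is only that reliability is something we estimate and maximize, never attain perfectly.

```python
# Cronbach's alpha on invented survey data:
# rows = respondents, columns = instrument items.
import numpy as np

scores = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # per-item variances
total_var = scores.sum(axis=1).var(ddof=1)   # variance of totals
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # 1.0 would be perfect
```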
----------------------
Work Cited
Leedy, P. D., & Ormrod, J. E. (2013). Practical Research: Planning and Design (10th ed.). Boston: Pearson.

Work Consulted
Lunenburg, F. C., & Irby, B. J. (2008). Writing a Successful Thesis or Dissertation. Thousand Oaks, CA: Corwin Press.

Monday, September 23, 2013

Learner centred approaches in medical education

Last post, I reviewed an article that described the use of new technology in the anatomy laboratory, and the mixed results that were obtained.  That article referenced the adult learning theories of Malcolm Knowles.  I looked into these and found that he posited his theories in a seminal book, "The Adult Learner: A Neglected Species."

I have put this on my "must-read" list.  However, searching for this source led me to a nice review of learner-centered educational techniques that are used in medical education, published in the British Medical Journal in 1999 by Spencer and Jordan.

This article speaks directly to the challenge I find myself in: the traditional method of course presentation delivers large quantities of information in an abbreviated, rapid manner, leaving students in a "no-win" situation where they must learn shallowly in order to survive.  In the past, I personally resisted attempts by my administration and colleagues to move away from such techniques, because my own experiences as a student in student-centered environments had been fraught with confusion and uncertainty.  Realizing that my personal experiences are not authoritative, however, I am attempting to bypass my biases and give student-centered learning another look.

The authors mention--relatively early on--that learner-centered approaches to education are often poorly implemented and may even take the form of a mere "veneer," which does nothing but give a new-looking facade to the same old content.  This gives me a flicker of hope that there is more to learner-centered education than the half-hearted course design I had seen in the past.

Self-directed learning, which leads to deeper learning experiences, appears to be the key.  The first example presented is problem-based learning, the technique I have had so little success with in the past.  As the authors present it, it seems a fine idea: give the students problems to solve, and guide the solving process so that they better understand the core material it is all based upon.

It seems a nice system.  However, the list of identified negatives (e.g., inefficiency, difficulty of implementation in large courses, and poor learning of the basic sciences, among others) fills me with concern.  The authors conclude the section by mentioning that there is as yet no evidence to support the assertion that problem-based education makes better clinicians.

In light of these limitations, the authors present a mixed approach: Guided Discovery Learning.  Here, the "best of both worlds" is used: a traditional knowledge-transfer system assures that all learners achieve a common base of information, while study aids designed with learner-centered approaches help students immediately use what they have learned, cementing the newly-acquired knowledge and helping them see its clinical relevance.

Here is a method that seems to strive for depth of understanding while respecting the realities of standardized testing and of the basic science core requirements that must be met.  I recognize that, without consciously doing so, I have been using a mixed approach in my own classes.  However, they have been mostly lecture courses, so I could perhaps add more study aids to them, particularly when trying to express a clinical correlate.

APA Citation:
Spencer, J. A., & Jordan, R. K. (1999). Learner centred approaches in medical education. BMJ, 318(7193), 1280-1283.

Monday, September 9, 2013

Innovative Technology in Anatomy Lab

I found this article in the Summer 2013 edition of Optometric Education (pp. 100-105).  The author had implemented a variety of iPad anatomy apps in her anatomy laboratory for first-year optometry students, while withholding them from a single lab group to act as a control.  The results showed no statistically significant difference between the control group and the lab groups given iPads with apps to use in lab.  She also found that students liked the iPads, but preferred increased instructor involvement to the programs.

This was fascinating to me, because I also teach an anatomy laboratory to first-year students at an optometry school.  Being a traditionalist by temperament, I am hesitant to embrace new technology in the lab unless I can see its benefit.  After my predecessor removed microscopes from the lab, for example, I returned them the following year, to help my students grow accustomed to the discovery and control involved in microscopy (important skills in an eye examination).

That the students given the iPad apps offered feedback implying they would prefer a more structured lab was very interesting to me--in my experience, students often want to be led by the hand.  Part of the point of the optometric curriculum, however, must be to train students to venture out on their own.  Thus, I don't find the students' distaste for their digitally-based independence to be a major concern on its own.

More concerning to me is the lack of improved understanding, as measured by post-lab quizzes.  The great promise I keep hearing is that technology will revolutionize the classroom and lab.  However, the apps the students were using appeared to rely on rather traditional teaching methods (e.g., rote memorization, flashcards, and manipulation of models, albeit virtual ones), merely repackaging them in a digital context (see Table 1 of the article).  Students also complained that the time spent learning the new technology took away from their lab time.

While its premise is interesting, this study only seems to reinforce that what is needed to improve instruction is not a digital version of existing resources, but newly-designed resources that take advantage of the unique opportunities mobile technology brings.  Purchasing new tech may not be very useful without improved instructional methods.

Going forward, I'd like to look into educational software design theory--how software can truly change the way we educate, rather than merely digitizing old methods--as well as investigate the learning theories of Malcolm Knowles, whom the article referenced.  I'm not familiar with his work.



APA citation:

Sanchez-Diaz, P. C. (2013). Impact of Interactive Instructional Goals in Gross Anatomy for Optometry
      Students: A Pilot Study. Optometric Education, 38(3), 100-105. Retrieved from
      http://www.opted.org/files/Volume38_Number3_Summer2013.pdf