Scott McLeod’s MindDump blog featured a set of pie charts reflecting professors’ use of technology. The charts are reproduced from a piece in the Chronicle of Higher Education and are based on a survey of about 4,600 professors from 50 universities, collected in the spring of 2009. The piece cites, but does not link to, the actual study results. Some poking around turned up the FSSE site, but I was unable to find the cited data there. The closest I found was a page reporting on the use of communication technologies, which seemed to reflect different numbers of respondents.
Nonetheless, assuming that the data are not bogus, we can ask some questions about what this means.
The technologies in question may be divided into some broad categories: tools that help the professor manage the process of teaching (course management and plagiarism detection), tools that help with out-of-class interaction with students (collaborative editing software, blogs, video conferencing), and in-class tools such as student response systems and virtual world tools. (There may be some overlap in the use of some of the tools, but that shouldn’t detract from my argument.) With the exception of course management software, which is used by 72% of the responding faculty, the other technologies went largely unused.
Unfortunately, neither Scott’s post nor the article it cites provides much context for the charts. Were there differences in response rate by university? By school or department? By faculty member? Were there significant correlations among the responses? Is someone who uses one sort of technology more likely to use more than one, or are these unrelated? Are there significant correlations within each category?
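The correlation questions above are the kind that the raw survey responses could answer directly. As a minimal sketch of what that analysis might look like, here is a chi-square test of independence on a 2x2 contingency table of two yes/no responses; the counts are invented for illustration and are not from the FSSE data.

```python
# Hypothetical illustration: a 2x2 chi-square test of independence between
# two survey responses (e.g., "uses tool A" vs. "uses tool B").
# The counts below are invented, not taken from the cited survey.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table = [[a, b], [c, d]], where rows split respondents by use of one
    tool and columns split them by use of another.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Invented counts: 120 respondents use both tools, 80 use only the first,
# 60 use only the second, 240 use neither.
stat = chi_square_2x2([[120, 80], [60, 240]])
# Compare against 3.84, the chi-square critical value at 1 df, alpha = 0.05.
print(f"chi-square = {stat:.2f}; consistent with independence? {stat < 3.84}")
```

With these made-up counts the statistic is large, so the two uses would be correlated rather than independent; run on real response data, the same test would answer whether adopters of one tool tend to adopt others.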
And of course one piece of technology that is conspicuous by its absence is PowerPoint. Why wasn’t that part of the survey?
Another oddity in the charts reported in The Chronicle of Higher Education was the low response rate for the use of blogs (13% of respondents used them at least some of the time) compared to the 31-37% rate of use of discussion board postings reported on the FSSE page. Surely discussion boards and blogs fall into a similar technological category. Why the discrepancy?
Finally, there remain two important questions: why? and, does it matter?
What accounts for this behavior? Did the people who conducted the survey ask open-ended questions about the professors’ decisions? It isn’t lack of awareness: most faculty knew about the technologies in the survey. Was it lack of money? Lack of interest? Was the technology hard to use? Did it not solve the exact problem the instructors had? What were some other reasons?
And finally, does it matter? It would be great to know whether the use of any of these technologies had a material impact on the outcomes of the educational process. It would be great to be able to set up some contrasts in the survey data to see whether the use of technology in the classroom matters, or if other factors, such as class size or individual differences among instructors (for example), account for the variability.
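One simple form such a contrast could take is comparing mean outcomes for technology users and non-users within each class-size stratum, so a class-size effect isn’t mistaken for a technology effect. The sketch below uses fabricated records; the field names and scores are assumptions for illustration only.

```python
# Hypothetical sketch of a stratified contrast: mean outcome difference
# (users minus non-users) within each class-size stratum.
# Every record here is invented for illustration.
from statistics import mean

# (uses_tech, class_size, outcome_score) -- fabricated example records
records = [
    (True,  "small", 82), (True,  "small", 85), (False, "small", 83),
    (False, "small", 84), (True,  "large", 70), (True,  "large", 72),
    (False, "large", 69), (False, "large", 71),
]

def contrast_by_stratum(rows):
    """Mean outcome difference (users minus non-users) per class-size stratum."""
    result = {}
    for stratum in {size for _, size, _ in rows}:
        users = [score for uses, size, score in rows
                 if size == stratum and uses]
        nonusers = [score for uses, size, score in rows
                    if size == stratum and not uses]
        result[stratum] = mean(users) - mean(nonusers)
    return result

print(contrast_by_stratum(records))
```

In this invented example the within-stratum differences are near zero while small and large classes differ by about twelve points, which is the pattern that would point to class size, not technology, as the driver of variability.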
Lamenting the lack of technology adoption is misplaced unless technology can be shown to improve outcomes. After all, the goal of the educational system is to educate, not to provide cannon fodder for technologists with half-baked ideas.