The Annotated Bibliography is a collection of curated links to research literature on AV topics.

Evaluating the educational impact of visualization

Title: Evaluating the educational impact of visualization
Publication Type: Conference Paper
Year of Publication: 2003
Authors: Naps, T. L., S. Cooper, B. Koldehofe, C. Leska, G. Rößling, W. Dann, A. Korhonen, L. Malmi, J. Rantakokko, R. J. Ross, J. M. Anderson, R. Fleischer, M. Kuittinen, and M. F. McNally
Conference Name: ITiCSE-WGR '03: Working group reports from ITiCSE on Innovation and technology in computer science education

This is the working group report from ITiCSE 2003 on evaluating the impact of visualization in computer science education. The paper examines the disconnect between instructors' belief that visualization helps students learn and the limited integration of visualization techniques into classroom instruction. The authors approach the problem from the instructor's point of view. They discuss the various impediments instructors face (finding, downloading, installing, learning, developing if necessary, adapting and integrating into the course, teaching, maintaining, and upgrading visualizations), offer guidelines for developing algorithm visualizations, and suggest techniques for measuring instructor satisfaction. They also suggest ways to disseminate algorithm visualizations so that instructors can find them more easily, and they examine various forms of evaluation as well as the impact of student learning styles on learner outcomes. The paper is written on the premise that algorithm visualizations can make a significant impact on CS education only if they are widely used and produce positive learner outcomes. Accordingly, it discusses how to study the factors that influence instructor satisfaction and how to measure learner outcomes.

The factors affecting instructor satisfaction are not well studied in the literature. The authors argue that because deployment is essential to increasing the impact of visualizations, the factors influencing instructor satisfaction deserve attention. I partially agree: without classroom use, only the curious students who seek out these visualizations on their own will benefit from them, and their impact on CS education will remain insignificant. I also believe that visualizations should not be the only teaching method in class, given the variance in students' learning styles.
Yes, the use of visualizations in classrooms should increase, but no, it should not be the only way to teach a course. In my view, visualizations should support the lecture, reduce the effort instructors spend producing examples, and serve as an additional study tool for students. The authors discuss ways to measure instructor satisfaction with visualizations in order to overcome the impediments to their deployment. I agree with most of the points listed in Section 2, but I think another highly important way to increase instructor satisfaction is to evaluate the usability of visualizations and improve it accordingly. A visualization with a usable interface demands little effort from the instructor, both in learning the tool and in teaching with it. I also do not subscribe to the "capture larger concepts" idea: a visualization might cover only one specific subject, like the "algorithms in action" visualization, and still satisfy instructors. Rather than "capturing larger concepts," registering tools in repositories and disseminating them is more important and more meaningful, because the repositories themselves already serve to "capture larger concepts." On the other hand, the section on evaluation for increasing instructor satisfaction contains well-thought-out evaluation ideas. The authors also discuss ways to evaluate learner outcomes and provide a guideline for doing so. Most of the points mentioned in that section have already been applied in studies in the literature. However, I question the evaluation method based on "student evaluation of the visualization"; it seems somewhat beside the point, since what matters is not how much a student pays attention to the visualization but how much they learn from it. Most of the time, a student's attention to the visualization tells us little about its impact on learning outcomes.
To sum up, I think this paper addresses an important point in the algorithm visualization field: the importance of overcoming the impediments to deploying visualizations and of increasing instructor satisfaction. If this can be achieved, the use of visualizations in and out of the classroom will increase, and many more studies will follow to evaluate visualizations and to create the best ones.