The Annotated Bibliography is a collection of curated links to research literature on algorithm visualization (AV) topics.

Empirically evaluating the use of animations to teach algorithms

Title: Empirically evaluating the use of animations to teach algorithms
Publication Type: Conference Paper
Year of Publication: 1994
Authors: Lawrence, A. W., J. T. Stasko, and A. Badre
Conference Name: Proceedings, IEEE Symposium on Visual Languages 1994
Publisher: IEEE Computer Society
Conference Location: St. Louis, MO
DOI: 10.1109/VL.1994.363641
Review: 

Also reported in Chapter 9 of Lawrence's dissertation.

This paper reports on a study the authors conducted on the use of algorithm animations in classroom and laboratory settings. The results indicated that allowing students to create their own examples in a laboratory session led to higher accuracy on a post-test of understanding than viewing prepared examples or no laboratory examples at all. The paper first describes various systems that use different kinds of animations, then moves to the study in question, which involved Kruskal's algorithm. The authors describe the different scenarios in the experiment: some students constructed their own examples in a laboratory setting, some worked with prepared examples, and others did not view any animation at all. Scores on a post-test were recorded and the results analyzed. The authors conclude that preparing their own examples helped the students on the post-test.

This is an empirical study aiming to determine the influence of algorithm animations in classroom and laboratory settings. The authors are trying to answer an important question: are algorithm animations superior to transparencies in a lecture presentation? They argue that encouraging students to create their own examples in a laboratory session leads to a better understanding of the presented material. The subjects of the study were students at the Georgia Institute of Technology enrolled in CS1410, the initial programming course for the computer science major. The algorithm used for visualization was Kruskal's minimum spanning tree algorithm, a well-known and extremely important problem in computer science (a minimal sketch of the algorithm appears at the end of this review).

The experimental setup was as follows. The class was initially split into four groups: lecture/animation, lecture/slides, lecture and lab/animation, and lecture and lab/slides. The students in the lecture/animation and lecture/slides groups were presented the same prepared lecture using different techniques: the lecture/slides group saw the lecture delivered with transparencies, while the lecture/animation group saw it delivered with an animation created in the Polka algorithm animation system. The lecture and lab/animation and lecture and lab/slides groups were each further divided into active and passive subsections: the passive subsections were given input data files and required to conduct experiments with those files, while the active subsections were required to create their own input files and conduct experiments with them. All groups completed a multiple-choice/true-false online test requiring application or understanding of the algorithm. The groups also completed a free-response test on paper designed to require students to articulate concepts related to understanding the algorithm.

The authors hypothesized that the subjects who received the lecture accompanied by algorithm animations would outperform the students to whom the lecture was presented via slides. They also hypothesized that the laboratory session would improve the students' performance. After conducting a series of experiments to test these hypotheses, the authors drew the following conclusions:

- The algorithm animation did not make any difference in teaching the algorithm. Furthermore, the group that was shown the animations (without the lab session) performed worse than the group that was shown the transparencies (without the lab session).
- The advantage of the interactive laboratory session was confirmed: the students who attended the laboratory sessions and had to create their own data sets performed better than those who did not participate in a lab section or those who did but were given the data sets.
- The interactive laboratory sections performed the best on both the online and the free-response tests; however, the advantage of the interactive lab session was larger on the free-response test.

One of the results the authors present is that, on the online test, the groups that received the laboratory session performed better than the groups that did not. This is a highly expected and obvious result, since the groups that received the laboratory session spent more time on the material than the groups that did not attend it; there was therefore almost no need to present these results. In Section 4.1, the authors say that there was no significant difference between the two lecture groups: the lecture accompanied by slides and the lecture accompanied by an animated example. Later in the paper the authors justify those results by saying that the questions on the online test are more at the procedural level than at the conceptual level. They also say that these results help identify which types of learning are not affected by animation. It is not clear why the authors chose to present these results. Is the online test the only test not affected by the animation? If yes, the authors need to demonstrate that. If no, then what is the reason for presenting the online test?

The entire study is rather narrow. The authors use only one algorithm throughout, and they never specify the reason for choosing that particular algorithm. The study would be more useful if the authors had examined different types of algorithms and determined which algorithms (if any) would benefit the most from animation.
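For readers unfamiliar with the algorithm used in the study, the sketch below is a minimal Python implementation of Kruskal's minimum spanning tree algorithm: it greedily adds the cheapest edge that does not create a cycle, tracked with a union-find structure. This is illustrative only; it is not the Polka animation code from the paper, and the function and variable names are my own.

    def kruskal(num_vertices, edges):
        """Return the edges of a minimum spanning tree.

        edges is a list of (weight, u, v) tuples with 0 <= u, v < num_vertices.
        """
        parent = list(range(num_vertices))  # union-find forest, one tree per component

        def find(x):
            # Follow parent links to the component's root, compressing the path.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        # Consider edges in order of increasing weight; keep an edge only
        # if it connects two previously separate components.
        for weight, u, v in sorted(edges):
            root_u, root_v = find(u), find(v)
            if root_u != root_v:
                parent[root_u] = root_v  # merge the two components
                mst.append((weight, u, v))
        return mst

    # Example: 4 vertices; the resulting tree has total weight 1 + 2 + 3 = 6.
    print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))

In the study's active lab subsections, students effectively invented their own edge lists (the input data files) and observed the algorithm's behavior on them, which is the activity the authors found most beneficial.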