This is a huge topic: how to provide questions and exercises to readers, and how to evaluate their answers. Here are just a few of the issues that need to be addressed:
- Generating the questions. Questions might simply be taken verbatim from a question bank, or might be auto-generated from a parameterized description (such as a question about a randomly generated BST).
- Capturing the student's answer
- Evaluating the student's answer
- Giving hints
- Storing the results of the student's answers and communicating those results to the instructor
- Steering the presentation based on the student's performance (intelligent tutoring)
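As a concrete illustration of the first point, here is a minimal sketch of auto-generating a question from a parameterized description, using the randomly generated BST example mentioned above. All function names here are our own, not part of any existing system:

```python
import random


class Node:
    """A node in a binary search tree."""
    def __init__(self, key):
        self.key = key
        self.left = self.right = None


def bst_insert(root, key):
    """Insert key into the BST rooted at root; return the (new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root


def inorder(root, out):
    """Append the inorder traversal of root to the list out."""
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out


def generate_bst_question(n=7, seed=None):
    """Instantiate one question from a parameterized template.

    Returns a (prompt, expected_answer) pair; a question bank entry
    would store the template, and this function stamps out instances.
    """
    rng = random.Random(seed)
    keys = rng.sample(range(1, 100), n)   # n distinct random keys
    root = None
    for k in keys:
        root = bst_insert(root, k)
    prompt = ("Insert the keys %s (in that order) into an empty BST, "
              "then give the inorder traversal of the result." % keys)
    expected_answer = inorder(root, [])
    return prompt, expected_answer
```

Because the expected answer is computed by actually running the algorithm on the generated instance, the same machinery that generates the question can also grade it.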
An important component for the assessment system is support for TRAKLA2-style proficiency exercises. Support for proficiency exercises is implemented in JSAV. Beyond this, we would like students to read a few paragraphs of content, then do some sort of exercise, then read a little more. We have been using the open-source Khan Academy infrastructure for creating exercises, combined with the proficiency exercises. This does not address the intelligent tutoring issues.
One issue with the original TRAKLA2 proficiency exercises is that students get no feedback about whether they are doing the exercise correctly. So if a student goes off track, everything after that point is wrong. An alternative approach is to tell the student at each step whether it was correct, and to "push them back on track" if it was not; one can still keep score of how many steps were done correctly. The current JSAV implementation provides options for what feedback the student receives while progressing through the steps of an exercise.
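The difference between the two feedback modes can be sketched as follows. This is our own simplified model, not JSAV code: each exercise step is reduced to a single comparable value, and "pushing back on track" is modeled by grading every step against the model answer rather than against the student's possibly diverged state:

```python
def grade_steps(model_steps, student_steps, mode="continuous"):
    """Grade a multi-step proficiency exercise (simplified model).

    mode="end":        no feedback until the end; once the student goes
                       off track, the diverged state makes every later
                       step wrong (the original TRAKLA2 behavior).
    mode="continuous": each step is checked immediately and the student
                       is pushed back onto the model answer, so later
                       steps are still graded fairly.
    Returns the number of steps credited as correct.
    """
    correct = 0
    on_track = True
    for i, expected in enumerate(model_steps):
        attempted = student_steps[i] if i < len(student_steps) else None
        if mode == "continuous":
            if attempted == expected:
                correct += 1
            # else: the exercise state is corrected before step i+1
        else:
            if on_track and attempted == expected:
                correct += 1
            else:
                on_track = False  # state diverged; the rest is wrong
    return correct
```

With one mistake in the middle of three steps, end-of-exercise grading credits only the steps before the mistake, while continuous grading still credits the correct steps after it.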
For ideas about web services, see:
Ville Karavirta, Petri Ihantola: Initial Set of Services for Algorithm Visualization, in Proceedings of PVW'2011.
For ideas on some interesting question types, see the Ville system.
Also see: Rößling, G., M. Mihaylov, and J. Saltmarsh, AnimalSense: Combining Automated Exercise Evaluations with Algorithm Animations, in Proceedings of the 16th annual joint conference on Innovation and technology in computer science education, Darmstadt, Germany, June 2011.
We would like to support questions of the form "Give me a set of numbers that needs 3 swaps to sort by insertion sort." A back-end evaluator would need to run the algorithm on the student's answer to verify correctness. This sort of thing is supported by the Khan Academy infrastructure. You can see examples of Khan Academy-style exercises that we have created here.
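A minimal sketch of such a back-end evaluator (function names are our own, not from the Khan Academy infrastructure): it runs insertion sort on the student's list, counts the adjacent swaps performed, and compares against the required count.

```python
def insertion_sort_swaps(values):
    """Run insertion sort on a copy of values and count the adjacent
    swaps performed (equal to the number of inversions in the input)."""
    a = list(values)
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]  # one adjacent swap
            swaps += 1
            j -= 1
    return swaps


def check_answer(student_values, required_swaps=3):
    """Verify the student's answer by executing the algorithm on it."""
    return insertion_sort_swaps(student_values) == required_swaps
```

For example, [3, 2, 1] needs exactly 3 swaps and would be accepted, while an already-sorted list needs 0 and would be rejected.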
The folks at Connexions are working on a question bank system named QuadBase: http://quadbase.org.
For a class project, students at Virginia Tech did an initial proof-of-concept implementation of a graphical front end for authoring questions (based on QuadBase) that also supports templated questions and translation to the Khan Academy question format. See http://algoviz-beta.cc.vt.edu/QBank. Ann Paul is continuing this work for her MS thesis; see https://github.com/cashaffer/QBank.