AlgoViz Wiki Catalog Evaluation Rubric

shaffer
Since we are starting to discuss the awards process, one thing we should consider doing is to give the voters some guidance on how to assess a nominee. Here is the text from Eileen's proposed criteria document, with some editing and revisions by me.

Nature and Purpose of the AlgoViz.org Award

The AlgoViz.org Algorithm Visualization Award is intended to reward the creators of algorithm visualizations and systems that exemplify "good practice" in the domain of algorithm visualization in support of Computer Science education, and to promote the development and use of algorithm visualization in the classroom. In formulating the notion of "good practice" in algorithm visualization, we consider the following factors:

1. The visualization depicts the workings of an algorithm relevant to the study of computer science. Factors considered here include the importance of the algorithm, the degree of difficulty students typically encounter in learning about the algorithm, and the technical correctness of the depiction. (25 points)
2. The visualization is easy to use and aesthetically pleasing. Factors considered here include clarity of controls, clarity of information content, layout, color, timing, etc. (25 points)
3. The algorithm animation is pedagogically sound. Here we include factors outlined in [Naps02], and consider whether the authors of the visualization (40 points):
   a. Provide resources that help learners interpret the graphical representations.
   b. Adapt to the knowledge level of the user.
   c. Provide multiple views.
   d. Include performance information.
   e. Include execution history.
   f. Support flexible execution control.
   g. Support learner-built visualizations.
   h. Support custom input data sets.
   i. Support dynamic questions.
   j. Support dynamic feedback.
   k. Complement visualizations with explanations.
4. The algorithm visualization has been empirically evaluated for effectiveness. Factors here include the rigor of the study, the size of the sample, and the overall strength of the findings. (10 points)
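To make the weighting concrete, here is a rough Python sketch of how a voter's per-criterion scores could be tallied under the point caps above. The criterion names, the dictionary layout, and the simple clamped sum are my own illustration, not part of any official tallying procedure.

# Rough sketch only: the criterion keys and the clamped sum are illustrative
# assumptions, not an official scoring tool.
MAX_POINTS = {
    "algorithm_relevance_and_correctness": 25,  # criterion 1
    "usability_and_aesthetics": 25,             # criterion 2
    "pedagogical_soundness": 40,                # criterion 3 ([Naps02] factors)
    "empirical_evaluation": 10,                 # criterion 4
}

def total_score(scores):
    """Sum per-criterion scores, clamping each to its maximum point value."""
    return sum(min(scores.get(name, 0), cap) for name, cap in MAX_POINTS.items())

# Example: a strong nominee that has not yet been empirically evaluated.
example = {
    "algorithm_relevance_and_correctness": 22,
    "usability_and_aesthetics": 20,
    "pedagogical_soundness": 33,
    "empirical_evaluation": 0,
}
print(total_score(example))  # 75 out of a possible 100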

shaffer
Re: AlgoViz Wiki Catalog Evaluation Rubric
Steve and I have been thinking about the process of giving the "editorial" rating at the Wiki Catalog. Here is a description that we came up with. Feedback would be appreciated, and it might help with describing the review criteria suggestions for voters.

AVs are rated along two dimensions.

Clarity of Communication/Presentation of Content:
- Excellent: Clearly communicates difficult material, OR incorporates analysis and insight rather than just enabling mechanical proficiency.
- Good: Clearly supports understanding of transitions between states (typically by animating the transformation process).
- Fair: Presents a series of clearly defined states, but does not sufficiently illustrate the transitions between them.
- Poor: Hard to understand the necessary contents or information defining a state, or the transitions between them.

Level of Engagement:
- Excellent: Direct interaction; the user predicts and/or manipulates the state change process.
- Good: Some engagement, typically with direct control of the pace of state changes (at minimum, a NEXT button to drive the next state transition).
- Fair: Passive, with no control of state changes (a typical example is an animation with only an overall speed control), but the information rate is not overwhelming.
- Poor: Discourages engagement: overwhelming, uncontrolled information flow; inadequate interface; bugs; or other off-putting factors.

AVs are then given an overall rating of Recommended, Has Potential, or Not Recommended. The primary justification for those ratings is the score on the Content and Engagement dimensions. An AV is typically given a rating of Recommended if it scores Good or better in both dimensions. Occasionally, an AV with Excellent in Content might be Recommended even if it is only Fair on Engagement. An AV is typically given a rating of Not Recommended if it scores Fair or worse in both dimensions. However, the overall rating is also influenced by the level of competition from other AVs on that topic: if many better AVs exist, the final rating could drop a level; if few AVs on that topic exist, the final rating could rise a level.
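As a way to sanity-check the description, here is a small Python sketch of the overall-rating logic as I read it. The two dimensions and the three overall ratings come from the text above; encoding the levels as numbers, always applying the Excellent-Content/Fair-Engagement exception (the text says "occasionally"), and the exact one-level competition adjustment are simplifying assumptions on my part.

# Sketch of the overall-rating rules described above; the numeric encoding and
# the competition adjustment (+1 = few competing AVs, -1 = many better AVs)
# are illustrative assumptions.
LEVELS = {"Poor": 0, "Fair": 1, "Good": 2, "Excellent": 3}
RATINGS = ["Not Recommended", "Has Potential", "Recommended"]

def overall_rating(content, engagement, competition=0):
    c, e = LEVELS[content], LEVELS[engagement]
    if c >= LEVELS["Good"] and e >= LEVELS["Good"]:
        rating = 2   # Good or better in both dimensions: Recommended
    elif content == "Excellent" and engagement == "Fair":
        rating = 2   # Excellent content can offset merely Fair engagement
    elif c <= LEVELS["Fair"] and e <= LEVELS["Fair"]:
        rating = 0   # Fair or worse in both dimensions: Not Recommended
    else:
        rating = 1   # everything in between: Has Potential
    # Competition on the topic can move the final rating up or down one level.
    return RATINGS[max(0, min(2, rating + competition))]

print(overall_rating("Good", "Excellent"))            # Recommended
print(overall_rating("Excellent", "Fair"))            # Recommended
print(overall_rating("Fair", "Poor", competition=1))  # Has Potential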
pilucrescenzi
Re: AlgoViz Wiki Catalog Evaluation Rubric
One problem: what do we do if the AV does not support a platform? I have tried the first two AVs, and neither of them works on my MacBook Pro. In the case of the first one, both Safari and Firefox actually crashed. Should we include multi-platform availability as a requirement?
shaffer
Re: AlgoViz Wiki Catalog Evaluation Rubric
Which ones did you have trouble with?
pilucrescenzi
Re: AlgoViz Wiki Catalog Evaluation Rubric
I had trouble with both of the "Algorithms in Action" AVs (that is, Quicksort and the 2-3-4 tree): in both cases, the browser crashes (on both a PPC and an Intel Mac). I also had trouble with the Binary Treesome applet: the applet fails to load (but the browser does not crash).
ajalon
Re: AlgoViz Wiki Catalog Evaluation Rubric
Tom Naps confirmed the problems with the AVs you mentioned, on OS X 10.4. We'll try to get in touch with the respective authors. I do believe that cross-browser and cross-platform compatibility should be a factor, but I'm not sure how to address this in the rubric.

Cheers! -AJ

ville
Re: AlgoViz Wiki Catalog Evaluation Rubric
I can also confirm that the mentioned AVs crash at least Firefox, Camino, Safari 4, and a WebKit nightly on my OS X 10.5 machine. Contacting the authors brings another question to mind: should the authors of the AVs be informed (or have they already been?) that their AV is a nominee for an award? And if nominated, are the authors expected to attend SIGCSE should they win an award? I think it would be nice if the winners were present to receive it.

Ville Karavirta, Aalto University, http://villekaravirta.com/

shaffer
Re: AlgoViz Wiki Catalog Evaluation Rubric
I did mention the award nominations in my email. I sent email to the Norway contact for Binary Treesome a couple of days ago, and I just sent email to the AIA contact a few minutes ago. As far as I can tell, the AIA code hasn't been compiled in a long time, so it is probably not the Java 1.6 issue.