Monday, July 14, 2008

Implementing the Transformed Grade Book II


Jayme Jacobson, Theron DesRosier, Nils Peterson with help from Jack Wacknitz

Previously we showed examples of how a transformed grade book (or Harvesting grade book) might look from the perspective of a piece of student work seeking feedback. That demonstration was hand-built and is not practical to scale up to the level of an academic program. Even smaller-scale uses, such as the one suggested by a recent piece in the Chronicle entitled Portfolios Are Replacing Qualifying Exams, would benefit from some automation. This post outlines a design for a software application that could plug into the Skylight Matrix Survey System (a new survey tool WSU is developing).

There are two workflows that we have previously described in general terms: one for the instructor to register assignments, and another for students to request feedback as they work on those assignments. In the figure below we outline a series of screen shots for the instructor registering an assignment. During registration, the instructor matches the assignment to the appropriate dimensions of the assessment rubric. We view this registration of assessments as a possible implementation of Stephen Downes’ idea in Open Source Assessment.


Instructor Dashboard


This example shows how an instructor might use a dashboard to register and monitor assignments. The workflow begins with capturing the assignment, continues with assigning the rubric dimensions to be used for assessing both the student work and the assignment itself, and ends with routing the assignment for review. This mock-up does not show how the instructor would see the student work created in response to the assignment, or the scores associated with that work. The next step in the workflow would require an upload of the assignment so that it could be retained in a program-level archive. From that archive, the assignment could be referenced for faculty scholarship of teaching and learning (SoTL), as well as for program or accreditation review and other administrative purposes.
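
To make this step concrete, here is a minimal sketch, in Python, of how the registration workflow might be modeled. The class and function names, the in-memory archive, and the notification stand-in are our illustrative assumptions; they are not the actual Skylight Matrix Survey System schema or API.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all names below are assumptions, not the
# actual Skylight Matrix Survey System schema or API.

@dataclass
class RubricDimension:
    name: str            # e.g. "Critical thinking"
    criteria: list[str]  # descriptions of the performance levels

@dataclass
class Assignment:
    course_id: str
    title: str
    description: str
    dimensions: list[RubricDimension] = field(default_factory=list)

    def add_dimension(self, dimension: RubricDimension) -> None:
        """Match the assignment to a rubric dimension chosen by the instructor."""
        self.dimensions.append(dimension)

def notify(email: str, message: str) -> None:
    """Stand-in for the email or dashboard notification the real system would send."""
    print(f"to {email}: {message}")

def route_for_review(assignment: Assignment, reviewer_emails: list[str]) -> None:
    """Route the registered assignment to reviewers, who assess the
    assignment itself (not just the student work it will produce)."""
    for email in reviewer_emails:
        notify(email, f"Please review the assignment: {assignment.title}")

def archive_assignment(assignment: Assignment, archive: dict) -> str:
    """Retain a copy in a program-level archive, keyed so it can be
    referenced later for SoTL, program review, or accreditation."""
    key = f"{assignment.course_id}/{assignment.title}"
    archive[key] = assignment
    return key

# Example use, with illustrative names only:
essay = Assignment("ENGL101", "Essay 2", "Argumentative essay on a current issue")
essay.add_dimension(RubricDimension("Critical thinking", ["Emerging", "Developing", "Proficient"]))
route_for_review(essay, ["reviewer@example.edu"])
archive_assignment(essay, {})
```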

Once the assignment has been registered, the student could start from a student dashboard to request either a review or feedback and guidance. We are differentiating a full review (with a rubric) from more informal feedback and guidance. This informal feedback would probably not be fed into a grade book, but the captured feedback could still be used by a student as evidence in a learning portfolio.


Student Dashboard


The basic workflow would let the student request a rubric-based review for a specific assignment in a specific course. The student would select the course, assignment, and other metadata. Once posted for review, the request would either be routed to multiple reviewers, or the student would embed the review into a webpage using the HTML code provided. The second step would offer an opportunity to upload a document. This might be used in cases where the document had no web incarnation (to give it a URL), or to “turn in” a copy of the document that would not be subject to further editing, as might be required in some high-stakes assessments.

The Learning 2.0 model is supported in the last step, where the assessment is embedded in a web space still open to modification by the learner (as the previous examples illustrated).
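
As a sketch of that last step, the snippet below shows how the system might generate the HTML a student pastes into a blog or portfolio page. The host name, URL pattern, and iframe markup are assumptions for illustration, not the actual embed code the survey tool would provide.

```python
import uuid

# Hypothetical sketch: the host, URL pattern, and markup are assumptions.

def make_review_request(course_id: str, assignment_title: str) -> str:
    """Create a review request (the real system would also record the
    course and assignment metadata) and return its unique identifier."""
    return str(uuid.uuid4())

def embed_code(request_id: str, base_url: str = "https://skylight.example.edu") -> str:
    """Return HTML the student can paste into a page they still control."""
    return (
        f'<iframe src="{base_url}/review/{request_id}" '
        'width="600" height="400" title="Rubric review form"></iframe>'
    )

request_id = make_review_request("ENGL101", "Essay 2")
print(embed_code(request_id))
```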


Student-created Rubric-based Survey

Students might want to use their own rubric-based surveys. This mock-up shows how the workflow would branch from the previous one to allow the student to define rubric dimensions and criteria.
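
A minimal sketch of what a student-defined rubric might look like as data, assuming a dimension is just a name plus ordered criteria; the dimension names and wording here are examples, not a prescribed rubric.

```python
# Illustrative only: a student-defined rubric as a simple mapping from
# dimension name to ordered performance criteria.

my_rubric = {
    "Clarity of argument": [
        "Emerging: main claim is hard to identify",
        "Developing: claim is stated but weakly supported",
        "Proficient: claim is clear and well supported",
    ],
    "Use of evidence": [
        "Emerging: few or no sources cited",
        "Developing: sources cited but loosely connected",
        "Proficient: sources integrated and analyzed",
    ],
}

for dimension, criteria in my_rubric.items():
    print(dimension)
    for level in criteria:
        print("  -", level)
```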


Student-created Simple Feedback Survey

This last example shows how the student would create a simple feedback survey.
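
By contrast, a simple feedback survey might reduce to a handful of open-ended prompts, as in this hypothetical sketch (the title and questions are examples only):

```python
# Hypothetical simple feedback survey: open-ended prompts with no rubric,
# suited to informal feedback and guidance rather than grading.

feedback_survey = {
    "title": "Feedback on my draft",
    "questions": [
        "What is the strongest part of this draft?",
        "Where did you get confused?",
        "What one change would most improve it?",
    ],
}
```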


State of the Art
Presently there are several tools that might be used to deliver a rubric survey. The challenge is the amount of handwork needed to give each student a rubric survey for each assignment in each course, and to aggregate the data from those surveys by assignment, by course, and by student for reporting. A future post will explore what might be learned by having the data centrally aggregated. If there is value in central aggregation, it will have implications for the tool and method selected for rubric survey delivery. The Center for Teaching, Learning and Technology at WSU already has tools that make the handwork tractable for a pilot course of 20-30 students. We understand the path to further automation, but both a pilot test and further automation require investment, which in turn requires further analysis of commitment, infrastructure, and resources.
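
As a rough illustration of what central aggregation might involve, the sketch below averages rubric scores grouped by student or by assignment. The record layout and the 1-5 scale are our assumptions, not a specified data format.

```python
from collections import defaultdict
from statistics import mean

# Assumed record layout: (student, course, assignment, dimension, score),
# with scores on an assumed 1-5 scale.

records = [
    ("alice", "ENGL101", "Essay 2", "Clarity of argument", 4),
    ("alice", "ENGL101", "Essay 2", "Use of evidence", 3),
    ("bob",   "ENGL101", "Essay 2", "Clarity of argument", 5),
]

def aggregate(records, key_index):
    """Average rubric scores grouped by one field
    (0 = student, 1 = course, 2 = assignment)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key_index]].append(rec[4])
    return {key: mean(scores) for key, scores in groups.items()}

print(aggregate(records, 0))  # by student
print(aggregate(records, 2))  # by assignment
```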

Questions
1. Can this concept provide transformative assessment data that can be used by students, instructors, and programs to advance learning? In addition to assessing student learning, can it provide data for instructor course evaluation and for program level assessment and accreditation?

2. Can the process be made simple enough to be unobtrusive in terms of the overhead it adds to a course’s operations?

3. Is it necessary to implement feedback and guidance as a central, university-hosted tool, or could students implement an adequate solution without further investment on the university's part?
