Friday, June 20, 2008

Implementing the Transformed Grade Book

Theron DesRosier, Jayme Jacobson, Nils Peterson

Previously we described ideas on Transforming the Grade Book, by way of elaborating on Gary Brown’s ideas of a “Harvesting Gradebook.”

Here we demonstrate an implementation of those ideas in the form of a website with student work and embedded assessment. This demonstration is built in Microsoft SharePoint, with the WSU Critical and Integrative Thinking rubric as the criteria and a Google Doc survey as the data collection vehicle, but other platforms could contain the student work, and other assessment rubrics could be delivered in other survey tools. (Note: this implementation is held together with baling wire and duct tape and would not scale.)

There are four examples of student work (a Word document with track changes, a blog post, a wiki diff and an email), to illustrate the variety of student work that might be collected and the variety of contexts in which students might be working. This student work might be organized as part of an institutionally sponsored hub-and-spoke style LMS or in an institutionally sponsored ePortfolio (as WSU is doing with SharePoint mySites) or directly in venues controlled by the student (see for example the blog and email examples below) where the student embeds a link to the grade book provided by the academic program.

Examples of Assessing Student Work (aka transformed ‘grading’)

The first example is a Microsoft Word document, stored in SharePoint and included in this page with the Document Viewer web part. You are seeing the track changes and comments in the Word document. In some browsers you will see pop-up notes and the identities of the reviewers.

To the right of the document is the rubric for assessing the work. Clicking “Expand” in the rubric will open a new window with details of the rubric dimension and a Google Doc survey where you can enter a numeric score and comments recording your assessment of the work against this criterion.

This survey also collects information about your role because it is important in our conceptualization of this transformed grade book to have multiple perspectives and to be able to analyze feedback based on its source.
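
To make the shape of this feedback concrete, here is a minimal sketch (in TypeScript, with hypothetical field and role names; the actual survey columns are not shown here) of the record a single rubric-survey submission might produce:

```typescript
// Hypothetical record produced by one rubric-survey submission.
// Field names and role values are illustrative, not the actual survey columns.
interface RubricFeedback {
  workUrl: string;          // link to the student work being assessed
  assignmentUrl?: string;   // link to the registered assignment, if the reviewer followed it
  dimension: string;        // rubric dimension being rated
  score: number;            // numeric rating entered by the reviewer
  comment: string;          // qualitative feedback
  reviewerRole: "self" | "peer" | "instructor" | "community"; // source of the feedback
  submittedAt: Date;
}

const example: RubricFeedback = {
  workUrl: "https://example.edu/student-work/word-doc",
  dimension: "Identifies and summarizes the problem",
  score: 4,
  comment: "The problem statement is clear, but alternative views are thin.",
  reviewerRole: "peer",
  submittedAt: new Date(),
};

console.log(`${example.reviewerRole} rated "${example.dimension}" at ${example.score}`);
```

Capturing the reviewer's role alongside the score is what lets later analysis separate self, peer, instructor, and community perspectives.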

In our description of the workflow for this assessment process we say:
Instructors start the process by defining assignments for their classes and “registering” them with the academic program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment.
This demonstration shows one of the important impacts of the “registration” -- as a reviewer of the student's work, you can follow a link to see the assignment that generated this piece of student work, AND, you can then apply the assessment criteria to the assignment itself.

Finally, as an effort in ongoing improvement of the assessment instrument itself, the survey asks for information about the learning outcome, its description and relevance, with the assumption that the rubric is open for revision over time.

In this demo, you can complete the survey and submit data, but your data will not be visible in later parts of the demo. Rather, specific data for demonstration purposes will be presented elsewhere.

The second example is a blog post, in Blogger, included in the site with SharePoint’s Page Viewer web part. Again, to the right of the post is the rubric implemented in the form of a survey. With the Page Viewer web part the reviewer can navigate around the web from the blog post to see relevant linked items.

While this demonstration has embedded the blog post into a SharePoint site, that is not a requirement. The student could embed the rubric into the footer of the specific blog or in the margin of the whole blog. To demonstrate the former of these ideas, we have embedded a sample at the bottom of this post. Adding criterion-based feedback extends the power of comment and trackback already inherent in blogging.
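
As a sketch of what the footer-embed idea might look like, the snippet below (TypeScript running in the blog page; the survey URL and the post-footer element id are placeholders) appends the rubric survey to the bottom of a post:

```typescript
// Minimal sketch: append the program's rubric survey (e.g. a Google Doc survey)
// to the footer of a blog post. The URL and container id are placeholders.
const SURVEY_URL = "https://spreadsheets.google.com/viewform?key=EXAMPLE_KEY";

function embedRubricSurvey(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  const frame = document.createElement("iframe");
  frame.src = SURVEY_URL;
  frame.width = "100%";
  frame.height = "600";
  frame.title = "Rubric-based feedback survey";
  container.appendChild(frame);
}

// Assumes the post template includes an element like <div id="post-footer"></div>.
embedRubricSurvey("post-footer");
```

The same snippet could target the blog's sidebar instead of a single post's footer, which is the difference between reviewing one piece of work and reviewing the whole blog.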

The third example is a Wiki Diff, again included in the site with SharePoint’s Page Viewer web part. Again, to the right is a rubric implemented in the form of a survey.

The fourth example is an email the student wrote. This was embedded into the SharePoint site, but as with the blog example, the author could have included a link to criterion-based review as part of the footer of the email.

A subsequent post will address visualization of this data by the student, the instructor, and the academic program.

Please use the rubric below to give us feedback on this work:

Online Course Evaluations and Response Rate Considerations

The following is an email exchange between Gary Brown and Nils Peterson of the Center for Teaching, Learning, and Technology at WSU and members of the TLT Group.

Ehrmann@ TLT: Nils,

 I’ve gotten a couple questions from a subscriber.

Do any WSU colleges conduct student course evaluations exclusively online? All of them?

What kind of response rate does WSU get to online surveys and what strategies seem to work best for that purpose?



Nils: WSU has several colleges that do online surveys exclusively (Engineering & Architecture; Agriculture and Natural & Human Resources; Pharmacy). Response rates vary by course from very low to 100%. Gary Brown can take up this conversation to talk about what we do and don't know about what drives response rates.

Gary: As Nils notes, we have several colleges doing online evaluations, some exclusively, with more joining all the time. Response rates vary, but perhaps more importantly, so do the instruments and, more importantly yet, the way the evaluations are used. I won’t go into detail about the differences in the evaluation instruments we’ve encountered, but online or not, the quality and fit for a variety of pedagogies is for me much more of a concern than the mode of delivery. The way they are used matters for validity, because response rates matter little if results are ignored by faculty, misunderstood or difficult to interpret, and, all too commonly, boiled down to a single number for ranking purposes. It is hard to make arguments about the validity of an instrument and process if it is all capped by use that is itself invalid. But that points to the more important argument: it isn’t the response rate, or the subsequent issue of response bias, that matters as much as making sure the responses are representative and appropriate for the purpose of the process, which is, we hope, improving students’ learning experiences.

All that aside, response rates:

In our College of Agriculture, the response rate was 53%. But that number varies widely across departments. Here is a picture of response rates across departments from about a year ago:


Needless to say, the variance across departments is mirrored by similar and dramatic variance among courses and faculty, so it is hard for us to attribute the variance exclusively to the medium of delivery. We make other conjectures in the analysis in an article we published a while back. A key to response rates, we note in the article, is that in the departments with the higher response rates, the chairs were involved in the design of the instrument and the decision to put it online. So there is something important to be said for leadership and the engagement of that leadership in the process. We also point to other associations with higher rates that we tracked in certain classes, mostly associated with the engagement of faculty in the process, their demonstration throughout the term that they listen and respond (not necessarily capitulate) to students’ concerns, and their overt work to engage students in the teaching/learning/assessment process.

The issue is pretty hot, too, and there are a number of discussions about response rates:
http://www.utexas.edu/academic/diia/assessment/iar/teaching/gather/method/survey-Response.php

http://books.google.com/books?id=zrjGUewMWHEC&pg=PA92&lpg=PA92&dq=adequate+survey+response+rates&source=web&ots=Q-1Sj0ntID&sig=XvzUTqM5dv5NjHIC0FX4CAr2LxM#PPA92,M1

http://books.google.com/books?id=H0Uexcg9xBcC&pg=PA42&lpg=PA42&dq=adequate+survey+response+rates&source=web&ots=aLzsrkerPO&sig=Prza517KiMb_Cf2jGenNXhKG5Dk#PPA46,M1

http://www.aapor.org/bestpractices

Most of these suggest, as you will see, that 50% is adequate, if not stellar. (The most authoritative is the last link, and they, too, say that 50% is OK.) The larger concern I infer from your note is the utility of responses at low rates (we’ll let others worry for the moment about the implications of comparing results, as some chairs do, when the response rates differ significantly).

But our own work here at WSU with the College of Engineering suggests that the response bias may be less salient than one would presume.

We have not written this up yet, but here is a comparison of online versus paper evaluations done with the College of Engineering at WSU. We have shared this with a work group from the American Evaluation Association (AEA) and are finding others who report the same phenomenon. The response rate online was about 51%, and paper in class was about 71% (which is much lower than most people believe is the case for traditional paper-based evaluations, with the presumption that it runs closer to the mid-90s). The samples are convenience samples based on faculty preference for using paper or trying the online version. The graph reflects 26 student evaluations randomly drawn from each of the three groups. If there is some kind of response bias, the picture here does not reveal it. We have been monitoring this as we move more and more online and remain interested in exploring the distinctions we may get (or not) when populations complete the instruments voluntarily, for extra credit, or when they are required to do so.

Thursday, June 12, 2008

Transforming the Grade Book

We are leaving Blogger for WordPress. Please visit us there.

Nils Peterson, Theron DesRosier, Jayme Jacobson

CTLT has been thinking about portfolios for learning and their relationship to institutionally supported learning tools and course designs. This thinking has us moving away from the traditional LMS. It has also led to a recognition that grade books are QWERTY artifacts of Learning 1.0. In a recent Campus Technology interview Gary Brown introduced the term “harvesting gradebook” to describe the grade book that faculty need in order to work in these decentralized environments.
“Right now at WSU, one of the things we're developing in collaboration with Microsoft is a 'harvesting' gradebook. So as an instructor in an environment like this, my gradebook for you as a student has links to all the different things that are required of you in order for me to credit you for completing the work in my class. But you may have worked up one of the assignments in Flickr, another in Google Groups, another in Picasa, and another in a wiki.”
This post will provide more definition and a potential implementation for this new kind of transformed grade book. It is the result of a conversation between Nils Peterson, Theron DesRosier and Jayme Jacobson diagrammed here.


Figure 1: White board used for drafting these ideas. Black ink is “traditional” model, Blue is a first variation, Red is a second variation.

The process begins with a set of criteria that is agreed to be useful by a community and is adopted across an academic program. An example is WSU’s Critical Thinking Rubric. This rubric was developed through the processes of a “traditional” academic community. How the process changes as the community changes is discussed below.

Instructors start the process by defining assignments for their classes and “registering” them with the program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment. More implications of this registration will be seen below.

The student works the assignment and produces a solution in any number of media and venues, which might include the student’s ePortfolio (we define ePortfolio broadly). The student combines their work with the program’s rubric (in a survey format). The rubric survey is administered either to a specifically selected list of reviewers or to an ad hoc group. We have been experimenting with two mechanisms for doing this “combining.” One places the rubric survey on the page with the student’s work, as a sidebar or footer (analogous to a Comment feature, or the “Was this helpful?” survey included in some online resources); this approach is public to anyone who can access the web page. The other strategy embeds a link to the student’s work in a survey, which can then be targeted to a specific reviewer. This example comes from the judging of CTLT’s 2nd ePortfolio contest.

In either case the survey collects a score and qualitative feedback on the student’s work. We imagine the survey engine is centrally hosted so that all the data is compiled into a single location and is therefore accessible to the academic program. Data can be organized by student, assignment, academic term, or course. A tool we are developing that can do this is the Skylight Matrix Survey System, rebranded as Flashlight Online 2.0 by the TLT Group. The important properties of Skylight for this application are the ability to render a rubric question type and the ability to have many survey instances (respondent pools) within one survey, reporting instances individually as well as aggregating the data across some or all of them.
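
As a rough sketch of the kind of roll-up we have in mind (in TypeScript; the record shape, field names, and sample values are illustrative and are not Skylight's actual data model):

```typescript
// Illustrative roll-up of rubric scores; not Skylight's actual data model.
interface ScoreRecord {
  student: string;
  assignment: string;
  course: string;
  term: string;
  reviewerRole: string;
  score: number;
}

// Mean score per group, grouped by any single field
// (student, assignment, course, term, or reviewer role).
function meanScoreBy(records: ScoreRecord[], key: keyof ScoreRecord): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const r of records) {
    const k = String(r[key]);
    const s = sums.get(k) ?? { total: 0, n: 0 };
    s.total += r.score;
    s.n += 1;
    sums.set(k, s);
  }
  const means = new Map<string, number>();
  sums.forEach((s, k) => means.set(k, s.total / s.n));
  return means;
}

// Hypothetical records, one per rubric-survey submission.
const records: ScoreRecord[] = [
  { student: "A", assignment: "Essay 1", course: "FS 301", term: "Spring 2008", reviewerRole: "peer", score: 4 },
  { student: "A", assignment: "Essay 1", course: "FS 301", term: "Spring 2008", reviewerRole: "instructor", score: 3 },
  { student: "B", assignment: "Essay 1", course: "FS 301", term: "Spring 2008", reviewerRole: "peer", score: 5 },
];

console.log(meanScoreBy(records, "assignment"));   // aggregate across one instance
console.log(meanScoreBy(records, "reviewerRole")); // compare feedback by its source
```

The point of the sketch is simply that, once every submission lands in one store with these keys attached, the same data can be sliced for the student, the instructor, or the program.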

Audiences for this data
The transformative aspects of this strategy arise from the multiple audiences for the resulting data. We have labeled these collections of data, and the capacities to present the data to audiences, “assessment necklaces.”

Figure 2: Diagram of rubric-based assessment. Learners, peers, and faculty are shown collecting data from rubric-based assessment of portfolios, then reflecting on and presenting the multiple data points (necklaces) in contexts important to them.

Students can review the data for self-reflection and can use the data as evidence in a learning portfolio. We are exploring tools like Google’s Motion Chart gadget (aka Trendalyzer/Gapminder) to help visualize this data over time. Students can also learn from giving rubric-based reviews to peers and by comparing themselves to aggregates of peer data.
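
For a sense of how such a visualization might be wired up, here is a minimal sketch against the Google Visualization API's MotionChart (the column names and scores are hypothetical, and the page is assumed to load the Google JSAPI loader and to contain a chart_div element):

```typescript
// Minimal sketch of plotting rubric scores over time with Google's MotionChart.
// Assumes the page loads http://www.google.com/jsapi; data values are hypothetical.
declare const google: any;

google.load("visualization", "1", { packages: ["motionchart"] });
google.setOnLoadCallback(() => {
  const data = new google.visualization.DataTable();
  data.addColumn("string", "Student");   // entity column (must be first)
  data.addColumn("date", "Date");        // time column (must be second)
  data.addColumn("number", "Critical Thinking score");
  data.addRows([
    ["Student A", new Date(2008, 0, 15), 3.2],
    ["Student A", new Date(2008, 4, 1), 4.1],
    ["Student B", new Date(2008, 0, 15), 2.8],
  ]);

  const chart = new google.visualization.MotionChart(
    document.getElementById("chart_div")
  );
  chart.draw(data, { width: 600, height: 400 });
});
```

Additional number columns (one per rubric dimension) would let a student animate their growth across the whole rubric rather than a single criterion.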

Instructors can use the data (probably presented in the student’s course portfolio) for “grading” in a course. It’s worth noting that the instructor’s assignments can be assessed with the same rubric, asking, “To what extent does this assignment advance each of the goals of this rubric?” With the assignment rated, instructors can review the data across multiple students, assignments, and semesters for their own scholarship of teaching and learning (SoTL). Here the instructor can combine the rubric score of an assignment with the student performance on the assignment to improve the assignment. Instructors might also present this comparison data in a portfolio for more authentic teaching evaluations.

In this example the assignment might be rated by students or the instructor’s peers. Below, the rating of the assignment by wider communities will be explored.

Academic programs can look across multiple courses and terms for program-level learning outcomes and SoTL. They can also present the data in showcase portfolios used for recruiting students and faculty, funding, and partners. This is where the collective registration of the assignment becomes important. The program can assess the assignment in the context of the program, with an eye to coordinating assignments and courses to improve the coherence of the program outcomes.

The community, which might include accrediting bodies, employers and others, can use the data, as presented in portfolios by students, instructors, and the academic program, to reflect on, or give feedback to, the academic program. Over time, an important effect of this feedback should be to open dialogs that lead to changes in the rubric.

Variations on this model
The description above is still traditional in at least two important ways: the program (i.e., faculty) develops the rubric and the instructor decides the assignment. Variants are possible where outside interested parties participate in these activities.

First variation. WSU and University of Idaho run a joint program in Food Science. We have observed that the program enrolls a significant number of international students, from nations where food security is a pressing issue. We imagine that those nations view training food scientists as a national strategy for economic development.

We have imagined a model where the students (in conjunction with their sponsoring country) and interested NGOs bring problem statements to the program, and the program designs itself so that students are working on aspects of their problem while studying. The sponsors would also have an interest in the rubric, and students would be encouraged (required?) to maintain contacts with sponsors and NGOs and cultivate among them people to provide evaluations using the rubric.

The processes and activities described above would be similar, but the input from stakeholders would be more prominent than in the traditional university course. Review of the assignments, and decisions about the rubric, would be done within this wider community (two universities, national sponsors, and NGOs). The review of assignments and the assessment of the relationship between assignments and learning products create a very rich course evaluation, well beyond the satisfaction models presently used in traditional courses.

Second variation. This option opens the process up further and provides a model to implement Stephen Downes’ idea in Open Source Assessment. Downes says “were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.”

In our idea of this model, the learner would come with a problem, or find one, and, following Downes, learners would present aspects of their work to be evaluated with the program’s rubric, and the institution would credential the work based on its (and the community’s) judging of the problem and solution with the rubric. This sounds a lot like graduate education: the learner defines a problem of significance to a community and addresses that problem to the satisfaction of the community. In our proposed implementation, the ways that the community has access to the process are made more explicit.

In this variant, the decision about the rubric is an even broader community dialog and the assessment of the instructor (now mentor/learning coach) will be done by the community, both in terms of the skills demonstrated by students that the instructor mentored, and by the nature of the problems/approaches/solutions that were a result of the mentoring. The latter asks, is the instructor mentoring the student toward problems that are leading or lagging the thinking of the community?

Examples
For some sense of learning portfolios created by the processes above, consider these winners from CTLT’s 2007-08 ePortfolio contest.

The following two winners are examples of the first variant, where students were paired with a problem from a sponsor:

The Kayafungo Women’s Water Project documents the efforts of Engineers Without Borders at WSU (EWB@WSU) who partnered with the Student Movement for Real Change to provide clean water to 35,000 people in Kayafungo, Kenya.

The EEG Patient Monitoring Device portfolio follows the learning process of four MBA students who collaborated with faculty, the WSU Research Foundation, inventors, and engineers to develop a business plan for a wireless EEG patient monitoring device.

The next two are examples of the second variant -- student defined problems assessed by the community. In the latter case, the student is using the work, both her activism in the community and her study-in-action as her dissertation:

The Grace Foundation started with a vision to create a non-profit organization that would assist poor and disenfranchised communities across Nigeria in four areas: Education, Health, Entrepreneurship, and Advocacy. The author used the UN online volunteering program to form a team to develop a participatory model of development that addresses issues of poverty eradication in a holistic manner.

El Calaboz Portfolio chronicles the use of Internet and media strategies by the Lipan Apache Women's Defense, a group that has grown in national and international prominence over the last 75 days, from fewer than 10 people in August to an e-organization of over 312 individuals currently working collectively. It now includes NGO leaders, tribal leaders, media experts, environmentalists, artists, and lawyers from the Center for Human Rights and Constitutional Law. It recently received official organization status at the UN.

The next steps in this work at WSU are to build worked examples of these software tools and to recruit faculty partners to collaborate in a small scale pilot implementation.

Wednesday, June 4, 2008

Hub and Spoke Model of Course Design

We have been exploring the idea of "hub and spoke" course designs where learners are using ePortfolios and Web 2.0 tools and working in communities and contexts where their chosen problem is being addressed. For such a course, we have been using the term Hub and Spoke to describe how the institutionally operated course space (hub) relates to the learners and the learners' electronic spaces. (See: Out of the Classroom and Into the Boardroom (PDF), Out of the Classroom and Beyond, Case Study of Electronic Portfolios, and ePortfolio as the Core Learning Application.)
Recently Blackboard has been adding "Web 2.0" features, so we had a discussion to delineate the reasons to use SharePoint rather than Blackboard as the hub in a hub and spoke course design.

Worldware
Worldware is a double reason. First, students are learning skills in SharePoint that they can later use in work contexts, whereas Blackboard skills are not useful outside the school context. Second, as our university adopts SharePoint for a variety of administrative purposes, a larger group of SharePoint experts emerges who can provide support to both faculty and students using SharePoint as a learning platform.


Document Library and Tagging
SharePoint's document libraries are very flexible, allowing users to add metadata that suits their purposes. In CTLT's ePortfolio contest (ctlt.wsu.edu/contest07/) we have had several examples of this; perhaps the most developed is in this winner's portfolio. (It's also worth noting that this contestant used email to send documents to the library, a SharePoint feature that integrated the "collect" phase of her portfolio work more completely with her other project work.) We are now exploring how to mash up SharePoint document libraries with other tools to create timelines showing the evolution of ideas in the portfolio.

Authorization controls
While WSU has a mechanism for outsiders to gain an identity and log in to university systems, as we have Blackboard configured, instructors can only authorize people into courses in the role of Teaching Assistant. Further, authorization to a Blackboard course gives access to the whole course; there is no fine-grained control over specific parts of the course. Finally, a SharePoint site can be configured for anonymous read, opening (portions of) the course to the world if needed.

"Pre-cooked" webparts and tools
SharePoint has a concept for exporting sites and elements of sites (libraries, web parts, surveys, etc) as .STP files and then re-importing these into other sites or adding them to templates for users to choose. This allows time-savings such as configuring a document library with specific columns, or an RSS reader with specific feeds pre-installed.

Adding more tools
Finally, SharePoint's architecture enables other linkages and mashups. It is a source and consumer of RSS, will support embedding of other Web 2.0 resources in its pages, and can capture email and originate email alerts. And with the SharePoint mySite, where the student is the owner of the SharePoint site over the span of their career, there is greater flexibility to support the hub and spoke models.
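
As an illustration of the RSS-consumer side of such a mashup, here is a minimal sketch (TypeScript running in a browser; the feed URL is a placeholder, and this is not a SharePoint-specific API) that pulls recent item titles from a student's spoke feed so they could be listed in the hub:

```typescript
// Minimal sketch: pull recent items from a student's (spoke) RSS feed so they
// can be listed in the hub site. The feed URL is a placeholder.
async function latestPostTitles(feedUrl: string, count = 5): Promise<string[]> {
  const response = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await response.text(), "application/xml");
  return Array.from(xml.querySelectorAll("item > title"))
    .slice(0, count)
    .map((node) => node.textContent ?? "");
}

// Usage (hypothetical feed URL):
// latestPostTitles("https://student.example.com/blog/rss.xml").then(console.log);
```

The same pattern works in the other direction, with the spokes subscribing to feeds the hub publishes.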