Tuesday, August 12, 2008

Leaving Blogger for WordPress

We are leaving Blogger for WordPress. Please visit us there.

I've been looking into how Blogger handles trackback. Sigh: Blogger doesn't support trackback; it uses a slower "linkback" technology. But it's not clear that linkbacks from Blogger to other blogs ever happen. The result is no conversation between blogs, when the whole point of blogging is to foster conversation. As a result, we are moving the CTLT blog to WordPress hosting, where trackback/pingback actually works.

To add insult to injury, Blogger has now decided our former blog is a spam blog and requires us to complete a CAPTCHA to edit (after logging in). So much for making blog posts rich in contextualized links.

Thursday, July 31, 2008

Graphing Multidimensional Data for Learning

We are moving out of Blogger. Please link to and comment on this post in our WordPress blog.

Nils Peterson, Theron DesRosier, Jayme Jacobson

This post is part of a series exploring issues related to transforming the grade book. Most recently we have been developing elements of an implementation of these ideas and asking whether it is credible that students, faculty, and academic programs could learn from this data.

Below is a rendering of data previously collected in a rating session on a self-reflection document created by student employees as part of their job performance review. The study was a collaboration between the WSU Career Center and our office. The rubric had five dimensions, and the scale ranged from 1 to 6, with 6 the highest and 4 marking competence. Three groups of raters were involved: students (peers), faculty, and external employers. The employers were recruiters from major companies in Washington State who were on campus as part of their student recruiting work.
[Figure: radar graph of ratings by students, faculty, and employers on the five rubric dimensions]

The five dimensions of the rubric are plotted in this radar graph, with each dimension having its origin at the center of the pentagon and its highest possible value at the outer edge. The three groups of raters are shown as colored regions. The diagram shows a familiar story: students rate their peers most highly, faculty less so, and employers think least highly of student abilities. The diagram also shows that on Teamwork and Communication all three groups agree students are underprepared (those vertices fall below competency). On Mission, students think they are competent while the other groups disagree. On use of Evidence, faculty and students both think students are prepared, but employers disagree. Only on Duties do all three groups agree that students are prepared.
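For readers who want to reproduce this kind of rendering, here is a minimal sketch in Python with matplotlib. The scores are invented placeholders that follow the pattern described above, not the data from the rating session, and the dimension labels are shortened from the rubric.

```python
import numpy as np
import matplotlib.pyplot as plt

# Five rubric dimensions; scores below are invented placeholders that follow
# the pattern described in the post, not the actual study data.
dimensions = ["Duties", "Mission", "Evidence", "Teamwork", "Communication"]
ratings = {
    "Students (peers)": [4.8, 4.3, 4.4, 3.7, 3.8],
    "Faculty":          [4.5, 3.6, 4.1, 3.5, 3.4],
    "Employers":        [4.2, 3.4, 3.3, 3.2, 3.1],
}

# One angle per dimension; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for group, scores in ratings.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=group)
    ax.fill(angles, values, alpha=0.15)   # shaded region for each rater group

ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(1, 6)                         # rubric scale: 1 (low) to 6 (high), 4 = competent
ax.legend(loc="lower right", bbox_to_anchor=(1.2, 0.0))
plt.show()
```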

Monday, July 21, 2008

Authentic assessment of learning in global contexts

We are moving out of Blogger. Please link to and comment on this post in our WordPress blog.


Nils Peterson, Gary Brown, Jayme Jacobson, Theron DesRosier


The AAC&U 2009 conference, Global Challenge, College Learning, and America’s Promise, asks a series of questions about “the relationship between college learning and society,” and the Call for Proposals notes “[College and university] failure to fully engage our publics with the kinds of learning best suited to our current needs…”

The following is an abstract submitted in response to the Call above. It served as an opportunity to “rise above” our work on these topics and reflect on its implications.

---


Assessment of learning by a community inside and outside the classroom is a key component of developing students’ global competencies and building a strong relationship between college learning and society.

In 2007-08 Washington State University conducted an ePortfolio Contest to demonstrate ways to harness the interests and expertise of the WSU community to address real world problems encountered by communities both locally and globally. It called upon contestants to collaborate with community members (institutional, local, or global) to identify a problem, explore solutions, develop a plan, and then take steps toward implementing that plan. Contestants were asked to use electronic portfolios to capture and reflect on their collaborative problem-solving processes and the impact of their projects. Judges from industry, the local community, and WSU used a rubric based on the WSU Critical Thinking Rubric to evaluate the portfolios. Since the contest, we have been distilling design principles for portfolios that facilitate learning.

This exploration has taught us to value the learner consciously leaving a ‘learning trace’ as they work on a problem, and we believe that capturing and sharing that trace is an important part of documenting learning. A striking illustration of such a learning trace is one of the winning portfolios in our contest. Not only does this portfolio exhibit a learning trace, it captures feedback from the community regarding the quality of the work. A recent AAC&U survey of employers supports this bias for richer documentation of learner skills.

Our thinking about portfolios for learning is moving us away from traditional ideas about courses contained in classrooms and toward Stephen Downes’ eLearning 2.0 ideas: “Students' [learning portfolios] are often about something from their own range of interests, rather than on a course topic or assigned project. More importantly, what happens when students [work in this way], is that a network of interactions forms – much like a social network, and much like Wenger's community of practice.” And far from being trivial social networking, our portfolio contest captured rich and substantive learning happening in the community outside the classroom.

But documentation of learning without feedback, guidance, and assessment leaves the learner working without the support of a community. That recognition has led us to see grade books as QWERTY artifacts of a Learning 1.0 model. To address that, we have been exploring ways to transform the grade book to support learners working simultaneously within the university and within their communities of practice. This approach to a grade book has an additional benefit for the scholarship of teaching and learning: it gathers feedback on the assignment, course, and program from the community at the same time that it invites the community to assess a learner’s work. Such feedback can help the university engage its publics in a discussion of the kinds of learning most suited to current needs.

The questions in the AAC&U Call will be used to help the audience highlight and frame the discussion of our ideas.

---

The Call “invites proposals of substantive, engaging sessions that will raise provocative questions, that will engage participants in discussion, and that will create and encourage dialogue--before, during, and after the conference itself.”

In the spirit of the AAC&U call, you are invited to engage this discussion before or after the conference by posting comments here, on the related pages, or by tracking back from your own blog, and then meet us in Seattle to further the conversation.

Wednesday, July 16, 2008

Online survey response rates - effect of more time

The graph below is a comparison of response rates to an online course evaluation in a college at Washington State University. Faculty had hypothesized that the response rate was limited by the amount of time the survey was open to students so the time available was varied in two administrations:

Fall 2007: 11/23 to 12/22 (29 days), 10,582 possible respondents
Spring 2008: 3/31 to 5/5 (35 days), 9,216 possible respondents

The x-axis in the graph below is time, normalized to the percentage of the survey window that had elapsed.

The y-axis is also normalized, to the number of possible responses based on course enrollment data; that is, the y-value is the cumulative response rate.
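For concreteness, here is a sketch of that normalization in Python with pandas. The daily counts are invented, and the windows are shortened just to keep the example small; only the possible-respondent totals come from the table above.

```python
import pandas as pd

def normalize(daily_counts, possible_respondents):
    """Turn raw daily response counts into (% of window elapsed, cumulative response rate)."""
    counts = pd.Series(daily_counts)
    days_open = len(counts)
    return pd.DataFrame({
        "pct_of_window": [(day + 1) / days_open * 100 for day in range(days_open)],
        "response_rate": counts.cumsum() / possible_respondents * 100,
    })

# Illustrative daily counts only; the real administrations ran 29 and 35 days.
fall_2007   = normalize([800, 450, 300, 200, 150, 100, 80, 60, 40, 20],
                        possible_respondents=10582)
spring_2008 = normalize([700, 400, 250, 180, 220, 120, 90, 200, 70, 30, 20, 10],
                        possible_respondents=9216)

# Because both axes are normalized, the two administrations can be plotted on
# the same axes even though their windows differ in length.
```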


Figure 1: cumulative response rate vs. percentage of the survey window elapsed, Fall 2007 and Spring 2008
The Fall 2007 survey ran to completion (the curve reached an asymptote), whereas the Spring 2008 survey was (maybe) still rising at the cutoff date. The Fall and Spring total response rates are very similar, suggesting that leaving the survey open longer has little impact on total response rate. So, contrary to the faculty hypothesis, the longer surveying window achieved essentially the same overall response rate. This aligns with other data we have on response rates: some factor other than time, not yet identified, is governing response rate.

It's interesting to note that you can see waves in the spring data, as if faculty exhorted students on Monday and got another increment of responses.

Monday, July 14, 2008

Implementing the Transformed Grade Book II

We are leaving Blogger for WordPress; please join us there.

Jayme Jacobson, Theron DesRosier, Nils Peterson with help from Jack Wacknitz

Previously we showed examples of how a transformed grade book (or Harvesting grade book) might look from the perspective of a piece of student work seeking feedback. That demonstration was hand built and is not practical to scale up to the level of an academic program. Even smaller-scale use, such as that suggested by a recent piece in the Chronicle entitled Portfolios Are Replacing Qualifying Exams, would benefit from some automation. This post outlines a design for a software application that could plug into the Skylight Matrix Survey System (a new survey tool WSU is developing).

There are two workflows that we have previously described in general terms: one for the instructor to register assignments, and another for students to request feedback as they work on those assignments. In the figure below we outline a series of screen shots for the instructor registering an assignment. During the registration process the instructor matches the assignment to the appropriate assessment rubric dimensions. We view this registration of the assessments as a possible implementation of Stephen Downes’ idea in Open Source Assessment.


Instructor Dashboard (mock-up)


This example shows how an instructor might use a dashboard to register and monitor assignments. The workflow shows capturing the assignment, assigning the rubric dimensions to be used for assessing both the student work and the assignment itself, and ends with routing the assignment for review. This mock-up does not show how the instructor would see the student work created in response to the assignment, or the scores associated with that student work. The next step in the workflow would require an upload of the assignment so that it could be retained in a program-level archive. The assignment could be referenced from that archive for the scholarship of teaching and learning (SoTL), as well as for program or accreditation review and other administrative purposes.
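As a thought experiment, the registration step might capture a record along the lines of the sketch below. The field names, course, and URLs are our own illustration; they are not an existing Skylight interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssignmentRegistration:
    """Hypothetical record an instructor submits when registering an assignment
    with the academic program. Field names are illustrative, not a real Skylight API."""
    course_id: str
    assignment_title: str
    assignment_url: str                       # where the assignment text lives (program archive)
    rubric_dimensions: List[str] = field(default_factory=list)
    assess_assignment_itself: bool = True     # reviewers may also rate the assignment

# Example registration (all values invented).
example = AssignmentRegistration(
    course_id="ENGL-402",
    assignment_title="Community problem-solving portfolio",
    assignment_url="https://example.edu/program-archive/engl402/portfolio",
    rubric_dimensions=["Problem identification", "Use of evidence", "Communication"],
)
```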

Once the assignment has been registered, the student could start from a student dashboard to request a review or feedback and guidance. We are differentiating the idea of a full review (with a rubric) from more informal feedback and guidance. This informal feedback would probably not be fed into a grade book but the captured feedback could be used by a student as evidence in a learning portfolio.


Student Dashboard (mock-up)


The basic workflow for a student would let the student request a rubric-based review for a specific assignment in a specific course. The student would select the course, assignment, and other metadata. Once posted for review, the request would either be routed to multiple reviewers or the student would embed the review into a webpage using the HTML code provided. In the second step there would be an opportunity to upload a document. This might be used in cases where the document had no web incarnation (to give it a URL) or to “turn in” a copy of the document that would not be subject to further editing, as might be required in some high-stakes assessments.

The Learning 2.0 model is supported in the last step, where the assessment is embedded in a web space still open to modification by the learner (as the previous examples illustrated).
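As a sketch of what “the HTML code provided” might look like when a student chooses the embed option, consider the snippet below. The URL scheme and query parameters are invented for illustration and are not part of the Skylight design.

```python
def embed_snippet(survey_base_url, course_id, assignment_id, student_id):
    """Build an iframe snippet a student could paste into a blog footer or portfolio page.
    The URL and parameter names here are hypothetical."""
    src = (f"{survey_base_url}?course={course_id}"
           f"&assignment={assignment_id}&student={student_id}")
    return (f'<iframe src="{src}" width="400" height="600" '
            f'title="Rubric-based review"></iframe>')

# Example (all identifiers invented):
print(embed_snippet("https://skylight.example.edu/review",
                    course_id="ENGL-402",
                    assignment_id="portfolio-1",
                    student_id="s123456"))
```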


Student-created Rubric-based Survey (mock-up)

Students might want to use their own rubric-based surveys. This mock-up shows how the flow would branch from the previous workflow to allow the student to define their own rubric dimensions and criteria.


Student-created Simple Feedback Survey (mock-up)

This last example shows how the student would create a simple feedback survey.


State of the Art
Presently there are several tools that might be used to deliver a rubric survey. The challenge is the amount of handwork implied in letting each student have a rubric survey for each assignment in each course, and in aggregating the data from those surveys by assignment, by course, and by student for reporting. A future post will explore what might be learned by having the data centrally aggregated. If there is value in central aggregation, it will have implications for the tool and method selected for rubric survey delivery. The Center for Teaching, Learning, and Technology at WSU already has tools to make that handwork tractable for a pilot course of 20-30 students. We understand the path to further automation, but both a pilot test and further automation require investment, which in turn requires further analysis of commitment, infrastructure, and resources.
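To give a flavor of what central aggregation could support, here is a sketch in Python with pandas. The column names stand in for whatever a rubric-survey tool would actually export; the rows are invented.

```python
import pandas as pd

# Placeholder export of rubric-survey responses; column names and values are assumptions.
responses = pd.DataFrame([
    {"student": "s1", "course": "ENGL-402", "assignment": "portfolio-1",
     "dimension": "Use of evidence", "rater_role": "employer", "score": 3},
    {"student": "s1", "course": "ENGL-402", "assignment": "portfolio-1",
     "dimension": "Use of evidence", "rater_role": "faculty", "score": 5},
    {"student": "s2", "course": "ENGL-402", "assignment": "portfolio-1",
     "dimension": "Communication", "rater_role": "peer", "score": 4},
])

# Aggregate by student, by assignment, and by course for reporting.
by_student    = responses.groupby(["student", "dimension"])["score"].mean()
by_assignment = responses.groupby(["assignment", "dimension", "rater_role"])["score"].mean()
by_course     = responses.groupby(["course", "dimension"])["score"].mean()
```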

Questions
1. Can this concept provide transformative assessment data that can be used by students, instructors, and programs to advance learning? In addition to assessing student learning, can it provide data for instructor course evaluation and for program-level assessment and accreditation?

2. Can the process be made simple enough to be unobtrusive in terms of overhead in a course’s operations?

3. Is it necessary to implement feedback and guidance as a central, university-hosted tool, or could students implement an adequate solution without more investment on the university's part?

Thursday, July 3, 2008

Implementing Feedback and Guidance

Previously we (Theron DesRosier, Jayme Jacobson, and I) wrote about implementing our ideas for a transformed grade book. The next step in that discussion was to think about how to render the data. Our question was, “How do we render this data so that the learner can learn from it?”

We’ve been spending time with Grant Wiggins’ Assessing Student Performance, Jossey-Bass, 1993. In Chapter 6 on Feedback, he differentiates ‘feedback’ from ‘guidance’ in several examples and defines feedback as “information that provides the performer with direct, usable insights into current performance, based on tangible differences between current performance and hoped-for performance.”

Wiggins describes ‘guidance’ as looking forward (the roadmap to follow to my goal) and ‘feedback’ as looking backward (did my last action keep me on the road or steer me off?).
Wiggins also points to Peter Elbow, Embracing Contraries and Explorations in Learning and Teaching: “The unspoken premise that permeates much of education is that every performance must be measured and that the most important response to a performance is to measure it.” Both authors go on to suggest that less measurement is needed, and that feedback is the important (often missing) element to substitute for measurement.

Our previous posts on the transformed grade book describe a measurement strategy (one we hoped would also provide feedback to learners), but Wiggins leads me to think that learners need a feedback tool distinct from the measurement tool used to populate the grade book.

While bloggers have some experience getting feedback from the blogosphere by informal means, I think that it would be useful to scaffold requesting feedback, for both learners and feedback givers. However, I want the process to be simple and fast for both parties. What I hope to avoid is the too common tendency of student peers to give trite “great job” reviews, or to fall into reviewing mechanical things, such as looking for spelling errors.

To that end, I am exploring a simplified version of the idea in the last post. Recently, I tried this simple approach in an email to Gary Brown (our director). He had asked me for a report on LMS costs to be shared with a campus committee. I replied with the report and this:

Feedback request: Is this going to meet your needs for the LMS committee? (Yes/somewhat/no)

Guidance request: what additional issues would you like considered?

Implicit in the feedback request was my goal of meeting his needs.

Even with this simple feedback + guidance request, the question remains: can we render the data that would be collected in a way the learner could learn from it? Below is a hypothetical graph of multiple events (draft and final documents) where I asked Gary for feedback: “Is this useful?” The series makes evident to me (the learner) that initially the feedback I’m getting is not very affirming, and that final versions don’t fare much better than drafts. Reflecting on this, I have a heart-to-heart talk and afterwards develop a new strategy that improves my feedback results.
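Here is a sketch of how such a series might be rendered, with invented responses to “Is this useful?” coded as yes/somewhat/no; the events, scores, and timing of the intervention are all hypothetical.

```python
import matplotlib.pyplot as plt

# Invented responses to "Is this useful?" coded no=0, somewhat=1, yes=2.
events = ["draft 1", "final 1", "draft 2", "final 2", "draft 3", "final 3"]
scores = [0, 1, 0, 1, 2, 2]          # hypothetical; improvement follows the talk
intervention_after = 3               # the heart-to-heart talk happens after the fourth event

plt.plot(range(len(events)), scores, marker="o")
plt.axvline(x=intervention_after + 0.5, linestyle="--", label="new strategy adopted")
plt.xticks(range(len(events)), events, rotation=45)
plt.yticks([0, 1, 2], ["no", "somewhat", "yes"])
plt.ylabel('Response to "Is this useful?"')
plt.legend()
plt.tight_layout()
plt.show()
```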

Versions of this kind of “Was this helpful?” feedback appear on some online help resources, and I assume that someone is reviewing the feedback and updating the help pages, and could produce graphs similar to the one above, showing improved feedback after specific interventions.

Here is Google's feedback request from a help page reached from a Google App: a simple Yes/No question about whether the page was helpful.
When you choose Yes or No, another question appears; in that case you are giving guidance on what would make the item better, either by selecting a suggestion from a list or by providing an open-ended reply.

In addition to comments or trackbacks, please give me feedback and guidance on this post (my form is not as slick as Google's).

Friday, June 20, 2008

Implementing the Transformed Grade Book

Theron DesRosier, Jayme Jacobson, Nils Peterson

Previously we described ideas on Transforming the Grade Book, by way of elaborating on Gary Brown’s idea of a “Harvesting Gradebook.”

Here we demonstrate an implementation of those ideas in the form of a website with student work and embedded assessment. This demonstration is implemented in Microsoft SharePoint, with the WSU Critical and Integrative Thinking rubric as the criteria and a Google Docs survey as the data-collection vehicle, but other platforms could contain the student work, and other assessment rubrics could be delivered in other survey tools. (Note: this implementation is built with baling wire and duct tape and would not scale.)

There are four examples of student work (a Word document with tracked changes, a blog post, a wiki diff, and an email) to illustrate the variety of student work that might be collected and the variety of contexts in which students might be working. This student work might be organized as part of an institutionally sponsored hub-and-spoke style LMS, in an institutionally sponsored ePortfolio (as WSU is doing with SharePoint mySites), or directly in venues controlled by the student (see, for example, the blog and email examples below), where the student embeds a link to the grade book provided by the academic program.

Examples of Assessing Student Work (aka transformed ‘grading’)

The first example is a Microsoft Word document, stored in SharePoint and included in this page with the Document Viewer web part. You are seeing the tracked changes and comments in the Word document. In some browsers you will also see pop-up notes and the identities of the reviewers.

To the right of the document is the rubric for assessing the work. Clicking “Expand” in the rubric will open a new window with details of the rubric dimension and a Google Docs survey where you can enter a numeric score and comments for your assessment of the work against that criterion.

This survey also collects information about your role because it is important in our conceptualization of this transformed grade book to have multiple perspectives and to be able to analyze feedback based on its source.

In our description of the workflow for this assessment process we say:
Instructors start the process by defining assignments for their classes and “registering” them with the academic program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment.
This demonstration shows one of the important impacts of the “registration”: as a reviewer of the student's work, you can follow a link to see the assignment that generated this piece of student work, AND you can then apply the assessment criteria to the assignment itself.
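One way to picture the linkage that registration makes possible is sketched below; the identifiers, URLs, and field names are our own illustration, not part of the demonstration site.

```python
# Hypothetical linked records; keys and values are illustrative only.
assignment = {
    "id": "a-101",
    "course": "ENGL-402",
    "prompt_url": "https://example.edu/program-archive/a-101",
    "rubric_dimensions": ["Problem identification", "Use of evidence", "Communication"],
}

student_work = {
    "id": "w-555",
    "assignment_id": "a-101",   # link back to the registered assignment
    "work_url": "https://student-blog.example.com/2008/06/post.html",
}

assessments = [
    # A reviewer can score the student work...
    {"target": "w-555", "dimension": "Use of evidence", "score": 4, "rater_role": "employer"},
    # ...and, because of registration, apply the same criteria to the assignment itself.
    {"target": "a-101", "dimension": "Use of evidence", "score": 5, "rater_role": "faculty"},
]
```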

Finally, in an effort at ongoing improvement of the assessment instrument itself, the survey asks for information about the learning outcome, its description, and its relevance, with the assumption that the rubric is open to revision over time.

In this demo, you can complete the survey and submit data, but your data will not be visible in later parts of the demo. Rather, specific data for demonstration purposes will be presented elsewhere.

The second example is a blog post, in Blogger, included in the site with SharePoint’s Page Viewer web part. Again, to the right of the post is the rubric implemented in the form of a survey. With the Page Viewer web part the reviewer can navigate around the web from the blog post to see relevant linked items.

While this demonstration has embedded the blog post into a SharePoint site, that is not a requirement. The student could embed the rubric into the footer of the specific blog post or in the margin of the whole blog. To demonstrate the former idea, we have embedded a sample at the bottom of this post. Adding criterion-based feedback extends the power of the comments and trackbacks already inherent in blogging.

The third example is a Wiki Diff, again included in the site with SharePoint’s Page Viewer web part. Again, to the right is a rubric implemented in the form of a survey.

The fourth example is an email the student wrote. This was embedded into the SharePoint site, but, as with the blog example, the author could have included a link to a criterion-based review in the footer of the email.

A subsequent post will address visualization of this data by the student, the instructor, and the academic program.

Please use the rubric below to give us feedback on this work: