Thursday, July 31, 2008
We are moving out of Blogger. Please link and comment on this post in our Wordpress blog.
Nils Peterson, Theron DesRosier, Jayme Jacobson
This post is in a series exploring issues related to transforming the grade book. Most recently we have been developing elements of an implementation of these ideas and asking whether it is credible that students, faculty, and academic programs could learn from this data.
Below is a rendering of data previously collected in a rating session on a self-reflection document created by student employees as part of their job performance review. The study was a collaboration between the WSU Career Center and our office. The rubric had five dimensions, and the scale ranged from 1 to 6, with 6 the highest rating and 4 marking competence. Three groups were involved in the rating: students (peers), faculty, and external employers. The employers were recruiters from major companies in Washington state who were on campus as part of their student recruiting work.
The five dimensions of the rubric are plotted in this radar graph, with each dimension having its origin at the center of the pentagon and its highest possible value at the outer edge. The three groups of raters are shown as colored regions. The diagram tells a familiar story: students rate their peers most highly, faculty less so, and employers think least highly of student abilities. It also shows that in Teamwork and Communication all three groups agree students are underprepared (vertices below the competency threshold). In Mission, students think they are competent while the other groups disagree. In use of Evidence, faculty and students both think students are prepared, but employers disagree. Only in Duties do all three groups agree that students are prepared.
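For readers who want to reproduce this kind of rendering, here is a minimal matplotlib sketch. The dimension names follow the rubric, but the scores are illustrative placeholders, not the study data.

```python
# Minimal radar-graph sketch of rubric ratings by three rater groups.
# Dimension names follow the rubric; the scores below are illustrative
# placeholders, not the actual study data.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Duties", "Mission", "Evidence", "Teamwork", "Communication"]
ratings = {                         # hypothetical mean scores on the 1-6 scale
    "Students (peers)": [4.8, 4.3, 4.4, 3.8, 3.7],
    "Faculty": [4.5, 3.6, 4.2, 3.5, 3.4],
    "Employers": [4.2, 3.3, 3.5, 3.2, 3.1],
}

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]                # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for group, scores in ratings.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=group)
    ax.fill(angles, values, alpha=0.15)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(1, 6)                   # rubric scale: 1 (low) to 6 (high)
ax.set_yticks([4])                  # competency threshold
ax.set_yticklabels(["competent"])
ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
plt.show()
```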
Monday, July 21, 2008
Authentic assessment of learning in global contexts
We are moving out of Blogger. Please link and comment on this post in our Wordpress blog.
Nils Peterson, Gary Brown, Jayme Jacobson, Theron DesRosier
The AAC&U 2009 conference, Global Challenge, College Learning, and America’s Promise, asks a series of questions about “the relationship between college learning and society,” and the Call for Proposals notes “[College and university] failure to fully engage our publics with the kinds of learning best suited to our current needs…”
The following is an abstract submitted in response to the Call above. It served as an opportunity to “rise above” our work on these topics and reflect on its implications.
---
Assessment of learning by a community inside and outside the classroom is a key component to developing students’ global competencies and building a strong relationship between college learning and society.
In 2007-08 Washington State University conducted an ePortfolio Contest to demonstrate ways to harness the interests and expertise of the WSU community to address real-world problems encountered by communities both locally and globally. It called upon contestants to collaborate with community members (institutional, local, or global) to identify a problem, explore solutions, develop a plan, and then take steps toward implementing that plan. Contestants were asked to use electronic portfolios to capture and reflect on their collaborative problem-solving processes and the impact of their projects. Judges from industry, the local community, and WSU used a rubric based on the WSU Critical Thinking Rubric to evaluate the portfolios. Since the contest, we have been distilling design principles for portfolios that facilitate learning.
This exploration has taught us to value the learner consciously leaving a ‘learning trace’ as they work on a problem, and we believe capturing and sharing that trace is an important part of documenting learning. A striking example of a learning trace is one of the winning portfolios in our contest. Not only does this portfolio exhibit a learning trace, it also captures feedback from the community regarding the quality of the work. A recent AAC&U survey of employers supports this bias for richer documentation of learner skills.
Our thinking about portfolios for learning is moving us away from traditional ideas about courses contained in classrooms and toward Stephen Downes’ eLearning 2.0 ideas: “Students' [learning portfolios] are often about something from their own range of interests, rather than on a course topic or assigned project. More importantly, what happens when students [work in this way], is that a network of interactions forms--much like a social network, and much like Wenger's community of practice.” And far from being trivial social networking, our portfolio contest captured rich and substantive learning happening in the community outside the classroom.
But documentation of learning without feedback, guidance, and assessment leaves the learner to work without the support of a community. This has led us to recognize that grade books are QWERTY artifacts of a Learning 1.0 model. To address that, we have been exploring ways to transform the grade book to support learners working simultaneously within the university and within their communities of practice. This approach to a grade book has an additional benefit for the scholarship of teaching and learning: it gathers feedback on the assignment, course, and program from the community at the same time that it invites the community to assess a learner’s work. Such feedback can help the university engage its publics in a discussion of the kinds of learning most suited to current needs.
The questions in the AAC&U Call will be used to help the audience frame and focus the discussion of our ideas.
---
The Call “invites proposals of substantive, engaging sessions that will raise provocative questions, that will engage participants in discussion, and that will create and encourage dialogue--before, during, and after the conference itself.”
In the spirit of the AAC&U call, you are invited to engage in this discussion before or after the conference by posting comments here, on the related pages, or by tracking back from your own blog, and then meet us in Seattle to continue the conversation.
Labels:
assessment,
beyond LMS,
feedback,
grade book,
learning portfolio
Wednesday, July 16, 2008
Online survey response rates - effect of more time
The graph below is a comparison of response rates to an online course evaluation in a college at Washington State University. Faculty had hypothesized that the response rate was limited by the amount of time the survey was open to students, so the time available was varied across two administrations:
Fall 2007: 11/23 to 12/22 (29 days), 10,582 possible respondents
Spring 2008: 3/31 to 5/5 (35 days), 9,216 possible respondents
The x-axis in the graph below is time, normalized to the percentage of the survey window that had elapsed. The y-axis is also normalized, to the number of possible responses based on course enrollment data; that is, y is the response rate.
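As a sketch of this normalization, assuming the raw data is a list of time-stamped responses (the file and column names below are hypothetical, not the actual survey export):

```python
# Sketch of the normalization described above; the file and column names are
# hypothetical, not the actual survey export.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("fall2007_responses.csv", parse_dates=["submitted_at"])
window_open = pd.Timestamp("2007-11-23")
window_close = pd.Timestamp("2007-12-22")
possible_respondents = 10582        # from course enrollment data

responses = responses.sort_values("submitted_at")
# x: percent of the survey window elapsed when each response arrived
pct_of_window = 100 * (responses["submitted_at"] - window_open) / (window_close - window_open)
# y: cumulative responses as a percent of possible respondents (response rate)
response_rate = 100 * pd.Series(range(1, len(responses) + 1), index=responses.index) / possible_respondents

plt.plot(pct_of_window, response_rate, label="Fall 2007")
plt.xlabel("% of survey window elapsed")
plt.ylabel("response rate (%)")
plt.legend()
plt.show()
```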
Figure 1 (click to enlarge)
The Fall 2007 survey ran to completion (its cumulative response curve reached an asymptote), while the spring survey was possibly still rising at the cut-off date. Fall and spring total response rates are nevertheless very similar, suggesting that leaving the survey open longer has little impact on total response rate. So, contrary to the faculty hypothesis, the longer surveying window produced essentially the same overall response rate. This aligns with other data we have on response rates: some other, as yet unidentified, factor governs response rate.
It's interesting to note that you can see waves in the spring data, as if faculty exhorted students on a Monday and got another increment of response.
Monday, July 14, 2008
Implementing the Transformed Grade Book II
We are leaving Blogger for Wordpress; please join us there.
Jayme Jacobson, Theron DesRosier, Nils Peterson with help from Jack Wacknitz
Previously we showed examples of how a transformed grade book (or Harvesting grade book) might look from the perspective of a piece of student work seeking feedback. That demonstration was hand-built and is not practical to scale up to the level of an academic program. Even a smaller-scale use, such as that suggested by a recent piece in the Chronicle entitled Portfolios Are Replacing Qualifying Exams, would benefit from some automation. This post outlines a design for a software application that could plug into the Skylight Matrix Survey System (a new survey tool WSU is developing).
There are two workflows that we have previously described in general terms: one for the instructor to register assignments and another for students to request feedback as they work on those assignments. In the figure below we outline a series of screen shots for the instructor registering an assignment. During registration, the instructor matches the assignment to the appropriate rubric dimensions. We view this registration of assessments as a possible implementation of Stephen Downes’ idea in Open Source Assessment.
Instructor Dashboard (click image to enlarge)
This example shows how an instructor might use a dashboard to register and monitor assignments. The workflow shows capturing the assignment, assigning the rubric dimensions to be used for assessing both the student work and the assignment itself, and ends with routing the assignment for review. This mock-up does not show how the instructor would see the student work created in response to the assignment, or the scores associated with that student work. The next step in the workflow would require an upload of the assignment so that it could be retained in a program-level archive. The assignment could be referenced from that archive for faculty scholarship of teaching and learning (SoTL), as well as for program or accreditation review and other administrative purposes.
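To make the registration step concrete, here is a rough sketch of the kind of record the dashboard might create. The field names and example values are assumptions for illustration, not the Skylight Matrix Survey System schema.

```python
# Hypothetical record created when an instructor registers an assignment.
# Field names and values are illustrative, not the actual Skylight schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisteredAssignment:
    course: str
    title: str
    prompt_url: str                          # where the assignment prompt lives
    work_rubric_dimensions: list[str]        # dimensions for assessing student work
    assignment_rubric_dimensions: list[str]  # dimensions for assessing the assignment itself
    review_due: date
    archive_copy_url: str | None = None      # copy retained in the program-level archive

assignment = RegisteredAssignment(
    course="ENGL 402",
    title="Community problem-solving portfolio",
    prompt_url="https://example.edu/assignments/402-portfolio",
    work_rubric_dimensions=["Critical thinking", "Use of evidence", "Communication"],
    assignment_rubric_dimensions=["Alignment with outcomes", "Clarity of prompt"],
    review_due=date(2008, 9, 15),
)
```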
Once the assignment has been registered, the student could start from a student dashboard to request a review, or feedback and guidance. We are differentiating the idea of a full review (with a rubric) from more informal feedback and guidance. This informal feedback would probably not be fed into a grade book, but the captured feedback could be used by the student as evidence in a learning portfolio.
Student Dashboard (click image to enlarge)
The basic workflow for a student would let the student request a rubric-based review for a specific assignment in a specific course. The student would select the course, assignment, and other metadata. Once posted for review, the request would either be routed to multiple reviewers, or the student would embed the review into a webpage using the HTML code provided. In the second step there would be an opportunity to upload a document. This might be used in cases where the document had no web incarnation (to give it a URL), or to “turn in” a copy of the document that would not be subject to further editing, as might be required in some high-stakes assessments.
The Learning 2.0 model is supported in the last step, where the assessment is embedded in a web space still open to modification by the learner (as the previous examples illustrated).
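A rough sketch of a student review request and the embed snippet it might generate follows; the URL pattern and field names are invented for illustration.

```python
# Hypothetical review request and the HTML embed snippet a student could paste
# into their webpage. The survey URL pattern is invented for illustration.
from dataclasses import dataclass
from uuid import uuid4

@dataclass
class ReviewRequest:
    student: str
    course: str
    assignment: str
    work_url: str                # URL of the work, if it has a web incarnation
    rubric: str                  # rubric-based survey to use
    request_id: str

    def embed_code(self) -> str:
        """Return an iframe snippet for embedding the review form in a page."""
        survey_url = f"https://skylight.example.edu/review/{self.request_id}"
        return f'<iframe src="{survey_url}" width="600" height="400"></iframe>'

request = ReviewRequest(
    student="jdoe",
    course="ENGL 402",
    assignment="Community problem-solving portfolio",
    work_url="https://jdoe.example.edu/portfolio/draft-2",
    rubric="WSU Critical Thinking Rubric",
    request_id=str(uuid4()),
)
print(request.embed_code())
```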
Student-created Rubric-based Survey (click image to enlarge)
Students might want to use their own rubric-based surveys. This mock-up shows how the workflow would branch from the previous one to allow the student to define rubric dimensions and criteria.
Student-created Simple Feedback Survey (click image to enlarge)
This last example shows how the student would create a simple feedback survey.
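As a rough sketch of what these two student-created survey types might capture (the structures and wording here are assumptions, not the actual mock-ups):

```python
# Hypothetical sketches of the two student-created survey types above.

# 1. Student-defined rubric: each dimension describes criteria on the 1-6 scale.
student_rubric = {
    "Clarity of argument": {6: "compelling and well organized",
                            4: "understandable, with minor gaps",
                            1: "hard to follow"},
    "Use of sources": {6: "sources integrated and evaluated",
                       4: "sources cited but not evaluated",
                       1: "few or no sources"},
}

# 2. Simple feedback survey: one closed question plus one open guidance prompt.
simple_feedback_survey = {
    "feedback": ("Is this draft useful to you?", ["yes", "somewhat", "no"]),
    "guidance": "What would make it more useful?",
}
```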
State of the Art
Presently there are several tools that might be used to deliver a rubric survey. The challenge is the amount of handwork implied in letting each student have a rubric survey for each assignment in each course, and in aggregating the data from those surveys by assignment, by course, and by student for reporting. A future post will explore what might be learned by having the data centrally aggregated. If there is value in central aggregation, it will have implications for the tool and method selected to deliver the rubric survey. The Center for Teaching, Learning, and Technology at WSU already has tools to make the handwork tractable for a pilot course of 20-30 students. We understand the path to developing further automation, but both a pilot test and further automation require investment, which in turn requires further analysis of commitment, infrastructure, and resources.
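As a sketch of what central aggregation might look like, assuming each completed rubric review is stored as one row (the column names are assumptions, not an existing schema):

```python
# Hypothetical central aggregation: one row per completed rubric review,
# grouped for reporting by student, course, and assignment.
import pandas as pd

reviews = pd.DataFrame([
    {"course": "ENGL 402", "assignment": "Portfolio", "student": "jdoe",
     "reviewer_type": "employer", "dimension": "Communication", "score": 3},
    {"course": "ENGL 402", "assignment": "Portfolio", "student": "jdoe",
     "reviewer_type": "peer", "dimension": "Communication", "score": 5},
    {"course": "ENGL 402", "assignment": "Portfolio", "student": "asmith",
     "reviewer_type": "faculty", "dimension": "Evidence", "score": 4},
])

by_student = reviews.groupby(["student", "dimension"])["score"].mean()
by_course = reviews.groupby(["course", "dimension", "reviewer_type"])["score"].mean()
by_assignment = reviews.groupby(["assignment", "dimension"])["score"].agg(["mean", "count"])
print(by_course)
```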
Questions
1. Can this concept provide transformative assessment data that students, instructors, and programs can use to advance learning? In addition to assessing student learning, can it provide data for instructor course evaluation and for program-level assessment and accreditation?
2. Can the process be made simple enough that it adds little overhead to a course’s operations?
3. Is it necessary to implement feedback and guidance as a central, university-hosted tool, or could students implement an adequate solution without more investment on the university's part?
Labels:
assessment,
feedback,
grade book,
instructor evaluation
Thursday, July 3, 2008
Implementing Feedback and Guidance
Previously we (Theron DesRosier, Jayme Jacobson, and I) wrote about implementing our ideas for a transformed grade book. The next step in that discussion was to think about how to render the data. Our question was, “How do we render this data so that the learner can learn from it?”
We’ve been spending time with Grant Wiggins’ Assessing Student Performance (Jossey-Bass, 1993). In Chapter 6, on feedback, he differentiates ‘feedback’ from ‘guidance’ in several examples and defines feedback as “information that provides the performer with direct, usable insights into current performance, based on tangible differences between current performance and hoped-for performance.”
Wiggins describes ‘guidance’ as looking forward (the roadmap to follow toward my goal) and ‘feedback’ as looking backward (did my last action keep me on the road or steer me off?).
Wiggins points to Peter Elbow (Embracing Contraries: Explorations in Learning and Teaching): “The unspoken premise that permeates much of education is that every performance must be measured and that the most important response to a performance is to measure it.” Both authors go on to suggest that less measurement is needed, and that feedback is the important (and often missing) element to substitute for measurement.
Our previous posts on the transformed grade book are designed to be a measurement strategy (that we hoped would also provide feedback to learners), but Wiggins leads me to think that learners need a feedback tool different from a measurement tool (used to populate the grade book).
While bloggers have some experience getting feedback from the blogosphere by informal means, I think it would be useful to scaffold the request for feedback, for both learners and feedback givers. However, I want the process to be simple and fast for both parties. What I hope to avoid is the all-too-common tendency of student peers to give trite “great job” reviews, or to fall back on reviewing mechanical things, such as looking for spelling errors.
To that end, I am exploring a simplified approach from the idea in the last post. Recently, I tried a version of this simple idea in an email to Gary Brown (our director). He had asked me for a report on LMS costs to be shared with a campus committee. I replied with the report and this:
Feedback request: Is this going to meet your needs for the LMS committee? (Yes/somewhat/no)
Guidance request: what additional issues would you like considered?
Implicit in the feedback request was my goal of meeting his needs.
Even with this simple feedback + guidance request the question remains: can we render the data that would be collected in a way the learner could learn from it? Below is a hypothetical graph of multiple events (draft and final documents) where I asked Gary for feedback: “Is this useful?” The series makes evident to me (the learner) that initially the feedback I’m getting is not very affirming, and final versions don’t fare much better than drafts. Reflecting on this, I have a heart-to-heart talk and afterwards develop a new strategy that improves my feedback results.
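Here is a small sketch of how such a series might be rendered; the responses are hypothetical, not actual feedback.

```python
# Hypothetical "Is this useful?" feedback over successive drafts and finals.
# None of these responses are actual data; they only illustrate the rendering.
import matplotlib.pyplot as plt

score = {"no": 0, "somewhat": 1, "yes": 2}
events = [  # (document version, feedback received)
    ("draft 1", "no"), ("final 1", "somewhat"),
    ("draft 2", "no"), ("final 2", "somewhat"),
    # heart-to-heart talk and new strategy adopted here
    ("draft 3", "somewhat"), ("final 3", "yes"),
    ("draft 4", "yes"), ("final 4", "yes"),
]

labels = [version for version, _ in events]
values = [score[answer] for _, answer in events]

plt.plot(range(len(events)), values, marker="o")
plt.xticks(range(len(events)), labels, rotation=45)
plt.yticks(list(score.values()), list(score.keys()))
plt.ylabel('Feedback: "Is this useful?"')
plt.tight_layout()
plt.show()
```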
Versions of this kind of “Was this helpful?” feedback appear on some online help resources, and I assume that someone is reviewing the feedback and updating the help pages, and could produce graphs similar to the one above, showing improved feedback after specific interventions.
Here is Google's feedback request from a help page found from a Google App:
When you choose the Yes or No feedback, another question appears, and in this case you are giving guidance on what would make the item better, either by picking a suggestion from a list or by providing an open-ended reply.
In addition to comments or trackbacks, please give me feedback and guidance on this post (my form is not as slick as Google's).