Tuesday, August 12, 2008

Leaving Blogger for Wordpress

We are leaving Blogger for WordPress. Please visit us there.

I've been looking into how Blogger handles trackback. Sigh, Blogger doesn't support it; it uses a slower "linkback" technology. And it's not clear that linkbacks from Blogger to other blogs ever happen, which results in no conversation between blogs, when the point of blogging is to foster conversation. As a result, we are moving the CTLT blog to WordPress hosting, where trackback/pingback is real.

To add insult to injury, Blogger has now decided our former blog is a spam blog and requires us to use a Captcha to edit (after logging in). So much for making blog posts rich in contextualized links.

Thursday, July 31, 2008

Graphing Multidimensional Data for Learning

We are moving out of Blogger. Please link to and comment on this post in our WordPress blog.

Nils Peterson, Theron DesRosier, Jayme Jacobson

This post is in a series exploring issues related to transforming the grade book. Most recently we have been developing elements of an implementation of these ideas, and have been asking whether it is credible that students, faculty, and academic programs could learn from this data.

Below is a rendering of data previously collected in a rating session on a self-reflection document created by student employees as part of their job performance review. This study was done in collaboration between the WSU Career Center and our office. The rubric had five dimensions, and the scale ranged from 1 to 6, 6 being highest, 4 being competent. Three groups were involved in the rating: students (peers), faculty and external employers. The employers were recruiters from major companies in Washington state, who were on campus as part of their student recruiting work.

The five dimensions of the rubric are plotted in this radar graph, with each dimension having its origin at the center of the pentagon and its highest possible value at the outer edge. The three groups of raters are shown as colored regions. The diagram tells a familiar story: students rate their peers most highly, faculty rate students lower, and employers rate student abilities lowest. The diagram also shows that on Teamwork and Communication all three groups agree students are underprepared (those vertices fall below competency). On Mission, students think they are competent while the other groups disagree. On use of Evidence, faculty and students both think students are prepared, but employers disagree. Only on Duties do all three groups agree that students are prepared.
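For readers who want to reproduce this kind of rendering with their own rating data, here is a minimal sketch using Python and matplotlib. The scores below are invented solely to mirror the qualitative pattern described above (they are not the study's data), and the dimension labels are paraphrased from the rubric.

```python
# Illustrative sketch: five rubric dimensions for three rater groups
# rendered as a radar (spider) chart. All scores are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Duties", "Mission", "Evidence", "Teamwork", "Communication"]
# Hypothetical mean ratings on the 1-6 scale (4 = competent)
ratings = {
    "Students":  [5.0, 4.5, 4.4, 3.8, 3.7],
    "Faculty":   [4.6, 3.8, 4.2, 3.5, 3.4],
    "Employers": [4.2, 3.5, 3.6, 3.2, 3.0],
}

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for group, values in ratings.items():
    closed = values + values[:1]
    ax.plot(angles, closed, label=group)
    ax.fill(angles, closed, alpha=0.15)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(1, 6)
ax.set_yticks([4])
ax.set_yticklabels(["competent"])  # mark the competency threshold
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```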

Monday, July 21, 2008

Authentic assessment of learning in global contexts

We are moving out of Blogger. Please link to and comment on this post in our WordPress blog.


Nils Peterson, Gary Brown, Jayme Jacobson, Theron DesRosier


The AAC&U 2009 conference, Global Challenge, College Learning, and America’s Promise, asks a series of questions about “the relationship between college learning and society,” and the Call for Proposals notes “[College and university] failure to fully engage our publics with the kinds of learning best suited to our current needs…”

The following is an abstract submitted in response to the Call above. It served as an opportunity to “rise above” our work on these topics and reflect on its implications.

---


Assessment of learning by a community inside and outside the classroom is a key component to developing students’ global competencies and building a strong relationship between college learning and society.

In 2007-08 Washington State University conducted an ePortfolio Contest to demonstrate ways to harness the interests and expertise of the WSU community to address real-world problems encountered by communities both locally and globally. It called upon contestants to collaborate with community members (institutional, local, or global) to identify a problem, explore solutions, develop a plan, and then take steps toward implementing that plan. Contestants were asked to use electronic portfolios to capture and reflect on their collaborative problem-solving processes and the impact of their projects. Judges from industry, the local community, and WSU used a rubric based on the WSU Critical Thinking Rubric to evaluate the portfolios. Since the contest, we have been distilling design principles for portfolios that facilitate learning.

This exploration has taught us to value the learner consciously leaving a ‘learning trace’ as they work on a problem, and we believe that capturing and sharing that trace is an important part of documenting learning. One of the winning portfolios in our contest is a striking example of a learning trace; not only does it exhibit a learning trace, it captures feedback from the community regarding the quality of the work. A recent AAC&U survey of employers supports this bias for richer documentation of learner skills.

Our thinking about portfolios for learning is moving us away from traditional ideas about courses contained in classrooms and toward Stephen Downes’ eLearning 2.0 ideas: “Students' [learning portfolios] are often about something from their own range of interests, rather than on a course topic or assigned project. More importantly, what happens when students [work in this way], is that a network of interactions forms, much like a social network, and much like Wenger's community of practice.” And far from being trivial social networking, our portfolio contest captured rich and substantive learning happening in the community outside the classroom.

But documentation of learning without feedback, guidance, and assessment leaves the learner to work without the support of a community, which has led us to recognize that grade books are QWERTY artifacts of a Learning 1.0 model. To address that, we have been exploring ways to transform the grade book to support learners working simultaneously within the university and within their communities of practice. This approach to a grade book has the additional benefit, for the scholarship of teaching and learning, of gathering feedback on the assignment, course, and program from the community at the same time that it invites the community to assess a learner’s work. Such feedback can help the university engage its publics in a discussion of the kinds of learning most suited to current needs.

The questions in the AAC&U Call will be used to help the audience highlight and frame the discussion of our ideas.

---

The Call “invites proposals of substantive, engaging sessions that will raise provocative questions, that will engage participants in discussion, and that will create and encourage dialogue--before, during, and after the conference itself.”

In the spirit of the AAC&U Call, you are invited to engage in this discussion before or after the conference by posting comments here, on the related pages, or by tracking back from your own blog, and then to meet us in Seattle to further the conversation.

Wednesday, July 16, 2008

Online survey response rates - effect of more time

The graph below is a comparison of response rates to an online course evaluation in a college at Washington State University. Faculty had hypothesized that the response rate was limited by the amount of time the survey was open to students so the time available was varied in two administrations:

Fall 2007 11/23 to 12/22 (29 days) 10582 possible respondents
Spring 2008 3/31 - 5/5 (35 days) 9216 possible respondents

The x-axis in the graph below is time, but normalized to the % of time the survey was open.

The y-axis is also normalized, to the number of possible responses based on course enrollment data; i.e., the y-axis is the response rate.


Figure 1: Normalized response rates for Fall 2007 and Spring 2008
The Fall 2007 survey ran to completion (reached an asymptote), whereas the spring one was (maybe) still rising at the cutoff date. Fall and Spring total response rates are very similar, suggesting that leaving the survey open longer has little impact on total response rate. So, contrary to what faculty hypothesized, the same overall response rate was achieved in the longer surveying window. This aligns with other data we have on response rates: some other factor governs response rate that we have not yet identified.
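A rough sketch of the normalization described above, using invented daily counts rather than the WSU data: the x value is the fraction of the survey window elapsed and the y value is the cumulative response rate.

```python
# Minimal sketch (hypothetical counts, not the WSU data): normalize two
# survey administrations so they can be compared on the same axes.
def normalized_curve(daily_counts, possible_respondents):
    """Return (fraction of window elapsed, cumulative response rate) pairs."""
    days = len(daily_counts)
    total = 0
    curve = []
    for i, count in enumerate(daily_counts, start=1):
        total += count
        curve.append((i / days, total / possible_respondents))
    return curve

# e.g., Fall 2007: 29-day window, 10,582 possible respondents
fall_daily = [400, 350, 300] + [150] * 26        # made-up daily response counts
print(normalized_curve(fall_daily, 10582)[-1])   # final point: (1.0, response rate)
```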

It's interesting to note that you can see waves in the spring data, as if faculty exhorted students on Monday and got another increment of responses.

Monday, July 14, 2008

Implementing the Transformed Grade Book II

We are leaving Blogger for WordPress; please join us there.

Jayme Jacobson, Theron DesRosier, Nils Peterson with help from Jack Wacknitz

Previously we showed examples of how a transformed grade book (or Harvesting grade book) might look from the perspective of a piece of student work seeking feedback. That demonstration was hand built and is not practical to scale up to the level of an academic program. Even smaller-scale use, such as that suggested by a recent piece in the Chronicle entitled Portfolios Are Replacing Qualifying Exams, would benefit from some automation. This post outlines a design for a software application that could plug into the Skylight Matrix Survey System (a new survey tool WSU is developing).

There are two workflows that we have previously described in general terms: one for the instructor to register assignments and another for students to request feedback as they work on those assignments. In the figure below we outline a series of screen shots for the instructor registering an assignment. During the registration process the instructor matches the assignment to the appropriate rubric dimensions. We view this registration of assignments as a possible implementation of Stephen Downes’ idea in Open Source Assessment.


Instructor Dashboard


This example shows how an instructor might use a dashboard to register and monitor assignments. The workflow shows capturing the assignment, assigning the rubric dimensions to be used for assessing both the student work and the assignment itself, and ends with routing the assignment for review. This mock-up does not show how the instructor would see the student work created in response to the assignment, or the scores associated with that student work. The next step in the workflow would require an upload of the assignment so that it could be retained in a program-level archive. The assignment could be referenced from that archive for faculty scholarship of teaching and learning (SoTL), as well as for program or accreditation review and other administrative purposes.
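As a rough sketch of what the registration step might capture, consider the following Python data model. The class names, fields, and register_assignment function are hypothetical illustrations, not the actual Skylight design.

```python
# Hypothetical sketch of "registering" an assignment with an academic program.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Assignment:
    course: str
    title: str
    description: str
    rubric_dimensions: List[str]          # dimensions used to assess the work
    assess_assignment_too: bool = True    # the assignment itself is also rated
    metadata: Dict[str, str] = field(default_factory=dict)

PROGRAM_ARCHIVE: List[Assignment] = []    # stands in for a program-level archive

def register_assignment(assignment: Assignment) -> int:
    """Add the assignment to the program archive and return its id."""
    PROGRAM_ARCHIVE.append(assignment)
    return len(PROGRAM_ARCHIVE) - 1

aid = register_assignment(Assignment(
    course="COM 300",
    title="Web 2.0 marketing plan",
    description="Develop a marketing strategy for a community partner.",
    rubric_dimensions=["Problem identification", "Use of evidence", "Communication"],
    metadata={"term": "Fall 2008"},
))
```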

Once the assignment has been registered, the student could start from a student dashboard to request a review or feedback and guidance. We are differentiating the idea of a full review (with a rubric) from more informal feedback and guidance. This informal feedback would probably not be fed into a grade book but the captured feedback could be used by a student as evidence in a learning portfolio.


Student Dashboard


The basic workflow would let the student request a rubric-based review for a specific assignment in a specific course. The student would select the course, assignment, and other metadata. Once posted for review, the request would either be routed to multiple reviewers or the student would embed the review into a webpage using the HTML code provided. The second step offers an opportunity to upload a document. This might be used in cases where the document has no web incarnation (to give it a URL) or to “turn in” a copy of the document that would not be subject to further editing, as might be required in some high-stakes assessments.

The Learning 2.0 model is supported in the last step, where the assessment is embedded in a web space still open to modification by the learner (as the previous examples illustrated).
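The student-side workflow might be modeled along the same lines. Every name below (ReviewRequest, route_request, the example URL) is a hypothetical placeholder used only to make the routed-versus-embedded branch concrete; it is not the Skylight API.

```python
# Hypothetical sketch of the student review-request workflow described above.
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class ReviewRequest:
    course: str
    assignment_id: int
    work_url: Optional[str] = None         # web incarnation of the work, if any
    uploaded_copy: Optional[bytes] = None  # "turned-in" snapshot for high-stakes review
    reviewers: List[str] = field(default_factory=list)

def route_request(req: ReviewRequest,
                  base_url: str = "https://example.edu/review") -> Union[List[str], str]:
    """Invite named reviewers, or return an embeddable survey URL if none are named."""
    if req.reviewers:
        return [f"invite {r} to review assignment {req.assignment_id}" for r in req.reviewers]
    # no named reviewers: the student embeds the survey in their own web page
    return f"{base_url}?assignment={req.assignment_id}&work={req.work_url}"
```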


Student-created Rubric-based Survey

Students might want to use their own rubric-based surveys. This mock-up shows how the workflow would branch from the one above to allow the student to define rubric dimensions and criteria.


Student-created Simple Feedback Survey

This last example shows how the student would create a simple feedback survey.


State of the Art
Presently there are several tools that might be used to deliver a rubric survey. The challenge is the amount of handwork implied in letting each student have a rubric survey for each assignment in each course, and in aggregating the data from those surveys by assignment, by course, and by student for reporting. A future post will explore what might be learned by having the data centrally aggregated. If there is value in central aggregation, it will have implications for the tool and method selected for delivering the rubric survey. The Center for Teaching, Learning and Technology at WSU already has tools to make the implied handwork tractable for a pilot course of 20-30 students. We understand the path to further automation, but both a pilot test and further automation require investment, which in turn requires further analysis of commitment, infrastructure, and resources.

Questions
1. Can this concept provide transformative assessment data that can be used by students, instructors, and programs to advance learning? In addition to assessing student learning, can it provide data for instructor course evaluation and for program level assessment and accreditation?

2. Can the process be made simple enough to be non-obtrusive in terms of overhead in a course’s operations?

3. Is it necessary to implement feedback and guidance as a central, university-hosted tool, or could students implement an adequate solution without more investment on the university's part?

Thursday, July 3, 2008

Implementing Feedback and Guidance

Previously we (Theron DesRosier, Jayme Jacobson, and I) wrote about implementing our ideas for a transformed grade book. The next step in that discussion was to think about how to render the data. Our question was, “How do we render this data so that the learner can learn from it?”

We’ve been spending time with Grant Wiggins’ Assessing Student Performance, Jossey-Bass, 1993. In Chapter 6 on Feedback, he differentiates ‘feedback’ from ‘guidance’ in several examples and defines feedback as “information that provides the performer with direct, usable insights into current performance, based on tangible differences between current performance and hoped-for performance.”

Wiggins describes ‘guidance’ as looking forward (the roadmap to follow toward my goal) and ‘feedback’ as looking backward (did my last action keep me on the road or steer me off?).
Wiggins points to Peter Elbow, Embracing Contraries and Explorations in Learning and Teaching, “The unspoken premise that permeates much of education is that every performance must be measured and that the most important response to a performance is to measure it.” Both authors go on to suggest that less measurement is needed, and that feedback is the important (often missing) element to substitute for measurement.

Our previous posts on the transformed grade book describe a measurement strategy (one we hoped would also provide feedback to learners), but Wiggins leads me to think that learners need a feedback tool that is different from the measurement tool used to populate the grade book.

While bloggers have some experience getting feedback from the blogosphere by informal means, I think it would be useful to scaffold the requesting of feedback, for both learners and feedback givers. However, I want the process to be simple and fast for both parties. What I hope to avoid is the too-common tendency of student peers to give trite “great job” reviews, or to fall into reviewing mechanical things, such as looking for spelling errors.

To that end, I am exploring a simplified version of the idea in the last post. Recently, I tried a version of this simple idea in an email to Gary Brown (our director). He had asked me for a report on LMS costs to be shared with a campus committee. I replied with the report and this:

Feedback request: Is this going to meet your needs for the LMS committee? (Yes/somewhat/no)

Guidance request: what additional issues would you like considered?

Implicit in the feedback request was my goal of meeting his needs.

Even with this simple feedback + guidance request, the question remains: can we render the data that would be collected in a way the learner could learn from it? Below is a hypothetical graph of multiple events (draft and final documents) where I asked Gary for feedback: “Is this useful?” The series makes evident to me (the learner) that initially the feedback I’m getting is not very affirming, and final versions don’t fare much better than drafts. Reflecting on this, I have a heart-to-heart talk and afterwards develop a new strategy that improves my feedback results.
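A minimal sketch of how such a series might be rendered, with invented feedback values (no = 0, somewhat = 1, yes = 2) rather than real data; the event labels and the intervention point are hypothetical.

```python
# Illustrative plot of "Is this useful?" feedback over successive drafts and
# finals, with a marker for the point where a new strategy was adopted.
import matplotlib.pyplot as plt

events = ["draft 1", "final 1", "draft 2", "final 2", "draft 3", "final 3"]
scores = [0, 1, 0, 1, 2, 2]          # invented: feedback improves after the talk
intervention_at = 4                   # the heart-to-heart happened before draft 3

plt.step(range(len(events)), scores, where="mid")
plt.axvline(intervention_at - 0.5, linestyle="--", label="new strategy adopted")
plt.xticks(range(len(events)), events, rotation=45)
plt.yticks([0, 1, 2], ["no", "somewhat", "yes"])
plt.ylabel('"Is this useful?"')
plt.legend()
plt.tight_layout()
plt.show()
```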

Versions of this kind of “Was this helpful?” feedback appear on some online help resources, and I assume that someone is reviewing the feedback and updating the help pages, and could produce graphs similar to the one above, showing improved feedback after specific interventions.

Here is Google's feedback request from a help page found in a Google App:
When you choose the Yes or No feedback, another question appears; in this case you are giving guidance on what would make the item better, either by picking a suggestion from a list or by providing an open-ended reply.

In addition to comments or trackbacks, please give me feedback and guidance on this post (my form is not as slick as Google's).

Friday, June 20, 2008

Implementing the Transformed Grade Book

Theron DesRosier, Jayme Jacobson, Nils Peterson

Previously we described ideas on Transforming the Grade Book, by way of elaborating on Gary Brown’s ideas of a “Harvesting Gradebook.”

Here we demonstrate an implementation of those ideas in the form of a website with student work and embedded assessment. This demonstration is implemented in Microsoft SharePoint, with the WSU Critical and Integrative Thinking rubric as the criteria and a Google Docs survey as the data collection vehicle, but other platforms could be used to contain the student work, and other assessment rubrics could be delivered in other survey tools. (Note: this implementation is built with baling wire and duct tape and would not scale.)

There are four examples of student work (a Word document with track changes, a blog post, a wiki diff, and an email) to illustrate the variety of student work that might be collected and the variety of contexts in which students might be working. This student work might be organized as part of an institutionally sponsored hub-and-spoke style LMS, in an institutionally sponsored ePortfolio (as WSU is doing with SharePoint mySites), or directly in venues controlled by the student (see, for example, the blog and email examples below), where the student embeds a link to the grade book provided by the academic program.

Examples of Assessing Student Work (aka transformed ‘grading’)

The first example is a Microsoft Word document, stored in SharePoint and included in this page with the Document Viewer web part. You are seeing the track changes and comments in the Word document. In some browsers you will see pop-up notes and the identities of the reviewers.

To the right of the document is the rubric for assessing the work. Clicking on “Expand” in the rubric will open a new window with details of the rubric dimension and a Google Docs survey where you can enter a numeric score and comments giving your assessment of the work against this criterion.

This survey also collects information about your role because it is important in our conceptualization of this transformed grade book to have multiple perspectives and to be able to analyze feedback based on its source.

In our description of the workflow for this assessment process we say:
Instructors start the process by defining assignments for their classes and “registering” them with the academic program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment.
This demonstration shows one of the important impacts of the “registration”: as a reviewer of the student's work, you can follow a link to see the assignment that generated this piece of student work, and you can then apply the assessment criteria to the assignment itself.

Finally, as an effort in ongoing improvement of the assessment instrument itself, the survey asks for information about the learning outcome, its description and relevance, with the assumption that the rubric is open for revision over time.

In this demo, you can complete the survey and submit data, but your data will not be visible in later parts of the demo. Rather, specific data for demonstration purposes will be presented elsewhere.

The second example is a blog post, in Blogger, included in the site with SharePoint’s Page Viewer web part. Again, to the right of the post is the rubric implemented in the form of a survey. With the Page Viewer web part the reviewer can navigate around the web from the blog post to see relevant linked items.

While this demonstration has embedded the blog post into a SharePoint site, that is not a requirement. The student could embed the rubric into the footer of the specific blog or in the margin of the whole blog. To demonstrate the former of these ideas, we have embedded a sample at the bottom of this post. Adding criterion-based feedback extends the power of comment and trackback already inherent in blogging.

The third example is a Wiki Diff, again included in the site with SharePoint’s Page Viewer web part. Again, to the right is a rubric implemented in the form of a survey.

The fourth example is an email the student wrote. This was embedded into the SharePoint site, but as with the blog example, the author could have included a link to criterion-based review as part of the footer of the email.

A subsequent post will address visualization of this data by the student, the instructor, and the academic program.

Please use the rubric below to give us feedback on this work:

Online Course Evaluations and Response Rate Considerations

The following is an email exchange between Gary Brown and Nils Peterson of the Center for Teaching, Learning and Technology at WSU and members of the TLT Group.

Ehrmann@ TLT: Nils,

I’ve gotten a couple of questions from a subscriber.

Do any WSU colleges conduct student course evaluations exclusively online? All of them?

What kind of response rate does WSU get to online surveys and what strategies seem to work best for that purpose?



Nils: WSU has several colleges that do online surveys exclusively (Engineering & Architecture; Agriculture and Natural & Human Resources; Pharmacy). Response rates vary by course from very low to 100%. Gary Brown can take up this conversation to talk about what we do and don’t know about what drives response rates.

Gary: As Nils notes, we have several colleges doing online evaluations, some exclusively, with more joining all the time. Response rates vary, but maybe more importantly, so do the instruments and, more importantly yet, the way the evaluations are used. I won’t go into detail about the differences in the evaluation instruments we’ve encountered, but online or not, the quality and fit for a variety of pedagogies is for me much more of a concern than the mode of delivery. The way they are used extends validity, because response rates matter little if results are ignored by faculty, misunderstood or difficult to interpret, and, all too commonly, boiled down to a single number for ranking purposes. It is hard to make arguments about the validity of an instrument and process if it is all capped by use that is itself invalid. But that makes the more important argument: it isn’t response rate and the subsequent issue of response bias that matter as much as making sure that the response is representative and appropriate for the purpose of the process, which is, hopefully, improving students’ learning experiences.

All that aside, response rates:

In our College of Agriculture, the response rate was 53%. But that number varies widely across departments. Here is a picture of response rates across departments from about a year ago:


Needless to say, the variance across departments is mirrored by similar and dramatic variance among courses and faculty, so it is hard for us to attribute the variance exclusively to the medium of delivery. We make other conjectures in our analysis in an article we published a while back. A key to response rates, we note in the article, is that in the departments with the higher response rates, the chairs were involved in the design of the instrument and the decision to put it online. So there is something important to be said for leadership and its engagement in the process. We also point to other associations with higher rates that we tracked in certain classes, mostly associated with the engagement of faculty in the process, their demonstration throughout the term that they listen and respond (not necessarily capitulate) to students’ concerns, and their overt work to engage students in the teaching/learning/assessment process.

The issue is pretty hot, too, and there are a number of discussions about response rates:
http://www.utexas.edu/academic/diia/assessment/iar/teaching/gather/method/survey-Response.php

http://books.google.com/books?id=zrjGUewMWHEC&pg=PA92&lpg=PA92&dq=adequate+survey+response+rates&source=web&ots=Q-1Sj0ntID&sig=XvzUTqM5dv5NjHIC0FX4CAr2LxM#PPA92,M1

http://books.google.com/books?id=H0Uexcg9xBcC&pg=PA42&lpg=PA42&dq=adequate+survey+response+rates&source=web&ots=aLzsrkerPO&sig=Prza517KiMb_Cf2jGenNXhKG5Dk#PPA46,M1

http://www.aapor.org/bestpractices

Most of these suggest, as you will see, that 50% is adequate, if not stellar. (The most authoritative is the last link, and they say, too, that 50% is OK.) The larger concern I infer from your note is the utility of responses at low rates (we’ll let others worry for the moment about the implications of comparing results, as some chairs do, when response rates differ significantly).

But our own work here at WSU with the College of Engineering suggests that the response bias may be less salient than one would presume.

We have not written this up yet, but here is a comparison of online versus paper evaluations done with the College of Engineering at WSU. We have shared this with a work group from the American Evaluation Association (AEA) and are finding others who report the same phenomenon. The response rate online was about 51%, and on paper in class about 71% (which is much lower than most people believe is the case for traditional paper-based evaluation, with the presumption that it runs closer to the mid-90s). The samples are convenience samples based upon faculty preference for using paper or trying the online version. The graph reflects 26 student evaluations randomly drawn from each of the three groups. If there is some kind of response bias, the picture here does not reveal it. We have been monitoring this as we move more and more online, and we remain interested in exploring the distinctions we may get (or not) when populations complete the instruments voluntarily, for extra credit, or when they are required to do so.

Thursday, June 12, 2008

Transforming the Grade Book

We are leaving Blogger for WordPress. Please visit us there.

Nils Peterson, Theron DesRosier, Jayme Jacobson

CTLT has been thinking about portfolios for learning and their relationship to institutionally supported learning tools and course designs. This thinking has us moving away from the traditional LMS. It has also led to a recognition that grade books are QWERTY artifacts of Learning 1.0. In a recent Campus Technology interview, Gary Brown introduced the term “harvesting gradebook” to describe the grade book that faculty need to work in these decentralized environments.
“Right now at WSU, one of the things we're developing in collaboration with Microsoft is a 'harvesting' gradebook. So as an instructor in an environment like this, my gradebook for you as a student has links to all the different things that are required of you in order for me to credit you for completing the work in my class. But you may have worked up one of the assignments in Flickr, another in Google Groups, another in Picasa, and another in a wiki.”
This post will provide more definition and a potential implementation for this new kind of transformed grade book. It is the result of a conversation between Nils Peterson, Theron DesRosier, and Jayme Jacobson, diagrammed here.


Figure 1: Whiteboard used for drafting these ideas. Black ink is the “traditional” model, blue is a first variation, red is a second variation.

The process begins with a set of criteria that is agreed to be useful by a community and is adopted across an academic program. An example is WSU’s Critical Thinking Rubric. This rubric was developed by the processes of a “traditional” academic community. How the process changes as the community changes will be discussed below.

Instructors start the process by defining assignments for their classes and “registering” them with the program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment. More implications of this registration will be seen below.

The student works the assignment and produces a solution in any number of media and venues, which might include the student’s ePortfolio (we define ePortfolio broadly). The student combines their work with the program’s rubric (in a survey format). The rubric survey is administered either to a specifically selected list of reviewers or to an ad hoc group. We have been experimenting with two mechanisms for doing this “combining.” One places the rubric survey on the page with the student’s work as a sidebar or footer (analogous to a Comment feature, or the “Was this helpful?” survey included in some online resources); this approach is public to anyone who can access the web page. The other strategy embeds a link to the student’s work in a survey, which can be targeted to a specific reviewer; this example comes from the judging of CTLT’s 2nd ePortfolio contest.

In either case the survey collects a score and qualitative feedback for the student’s work. We are imagining that the survey engine is centrally hosted so that all the data is compiled into a single location and is therefore accessible to the academic program. Data can be organized by student, assignment, academic term, or course. A tool we are developing that can do this is the Skylight Matrix Survey System, which is rebranded as Flashlight Online 2.0 by the TLT Group. The important properties of Skylight for this application are the ability to render a rubric question type and the ability to have many survey instances (respondent pools) within one survey, both reporting instances individually and aggregating the data across some or all of the instances.
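To make the aggregation idea concrete, here is a small sketch of rolling rubric scores up along different axes. The record layout and the aggregate function are illustrative assumptions, not the Skylight Matrix Survey System schema, and the scores are invented.

```python
# Illustrative sketch: rubric scores collected per (student, course,
# assignment, dimension, reviewer role) and averaged along chosen axes.
from collections import defaultdict
from statistics import mean

# each record: (student, course, assignment, dimension, reviewer_role, score)
records = [
    ("amy", "FS 301", "hw1", "Evidence", "peer", 5),
    ("amy", "FS 301", "hw1", "Evidence", "employer", 3),
    ("ben", "FS 301", "hw1", "Evidence", "faculty", 4),
]

def aggregate(records, key_fields):
    """Average scores grouped by the named fields ('student', 'course', ...)."""
    index = {"student": 0, "course": 1, "assignment": 2,
             "dimension": 3, "reviewer_role": 4}
    groups = defaultdict(list)
    for rec in records:
        key = tuple(rec[index[f]] for f in key_fields)
        groups[key].append(rec[5])
    return {key: mean(scores) for key, scores in groups.items()}

print(aggregate(records, ["assignment", "dimension"]))    # program-level view
print(aggregate(records, ["student", "reviewer_role"]))   # learner-level view
```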

Audiences for this data
The transformative aspects of this strategy arise from the multiple audiences for the resulting data. We have labeled these collections of data, and the capacities to present the data to audiences, “assessment necklaces.”

Figure 2: Diagram of rubric-based assessment. Learners, peers, and faculty are shown collecting data from rubric-based assessment of portfolios, then reflecting on and presenting the multiple data points (necklaces) in contexts important to them.

Students can review the data for self-reflection and can use the data as evidence in a learning portfolio. We are exploring ideas like Google’s Motion Chart gadget (aka Trendalyzer/Gapminder) to help visualize this data over time. Students can also learn from giving rubric-based reviews to peers and by comparing themselves to aggregates of peer data.

Instructors can use the data (probably presented in the student’s course portfolio) for “grading” in a course. It’s worth noting that the instructor’s assignments can be assessed with the same rubric, asking, “To what extent does this assignment advance each of the goals of this rubric?” With the assignment rated, instructors can review the data across multiple students, assignments, and semesters for their own scholarship of teaching and learning (SoTL). Here the instructor can combine the rubric score of an assignment with student performance on the assignment to improve the assignment. Instructors might also present this comparison data in a portfolio for more authentic teaching evaluations.

In this example the assignment might be rated by students or the instructor’s peers. Below, the rating of the assignment by wider communities will be explored.

Academic programs can look across multiple courses and terms for program-level learning outcomes and SoTL. They can also present the data in showcase portfolios used for recruiting students, faculty, funding, and partners. This is where the collective registration of the assignment becomes important. The program can access the assignment in the context of the program, with an eye to coordinating assignments and courses to improve the coherence of the program outcomes.

The community, which might include accrediting bodies, employers and others, can use the data, as presented in portfolios by students, instructors, and the academic program, to reflect on, or give feedback to, the academic program. Over time, an important effect of this feedback should be to open dialogs that lead to changes in the rubric.

Variations on this model
The description above is still traditional in at least two important ways: the program (i.e., faculty) develops the rubric and the instructor decides the assignment. Variants are possible where outside interested parties participate in these activities.

First variation. WSU and University of Idaho run a joint program in Food Science. We have observed that the program enrolls a significant number of international students, from nations where food security is a pressing issue. We imagine that those nations view training food scientists as a national strategy for economic development.

We have imagined a model where the students (in conjunction with their sponsoring country), and interested NGOs, bring problem statements to the program and the program designs itself so that students are working on aspects of their problem while studying. The sponsors would also have an interest in the rubric, and students would be encouraged (required?) to maintain contacts with sponsors and NGOs and cultivate among them people to provide evaluations using the rubric.

The processes and activities described above would be similar, but the input from stakeholders would be more prominent than in the traditional university course. Review of the assignments, and decisions about the rubric, would be done within this wider community (two universities, national sponsors and NGOs). The review of assignments and the assessment of the relationship of assignments and learning products creates a very rich course evaluation, well beyond the satisfaction models presently used in traditional courses.

Second variation. This option opens the process up further and provides a model to implement Stephen Downes’ idea in Open Source Assessment. Downes says “were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.”

In our idea of this model, the learner would come with the problem, or find a problem, and, following Downes, learners would present aspects of their work to be evaluated with the program’s rubric, and the institution would credential the work based on its (and the community’s) judging of the problem/solution with the rubric. This sounds a lot like graduate education: the learner defines a problem of significance to a community and addresses that problem to the satisfaction of the community. In our proposed implementation, the ways that the community has access to the process are made more explicit.

In this variant, the decision about the rubric is an even broader community dialog and the assessment of the instructor (now mentor/learning coach) will be done by the community, both in terms of the skills demonstrated by students that the instructor mentored, and by the nature of the problems/approaches/solutions that were a result of the mentoring. The latter asks, is the instructor mentoring the student toward problems that are leading or lagging the thinking of the community?

Examples
For some sense of learning portfolios created by the processes above, consider these winners from CTLT’s 2007-08 ePortfolio contest.

The following two winners are examples of the first variant, where students were paired with a problem from a sponsor:

The Kayafungo Women’s Water Project documents the efforts of Engineers Without Borders at WSU (EWB@WSU) who partnered with the Student Movement for Real Change to provide clean water to 35,000 people in Kayafungo, Kenya.

The EEG Patient Monitoring Device portfolio follows the learning process of four MBA students who collaborated with faculty, the WSU Research Foundation, inventors, and engineers to develop a business plan for a wireless EEG patient monitoring device.

The next two are examples of the second variant: student-defined problems assessed by the community. In the latter case, the student is using the work, both her activism in the community and her study-in-action, as her dissertation:

The Grace Foundation started with a vision to create a non-profit organization that would assist poor and disenfranchised communities across Nigeria in four areas: Education, Health, Entrepreneurship and Advocacy. The author used the UN online volunteering program to form a team to develop a participatory model of development that addresses issues of poverty eradication in a holistic manner.

El Calaboz Portfolio chronicles the use of Internet and media strategies by the Lipan Apache Women's Defense, a group that has grown in national and international prominence over the last 75 days, from fewer than 10 people in August to an e-organization of over 312 individuals currently working collectively. It now includes NGO leaders, tribal leaders, media experts, environmentalists, artists, and lawyers from the Center for Human Rights and Constitutional Law. It recently received official organization status at the UN.

The next steps in this work at WSU are to build worked examples of these software tools and to recruit faculty partners to collaborate in a small scale pilot implementation.

Wednesday, June 4, 2008

Hub and Spoke Model of Course Design

We have been exploring the idea of "hub and spoke" course designs in which learners use ePortfolios and Web 2.0 tools and work in communities and contexts where their chosen problem is being addressed. For such a course, we have been using the term "hub and spoke" to describe how the institutionally operated course space (hub) relates to the learners and the learners' electronic spaces (see: Out of the Classroom and Into the Boardroom (PDF), Out of the Classroom and Beyond, Case Study of Electronic Portfolios, and ePortfolio as the Core Learning Application).
Recently Blackboard has been adding "Web 2.0" features, so we had a discussion to delineate the reasons to use SharePoint rather than Blackboard as the hub in a hub and spoke course design.

Worldware
Worldware is a double reason. First, students are learning skills in SharePoint that they can later use in work contexts, whereas Blackboard skills are not useful outside the school context. Second, as our university adopts SharePoint for a variety of administrative purposes, there is a growing group of SharePoint experts who can provide support to both faculty and students using SharePoint as a learning platform.


Document Library and Tagging
SharePoint's document libraries are very flexible, allowing users to add metadata that suits their purposes. In CTLT's ePortfolio contest (ctlt.wsu.edu/contest07/) we have had several examples of this; perhaps the most developed is in this winner's portfolio. (It's also worth noting that this contestant used email to send documents to the library, a SharePoint feature that integrated the "collect" phase of her portfolio work more completely with her other project work.) We are now exploring how to mash up SharePoint document libraries with other tools to create timelines showing the evolution of ideas in the portfolio.

Authorization controls
While WSU has a mechanism for outsiders to gain an identity and log in to university systems, as we have Blackboard configured, instructors can only authorize people into courses in the role of Teaching Assistant. Further, authorization to a Blackboard course gives access to the whole course; there is no fine-grained control over specific parts of the course. Finally, a SharePoint site can be configured for anonymous read access, opening (portions of) the course to the world if needed.

"Pre-cooked" webparts and tools
SharePoint has a concept for exporting sites and elements of sites (libraries, web parts, surveys, etc.) as .STP files and then re-importing these into other sites or adding them to templates for users to choose. This allows time savings such as configuring a document library with specific columns, or an RSS reader with specific feeds pre-installed.

Adding more tools
Finally, SharePoint's architecture enables other linkages and mashups. It is a source and consumer of RSS, will support embedding of other Web 2.0 resources in its pages, and can capture email and originate email alerts. And with the SharePoint mySite, where the student is the owner of the SharePoint site over the span of their career, there is greater flexibility to support the hub and spoke models.

Tuesday, March 25, 2008

Measuring Social Capital

Our interview with John Gardner (long mp3 file) as part of these case studies captured an interesting question from John regarding Social Capital -- he was interested in thinking about how to measure it (assessment being a driver to guide action).

Theron and I have been thinking about the question about the question.

Traditionally university extension has focused on getting knowledge from the university to the periphery, and we would have measured our capital by the success of that (mostly broadcast) model.

Web 2.0 has us thinking about how to also get the knowledge of the periphery in to the university, and measuring capital by two-way dialog.

I think John pointed to this in his talk (last December) to Crops and Soils Dept when he was exploring the implications of centrally produced ideas like Roundup Ready vs the understandings of local conditions (both growing conditions and human ecology).

Increasing sustainability seems to require increasing ability to adapt to niches, and that is likely to involve moving knowledge of successes from the periphery to other niches on other parts of the periphery. The university could play a role, but not likely a broadcast role (more like narrowcast) in this.

This does not answer the question of how to measure social capital, but it does suggest that the measurement won't be based on Web 1.0 ideas.

Wednesday, March 19, 2008

Learning Portfolio Strategy: Be Public

This is one of the "portfolio patterns" (borrowing from the "Pattern Language" of Christopher Alexander) we have distilled from our case study of users creating learning portfolios.

A strategy that users of a learning portfolio adopt is to be public on the Internet. The goal is to create and join public communities, and by interlinking, raise the “Google Rank” of the problem and the problem solvers.

Information Scarcity vs Information Abundance

In a conversation (long .mp3 file) with Dennis Haarsager, Interim CEO of National Public Radio, he described the Internet as "anti-scarcity": it's about information abundance. The way to obtain value is not in controlling a scarce resource; the value is to be had in the ability to extract value from the mass of information by organizing it, filtering it, and 'chunking' it, what he called an 'information [organization] theory of value.'

The opposite perspective is the value of scarcity. “While some industry pundits have proclaimed print-on-demand to be the future of publishing, there will always be a positional advantage to the conventional book. It says somebody thought enough of this writing to run off a whole batch. In sum … the Web will never destroy older media because their technical difficulties and risks help create glamor and interest. At the same time, however, the Web does nibble at their base…” (Edward Tenner, “The Prestigious Inconvenience of Print,” The Chronicle Review, B7-B8, March 9, 2007). One of the values Tenner claims is that print content undergoes more review and scrutiny in the production process. If production cost, risk, and technical difficulties enhance the prestige of older communication media, one of the problems the traditional book still faces is lag times.

The role of the public showcase in the learning portfolio

In our interview, Haarsager argued for the public lectures he gives on his chosen problem. The lecture is a showcase portfolio of Haarsager’s current, best thinking. The medium is mostly broadcast, but he feels it allows him to reach new audiences, and to get kinds of feedback about his ideas that he does not get in comments on his blog.

Tamez is also creating showcase “mini-portfolios” in the form of printed fliers and media interviews. These productions may have some of the risk-related prestige that Tenner ascribes to printed books, while at the same time having the new audience-reaching and immediacy values that Haarsager associates with his lectures. In her learning portfolio, these mini-portfolios document where Tamez’ thinking was at points in her learning trajectory.

The learner who works in public, in order to gain any value from working in public, must participate in collaborative efforts to extract (or make) value from the information richness of the Internet. This kind of strategy has been called Learning 2.0 by Stephen Downes, who has created this diagram to describe differences between groups and networks as organizational strategies for learning (original sketch midway down this post).






Group (Learning 1.0) | Network (Learning 2.0)
Groups require unity | Networks require diversity
Groups require coherence | Networks require autonomy
Groups require privacy or segregation | Networks require openness
Groups require focus of voice | Networks require interaction
Downes also suggests that this working in public is not necessarily "more" work; rather, it's an attitude of pushing the work you are doing for various more private audiences into a public arena. Tamez demonstrates a variant of this in her portfolio, where she adapted a strategy of cc:ing her portfolio when writing email.

Another value of learning in public is the assessment that comes from dialog in community. This assessment takes two kinds of forms: assessment of the idea, testing its merits, and assessment of the individual, in the sense of reputation and social capital. These ideas about assessment will be addressed in a later section of this analysis.

Link and be linked; Tag and “Digg”: Making value among plenty

“But Google and its ilk notwithstanding, the sheer volume of information, its global origins, and especially the dynamic, real-time nature of information today is simply overwhelming our traditional, centralized institutions of information screening and management – whether research libraries, book and journal publishers, or newspapers and other news media.” Peter J.M. Nicholson, “The Intellectual in the Infosphere” The Chronicle Review, B6-B7 March 9, 2007.

Saving all one’s work in a portfolio creates the same problem of information saturation on a more personal scale. One challenge is to “rise above” the information to see the ideas, and from the ideas, rise to action.

In his video “The machine is (us)ing us”, Michael Wesch points to a consequence of the networking ideas Downes is suggesting. Wesch explores the implications of linking from one document to another. Linking provides a kind of metadata for the thing linked. It says that the two web pages share some relationship, but it does not say what that relationship is (e.g., citation, example, counter-example, refutation).

Linking is one of the traits Google uses to determine page rank. As a result, a Google search is one means of performing a “rise above” on a large collection of related material. What the Google rank does is call attention to an item in the collection of documents. The weakness is that it points to a specific item; it does not present a gestalt across many items that could surface emergent themes.

Tagging is a mechanism for adding metadata to your own materials (e.g., blog posts or wiki pages) to organize them. It is also a mechanism for adding metadata to other people’s content, for your personal organizational purposes. Patterns emerge from group tagging of the same thing and related things. Tag clouds are visual representations of the frequency of tags across a collection of documents, and serve as a way to see emergent patterns. The Flickr photo-sharing site has a page of tag clouds, including one for this week and one for the last 24 hours. As I write, "ArthurCClark" is among the tags of the week, Clarke having passed away recently.

Chirag Mehta gives an example of extending the tag idea to an analysis of all the words in a text to show emergent themes. His site has an “aging tag cloud” analysis of US presidential documents. The engine has the usual tag-cloud graphical text analysis, but Mehta extends the tag cloud idea by trying “to figure out how long ago a given word hit its peak usage and brighten[ing] the recently used words while fading away words [that] haven't been used in a while.” The result is a trend analysis over time of words used in American politics.
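A small sketch of both ideas, frequency for size and a recency weight for brightness, using invented tags and dates; the half-life weighting below is our own simplification for illustration, not Mehta's algorithm.

```python
# Illustrative tag-cloud sketch: tag frequency (size) plus a recency weight
# (brightness). Tags and dates are invented.
from collections import Counter
from datetime import date

# (date tagged, tags) pairs
tagged_items = [
    ("2008-03-10", ["portfolio", "assessment"]),
    ("2008-03-18", ["rubric", "portfolio"]),
    ("2008-03-19", ["rubric", "feedback"]),
]

today = date(2008, 3, 20)
frequency = Counter(tag for _, tags in tagged_items for tag in tags)

def recency_weight(tag, half_life_days=7):
    """Weight a tag by how recently it was last used (1.0 = used today)."""
    last_used = max(date.fromisoformat(d) for d, tags in tagged_items if tag in tags)
    age_days = (today - last_used).days
    return 0.5 ** (age_days / half_life_days)

for tag, count in frequency.most_common():
    print(f"{tag:12s} size={count} brightness={recency_weight(tag):.2f}")
```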

Other tools for rising above can be seen in Gapminder, a demonstration of the Trendalyzer engine’s rendering of UN demographic data over time. By sweeping across time in the visual rendering of the data, one can find both big trends (the third world is getting richer and having fewer children per woman) and local events like the genocide in Rwanda. These observations in masses of data can lead to hypothesis generation and exploration.

While the idea of “rising above” the data in a portfolio is an appealing one, among the cases we’ve examined so far we do not see evidence of the use of any of these strategies to help the portfolio author guide their reflection or future action. The value of working in public seems to come from collaboration on a problem and the assessment arising in that collaboration.

(Updated with new paragraphs just below the table on 7/15/08)

Friday, March 14, 2008

Goal for a Learning Portfolio: Solve a problem

This is one of the "portfolio patterns" (borrowing from the "Pattern Language" of Christopher Alexander) we have distilled from our case study of users creating learning portfolios

The goal for, and perhaps a defining characteristic of, a learning portfolio is to be a workspace in which to solve a problem. That is the feature that makes it like a Personal Learning Environment (PLE) and unlike the common showcase portfolio.

In an article in the Chronicle appearing March 7, 2008, James Barker gives an example of this type of workspace as a physical place, the architecture studio:

“In my view, the architecture design studio is the best learning experience ever invented to produce the kind of deep, engaged learning and creative graduates that are so needed today. Small groups of students work with a master teacher on a semester-long or yearlong team project to design solutions to a specific problem or to meet a particular need…

“For example, our students in planning and design have helped communities throughout our state preserve historic buildings, revitalize dying town centers, and plan new parks, bikeways, and green space. For every project, they interview the key people involved; gather statistics on demographics and traffic patterns; collect previous plans, deeds, and plats; photograph the site from every conceivable angle; and put all of those data on a computer.

“Eventually they brainstorm ideas, discuss them, refine them, and present them to their teachers and clients in a process that we, in architecture, call a "design charette." Then, and only then, are the best ideas sifted through the filter of what is possible…

“In the process of doing such public-service projects, our students learn about research, communication, interpersonal relationships, culture, politics, municipal government, creativity (its power and its limits), and teamwork."
The studio space in which this work is done becomes a walk-in portfolio, with sketches, photos, models displayed on every surface and open for comment by all passers-by.

Here are some examples of learning portfolio questions

A Learning Portfolio tracks growth over time

In order to be an aid to the learner solving a problem, the portfolio needs to track the artifacts of the work, and needs to facilitate and encourage the learner to look at their growth over time. As we will see, this same property of tracking growth will play a role in assessing the learning outcomes.

Carol Dweck writes in The Secret to Raising Smart Kids: "Many people assume that superior intelligence or ability is a key to success. But more than three decades of research shows that an overemphasis on intellect or talent—and the implication that such traits are innate and fixed—leaves people vulnerable to failure, fearful of challenges and unmotivated to learn.

Teaching people to have a 'growth mind-set,' which encourages a focus on effort rather than on intelligence or talent, produces high achievers in school and in life."

So what implications, if any, would these concepts have for the design of learning opportunities? For starters, what if the interface to view growth focused more on learning over time instead of grades on high-stakes tests? We have been looking at tools like Gapminder (Trendalyzer) and Microsoft's Photosynth for inspiration about tools that could allow learners to see patterns hidden within the data of their learning record. We are imagining a learning tool that would provide the student a way to view dynamic representations of their approach/efforts/learning over time instead of a series of grades in a drop box.

Learning Inside/Outside the University
Self-directed vs. Teacher-directed Learning
In Imagine there's no courses, Jeff Cobb points at George Siemens' World without courses (voice over slides), and each explores the question of how organizations and institutions measure and derive value from unstructured, informal learning activities. Examining learning portfolios has us looking at these issues and at the ways informal learners (e.g., Hotz) create learning communities, tackle and solve complex problems, and derive rewards from the learning gained.

Stephen Downes' writing on the 'ideal open online course' concludes that it would 'not look like a course at all, just the assessment.' He postulates, "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."

And Clay Burrell echoes some of Downes' ideas when he writes about learning writing in a blog (aka learning portfolio) situated in community and in context: "students would write self-directed blogs. No homework assignments allowed in terms of subject matter, though standards of style and conventions would be set... assessment would be based on readership, comments, subscriptions, visitor stats, Technorati authority ranking... self-assessment (italics added) and other non-authoritarian, teacher-gives-grades assessment styles." Here Burrell is rephrasing Downes: "in a non-reductive system [of assessment] accomplishment in a discipline is recognized."

It is this latter behavior that we are seeing in some of the learning portfolios we are studying: coaches within the institution helping students learn outside of it, while the measure of accomplishment is recognition by and among communities.

And this thinking leads us to rethink the relationship of the institution and its students to its alumni. For example, graduate student Dana Desoto met with Theron DesRosier and Jayme Jacobson recently about an alumni web site for the WSU School of Communications. This led to a good brainstorm about the goals of the project, the value for the university and for the alumni, and the assumptions we have about the relationship of the alumni to the university.

The idea they developed was a collaborative learning portfolio that connects alumni and students with common interests and promotes the flow of intellectual capital between the Communications School and the professional community.

Value to Alumni:
  1. Alumni use Comm. students as an economical source of innovation.
    1. Alumni propose projects for teams of students (e.g., build a Web 2.0 marketing strategy for my company).
    2. Alumni who are looking for new employees get a more authentic picture of skills and abilities.
  2. Alumni are valued as more than a deep pocket.
Value to Students:
  1. Comm. students use alumni as a source of authentic activities, advice, connection to the profession, and feedback.
    1. Comm. students build ePortfolios around real projects as evidence for learning and/or hiring.
Value to the Communications School:
  1. The Comm. School uses this symbiosis as a source of feedback on the alignment and relevance of curriculum, learning outcomes, and activity design.
  2. The professional community becomes a partner in the continual improvement of the program.
Who is the Learner/Who is the Audience?
Blurring the boundaries of the university by facilitating students' work on authentic problems situated in communities outside the university, and assessing that work not with reductive tests but by the level of recognition and accomplishment the student achieves, does something else: it blurs the line between the learner and the audience.

We are seeing this blurring in the portfolios we have studied. We are beginning to talk about learning communities where members play differing roles in supporting the learning growth of the whole. George Hotz honors this learning community when he credits his collaborators even as the national press is focused on him.

Case Studies of Electronic Portfolios for Learning

This is the first in a series of posts describing some work funded by Microsoft. We are posting in this format to invite reader comment and trackback. The work described below is an example of a learning portfolio, and this post is our problem statement.


Nils Peterson, Theron DesRosier, Jayme Jacobson, Gary Brown

Introduction

We have written about students’ changing technology proclivities and the changing landscape for Learning Management Systems (LMS) in this Microsoft white paper for EDUCAUSE 2007, in JOLT, in Innovate, in this blog, and in this interview. This document begins a case study of learners who use electronic portfolios to advance their learning. It does not explore uses of electronic portfolios as “showcases” of best work. The latter uses are facilitated by ePortfolio tools in several of the common LMS products and in several widely used Student Information Systems, whose common trait is to facilitate institutional assessment, not learning.

The kinds of uses of ePortfolios we are examining are closely aligned with Personal Learning Environments (PLE). What we are finding in the cases that follow are users implementing what is suggested in Scott Wilson’s Future VLE diagram: an ad hoc assemblage of Web 2.0 components (the term "Worldware" applies to the components). (Scott refers to a “VLE” (virtual learning environment), which might be either a personal or an institutional learning environment. For our purposes here, read Scott as proposing a PLE.)

One of the questions we are exploring in this work is the potential of Microsoft SharePoint 2007 MySite Subsites (WSS) to serve as the central building block in Wilson’s Future VLE: a hub for the learner, and potentially a collaboration and/or presentation space for the learner, or for the learner and a segment of the community.
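
As a back-of-the-envelope illustration, and not a description of SharePoint's actual APIs, the sketch below shows one way such a hub could work: it pulls a learner's trace together from the RSS/Atom feeds of the Web 2.0 tools where the work actually happens. The feed URLs are placeholders, and collect_learning_trace is a hypothetical helper of our own.

    import feedparser  # third-party feed parser: pip install feedparser

    # Placeholder feeds for the tools a learner already uses.
    FEEDS = {
        "blog": "https://example.org/learner/blog/feed",
        "bookmarks": "https://example.org/learner/bookmarks/rss",
        "photos": "https://example.org/learner/photos/feed",
    }

    def collect_learning_trace(feeds):
        """Gather recent entries from each tool into one chronological trace."""
        trace = []
        for source, url in feeds.items():
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                trace.append({
                    "source": source,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "published": entry.get("published", ""),
                })
        # Sorting on the raw date string is a simplification; a real hub would parse dates.
        return sorted(trace, key=lambda e: e["published"], reverse=True)

    if __name__ == "__main__":
        for item in collect_learning_trace(FEEDS)[:10]:
            print(f"[{item['source']}] {item['title']} -> {item['link']}")

The design point is that the hub aggregates and displays the trace; the work itself stays in the worldware tools where collaborators already are.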

In this document, we prefer to retain the term “portfolio,” rather than PLE, for these activities because we want to connect to a body of literature on portfolio practices, including the commonly offered mantra: collect, select, reflect, connect and project (into the world). We draw a sharp distinction between the learning portfolio discussed here and the “showcase” or summative portfolio, especially when the creation of the portfolio is at the request of a third party for summative assessment purposes.

We also prefer the portfolio language to that of PLE because we value the learner consciously leaving a 'learning trace' as they work on a problem in the space, and we see the capturing and sharing of that trace as an important part of documenting learning. A recent employer poll supports this bias for richer documentation of learner skills. Some of our interest in this work began by documenting the learning trace that is evident in Hotz' blog of his collaboration to unlock the iPhone.

In addition to Hotz, we have been examining electronic learning portfolios created by students and professionals at Washington State University and conducting interviews with their authors, captured with audio recordings, whiteboard diagrams, or both.

Several themes arise from studying these cases:
  • Learning portfolios have a goal, or a problem to solve;
  • They adopt strategies that are public;
  • They are implemented in multiple tools and spaces where collaborators are already present, or can be expected to congregate;
  • Their creators understand social capital, and the portfolio practitioner seeks to develop and leverage it;
  • A key use of social capital, and a reason to work in public, is to develop an assessment community that can provide feedback and insights;
  • The portfolio, especially its repository strategies, attempts to facilitate the reflection and synthesis that move the learner (and community) from information to idea to action;
  • Users of learning portfolios work in multiple modes, including the arts, to convey their synthesis and call to action.

Other posts in this series can be found in this blog, under the tag Learning Portfolio.

Framing the Problem

This blog will attempt to emulate what we think we've learned from George Hotz: how to be a node in a learning community working on a problem. Our statements of the problem(s) we are working on are tagged here. We view this space as one element in our Learning Portfolio, and will link to other portions of our portfolio across systems we host and world systems we have adopted. From time to time we anticipate writing reflections on our use of this space and on our changing understanding of portfolios for learning.

Our organization, CTLT, is committed to the advancement of authentic learning—learning that takes place in and beyond the classroom; that encourages the exchange of knowledge across disciplinary, institutional, and national boundaries; and that recognizes the need for participation in the global dialogue.

The problems that we are exploring include:
  • Learning portfolios (ePortfolios)
  • Assessment Communities and community assessment
  • Identification of (and learning about) global competencies

We invite your comments and trackbacks to connect our work to your thinking and to a community of like-minded explorers.