Exploring the Future (and Present) of College Credentials with Sean Gallagher

Barry Fishman, Faculty Innovator-In-Residence, Arthur F. Thurnau Professor of Information and Education

For our fourth and final AIM:TRUE session of the 2018-19 academic year, we were joined by Dr. Sean Gallagher, founder and Executive Director of Northeastern University’s Center for the Future of Higher Education and Talent Strategy and Executive Professor of Educational Policy at Northeastern. Sean is a national thought leader on the topic of alternative models for college credentials and degrees with more than two decades of experience in higher education.

AIM:TRUE stands for “Academic Innovation at Michigan: Transforming Residential Undergraduate Education.” The talk series is part of an effort to rethink the premises of undergraduate education that we call “The Big Idea” (to be expanded upon later). The Big Idea is a response to the following question: “Given the capabilities and resources of a major public research university like the University of Michigan, how would you design residential undergraduate education, if you could start with a blank slate?” It is an intriguing question, full of possibilities. A group of faculty, staff, and students from across the University of Michigan is pursuing those possibilities, with hopes of launching a new kind of undergraduate program.

The AIM:TRUE talk series is designed to inform and provoke our thinking around what is possible in education. Our first session featured the leaders of the Mastery Transcript Consortium, a group working to design a new transcript for college applicants that reflects students as unique individuals and not as a set of numbers and list of activities. Our second session was a talk by Dr. David Scobey, Director of the Bringing Theory to Practice Project at the Association of American Colleges and Universities. Dr. Scobey asked us to consider the multiple challenges facing higher education, especially in relation to the multiple publics it is supposed to serve. Our third session featured a range of projects in which the University Library is partnering with instructors across campus to support innovative forms of pedagogy aligned with Big Idea learning goals.

We invited Sean Gallagher to campus to talk with us about the changing landscape for higher education degrees and credentials, in a session entitled: “What does college prepare students for? Employer demand and online credentials.” In his presentation, Sean shared evidence of how the landscape for educational credentials is evolving, based on surveys and research from a range of sources, including his own center at Northeastern. Employer demand for educational credentials is growing, and the changing nature of work is driving an escalation in the level of educational qualifications required for jobs. A job that once required only a high school diploma may now require a bachelor’s degree. Jobs that often required a bachelor’s in the past now prefer a master’s. And beyond that, a majority of employers now believe that their employees will—at all educational levels—need continuous lifelong learning to stay current, and this translates into more credentials.

What does all this mean for higher education? Will university degrees be threatened by the rise of “micro-credentials” (digital badges) and blockchain? For the moment, this issue seems more hypothetical than existential for universities. Employers do not yet really understand or value micro-credentials (though awareness and experience continue to grow), and while it is becoming possible to be hired without a university degree, employers still seem to prefer degrees. In this environment, universities seem to be focusing on making their degrees available to more people, and a primary vehicle for this is online degrees, often delivered via MOOC platforms; one example is the new Master’s in Applied Data Science online program from the U-M School of Information. Gallagher has observed in a recent EdSurge column that, as interest in online degrees grows, “prestigious” institutions are leading the way. And as we move down this path, it is important to pay attention to other emerging trends, such as hiring practices that focus on specific job candidate skills that are usually obscured behind degrees. The rise of digital platforms to support online degrees can also support more granular evidence of student learning, performance, and accomplishment. As we build our online offerings, are we also positioning ourselves to support the new world of hiring?

To learn more, you can view Dr. Gallagher’s slide deck here or watch a recording of his talk below. 

We also recommend Sean’s excellent book, “The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring.”

Finally, we are pleased to announce that the Big Idea project is getting a new name. From here forward, we will be referring to this effort as the TRUE program, which stands for (as you know by now) Transforming Residential Undergraduate Education. Stay tuned for more talks in the AIM:TRUE series in the next academic year, and for more announcements and posts related to the TRUE program. Stay TRUE!

Previewing the 2019 Gameful Learning Summer Institute

Evan Straub, Learning Experience Designer – Gameful/Connect

When I talk to people about gameful learning, I frequently hear skepticism about the idea of turning school into a “game” or into something “fun.” To consider what makes gameful learning so successful, we have to unpack why we consider games “fun.” Although we frequently associate a positive emotional reaction with playing a game, some games are more entertaining than others. Therefore, we have to reflect on when and why playing a game becomes enjoyable.

Ask a competitive athlete in the middle of a tough competition whether they are “having fun” at that moment. Most likely, they are so engaged with thinking about the game play, the strategy, and optimizing their physical performance within the competition that the idea of “enjoyment” is probably not the first thing that comes to mind. Instead, we talk about the experience of “challenge.” While challenge in and of itself is not generally considered pleasant (at least in the moment), it can make success extremely satisfying.


Evan Straub, Learning Experience Designer, leading the GradeCraft workshop at the 2018 Gameful Learning Summer Institute

For example, consider a game that is based primarily on luck versus one based on skill. Compare what it takes to win the card game “war,” where the winner is simply whoever happens to have the high card at that moment, to a game like chess, which involves deep strategy to defeat an opponent. Winning at chess will most likely be a more satisfying experience. Often, you will even hear a defeated player be grateful for the experience of playing a well-matched opponent; the challenge was the reward, regardless of the outcome.

Gameful learning is trying to capture that same experience. Learning should be challenging. Learning science research consistently suggests that we learn more when we are actively engaged in the topic and when the learning is appropriately challenging. Similar to how games encourage differentiated paths, we know that every instructor is a little different as well. Therefore, at the 2019 Gameful Learning Summer Institute, we are so pleased to highlight educators who are willing to share what they have learned in their own classroom.


Attendees of the GradeCraft workshop at the 2018 Gameful Learning Summer Institute.

One theme that has emerged from our presenters is how to use role-play in the classroom. Naomi Norman, Associate Vice President for Instruction at the University of Georgia, and T. Chase Hagood, Director of the Division of Academic Enhancement at the University of Georgia, are returning to talk about their use of “Reacting to the Past,” which puts learners in control of a situation, guided by historic texts. A similar session last year was incredibly popular, so we are thrilled they are returning! Similarly, Nick Noel, Instructional Designer and Media Producer at MSU Information Technology Services, is exploring how educators are similar to “game-masters” and how tabletop role-playing games could be adapted for education settings.

We are also so excited to have Angie Romines, Senior Lecturer of English at The Ohio State University, presenting on how to create and use escape rooms as a collaborative pedagogy. An escape room may never be a truly authentic task; however, Angie notes that success “can only happen when participants collaborate, bounce ideas off of each other, and get comfortable with failure.”


Erin Baumann, Associate Director of Professional Pedagogy at the Harvard Kennedy School gives the keynote presentation at the 2018 Gameful Learning Summer Institute.

Finally, I would be remiss if I didn’t mention our exciting keynote speaker, William (Bill) Watson, Associate Professor of Learning Design and Technology at Purdue University. His work with the Serious Gaming Center at Purdue University researches how games can create engaging and innovative educational opportunities at all levels of instruction.

Of course, there are many more opportunities for new ideas at the 2019 Gameful Learning Summer Institute. We hope that you will join us, feel challenged to take some ideas back to integrate into your own teaching, and have fun.



Register for the 2019 Gameful Learning Summer Institute at the event registration page by Friday, July 5!

202 – Mr. and Ms. So-and-So

In this episode, Romando discusses what it means to be somebody, inside and outside the walls. He advises those coming home and their families to communicate clearly about their needs. He tells us the story of his arrest as a veteran suffering from PTSD, and his decision to cut off his addictions while incarcerated. Maltreatment in prison has left Romando with partial blindness since coming home, but he talks about leaning on his newfound faith in Islam to find community and hope.

201 – To All My Sisters

In this episode, Toni talks to women and survivors everywhere. She shares what she has learned about trauma-informed practices of healing and dealing with triggers. She advises fellow survivors on how to focus on themselves and speaks directly to her daughter, discussing who she wants to be as a mother and the hope she sees for her family.

Reimagining Assessment in Undergraduate Education: Progress, Resilience, Mastery

Barry Fishman, Faculty Innovator-In-Residence, Arthur F. Thurnau Professor of Information and Education

Cynthia Finelli, Director, Engineering Education Research Program; Associate Professor, EECS and Education

Melissa Gross, Associate Professor of Movement Science, School of Kinesiology, and Art and Design, Penny W. Stamps School of Art and Design

Larry Gruppen, Director, Master of Health Professions Education program, Professor, Dept. of Learning Health Sciences

Leslie Rupert Herrenkohl, Professor, Educational Studies, School of Education

Tracy de Peralta, Director of Curriculum and Assessment Integration, Clinical Associate Professor, Dental School

Margaret Wooldridge, Director, Dow Sustainability Fellows Program, Arthur F. Thurnau Professor, Departments of Mechanical Engineering and Aerospace Engineering


Given the resources of a major public research university, how would you design undergraduate education if you were starting with a blank page? That is the question we explore in The Big Idea project, and our answer is that undergraduate education should reflect the true strengths of the institution. In the case of the University of Michigan, that is research, discovery, and real-world impact. The Big Idea undergraduate program is problem-focused, with students directly engaged in research and scholarship as they work towards ambitious learning goals. Learners who meet these goals will be prepared to address the world’s pressing—and often ambiguous—problems. The program is designed to emphasize progress towards mastery — thus, it does not use grades, courses, or credit hours to mark student progress or readiness-to-graduate. In this post, we first discuss some critical features of the current higher education landscape and explain why we think they are ripe for re-examination, and then we outline how assessment for and of learning will work in the Big Idea program.


How would you design undergraduate education if you were starting with a blank page?


How Business-as-Usual in Higher Education Works… Just Not for Learning

The traditional mechanisms for measuring progress and readiness-to-graduate in higher education are grades, grade-point averages (GPAs), and credit hours. If you mention these terms, almost anyone involved with higher education will know what you are talking about. These are combined with course sequences that define both (a) the general education or “lower division” phase of college, where students pursue the broad distribution requirements usually associated with the liberal arts, and (b) the “upper division” years of college, where students pursue a focused major. These mechanisms have served higher education for more than a century, and they offer a relatively straightforward way to direct student learning pathways and to rank the relative performance of learners within those programs. However, in providing structure, these mechanisms also limit learners and curtail the potential of educational programs in key ways. In this section, we explore these limitations to explain why we are taking The Big Idea in a new direction.


Grades and GPAs: What Do They Measure?

Assigning grades as a way to record or report students’ performance feels like a naturally-occurring practice in formal education. But the use of grades, like many components of modern education practice, did not become widespread until the early-mid 1900s (Brookhart et al., 2016). Grades were part of a movement to make education more “scientific.” These efforts were led by Edward Thorndike, a prominent behavioral scientist who was instrumental in the spread of scientific management and measurement in education. The work of Thorndike and others led directly to the standardized testing movement that defines much of K-12 education and college admissions in the United States today (Rose, 2016). The history of 20th century education reform has been described as a struggle between the constructivist and project-based ideas of John Dewey and the instructivist and standardized approaches of Thorndike. The consensus view is that Thorndike won and Dewey lost (Lagemann, 2000). As a result, the current structure of undergraduate education is more about the sorting of students for purposes that come later—such as college, graduate school admissions, or employment—than it is about supporting the learning of the students involved.

The use of grades actually removes information about student learning from the academic record-keeping process. Even when a letter grade is based upon a strict set of criteria about what a student has learned, the letter itself communicates little to the world beyond the classroom where it was issued. If a student has earned an “A,” we know that they did well in the course, learning (hopefully) the majority of the material taught. But what does a “B” mean? We generally assume that it means the student has learned roughly 85-90% of the material, but what material was not learned? If the course is part of a sequence, later instructors are given little information about the understanding or capability of their incoming students. “Grading on the curve,” meant to normalize outcomes across students, further masks information about student learning. A grading curve may communicate comparative performance, but without reference to the goals of the course.


The use of grades actually removes information from the academic record-keeping process.


The problem becomes even more pointed when trying to gauge a student’s overall learning in college. Currently, the GPA is the primary tool for reporting overall accomplishment. High GPAs are seen as signs of academic accomplishment, and they are rewarded with honors and other recognitions. Employers and graduate schools often use GPA as a sorting mechanism. This is problematic for several reasons. One problem is that GPAs can be “gamed,” with students seeking to take courses solely for the purpose of boosting their overall GPA. Another problem is that a great deal of the variance in a student’s final GPA can result from a student’s first-year grades, which are earned as students adjust to the new environment of college. This places first-generation students or others facing transitional challenges in college at a long-term disadvantage that does not reflect their actual capability upon graduation. Some institutions, such as Swarthmore College, have declared that all classes in a student’s first semester will be recorded only as “Credit/No Credit” to reduce anxiety and encourage risk-taking during this important transitional period.

Another reason that GPAs are problematic involves the growing concern that grades—or more accurately, a focus on grades as indicators of self-worth or future opportunity—are a contributor to the growing mental health crisis on college campuses. Many students, especially at selective universities, have never had an experience of academic “failure” in the form of a low grade or standardized test score. Yet learning scientists have long known that “failure” is a key component of successful learning. If a learner only ever gets the right answer or only ever performs at a high level, there is a good chance that they weren’t truly being challenged in the first place. Furthermore, knowing the “right” solution to a problem or challenge is often not as revelatory as working to understand why a solution is “wrong” and how to repair it. Deep learning is not about knowing answers as much as it is about understanding problem and solution spaces. The typical approach to grading in higher education imposes penalties on learners for taking on challenges and not succeeding on the first try, instead of encouraging continuous progress towards mastering the material. Think, for instance, of courses where the entire grade is based on a single or small set of exams.

Healthier and more productive grading systems (such as the gameful learning approach pioneered at U-M) emphasize progress and “safe” or “productive” failure, encouraging students to work beyond their comfort zones with confidence that they are not being judged on each result. In the Big Idea program, we choose mastery-based assessment over traditional grading systems to support continuous effort and progress, and to reflect true student accomplishment rather than averages or comparisons.


Credit Hours: The Time Clock of Education

The credit hour, sometimes referred to as a “Carnegie Unit,” was created in the early twentieth century by the Carnegie Foundation for the Advancement of Teaching as part of their effort to create a pension system for college professors. To qualify for the Carnegie pension system, institutions were required to adopt a range of standards, including the newly-introduced Carnegie Unit (Silva, White, & Toch, 2015). From today’s vantage point, those who originally conceived of the credit hour might be surprised to see how its use has expanded to become the “time clock” for virtually all aspects of educational programs. The original metric was useful for determining the amount of time students spent in contact with instruction, but with expanding options for co-curricular, online, and other forms of learning, the question of what should “count” as learning or instruction has become muddied. Furthermore, the amount of time devoted to instruction does not necessarily translate to learning. As we’ve heard at least one critic observe, “If we’re measuring seat time, that’s the wrong end of the student.”


The question of what should “count” as learning or instruction has become muddied in higher education.


The Big Idea is based not on time, but on student accomplishment. We expect students to move at different paces and to approach their learning in a sequence that reflects personalized pathways. Therefore, we are designing the program in a way that we believe will take students roughly the same amount of time to complete as a “traditional” bachelor’s, but does not use time to regulate progress.


General Education, Majors, and Degrees

Lower-division undergraduates take a “distribution” of courses intended to expose them to different ways of thinking and expression, in order to develop intellectual breadth. Upper-division students engage in a series of courses defined by their major, designed to develop intellectual depth. Readiness for graduation from the university is measured by having accomplished at least a passing grade in all of the courses specified by the selected program of study and accruing the required number of credit hours. This system is designed to create both guidance to students in selecting courses and managerial efficiency in terms of university planning. Unfortunately, it also does little to promote learning and can suppress individual agency.

Within the system of distribution requirements and majors, it is unusual for a degree-granting program to employ program-level assessment of individual student learning as the qualification for graduation. Normally, assessment is conducted at the course level, with students being assigned grades based upon their performance within courses, and with those grades being compiled and reported as an average GPA across courses. If a student takes a predefined set of courses and maintains an acceptable GPA, that student may graduate with a degree in that area. In specialized cases, undergraduates may complete a capstone or summative project, such as a thesis. But for the majority of programs these are optional exercises, and the assessment criteria for summative projects are not usually aligned with any program-wide criteria for learning. In the Big Idea project, we have come to refer to business-as-usual as “faith-based” education, because one needs to have faith that students are learning something, even though the program is not designed to record or report that learning.

Charles Muscatine, in his book “Fixing College Education,” argues that the current system of majors exists in no small part to create “peace among the departments” (Muscatine, 2009, p. 39), ensuring that students continue to take courses in different areas of the university, even if the divisions that are created are not grounded in reality. As Muscatine pointed out, there has never been research conducted on whether the division of learning into sectors like “humanities,” “science,” and “social science” has real benefit to learners, or even reflects a valid distinction among different ways of thinking. He recommends distinguishing between disciplines in terms of the methods employed for sensemaking—“logical, empirical, statistical, historical, critical (analytic, evaluative), creative” (Muscatine, 2009, p. 40)—and making sure students have practice in each. This is what might once have been described as a truly liberal education, meant to liberate the mind and broaden our ability to think in flexible ways.


Assessment in the Big Idea: Business-as-Unusual

As we introduce our plans for assessment in the Big Idea, we note that, as with other elements of our design, the ideas presented here are not necessarily new; similar approaches have existed in different educational contexts in the past, and similar ideas continue to be employed in specialized applications today. However, the use of these practices at the core of a degree-granting undergraduate program within the modern research university is unusual.

In contrast to the checklist approach to graduating with a major described in the previous section, the Big Idea program starts with an ambitious set of program-wide learning goals, and has only one criterion for graduation: that a student has met all the goals at an acceptable level, as evidenced by his/her “mastery transcript,” a document designed to record accomplishment, provide evidence of accomplishment, and allow for tailoring to meet different student, program, employer, or other needs. (Our thinking about this kind of documentation is inspired by the work of the Mastery Transcript Consortium.) There are no required courses, and no minimum GPA. In fact, we do not intend to record or report grades for students in the Big Idea program. Nor is progress measured by the number of credit-hours completed, which is a metric of time rather than learning. What we are interested in is the progress a student makes towards mastery of the learning goals.


In the Big Idea program, there will be no required courses, and no grades… what counts is the progress a student makes towards mastery of the learning goals.


Our approach to assessment is designed to promote personalization of learning pathways, agency, and self-authorship. It is designed to promote resilience and personal growth. It is designed to support learners from diverse backgrounds. Where most current assessment paradigms are geared towards ranking and sorting students, the Big Idea assessment model is designed for transparency with respect to learning.


Assessment for Learning

There are two primary types of assessment: formative and summative. Summative assessment is a report of student knowledge, skill, or accomplishment at the end of some defined period or event, such as a course, high school, etc. Examples of summative assessment include final exams, which are typically used to measure end-of-course understanding and serve as the end of a student’s relationship with a course and instructor, and the SAT or ACT tests, which are meant to “sum up” a student’s academic potential at the end of secondary school and provide a measure of their readiness for post-secondary education. Formative assessment, on the other hand, is meant to inform learning, to give feedback on student progress, or to serve as a milestone towards a larger goal. Assessment in the Big Idea program is, by design, intended to be almost entirely formative.

Students in the Big Idea are expected to always be making progress towards the learning goals. Feedback on learning will come from many different places in the program — research supervisors, faculty across different learning experiences (including courses), and members of various communities where students conduct research and learning. The learning goals themselves are not meant to be a “final” report of a student’s potential or accomplishment; rather, they represent a certain level of attainment that can continue to be built upon throughout a learner’s life and career.


Students in the Big Idea are expected to always be making progress towards the learning goals.


Paths Towards Competence and Mastery

To emphasize the importance of progress and practice towards mastering the learning goals (no learning goal is a box to be checked), we describe paths towards mastery as encompassing several levels of proficiency: Awareness, Literacy, Competency, and Mastery. These terms are defined as follows in the Big Idea program:

  • Awareness

Students who achieve this goal know that this goal, practice, or skill exists, and they understand how it fits within the larger field or profession.

  • Literacy

Students who achieve this level are in the middle of learning this goal, practice, or skill: they understand its dimensions, they can apply or demonstrate it in a basic manner, and they are able to learn more about it through additional work. Such a student could play a supporting role in a project that employs the skills or practices inherent in this goal where someone else is leading, and would understand what the other person was doing.

  • Competency

Students who achieve this level are ready to begin professional work requiring this practice or skill. A competent student can do a task requiring this skill on his/her own, knows how to ask well-formed questions in the area or provide sound answers to others’ questions, and could advance his/her understanding through self-study.

  • Mastery

Students who achieve this level are ready to employ the practices or skills embodied in this learning goal in the real world. Such students could supervise, guide, or teach others with respect to this goal. We note that “Mastery” for the purposes of graduation from the Big Idea program is not necessarily lifelong mastery; our rubrics will emphasize areas for ongoing growth even beyond our program goals.

Note that while we expect all learners in the program to achieve competency in all the learning goals, we expect each learner to achieve mastery in only a subset of the overall learning goals. Which goals a learner masters will depend on their particular focus and choices made during undergraduate study.


Getting Started (even before the “start”)

We expect that students in the Big Idea will arrive with some initial proficiency in many of the learning goals for the program, and we will invite students to present evidence of that learning. This would be a natural outgrowth of the kinds of experiences one might have had in secondary education (including extra-curricular experiences) that might lead a student to be interested in the Big Idea in the first place. We note, however, that traditional markers of student “accomplishment,” such as Advanced Placement scores, will not be considered as evidence of learning in and of themselves. We will invite students to present a case for their current level of learning that might include information about courses taken or tests passed, but that also requires demonstration of their actual knowledge or skills.

One of the first formalized activities of the Big Idea curriculum, to be conducted as part of a “Forum” experience (this is meant as a multi-year home base for students—similar to homeroom in secondary education—with access to more experienced peers and an advising faculty member), involves helping students become familiar with the learning goals, understand why the goals were chosen and how they are to be assessed, and gain exposure to a range of examples (perhaps provided by the more advanced students in the program). We will also engage learners in self-assessment activities to allow them to calibrate their current level of proficiency in each of the learning goals.


Demonstrating Achievement

We will develop detailed rubrics for each learning goal, providing descriptions and examples at each level of learning towards each goal. The rubric is meant as a guideline for learning and for assessment, not an “answer key” or template. We expect broad individual variation in the way each learner expresses their current levels of learning, to reflect individualized interests, choices, and pathways.

Students will use an electronic portfolio as a tool to record their work and accomplishments, to share/present those accomplishments for evaluation and feedback, and (following graduation) to offer evidence of learning in the future. We intend the portfolio to be a “mastery transcript,” as it will serve as a replacement for traditional transcripts, among other uses. Portfolios have long been used as a means to encourage self-authorship by students, allowing them to shape the narrative of their accomplishments, and even customize that narrative for different audiences and purposes. Electronic portfolios also allow for the inclusion of many different forms of evidence in support of claims of learning, including links to assessment evidence, and they can be assembled in different ways for different needs and audiences. Understanding how to communicate one’s own abilities using the portfolio (and underlying data) will be an important component of the Big Idea program, related to the “Communication” learning goal.


We expect broad individual variation in the way each learner expresses their current levels of learning.


Digital badges, also known as micro-credentials, are one mechanism for tracking and reporting accomplishment that can be employed in an electronic portfolio. Digital badges have many useful affordances. They can be used to establish pathways towards complex learning, to record progress, and, when used as credentials, to signal accomplishment. Digital badges can also expand assessment beyond our formal processes by encouraging students to make progress towards the learning goals in all areas of their lives, whether as part of the formal activities of the Big Idea program or elsewhere.

In addition to the portfolio-based mastery transcript, assessment in the Big Idea will include in-person interviews or performances. These sessions will be personalized, allowing each student to demonstrate achievement in individual ways.


Formal (and Formative) Assessment in the Big Idea

Students in the Big Idea program are responsible for making an evidence-supported case for their progress towards or accomplishment of learning goals. There will be two levels of panel review for students to demonstrate achievement, both of which include review of the mastery transcript and in-person interviews or performances. As part of the overall learning process within the Big Idea, these two levels of review allow personalized assessment and feedback at a large scale with a manageable workload. The process is managed by the Forum/homeroom instructor, who (together with more advanced students in the Forum) can advise students about how to assemble their materials for review and when they are ready to start the process.

The first level involves review by a panel composed of more advanced, “arm’s length” students working under the supervision of a faculty member. This arrangement allows for frequent assessment of student progress at a large scale and is a key part of the learning process for the student panelists, who learn to give constructive feedback to more junior students with respect to each of the learning goals. The goals of this first-level review are both to mark progress and, more importantly, to provide feedback to the student. This review level is expected to be employed regularly for students at the awareness and literacy stages of proficiency. While the actual number of reviews might vary for each student according to their needs and pace, we would expect students to engage in this first-level review several times a semester with respect to different learning goals. We also anticipate that not all students will succeed in each review step (though we hope that feedback from Forum instructors and peers helps mitigate this). The Big Idea is designed to support progress towards meeting the learning goals, so we would not necessarily consider this a “failure,” but rather progress towards eventual success.

As an example of how the first-level review might work, consider a student who believes she is making good progress on the statistical and computational learning goals (Ways of Knowing) and also the resilience goal (Personal Good), given the various difficulties she encountered while working toward these goals. The student discusses her readiness to be reviewed with her advisor and other students in Forum, collecting input on what evidence to include in her mastery transcript and how to assemble it. This evidence could include examples of her work in statistics and computation, such as a computer program that displays statistical analyses of a public health dataset about water quality (the dataset comes from a project the student is working on, led by a faculty member in the School of Public Health), along with a reflection statement about what was learned and how it represents progress towards competency on those learning goals. To present evidence of resilience, the student writes a narrative discussing the challenges she faced while working with this data and how she worked through them. The panel reviews the material and provides feedback and questions to the student, and the student is invited to respond in writing. An in-person conversation could be scheduled for the student to meet with the panel for further discussion, if needed. Finally, the panel would issue feedback and a decision about what level of proficiency the student had reached for each learning goal. Students who believe they are ready to be reviewed for competence or mastery in particular learning goals would also use this student-run panel, and the panel would “approve” portfolios for review at the second level.

The second, more advanced level of assessment involves a panel of faculty, advanced students, community members, alumni, and others who will review and give feedback to students. This panel will primarily hear cases at the competency or mastery level of the learning goals. Students will be encouraged to present cases that combine multiple learning goals (though we do not expect any single case to address all of them), and over time they would engage in this second level of review for each learning goal. This stage will also include a public presentation and discussion, similar to a doctoral thesis defense. As part of the Forum activities, we would work to prepare students to be proficient in a range of presentation modalities (again, the Communication learning goal), recognizing that this is another area where students will vary. We hope to make these community events. Once the program is operating at scale and scheduling individual events becomes difficult, we envision an annual (or semi-annual) public celebration involving a poster fair, talks, and panels of students and others involved in the research.


The assessment infrastructure of higher education has evolved to emphasize efficiency and to simplify the management of learners, courses, and programs, but without a focus on learning or support for student individual differences or independence.



The use of grades, credit hours, and majors has served to help the modern university “manage” the processes of undergraduate education, but these tools were not designed to support learning or make it more transparent. Eventually, these structures became the tail that wags the dog. The assessment infrastructure of higher education has evolved to emphasize efficiency and to simplify the management of learners, courses, and programs, but without a focus on learning or support for student individual differences or independence. Periodic attempts at reform often focus on the re-introduction of learner-focused ideas such as project- or problem-based learning, but these efforts strain against the boundaries of the existing infrastructure. We need “infrastructuring” work (Star & Ruhleder, 1996), a conscious reshaping of the practices, technological supports, and cultural norms that guide our thinking about assessment and support for learning in education. Our proposal for assessment in The Big Idea requires us to re-engineer the infrastructure for higher education assessment to emphasize progress, resilience, and eventually mastery of ambitious learning goals. Re-shaping these structures is a key component of our plan to design undergraduate education to take best advantage of the resources and opportunities of a major public research university.



Brookhart, S. M., Guskey, T. R., Bowers, A. J., McMillan, J. H., Smith, J. K., Smith, L. F., … Welsh, M. E. (2016). A Century of Grading Research: Meaning and Value in the Most Common Educational Measure. Review of Educational Research, 86(4), 803–848. https://doi.org/10.3102/0034654316672069

Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.

Muscatine, C. (2009). Fixing College Education: A New Curriculum for the Twenty-first Century. Charlottesville: University of Virginia Press.

Rose, T. (2016). The end of average: How we succeed in a world that values sameness. New York: HarperOne.

Silva, E., White, T., & Toch, T. (2015). The Carnegie Unit: A century-old standard in a changing education landscape. Retrieved from http://www.carnegiefoundation.org/resources/publications/carnegie-unit/

Star, S. L., & Ruhleder, K. (1996). Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces. Information Systems Research, 7(1), 111–134. https://doi.org/10.1287/isre.7.1.111

See What’s Next for our Graduating Student Fellows

Hannah Brauer, Communications Writing Fellow

Student fellows power our work at Academic Innovation — their hard work and commitment help to further our mission to improve education at the University of Michigan and support lifelong learners around the world. This year, we would like to thank the 24 student fellows who are graduating this May and recognize their significant contributions to innovation at U-M.

Our graduating fellows are heading across the country to destinations including JPMorgan Chase & Co., Carnegie Mellon University, and a start-up company in the San Francisco Bay Area. Some are even sticking around as post-graduate research fellows at Academic Innovation.

Xucong Zhan, U-M College of Engineering ‘19, is graduating after working as a Software Development Fellow. Following his positive experience with the fellowship program, he will stay on at Academic Innovation as a full-time Software Developer.

“I got a chance to work on something I think is meaningful,” he said. “It was encouraging to work with a lot of people with the same goal as me.”

Zhan recounted his work with Academic Reporting Tools (ART 2.0), which allows students to easily research U-M courses and instructors. In a lightning talk at the Academic Innovation Student Showcase last month, Zhan presented his work and his experience using Django and Python to add new features to the tool. He said the Academic Innovation Student Showcase was one of his greatest learning experiences, as it gave him public speaking skills he could apply to future positions.

“I had never done something like [the lightning talk] before,” he said. “The [public speaking] workshops helped me to decide my topics and present more confidently. Those were definitely helpful.”

Wenfei Yan, U-M College of Engineering ‘19, worked as a Data Science Fellow this year and is now heading to Carnegie Mellon University to complete her Master of Computational Data Science degree. Like Zhan, she credits the Academic Innovation fellowship program with improving her communication skills.

“Working with the awesome people at [Academic Innovation], I got better at explaining my analysis to others, and I also became a more joyful person,” she said.

Yan worked on a case study for the Privacy, Reputation, and Identity in a Digital Age Teach-Out, which uncovered how digital reputation shapes social interaction.

“We explored the learner engagement pattern through the lens of the discussion forums,” Yan said. She identified the top keywords users mentioned in discussions and whether learners were staying on-topic.
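A minimal sketch of the kind of keyword analysis Yan describes might count word frequencies across forum posts. The sample posts, stopword list, and the top_keywords function below are illustrative assumptions, not the project’s actual code or data:

```python
from collections import Counter
import re

# Hypothetical forum posts; the real Teach-Out discussion data is not public.
posts = [
    "Your digital reputation follows you across platforms.",
    "Privacy settings matter, but reputation is shaped by others too.",
    "How do employers weigh online reputation against privacy rights?",
]

# A small, illustrative set of common words to skip when counting keywords.
STOPWORDS = {"the", "a", "is", "by", "but", "do", "how", "you", "your",
             "too", "and", "of", "to", "in", "on", "across", "against"}

def top_keywords(texts, n=3):
    """Count word frequencies across posts, skipping stopwords."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_keywords(posts, 2))  # → [('reputation', 3), ('privacy', 2)]
```

A real analysis of Teach-Out forums would draw on fuller NLP tooling (tokenization, lemmatization, perhaps topic modeling) and a much larger stopword list; this sketch only illustrates the basic frequency-counting idea.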

Brandon Punturo, U-M School of Information ‘19, also worked as a Data Science Fellow this year. Punturo shared his work with ViewPoint, a role-play simulation tool, at the Academic Innovation Student Showcase poster session. His project used natural language processing to understand users in a policy simulation run through the U-M Ford School of Public Policy.

After graduation, Punturo will continue his growth as a Data Scientist at JPMorgan Chase & Co. in New York. He said he hopes to work in a similar environment to Academic Innovation in his future job.

“I think that’s the really cool thing about [Academic Innovation] — you’re doing a lot of important work, but it’s not a stressful environment,” he said. “You feel like you’re at home, even though you’re at work.”

The student fellowship program at Academic Innovation strives to provide students with work experience in their fields as equal members of our team. Since the program began in 2015, 89 student fellows have worked on foundational projects in software development, graphic design, UX design, data science, educational technology, and public engagement.

Marissa Reid, Student Program Coordinator, is proud of the 24 student fellows graduating this year.

“Student fellows are the backbone of Academic Innovation,” she said. “They keep us moving forward and working toward our goals every day.”

If you are interested in our fellowship program, there are various positions open for the upcoming academic year. We’re looking for well-rounded, innovative, responsible, driven, and flexible fellows to help us design and develop digital applications that facilitate engaged learning at the University of Michigan.

Our fellows include undergraduates, graduate students, doctoral students, and recent graduates. U-M student fellows are appointed for one academic term with the possibility of extending to multiple terms. Visit our student opportunities page for more information.