Creating Community for the Academic Innovation Summer Fellows

Deirdre Lee, Communications Writing Fellow

Amid family barbecues, summer classes, and festivals, about 30 University of Michigan undergraduate and graduate students dedicate their summer to innovation, working as student fellows in the Office of Academic Innovation.

While they are in the office, fellows frequently interact with one another on both professional and personal levels, from cross-team collaboration to chatting while the coffee brews.

Marissa Reid, Student Program Coordinator at Academic Innovation, works closely with the fellows to foster a sense of community and engagement among student innovators. To build that community, Reid coordinates multiple social events for fellows each month.

For the month of June, fellows gathered to paint the “Rock,” located on the corner of Hill Street and Washtenaw Avenue in Ann Arbor. Painting the “Rock” is a well-established tradition among student organizations at the University of Michigan; the boulder was originally placed in 1932 as a monument commemorating the bicentennial of George Washington’s birth.

Reid said none of the 15 fellows in attendance had previously painted the “Rock.”  

“The ‘Rock’ painting was [a social event] because if you’re not in Greek life or part of a large social organization, you may not have done it,” she said. “If you’ve never painted it, you get to come together as a social group.”

Reid described the experience as “organized chaos” — in a good way. She said fellows socialized and talked about topics outside of their work in the office amid the flurry of masks, gloves, and spray paint.

“During that time, the students were just talking to each other [about] their days, life, [and] projects. It was a more socializing event,” Reid said.

Shaelyn Albrecht, School of Information Senior and User Experience Research Fellow, said she enjoyed painting the “Rock” because she could socialize with other fellows in a stress-free environment outside the office.

Albrecht said she has worked as a fellow for Academic Innovation during both the academic year and the summer. Her summer fellowship has been more social, she said: during the academic year, balancing her fellowship with her class schedule and other student involvements prompted her to focus more on the core responsibilities of her fellowship and less on social opportunities.

“I was less inclined to be social [during the academic year] because I was only here [at the office] for a certain amount of time and had academic obligations,” Albrecht said.

Marcus Hall, Ford School of Public Policy and School of Information Graduate Student and Design Management Fellow, said time expectations differ between the academic year and the summer.

“More during the [winter] semester I was working ten hours, and more during the summer, closer to full time hours, or full time hours themselves,” he said.

With this increase in hours, Hall said, summer fellows also have more frequent contact with one another and with full-time staff working in the office, which allows for more collaborative opportunities.

“That community has helped me kind of move forward with a lot of projects and initiatives that I was thinking about,” Hall said.

Hall’s latest project is called the “CRP triad” — an unofficial term coined by Hall and an acronym for “concept, reminder, practice.” The triad consists of an instructional video for guest experts that teaches ideas and practices about inclusion and accessibility in Teach-Out content, an infographic related to the instructional video, and pre-filming training focused on awareness and skill-building.

Hall said his supervisor encourages cross-team collaboration not only to bring awareness to the project but also to help him learn about the different teams.

“The multidisciplinary, or cross-disciplinary nature of working across different teams, I think, has been really, really important, and the reason I say that is because…when people talk about working across teams [we’re] putting on a cap from a different perspective [and] learning how to get into the mindset of someone from a different team,” Hall said.

Hall said his most significant learning experiences have included facilitating and organizing meetings and collaborating with the different Academic Innovation teams.

“I think that the cross-disciplinary nature of [Academic Innovation] helps…that people like to work with each other on different teams,” he said. “So I think that that cultural ethos helps somebody like myself come in and say, ‘Oh, I have this idea… I think you have expertise in this. Can we meet so I can talk through this a little bit more? Are you interested in reviewing this for me?’”

 

If you are interested in our fellowship program, there are various positions open for the upcoming academic year. We’re looking for well-rounded, innovative, responsible, driven, and flexible fellows to help us design and develop digital applications that facilitate engaged learning at the University of Michigan.

Our fellows include undergraduates, graduate students, doctoral students, and recent graduates. U-M student fellows are appointed for one academic term with the possibility of extending to multiple terms. Visit our student opportunities page for more information.

 

Academic Innovation Partners with Washtenaw Literacy for ESL Tutoring

Hannah Brauer, Communications Writing Fellow Alum
@brauhan

Deirdre Lee, Communications Writing Fellow

While the Office of Academic Innovation’s work usually focuses on the use of technology to expand access to education, one group of volunteers takes a weekly break from their screens to support face-to-face education in the local community.

Staff volunteers from Academic Innovation make a weekly commitment to support local education by serving as English as a Second Language (ESL) tutors with Washtenaw Literacy, a non-profit organization providing free literacy instruction to adults throughout Washtenaw County. Washtenaw Literacy began in 1971 under the belief that “literacy is the foundation for a sustainable community” and has reached more than 20,000 learners through the work of more than 10,000 volunteer tutors.

Kati Bauer, Budget and Financial Lead for Academic Innovation, first learned about Washtenaw Literacy through her work with the Rotary Club of Ann Arbor. She coordinated the partnership with Academic Innovation and provided an opportunity for ten volunteers to complete training for ESL Open Tutoring in late 2018.

The office now sends three to four volunteers to work with learners each week at the Ann Arbor District Library. Bauer noted these learners range from parents of University of Michigan students to visiting professors looking to improve their English communication skills. 

“[It’s] a really well-educated group that we get,” Bauer said. “They want to make their skills even better and they want to improve themselves, and I’m all for that.”

Learners can personalize the tutoring sessions to fit their needs, which can range from basic skill-building to academic tutoring to completing job applications. Bauer said many of her learners just want to practice speaking English.

“Most of them are really literate as far as reading and writing goes, it’s the idioms in everyday language that we use that they don’t understand, or the speed with which we talk,” Bauer said.

While about half of the volunteers had teaching backgrounds, including Bauer, others had minimal experience. Trevor Parnell, Events and Marketing Specialist at Academic Innovation, said he felt nervous about tutoring because he had never been an instructor before. 

Parnell said the Washtenaw Literacy training sessions helped him feel prepared to be a tutor.

“[Washtenaw Literacy] provided us with information and resources to put together lesson plans,” Parnell said.

Parnell decided to help coordinate the tutoring group after David Christensen, Program Coordinator at Washtenaw Literacy, spoke to the staff about the opportunity. Christensen supported the partnership upon recognizing shared values between the organizations.

“The connection that enabled our partnership with Academic Innovation is the shared dedication to public service and the empowerment for individuals,” Christensen said. “[This comes] through increased speaking and listening knowledge and confidence, as well as cultural competency.”

Though the partnership between Washtenaw Literacy and Academic Innovation is new, both organizations intend to work together as they grow into the future.

Christensen said he hopes Washtenaw Literacy will expand into other parts of the community and, as more Academic Innovation staff members join the volunteer group, they will tutor on multiple days. 

“I see [Academic Innovation] volunteers helping to facilitate this future through their efforts in tutoring as well as with their extensive professional expertise,” Christensen said.

Bauer and Parnell both said they find the experience to be rewarding, which keeps them volunteering. They said tutoring gave them a new outlook on literacy and the importance of communication.

“We take communication for granted…something as simple to us as just speaking,” Parnell said. “I didn’t think I had a ton to offer [the learners], and now that I’ve done it a few times, I realized that I actually do.”

Bauer said she volunteers not only for the enjoyment, but also as a learning experience.

“I always think teaching is the best way to learn something,” she said.

 

Visit the Washtenaw Literacy website to volunteer as a tutor or mentor.

Let’s Go Vue: Keeping Up with Front-End Web Technology at Academic Innovation

Nathan Magyar, User Experience Designer and Frontend Developer

From GradeCraft to Problem Roulette, the digital tools and applications produced by the Office of Academic Innovation seek to support personalized learning at scale. However, as the features of our sites become more robust, and the amount of data being used and consumed continues to grow, our development team faces the exciting challenge of meeting these needs in the best ways possible. A big part of doing so is choosing the right tool for the job.

Traditionally, an Academic Innovation software developer’s toolkit consists of the following languages: HTML, used for creating the “skeletal bones” of a webpage (headings, paragraphs, etc.); CSS, used for making the site visually appealing (font styles, page layouts, etc.); JavaScript, used for providing interactivity (clickable buttons, pop-up windows, etc.); and Python, used on the server for application logic, including storing and managing data in a database.
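
To make the JavaScript role concrete, here is a tiny vanilla JavaScript sketch of the kind of interactivity described above (the element IDs are invented for illustration, not taken from one of our projects):

```javascript
// HTML provides the skeleton (a button and a pop-up element),
// CSS styles them, and JavaScript wires up the behavior.
// The #help-button and #help-popup IDs are hypothetical.
const button = document.querySelector('#help-button');
const popup = document.querySelector('#help-popup');

button.addEventListener('click', () => {
  // Show or hide the pop-up each time the button is clicked.
  popup.hidden = !popup.hidden;
});
```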

Just like spoken languages, these programming languages are constantly changing, and just as in other professional fields, new tools are always emerging in web development to help us work faster and more efficiently. Sometimes these tools take the form of a framework: a collection of files, written in an existing programming language like JavaScript or Python, that provides common functionality out of the box. For the same reasons that you probably prefer to write documents in Microsoft Word or Google Docs instead of creating your own text editor from scratch, most developers would rather use a framework to create their applications — you can focus on writing your project’s unique content and code instead of having to first build that common functionality yourself.

In the past year, Academic Innovation has continued to keep up with modern web development trends by selecting a JavaScript framework, Vue, for use in new and existing projects. Created in 2014, Vue is a “progressive” framework, meaning developers can use it in certain parts of their application, or go “all-in” and leverage it to build out their entire project. Currently, Academic Innovation is in the former camp, using Vue in specific parts of two applications, Michigan Online and (soon) GradeCraft. Michigan Online, for example, uses Vue to provide filtering functionality on many of its catalog pages, and the GradeCraft team is now adding Vue to a new administrative page.


Example of HTML code written in Vue for Michigan Online’s “Full Catalog” page
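
Since a screenshot only goes so far, here is a minimal, self-contained sketch of the same idea (the course data and element ID are invented for illustration; this is not Michigan Online’s actual source):

```javascript
// A minimal Vue 2 sketch of catalog filtering. Assumes the Vue library
// is loaded via a <script> tag and the page contains markup like:
//
//   <div id="catalog">
//     <input v-model="query" placeholder="Search courses">
//     <ul>
//       <li v-for="course in filteredCourses" :key="course.title">
//         {{ course.title }}
//       </li>
//     </ul>
//   </div>
new Vue({
  el: '#catalog',
  data: {
    query: '',
    courses: [
      { title: 'Programming for Everybody' },
      { title: 'Finance for Everyone' },
      { title: 'Model Thinking' },
    ],
  },
  computed: {
    // Vue re-runs this whenever `query` changes, so the visible
    // list updates as the learner types.
    filteredCourses() {
      const q = this.query.trim().toLowerCase();
      return this.courses.filter((c) => c.title.toLowerCase().includes(q));
    },
  },
});
```

Because the filtering logic lives in one small, declarative object rather than a tangle of manual DOM updates, this is exactly the kind of “one corner of the page” adoption that a progressive framework makes possible.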

Adopting Vue into our projects, however, hasn’t happened overnight. While the framework uses a language most of our development team already knows, it still comes with a bit of a learning curve.

For me, that journey involved reading a lot of instructional articles, watching YouTube tutorials, and experimenting with practice projects. After three months of self-teaching, I was able to build many pages for the first version of Michigan Online with Vue. This gave me a great new sense of confidence and set of skills, so much so that I was able to help other team members start learning Vue as well.

At the time, I was already leading a weekly internal event, the JavaScript Working Group, during which user experience designers and backend software developers of all skill levels learned and discussed JavaScript-related topics. Given that Vue is built on JavaScript, and given Academic Innovation’s goal of using Vue in more projects, the group seemed like the perfect place and audience for Vue tutorials. Over the past year, members advanced from solving simple, math-problem-like challenges with introductory JavaScript concepts to building complex prototypes for an online store with Vue. Each tutorial session usually consisted of live programming and demonstrations and, starting this past January, was supplemented by a step-by-step article I wrote ahead of time on Medium. Two example articles are:

Building an Online Store with Vue CLI — Part 1

Simple Photo App with Vue.js, Axios and Flickr API — Part 1

Leading this group has been not only a fantastic learning experience for me, but also a valuable resource for other Academic Innovation designers, developers, and fellows. In the 18 months that I have worked for Academic Innovation as a user experience designer and frontend developer, I have grown from a beginner programmer who merely attended these study sessions to a significantly more confident and competent frontend developer who leads the group and provides mentorship to peers. Without the supportive culture of learning and professional development that Academic Innovation fosters, my own advancement and this working group would not have been possible.

As summer approaches, we will soon decide which new projects and features our team will take on for the next few months. We will also identify new opportunities in our projects for using Vue. I look forward to furthering Academic Innovation’s adoption of Vue, to continuing the growth of our team’s JavaScript expertise, and to providing even more engaging, scalable, and personalized learning experiences for University of Michigan students and faculty.

Go Vue and Go Blue!

Designing Simulations Using ViewPoint: Perspectives from Three Graduate Students

Rong Han, Master of Arts in Educational Studies ’20
@rongh23

Jennifer Ying-Chen Huang, Master of Arts in Educational Studies ’20
@jenniferychuang

Raven Knudsen, Learning Experience Design Fellow, Master of Arts in Educational Studies ’20
@rkindseyk96

Simulations are an excellent way to engage students in an interactive learning experience, but what does it take to design one? As graduate students in the Educational Studies program in the University of Michigan School of Education, we set out to do just that. Working with ViewPoint, a role-playing simulation tool developed at the Office of Academic Innovation with faculty collaborator Professor Elisabeth R. Gerber, we designed a role-based college admissions simulation for more than 60 undergraduate students to participate in during an 80-minute class session of “Video Games and Learning” (EDUC 333), a course offered through the School of Education and taught this semester by Rebecca Quintana, PhD, Learning Experience Design Lead in the Office of Academic Innovation.

The Simulation

Knowing we would be working with undergraduate students, we opted to create a simulation around a topic they would be familiar with — college admissions — but from a perspective different from their own experience. Our goal was to have students understand the complex nature of admissions from an institutional perspective. To accomplish this, we used ViewPoint to create “Admissions Reviewer” roles for students, divided them across 10 selective institutions, had them review 10 diverse, fictional student applications, and asked them to admit the “most qualified” applicant. We wanted diverse institutional options so students could see how the admissions process worked, but also how the process can differ between institutions. As a result, we settled on Duke University, Harvard University, Michigan State University, Purdue University, the University of Michigan, and the University of Texas at Austin. We provided unique institutional characteristics, including admissions rates and mission statements, on which students would base their acceptances. While students initially reviewed applications individually, they later discussed within their institutional groups which of the applicants to admit.

To make the simulation more engaging, we decided to cultivate elements that would elicit conflict. Each role was accompanied by a brief bio explaining how that individual ended up as an “Admissions Reviewer.” In addition, a few roles at each institution were assigned “secret missions” based on biases we gave them. Of course, this element was added purely for engagement purposes and does not reflect the actual admissions process. For instance, one reviewer came from a disadvantaged background and therefore had a negative bias toward highly advantaged applicants, and would try to convince other reviewers to admit students like themselves. With the added biases, our goal was to help participants feel more connected to the simulation as they tried to convince colleagues to admit applicants based on their biases. The results? As one student indicated in their post-simulation reflection:

The ability to identify with a person and be responsible for conveying their background and biases definitely gives way to engaging the rest of the simulation.

Viewpoint screenshot

In addition, we observed that the “Admissions Reviewer” personas mattered more for students working at institutions with lower acceptance rates. For example, participants “working” at Purdue – who were able to accept six students – had no issues accommodating all assigned biases, whereas the Harvard group – who could only accept one student – had greater difficulty reaching a decision because each assigned bias favored a different applicant. Take this student’s feedback from our exit cards, for example:

“The decisions don’t surprise me too much. They fell according to what I expected, particularly acceptance rates and expectation with academics.”

Tying it Together with ViewPoint

Our admissions simulation had many moving parts: “Admissions Reviewer” roles, institutional characteristics, student applications, and a tight timeline, not to mention a collaborative group discussion. We used ViewPoint to help us manage this complexity. The ViewPoint features most important to us were group-based resource sharing, messaging, and scheduling. Prior to the day of the simulation, we allocated students to their respective institutional groups, each of which had access to a resource outlining specific institutional characteristics. Using the group function of ViewPoint, we were able to ensure each group could only view information relevant to their institution. In addition, we assigned role profiles by sending a job offer message through ViewPoint, allowing us to introduce participants to the simulation in a way that was immediately engaging. Furthermore, our team served in non-participant roles as “Senior Admissions Directors,” to whom “Admissions Directors” sent their teams’ admissions decisions. The main purpose of this role was to communicate directions and to serve as the starting and ending point of interaction for the simulation. Keeping in mind we only had a brief period to run our simulation, we used the timeline feature in ViewPoint to create a schedule outlining the description of events, expectations, and deliverables, as pictured below. This schedule was available for students to review prior to the activity.

Viewpoint screenshot

A unique feature of ViewPoint is the ability to queue content, which allowed us to release applications, messages, and news items in a timely and controlled manner. This was especially important in the “news” section of the dashboard, where we attempted to spark controversy within admissions teams by releasing articles related to the recent U.S. admissions scandal (which broke a few days before we ran the simulation) during their group discussions. Finally, we used the messaging system to monitor how students took institutional characteristics into consideration when individually reviewing applicants. As part of the simulation, we asked participants to send their selections, with an explanation of their decision, to a director role within ViewPoint. Using these messages, we could see which participants used institutional factors in their decision-making and how biases played out.

Overall, our simulation was a success. The admissions theme proved engaging, as participants had some familiarity with the process but not extensive knowledge. The implementation of role identities proved to be one of the more engaging elements of the simulation, as it provided participants the opportunity to free themselves from long-held beliefs and immerse themselves in the activity.

Learn more about ViewPoint in this story from Michigan News, Students thrown into real-world scenarios with ViewPoint, an educational simulation tool.

How Course Journey Maps Help Us Understand Learner Success

Stephanie Haley, Product Manager, Strategic Initiatives

Elizabeth Hanley, Post Graduate Data Science Fellow

Caitlin Holman, Associate Director, Research and Development
@chcholman

“Retired. Needed something to do,” writes one of the learners in the popular online Programming for Everybody course when asked why they enrolled in that particular course. “Interested for the sake of it,” writes another. These voices, collected in a repository of free-text survey data brimming with learners’ reasons for enrolling in this and other Massive Open Online Courses (MOOCs) hosted by the University of Michigan, seem at home among the many learners who report enrolling for fun, as a hobby, or out of curiosity about the subject.

While taking a course for fun or curiosity is certainly not mutually exclusive with the desire to fully complete the course and earn a certification, there is something in these responses that reads like a clue for reframing how we talk about the low completion rate that has been prevalent in MOOCs since their inception in 2012. Alongside learners who enroll in MOOCs with a discrete goal — to finish the course — is a set of students for whom success might not necessarily look like course completion.

One of the challenges in unpacking the nuances of learner success outside the framework of course completion, though, is that it quickly becomes complex to quantify whether or not the needs of different groups of learners are being met. This January, a group of us at Academic Innovation set out to tackle that complexity by finding a more nuanced way of illustrating how groups of learners with varied goals and backgrounds are interacting with MOOCs at the University of Michigan.

Enter the Course Journey Map, a tool for visually representing student interactions with course items much the same way we might represent stops on a subway map. If each course item is represented as a stop, a journey map allows us to tangibly trace the movements of different learners as they interact with, or skip, each item. When we zoom out, we have an illustration of student engagement patterns throughout the course.
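
As a rough illustration of the idea (the item names and visit record below are invented, not drawn from our data), a single learner’s journey can be reduced to an ordered record of which “stops” they visited:

```javascript
// A toy sketch: one learner's path through ordered course items,
// rendered subway-style. Items and the visit record are invented.
const items = ['Intro Video', 'Reading 1', 'Quiz 1', 'Reading 2', 'Final Exam'];
const visited = [true, true, false, true, true]; // interacted vs. skipped

const journey = items
  .map((item, i) => (visited[i] ? `[${item}]` : '(skipped)'))
  .join(' -> ');

console.log(journey);
// [Intro Video] -> [Reading 1] -> (skipped) -> [Reading 2] -> [Final Exam]
```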

The Course Journey Map evolved out of the concept of a journey map, a tool that is utilized in the human-centered design process to visually tell the story of a user’s journey through an experience and/or product. Journey maps provide a shared foundation for conversations around user experience that designers and developers can use to identify and brainstorm around needs, pain points, and opportunities.

At Academic Innovation, we took a data-driven approach to journey mapping, first establishing learner group archetypes via a machine learning clustering technique. At first, using this strategy to establish learner archetypes felt like a gamble. After all, the algorithm we used is referred to in the machine learning world as unsupervised, meaning that there was minimal human input to guide the algorithm in sorting learners based on their patterns of completing various course items.

Would this algorithm, which was blind to learner motivations and demographics, be able to sort learners into meaningful groups based solely on how they move through the course? Essentially, we were betting that learner needs were distinct enough within an online course that we could identify different learners from the bottom up.
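
To make the bet concrete, the sketch below shows the flavor of that kind of unsupervised clustering, using a bare-bones k-means over invented 0/1 item-completion vectors. (Our actual pipeline and algorithm choices are not shown here; this is purely illustrative.)

```javascript
// A bare-bones k-means sketch over binary item-completion vectors.
// Each learner is a 0/1 vector: did they complete course item i?
// Data and seeding are invented; this is not the production pipeline.
function kMeans(learners, k, iterations = 20) {
  // Naive seeding: copy the first k learners as initial centroids.
  let centroids = learners.slice(0, k).map((v) => [...v]);
  let labels = new Array(learners.length).fill(0);

  for (let iter = 0; iter < iterations; iter++) {
    // Assignment: each learner joins the nearest centroid
    // (squared Euclidean distance).
    labels = learners.map((v) => {
      let best = 0;
      let bestDist = Infinity;
      centroids.forEach((c, j) => {
        const d = v.reduce((sum, x, i) => sum + (x - c[i]) ** 2, 0);
        if (d < bestDist) {
          bestDist = d;
          best = j;
        }
      });
      return best;
    });

    // Update: each centroid moves to the mean of its members.
    centroids = centroids.map((c, j) => {
      const members = learners.filter((_, i) => labels[i] === j);
      if (members.length === 0) return c; // keep empty clusters in place
      return c.map((_, dim) =>
        members.reduce((sum, v) => sum + v[dim], 0) / members.length
      );
    });
  }
  return labels;
}

// Example: six learners, five course items (1 = completed).
const completion = [
  [1, 1, 1, 1, 1],
  [1, 1, 1, 1, 0],
  [1, 1, 1, 0, 1],
  [1, 0, 0, 0, 0],
  [1, 1, 0, 0, 0],
  [0, 1, 0, 0, 0],
];
console.log(kMeans(completion, 2)); // [0, 0, 0, 1, 1, 1]
```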

As it turns out, learner clusters coalesced into such distinct groups that when we mapped learner motivations and demographics back onto the cluster groups, we were left with an unmistakable taxonomy of learners. In the Programming for Everybody course, for example, we discovered four interrelated learner groups.

To start, there were those who told us they were primarily career-oriented and wanted to complete a certification. Then there were those who were primarily interest-oriented, taking the course just because they wanted to. Under each of those umbrellas were two groups: those who completed most of the course items, and those who completed only a few.

The four groups in Programming for Everybody moved through the course in strikingly different ways. Those who were career-oriented focused on completing graded assignments, which were required for earning a certification. On the other hand, those who were more interest-oriented tended to interact with a broader range of assignments regardless of whether or not the items were required for passing the course. Of those who completed few assignments overall, the interest-oriented group made it further into the course than those who were seeking a certification.

 

Returning to the subway metaphor, our journey mapping process helps us understand learner success in terms of an actual journey. While there may ultimately be a destination in mind, different learners have different needs and interests along the way. Some learners want to reach a given destination as fast as possible, stopping only when necessary. Others prefer to take their time, stopping at every landmark of interest. Some are explorers who may not even have a destination in mind; they just want to see what’s out there.

If we think about a course in these terms, success becomes more about meeting learners’ individual goals than ensuring course completion for every learner. In our case, journey mapping ultimately reaffirms what the learner voices above have been telling us: Not all learners are here for the same thing.

The next step for our Course Journey Maps is to speak directly with these different learner voices. We are reaching out to learners from each cluster to hear about their experiences. Using what we learn from these interviews, we will then build out different learner archetypes for each group depicting characteristics like their goals, how they perceived success in their learning, and how that impacted their approach to engaging with the course material. Keep your eye out for a blog post later this summer describing our findings from the next step in developing the Course Journey Maps.  

Exploring the Future (and Present) of College Credentials with Sean Gallagher

Barry Fishman, Faculty Innovator-In-Residence, Arthur F. Thurnau Professor of Information and Education
@barryfishman

For our fourth and final AIM:TRUE session of the 2018-19 academic year, we were joined by Dr. Sean Gallagher, founder and Executive Director of Northeastern University’s Center for the Future of Higher Education and Talent Strategy and Executive Professor of Educational Policy at Northeastern. Sean is a national thought leader on the topic of alternative models for college credentials and degrees with more than two decades of experience in higher education.

AIM:TRUE stands for “Academic Innovation at Michigan: Transforming Residential Undergraduate Education.” The talk series is part of an effort to rethink the premises of undergraduate education that we call “The Big Idea” (to be expanded upon later). The Big Idea is a response to the following question: “Given the capabilities and resources of a major public research university like the University of Michigan, how would you design residential undergraduate education, if you could start with a blank slate?” It is an intriguing question, full of possibilities. A group of faculty, staff, and students from across the University of Michigan is pursuing those possibilities, with hopes of launching a new kind of undergraduate program.

The AIM:TRUE talk series is designed to inform and provoke our thinking around what is possible in education. Our first session featured the leaders of the Mastery Transcript Consortium, a group working to design a new transcript for college applicants that reflects students as unique individuals and not as a set of numbers and list of activities. Our second session was a talk by Dr. David Scobey, Director of the Bringing Theory to Practice Project at the Association of American Colleges and Universities. Dr. Scobey asked us to consider the multiple challenges facing higher education, especially in relation to the multiple publics it is supposed to serve. Our third session featured a range of projects in which the University Library is partnering with instructors across campus to support innovative forms of pedagogy aligned with Big Idea learning goals.

We invited Sean Gallagher to campus to talk with us about the changing landscape for higher education degrees and credentials, in a session entitled: “What does college prepare students for? Employer demand and online credentials.” In his presentation, Sean shared evidence of how the landscape for educational credentials is evolving, based on surveys and research from a range of sources, including his own center at Northeastern. Employer demand for educational credentials is growing, and the changing nature of work is driving an escalation in the level of educational qualifications required for jobs. A job that once preferred only a high school diploma may now require a bachelor’s. Jobs that often required a bachelor’s in the past now prefer a master’s. And beyond that, a majority of employers now believe that their employees will—at all educational levels—need continuous lifelong learning to stay current, and this translates into more credentials.

What does all this mean for higher education? Will university degrees be threatened by the rise of “micro-credentials” (digital badges) and blockchain? For the moment, this issue seems more hypothetical than existential for universities. Employers do not yet really understand or value micro-credentials (though awareness and experience continue to grow), and while it is becoming possible to be hired without a university degree, employers still seem to prefer degrees. In this environment, universities seem to be focusing on making their degrees available to more people, and a primary vehicle for this is online degrees, often delivered via MOOC platforms; one example is the new Master’s in Applied Data Science online program from the U-M School of Information. Gallagher has observed in a recent EdSurge column that, as interest in online degrees grows, “prestigious” institutions are leading the way. And as we move down this path, it is important to pay attention to other emerging trends, such as hiring practices that focus on specific job candidate skills that are usually obscured behind degrees. The rise of digital platforms to support online degrees can also support more granular evidence of student learning, performance, and accomplishment. As we build our online offerings, are we also positioning ourselves to support the new world of hiring?

To learn more, you can view Dr. Gallagher’s slide deck here or watch a recording of his talk below. 

We also recommend Sean’s excellent book, “The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring.”

Finally, we are pleased to announce that the Big Idea project is getting a new name. From here forward, we will be referring to this effort as the TRUE program, which stands for (as you know by now) Transforming Residential Undergraduate Education. Stay tuned for more talks in the AIM:TRUE series in the next academic year, and for more announcements and posts related to the TRUE program. Stay TRUE!

Previewing the 2019 Gameful Learning Summer Institute

Evan Straub, Learning Experience Designer – Gameful/Connect
@estraub

When I talk to people about gameful learning, I frequently hear skepticism about the idea of turning school into a “game” or into something “fun.” To consider what makes gameful learning so successful, we have to unpack why we consider games “fun.” Although we frequently associate a positive emotional reaction with playing a game, some games are more entertaining than others. Therefore, we have to reflect on when and why playing a game becomes enjoyable.

Ask a competitive athlete in the middle of a tough competition whether they are “having fun” at that moment. Most likely, they are so engaged in thinking about the game play, the strategy, and optimizing their physical performance within the competition that the idea of “enjoyment” is probably not the first thing that comes to mind. Instead, we talk about the experience of “challenge.” While challenge in and of itself is not generally considered pleasant (at least in the moment), it can make success extremely satisfying.


Evan Straub, Learning Experience Designer, leading the GradeCraft workshop at the 2018 Gameful Learning Summer Institute

Consider, for example, a game based primarily on luck versus one based on skill: compare what it takes to win the card game “war” — where the winner is simply whoever happens to have the higher card at that moment — to a game like chess, which involves deep strategy to defeat an opponent. Winning at chess will most likely be the more satisfying experience. Often, you will even hear a defeated player express gratitude for the experience of playing a well-matched opponent; the challenge was the reward, regardless of the outcome.

Gameful learning is trying to capture that same experience. Learning should be challenging. Learning science research consistently suggests that we learn more when we are actively engaged in the topic and when the learning is appropriately challenging. Similar to how games encourage differentiated paths, we know that every instructor is a little different as well. Therefore, at the 2019 Gameful Learning Summer Institute, we are so pleased to highlight educators who are willing to share what they have learned in their own classrooms.


Attendees of the GradeCraft workshop at the 2018 Gameful Learning Summer Institute.

One theme that has emerged from our presenters is how to use role-play in the classroom. Naomi Norman, Associate Vice President for Instruction at the University of Georgia, and T. Chase Hagood, Director of the Division of Academic Enhancement at the University of Georgia, are returning to talk about their use of “Reacting to the Past,” which puts learners in control of a situation, guided by historic texts. A similar session last year was incredibly popular, so we are thrilled they are returning! Similarly, Nick Noel, Instructional Designer and Media Producer at MSU Information Technology Services, is exploring how educators are similar to “game-masters” and how tabletop role-playing games could be adapted for education settings.

We are also so excited to have Angie Romines, Senior Lecturer of English at The Ohio State University, presenting on how to create and use escape rooms as a collaborative pedagogy. An escape room is, we hope, never a truly authentic task; however, Angie notes that success “can only happen when participants collaborate, bounce ideas off of each other, and get comfortable with failure.”


Erin Baumann, Associate Director of Professional Pedagogy at the Harvard Kennedy School, gives the keynote presentation at the 2018 Gameful Learning Summer Institute.

Finally, I would be remiss if I didn’t mention our exciting keynote speaker, William (Bill) Watson, Associate Professor of Learning Design and Technology at Purdue University. His work with the Serious Gaming Center at Purdue explores how games can create engaging and innovative educational opportunities at all levels of instruction.

Of course, there are many more opportunities for new ideas at the 2019 Gameful Learning Summer Institute. We hope that you will join us, feel challenged to take some ideas back to integrate into your own teaching, and have fun.

 

 

Register for the 2019 Gameful Learning Summer Institute at the event registration page by Friday, July 5!

Reimagining Assessment in Undergraduate Education: Progress, Resilience, Mastery

Barry Fishman, Faculty Innovator-In-Residence, Arthur F. Thurnau Professor of Information and Education
@barryfishman

Cynthia Finelli, Director, Engineering Education Research Program; Associate Professor, EECS and Education
@cindyfinelli

Melissa Gross, Associate Professor of Movement Science, School of Kinesiology, and Art and Design, Penny W. Stamps School of Art and Design
@MMelissaGross

Larry Gruppen, Director, Master of Health Professions Education program, Professor, Dept. of Learning Health Sciences

Leslie Rupert Herrenkohl, Professor, Educational Studies, School of Education

Tracy de Peralta, Director of Curriculum and Assessment Integration, Clinical Associate Professor, Dental School
@tracydeperalta4

Margaret Wooldridge, Director, Dow Sustainability Fellows Program, Arthur F. Thurnau Professor, Departments of Mechanical Engineering and Aerospace Engineering

 

Given the resources of a major public research university, how would you design undergraduate education if you were starting with a blank page? That is the question we explore in The Big Idea project, and our answer is that undergraduate education should reflect the true strengths of the institution. In the case of the University of Michigan, those strengths are research, discovery, and real-world impact. The Big Idea undergraduate program is problem-focused, with students directly engaged in research and scholarship as they work towards ambitious learning goals. Learners who meet these goals will be prepared to address the world’s pressing—and often ambiguous—problems. The program is designed to emphasize progress towards mastery — thus, it does not use grades, courses, or credit hours to mark student progress or readiness-to-graduate. In this post, we first discuss some critical features of the current higher education landscape and explain why we think they are ripe for re-examination, and then we outline how assessment for and of learning will work in the Big Idea program.

 

How would you design undergraduate education if you were starting with a blank page?

 

How Business-as-Usual in Higher Education Works… Just Not for Learning

The traditional mechanisms for measuring progress and readiness-to-graduate in higher education are grades, grade-point averages (GPAs), and credit hours. If you mention these terms, almost anyone involved with higher education will know what you are talking about. These are combined with course sequences that define both (a) the general education or “lower division” phase of college, where students pursue the broad distribution requirements usually associated with the liberal arts, and (b) the “upper division” years of college, where students pursue a focused major. These mechanisms have served higher education for more than a century, and they offer a relatively straightforward way to direct student learning pathways and to rank the relative performance of learners within those programs. However, in providing structure, these mechanisms also limit learners and curtail the potential of educational programs in key ways. In this section, we explore these limitations to explain why we are taking The Big Idea in a new direction.

 

Grades and GPAs: What Do They Measure?

Assigning grades as a way to record or report students’ performance feels like a naturally occurring practice in formal education. But the use of grades, like many components of modern education practice, did not become widespread until the early-to-mid 1900s (Brookhart et al., 2016). Grades were part of a movement to make education more “scientific.” These efforts were led by Edward Thorndike, a prominent behavioral scientist who was instrumental to the spread of scientific management and measurement in education. The work of Thorndike and others led directly to the standardized testing movement that defines much of K-12 education and college admissions in the United States today (Rose, 2016). The history of 20th century education reform has been described as a struggle between the constructivist and project-based ideas of John Dewey and the instructivist and standardized approaches of Thorndike. The consensus view is that Thorndike won and Dewey lost (Lagemann, 2000). As a result, the current structure of undergraduate education is more about the sorting of students for purposes that come later—such as college, graduate school admissions, or employment—than it is about supporting the learning of the students involved.

The use of grades actually removes information about student learning from the academic record-keeping process. Even when a letter grade is based upon a strict set of criteria about what a student has learned, the letter itself communicates little to the world beyond the classroom where it was issued. If a student has earned an “A,” we know that they did well in the course, learning (hopefully) the majority of the material taught. But what does a “B” mean? We generally assume that it means the student has learned roughly 85-90% of the material, but what material was not learned? If the course is part of a sequence, later instructors are given little information about the understanding or capability of their incoming students. “Grading on the curve,” meant to normalize outcomes across students, further masks information about student learning. A grading curve may communicate comparative performance, but without reference to the goals of the course.

 

The use of grades actually removes information from the academic record-keeping process.

 

The problem becomes even more pointed when trying to gauge a student’s overall learning in college. Currently, the GPA is the primary tool for reporting overall accomplishment. High GPAs are seen as signs of academic accomplishment, and they are rewarded with honors and other recognitions. Employers and graduate schools often use GPA as a sorting mechanism. This is problematic for several reasons. One problem is that GPAs can be “gamed,” with students seeking to take courses solely for the purpose of boosting their overall GPA. Another problem is that a great deal of the variance in a student’s final GPA can result from a student’s first-year grades, which are earned as students adjust to the new environment of college. This places first-generation students or others facing transitional challenges in college at a long-term disadvantage that does not reflect their actual capability upon graduation. Some institutions, such as Swarthmore College, have declared that all classes in a student’s first semester will be recorded only as “Credit/No Credit” to reduce anxiety and encourage risk-taking during this important transitional period.

Another reason that GPAs are problematic involves the growing concern that grades—or more accurately, a focus on grades as indicators of self-worth or future opportunity—are a contributor to the growing mental health crisis on college campuses. Many students, especially at selective universities, have never had an experience of academic “failure” in the form of a low grade or standardized test score. Yet learning scientists have long known that “failure” is a key component of successful learning. If a learner only ever gets the right answer or only ever performs at a high level, there is a good chance that they weren’t truly being challenged in the first place. Furthermore, knowing the “right” solution to a problem or challenge is often not as revelatory as working to understand why a solution is “wrong” and how to repair it. Deep learning is not about knowing answers as much as it is about understanding problem and solution spaces. The typical approach to grading in higher education imposes penalties on learners for taking on challenges and not succeeding on the first try, instead of encouraging continuous progress towards mastering the material. Think, for instance, of courses where the entire grade is based on a single exam or a small set of exams.

Healthier and more productive grading systems (such as the gameful learning approach pioneered at U-M) emphasize progress and “safe” or “productive” failure, encouraging students to work beyond their comfort zones with confidence that they are not being adjudicated on each result. In the Big Idea program, we choose mastery-based assessment over traditional grading systems to support continuous effort and progress, and to reflect true student accomplishment rather than averages or comparisons.

 

Credit Hours: The Time Clock of Education

The credit hour, sometimes referred to as a “Carnegie Unit,” was created in the early twentieth century by the Carnegie Foundation for the Advancement of Teaching as part of their effort to create a pension system for college professors. To qualify for the Carnegie pension system, institutions were required to adopt a range of standards, including the newly-introduced Carnegie Unit (Silva, White, & Toch, 2015). From today’s vantage point, those who originally conceived of the credit hour might be surprised to see how its use has expanded to become the “time clock” for virtually all aspects of educational programs. The original metric was useful for determining the amount of time students spent in contact with instruction, but with expanding options for co-curricular, online, and other forms of learning, the question of what should “count” as learning or instruction has become muddied. Furthermore, the amount of time devoted to instruction does not necessarily translate to learning. As we’ve heard at least one critic observe, “If we’re measuring seat time, that’s the wrong end of the student.”

 

The question of what should “count” as learning or instruction has become muddied in higher education.

 

The Big Idea is based not on time, but on student accomplishment. We expect students to move at different paces and to approach their learning in a sequence that reflects personalized pathways. Therefore, we are designing the program in a way that we believe will take students roughly the same amount of time to complete as a “traditional” bachelor’s, but does not use time to regulate progress.

 

General Education, Majors, and Degrees

Lower-division undergraduates take a “distribution” of courses intended to expose them to different ways of thinking and expression, in order to develop intellectual breadth. Upper-division students engage in a series of courses defined by their major, designed to develop intellectual depth. Readiness for graduation is measured by earning at least a passing grade in all of the courses specified by the selected program of study and by accruing the required number of credit hours. This system is designed to provide both guidance to students in selecting courses and managerial efficiency in terms of university planning. Unfortunately, it also does little to promote learning and can suppress individual agency.

Within the system of distribution requirements and majors, it is unusual for a degree-granting program to employ program-level assessment of individual student learning as the qualification for graduation. Normally, assessment is conducted at the course level, with students being assigned grades based upon their performance within courses, and with those grades being compiled and reported as an average GPA across courses. If a student takes a predefined set of courses and maintains an acceptable GPA, that student may graduate with a degree in that area. In specialized cases, undergraduates may complete a capstone or summative project, such as a thesis. But for the majority of programs these are optional exercises, and the assessment criteria for summative projects are not usually aligned with any program-wide criteria for learning. In the Big Idea project, we have come to refer to business-as-usual as “faith-based” education, because one needs to have faith that students are learning something, even if the program’s assessment is not designed to record or report that learning.

Charles Muscatine, in his book “Fixing College Education,” argues that the current system of majors exists in no small part to create “peace among the departments” (Muscatine, 2009, p. 39), ensuring that students continue to take courses in different areas of the university, even if the divisions that are created are not grounded in reality. As Muscatine pointed out, there has never been research conducted on whether the division of learning into sectors like “humanities,” “science,” and “social science” has real benefit to learners, or even reflects a valid distinction among different ways of thinking. He recommends distinguishing between disciplines in terms of the methods employed for sensemaking—“logical, empirical, statistical, historical, critical (analytic, evaluative), creative” (Muscatine, 2009, p. 40)—and making sure students have practice in each. This is what might once have been described as a truly liberal education, meant to liberate the mind and broaden our ability to think in flexible ways.

 

Assessment in the Big Idea: Business-as-Unusual

As we introduce our plans for assessment in the Big Idea, we note that, as with other elements of our design, the ideas presented here are not necessarily new; similar approaches have existed in different educational contexts in the past, and similar ideas continue to be employed in specialized applications today. However, the use of these practices at the core of a degree-granting undergraduate program within the modern research university is unusual.

In contrast to the checklist approach to graduating with a major described in the previous section, the Big Idea program starts with an ambitious set of program-wide learning goals, and has only one criterion for graduation: that a student has met all the goals at an acceptable level, as evidenced by his or her “mastery transcript,” a document designed to record accomplishment, provide evidence of that accomplishment, and allow for tailoring to meet different student, program, employer, or other needs. (Our thinking about this kind of documentation is inspired by the work of the Mastery Transcript Consortium.) There are no required courses, and no minimum GPA. In fact, we do not intend to record or report grades for students in the Big Idea program. Nor is progress measured by the number of credit hours completed, which is a metric of time rather than learning. What we are interested in is the progress a student makes towards mastery of the learning goals.

 

In the Big Idea program, there will be no required courses, and no grades… what counts is the progress a student makes towards mastery of the learning goals.

 

Our approach to assessment is designed to promote personalization of learning pathways, agency, and self-authorship. It is designed to promote resilience and personal growth. It is designed to support learners from diverse backgrounds. Where most current assessment paradigms are geared towards ranking and sorting students, the Big Idea assessment model is designed for transparency with respect to learning.

 

Assessment for Learning

There are two primary types of assessment: formative and summative. Summative assessment is a report of student knowledge, skill, or accomplishment at the end of some defined period or event, such as a course or high school. Examples of summative assessment include final exams, which are typically used to measure end-of-course understanding and serve as the end of a student’s relationship with a course and instructor, and the SAT or ACT tests, which are meant to “sum up” a student’s academic potential at the end of secondary school and provide a measure of the student’s readiness for post-secondary education. Formative assessment, on the other hand, is meant to inform learning, to give feedback on student progress, or to serve as a milestone towards a larger goal. Assessment in the Big Idea program is, by design, intended to be almost entirely formative.

Students in the Big Idea are expected to always be making progress towards the learning goals. Feedback on learning will come from many different places in the program — research supervisors, faculty across different learning experiences (including courses), and members of various communities where students conduct research and learning. The learning goals themselves are not meant to be a “final” report of a student’s potential or accomplishment; rather, they represent a certain level of attainment that can continue to be built upon throughout a learner’s life and career.

 

Students in the Big Idea are expected to always be making progress towards the learning goals.

 

Paths Towards Competence and Mastery

To emphasize the importance of progress and practice towards mastering the learning goals (no learning goal is a box to be checked), we describe paths towards mastery as encompassing several levels of proficiency: Awareness, Literacy, Competency, and Mastery. These terms are defined as follows in the Big Idea program:

  • Awareness

Students who achieve this level know that this goal, practice, or skill exists, and they understand how it fits within the larger field or profession.

  • Literacy

Students who achieve this level are in the middle of learning this goal, practice, or skill: they understand its dimensions, can apply or demonstrate it in a basic manner, and are able to learn more about it through additional work. Such a student could play a supporting role in a project that employs the skills or practices inherent in this goal, where someone else is leading, and would understand what the other person was doing.

  • Competency

Students who achieve this level are ready to begin professional work requiring this practice or skill. A competent student can perform a task requiring this skill on their own, knows how to ask well-formed questions in the area or provide sound answers to others’ questions, and could advance their understanding through self-study.

  • Mastery

Students who achieve this level are ready to employ the practices or skills embodied in this learning goal in the real world. Such students could supervise, guide, or teach others with respect to this goal. We note that “Mastery” for the purposes of graduation from the Big Idea program is not necessarily lifelong mastery; our rubrics will emphasize areas for ongoing growth even beyond our program goals.

Note that while we expect all learners in the program to achieve competency in all the learning goals, we expect each learner to achieve mastery in only a subset of the overall learning goals. Which goals a learner masters will depend on their particular focus and choices made during undergraduate study.

 

Getting Started (even before the “start”)

We expect that students in the Big Idea will arrive with some initial proficiency in many of the learning goals for the program, and we will invite students to present evidence of that learning. This is a natural outgrowth of the kinds of experiences one might have had in secondary education (including extracurricular experiences) that might lead a student to be interested in the Big Idea in the first place. We note, however, that traditional markers of student “accomplishment,” such as Advanced Placement scores, will not be considered evidence of learning in and of themselves. We will invite students to present a case for their current level of learning that might include information about courses taken or tests passed, but that must also demonstrate their actual knowledge or skills.

One of the first formalized activities of the Big Idea curriculum, to be conducted as part of a “Forum” experience (a multi-year home base for students, similar to homeroom in secondary education, with access to more experienced peers and an advising faculty member), involves helping students become familiar with the learning goals, understand why the goals were chosen and how they will be assessed, and gain exposure to a range of examples (perhaps provided by more advanced students in the program). We will also engage learners in self-assessment activities to allow them to calibrate their current level of proficiency in each of the learning goals.

 

Demonstrating Achievement

We will develop detailed rubrics for each learning goal, providing descriptions and examples at each level of learning towards each goal. The rubric is meant as a guideline for learning and for assessment, not an “answer key” or template. We expect broad individual variation in the way each learner expresses their current levels of learning, reflecting individualized interests, choices, and pathways.
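One way to picture such a rubric is as structured data: for each learning goal, a description and examples at every level. The following sketch is purely illustrative (the class names, field names, and rubric text are invented):

```python
from dataclasses import dataclass, field

@dataclass
class RubricLevel:
    description: str                                    # what work at this level looks like
    examples: list[str] = field(default_factory=list)   # illustrative artifacts

@dataclass
class Rubric:
    goal: str
    levels: dict[str, RubricLevel]  # keyed "awareness" through "mastery"

# An invented fragment of a statistics rubric:
statistics_rubric = Rubric(
    goal="statistics",
    levels={
        "literacy": RubricLevel(
            description="Applies basic statistical reasoning in a supporting role.",
            examples=["assisting with a supervised analysis project"],
        ),
        "competency": RubricLevel(
            description="Carries out an analysis independently and fields questions.",
            examples=["independent analysis of a public dataset, with reflection"],
        ),
    },
)
```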

Students will use an electronic portfolio as a tool to record their work and accomplishments, to share and present those accomplishments for evaluation and feedback, and (following graduation) to offer evidence of learning in the future. We intend the portfolio to be a “mastery transcript,” as it will serve as a replacement for traditional transcripts, among other uses. Portfolios have long been used as a means to encourage self-authorship by students, allowing them to shape the narrative of their accomplishments and even customize that narrative for different audiences and purposes. Electronic portfolios also allow for the inclusion of many different forms of evidence in support of claims of learning, including links to assessment evidence, and they can be assembled in different ways for different needs and audiences. Understanding how to communicate one’s own abilities using the portfolio (and its underlying data) will be an important component of the Big Idea program, related to the “Communication” learning goal.
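As a sketch of how a portfolio might link claims of learning to evidence and be reassembled for different audiences, consider the following (entirely hypothetical; we specify no implementation here):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    title: str   # e.g. "water-quality analysis program"
    url: str     # link to the artifact (code, report, recording, ...)
    goal: str    # the learning goal this evidence supports

@dataclass
class PortfolioEntry:
    claim: str                # e.g. "competency in statistics"
    narrative: str            # the student's own account of the learning
    evidence: list[Evidence]

def view_for_audience(entries: list[PortfolioEntry],
                      goals: set[str]) -> list[PortfolioEntry]:
    """Assemble a customized view: keep only the entries whose evidence
    touches the goals this audience cares about."""
    return [e for e in entries
            if any(ev.goal in goals for ev in e.evidence)]
```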

 


Digital badges, also known as micro-credentials, are one mechanism for tracking and reporting accomplishment that can be employed in an electronic portfolio. Digital badges have many useful affordances. They can be used to establish pathways towards complex learning, to record progress, and, when used as credentials, to signal accomplishment. Digital badges can also be used to expand assessment beyond our formal processes by encouraging students to make progress towards the learning goals in all areas of their life, whether as part of the formal activities of the Big Idea program or elsewhere.
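Badges of this kind are commonly represented as JSON assertions in the style of the Open Badges specification. The fragment below follows the Open Badges 2.0 vocabulary loosely, as an illustration only (a conformant assertion requires additional fields, such as recipient identity and verification information, and the badge name, issuer, and date here are invented):

```python
import json

# A simplified badge assertion in the spirit of Open Badges 2.0.
# All field values below are invented for illustration.
badge_assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "badge": {
        "type": "BadgeClass",
        "name": "Statistics: Literacy",  # one level of one learning goal
        "criteria": {
            "narrative": "Demonstrated literacy-level statistical work "
                         "per the program rubric."
        },
        "issuer": {"type": "Issuer", "name": "Big Idea Program"},
    },
    "issuedOn": "2019-06-15T00:00:00Z",
}
print(json.dumps(badge_assertion, indent=2))
```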

In addition to the portfolio-based mastery transcript, assessment in the Big Idea will include in-person interviews or performances. These sessions will be personalized to each student, allowing individual cases to demonstrate achievement in individual ways.

 

Formal (and Formative) Assessment in the Big Idea

Students in the Big Idea program are responsible for making an evidence-supported case for their progress towards or accomplishment of learning goals. There will be two levels of panel review for students to demonstrate achievement, both of which include review of the mastery transcript and in-person interviews or performances. As part of the overall learning process within the Big Idea, these two levels of review allow personalized assessment and feedback at a large scale with a manageable workload. The process is managed by the Forum/homeroom instructor, who (together with more advanced students in the Forum) can advise students on how to assemble their materials for review and on when they are ready to start the process.

The first level involves review by a panel composed of more advanced, “arm’s-length” students working under the supervision of a faculty member. This arrangement allows for frequent assessment of student progress at a large scale, and it is a key part of the learning process for the student panelists, who learn to give constructive feedback to more junior students with respect to each of the learning goals. The goals of this first-level review are both to mark progress and, more importantly, to provide feedback to the student. This review level is expected to be employed regularly for students at the awareness and literacy stages of proficiency. While the actual number of reviews might vary for each student according to their needs and pace, we would expect students to engage in this first-level review several times a semester with respect to different learning goals. We also anticipate that not all students will succeed in each review step (though we hope that feedback from Forum instructors and peers helps mitigate this). The Big Idea is designed to support progress towards meeting the learning goals, so we would not necessarily consider an unsuccessful review a “failure,” but rather progress towards eventual success.

As an example of how the first-level review might work, consider a student who believes she is making good progress on the statistical and computational learning goals (Ways of Knowing) and also on the resilience goal (Personal Good), because of difficulties she encountered while working towards these goals. The student discusses her readiness to be reviewed with her advisor and other students in Forum, collecting input on what evidence to include in her mastery transcript and how to assemble it. This evidence could include samples of her work in statistics and computation, such as a computer program that displays statistical analyses of a public health dataset about water quality (the dataset comes from a project the student is working on, led by a faculty member in the School of Public Health), along with a reflection statement about what was learned and how it represents progress towards achieving competency on those learning goals. To present evidence of resilience, the student writes a narrative discussing various challenges faced as she worked with this data and how she worked through those challenges. The panel reviews the material and provides feedback and questions to the student, and the student is invited to respond in writing. An in-person conversation could be scheduled for the student to meet with the panel for further discussion, if needed. Finally, the panel would issue feedback and a decision about what level of proficiency the student had reached for each learning goal. Students who believe they are ready to be reviewed for competence or mastery in particular learning goals would also use this student-run panel, and the panel would “approve” portfolios for review at the second level.
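The statistical artifact in this scenario might be as simple as the following sketch (the data, site names, and measurements are invented for illustration; the student’s actual program would load the School of Public Health dataset):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented water-quality samples; a real analysis would load the
# project's dataset instead of constructing data inline.
samples = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B"],
    "lead_ppb": [2.1, 3.4, 2.8, 9.7, 12.3, 11.0],
})

# Summary statistics per sampling site.
print(samples.groupby("site")["lead_ppb"].describe())

# A simple display of the distributions, of the kind the student's
# program might produce for the review panel.
samples.boxplot(column="lead_ppb", by="site")
plt.ylabel("Lead concentration (ppb)")
plt.title("Water quality by sampling site")
plt.suptitle("")  # suppress pandas' automatic group title
plt.show()
```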

The second, more advanced level of assessment involves a panel of faculty, advanced students, community members, alumni, and others who will review and give feedback to students. This panel will primarily hear cases at the competency or mastery level. Students will be encouraged to present cases that combine multiple learning goals (though we do not expect any single case to contain all of them), and over time they would engage in this second level of review for every learning goal. This stage will also include a public presentation and discussion, similar to a doctoral thesis defense. As part of the Forum activities, we would work to prepare students to be proficient in a range of presentation modalities (again, the Communication learning goal), recognizing that this is another area where students will vary. We hope to make these community events. Once the program operates at a scale where scheduling individual events becomes difficult, we envision an annual (or semi-annual) public celebration involving a poster fair, talks, and panels of students and others involved in the research.
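To summarize the two review levels in one place, here is a hypothetical routing rule (the function name and return strings are our own shorthand, not program terminology):

```python
from enum import IntEnum

class Proficiency(IntEnum):
    AWARENESS = 1
    LITERACY = 2
    COMPETENCY = 3
    MASTERY = 4

def route_review(claimed: Proficiency, peer_approved: bool = False) -> str:
    """Route a case per the two-level process described above: student
    panels handle awareness- and literacy-stage reviews and must approve
    a portfolio before a competency or mastery case reaches the
    faculty/community panel."""
    if claimed <= Proficiency.LITERACY:
        return "first-level student panel"
    if not peer_approved:
        return "first-level student panel (for approval)"
    return "second-level faculty/community panel"

print(route_review(Proficiency.LITERACY))                     # first-level student panel
print(route_review(Proficiency.MASTERY, peer_approved=True))  # second-level faculty/community panel
```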

 


Summary

The use of grades, credit hours, and majors has served to help the modern university “manage” the processes of undergraduate education, but these tools were not designed to support learning or make it more transparent. Eventually, these structures became the tail that wags the dog. The assessment infrastructure of higher education has evolved to emphasize efficiency and to simplify the management of learners, courses, and programs, but without a focus on learning or support for students’ individual differences or independence. Periodic attempts at reform often focus on the re-introduction of learner-focused ideas such as project- or problem-based learning, but these efforts strain against the boundaries of the existing infrastructure. We need “infrastructuring” work (Star & Ruhleder, 1996), a conscious reshaping of the practices, technological supports, and cultural norms that guide our thinking about assessment and support for learning in education. Our proposal for assessment in the Big Idea requires us to re-engineer the infrastructure of higher education assessment to emphasize progress, resilience, and eventually mastery of ambitious learning goals. Reshaping these structures is a key component of our plan to design undergraduate education to take best advantage of the resources and opportunities of a major public research university.

 

References

Brookhart, S. M., Guskey, T. R., Bowers, A. J., McMillan, J. H., Smith, J. K., Smith, L. F., … Welsh, M. E. (2016). A century of grading research: Meaning and value in the most common educational measure. Review of Educational Research, 86(4), 803–848. https://doi.org/10.3102/0034654316672069

Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.

Muscatine, C. (2009). Fixing college education: A new curriculum for the twenty-first century. Charlottesville: University of Virginia Press.

Rose, T. (2016). The end of average: How we succeed in a world that values sameness. New York: HarperOne.

Silva, E., White, T., & Toch, T. (2015). The Carnegie Unit: A century-old standard in a changing education landscape. Retrieved from http://www.carnegiefoundation.org/resources/publications/carnegie-unit/

Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces. Information Systems Research, 7(1), 111–134. https://doi.org/10.1287/isre.7.1.111