
Shobita Parthasarathy on policy in the era of AI and big data, and how technology reflects the values of those who create it

“Justice and Equity in Technology Policy” teaches learners how inequity and injustice can become embedded in technology, and how those issues can be addressed

Sean Corp, Content Strategist

More than ever, technology is shaping, and being shaped by, public policy. This has an enormous impact, particularly for marginalized communities. As we rely ever more heavily on big data and AI-driven decision-making, evidence shows that these algorithms can reinforce social biases against women, people of color, disabled people, and others.

The online course “Justice and Equity in Technology Policy” helps learners understand how inequity and injustice can become embedded in technology, science, and associated policies, and how these issues can be addressed.

Taught by Shobita Parthasarathy, a professor of public policy and director of the Science, Technology and Public Policy program at the Gerald R. Ford School of Public Policy, the course covers a wide array of topics in technology policymaking and the politics of innovation.

Professor Parthasarathy sat down to answer some questions about her course, now available on Michigan Online and Coursera, and to discuss examples of technology policy in action.

Shobita Parthasarathy, professor of public policy and director of the Science, Technology and Public Policy program. | Photo courtesy of the Gerald R. Ford School of Public Policy.

1. You studied biology and are well-versed in science, but have shifted towards analyzing science and technology from a social and policy perspective. What led you to study this intersection of science, society, and policy?

I came of age as the Human Genome Project, the Clinton-era effort to map and sequence the whole human genome, was coming to a close. There was great excitement about how this information could revolutionize healthcare and allow us to lead longer, healthier lives. But some worried about how this newly available genomic information might affect individual autonomy and privacy, and create new social stigmas (this resonates with concerns about AI and machine learning today). I was very interested in these questions, and wanted to study the social implications of this new area of science and technology, and help develop public policies that would maximize the benefits while minimizing the harms.

2. You’ve said that this course is as useful for a scientist as it is for those working on public policy. How important was it for you to create a course that could speak about these important issues in a way people from various backgrounds could appreciate?

The bottom line is that technology is increasingly important to our lives and work. It is no longer just the concern of engineers and scientists, but also of policy professionals and social workers who use machine learning algorithms to allocate government services, and of community organizers who want to ensure that their voices are heard on how renewable energy and vaccines are deployed and accessed at the local level. And this isn't primarily a matter of understanding the technical details; it's about knowing how society shapes and is shaped by technology, and how these professionals and communities might shape development and regulation to ensure technology really serves public needs.

3. Everyone experiences the proliferation of technology in their daily lives, but might be less clear on how policy decisions driving these technologies impact us. Can you provide any examples?

The first category is innovation policies, which shape which technologies are developed and made available to us. The most famous recent example is Operation Warp Speed in the United States, which ensured rapid development and distribution of COVID-19 vaccines across the country. But innovation policies can also restrict access: while our patent system is designed to stimulate innovation, it often makes new technologies expensive and difficult to access.

The second category is regulatory policies. Data privacy laws in the European Union, California, and Illinois shape what kinds of information can be collected about you and how it can be used. Meanwhile, cities across the country are banning the use of facial recognition technology, concerned that it is biased in terms of race, gender, and disability status.

Finally, technology is used to carry out policy objectives. In Detroit, law enforcement is using facial recognition technology in the hopes that it will protect businesses and residences. And around the world, we see governments starting to use machine learning algorithms to determine who should have access to social services and other benefits.

4. One of the major myths you debunk in your course is that technology is inherently objective and cannot be biased. Can you expand on the ways technology reflects the values of those who create it, and how that ultimately impacts some users more than others?

We like to think that science and technology are outside of society, and therefore neutral. Perhaps it helps us psychologically, to think that there is a neutral place away from politics and the challenges of our daily lives. But science and technology are social institutions, just like the law, universities, and health care systems. We can see this in all sorts of ways.

Consider the spirometer, the technology used to measure lung function. The American plantation physician and slaveholder Samuel Cartwright, an early and influential user of the instrument, assumed that Black people naturally had inferior lung capacity, even though he tested it only on enslaved people whose lung capacity had likely been harmed by violence and excessive labor. He then adjusted how results were interpreted to reflect this assumed difference. But the problem didn't end with Cartwright, which is where we see societal biases also playing a role.

Over the following decades, dozens of studies seemed to reinforce the racial differences in lung capacity, even though these studies didn't consider other variables like occupation or social class, and even after researchers had thoroughly debunked the biological basis of race. They simply assumed Cartwright was correct, and there seemed to be no reason to change their minds. Soon, these racist assumptions were coded into the machine itself: as the spirometer became a more sophisticated technology, it featured "race correction" software designed to accommodate the supposed racial difference. This had real impacts in the late 1990s, when the asbestos manufacturer Owens Corning challenged lung damage claims made by its Black employees.
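To make the mechanism concrete, here is a minimal sketch of how a "race correction" of this kind operates. The function names, the toy reference equation, and the 0.85 scaling factor are illustrative assumptions, not the actual firmware or reference equations of any real device, but the logic mirrors the documented practice: the predicted "normal" baseline is silently lowered for Black patients, so the same measured lung capacity scores as closer to healthy.

```python
# A minimal, illustrative sketch of "race correction" in spirometry scoring.
# All numbers and names here are hypothetical, chosen only to show the logic.

def predicted_fvc_liters(height_cm: float, age_years: int) -> float:
    """Toy reference equation for predicted forced vital capacity (FVC)."""
    return 0.04 * height_cm - 0.02 * age_years - 1.5

def percent_of_predicted(measured_liters: float, height_cm: float,
                         age_years: int, race_corrected: bool) -> float:
    """Score a measurement against the predicted 'normal' baseline."""
    predicted = predicted_fvc_liters(height_cm, age_years)
    if race_corrected:
        # The embedded bias: the baseline is lowered for Black patients
        # (a reduction of roughly 10-15% was typical in practice), so the
        # same measured capacity scores as closer to "normal."
        predicted *= 0.85
    return 100 * measured_liters / predicted

# One patient, one measurement: 3.2 liters, 175 cm tall, age 50.
print(percent_of_predicted(3.2, 175, 50, race_corrected=False))  # ~71%
print(percent_of_predicted(3.2, 175, 50, race_corrected=True))   # ~84%
```

If a compensation scheme treats anything above, say, 80 percent of predicted as healthy (that threshold is also illustrative), the uncorrected reading flags impairment while the "corrected" one does not. The worker's lungs haven't changed; only the baseline has.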

5. On the flipside, you provide some examples in your course about ways technology can advance equity. How can policy be developed to do that?

One good example is the Open Insulin Foundation, where affected communities and technologists are coming together to develop a community-centered and affordable approach to producing and delivering insulin, which is crucial to people with diabetes. Governments can foster these kinds of efforts that explicitly center the needs, priorities, and knowledge of marginalized communities. Another approach is to bring equity explicitly into research funding policies and regulation by considering the needs of marginalized communities at every step in the innovation process, including ideation, research, testing, and deployment. The point is to anticipate potential harms and address them early.

6. Are there some recent examples of how policy has positively reflected advances in equity and equitable thinking, and some negative examples?

The White House’s Office of Science and Technology Policy recently issued a Blueprint for an AI Bill of Rights, which provides a framework to help both technologists and policymakers put a set of principles (including, for example, algorithmic discrimination protections) into practice. The European Union’s AI Act, designed to regulate machine learning according to its potential social implications, also holds promise. Finally, India’s government-funded National Innovation Foundation identifies and fosters grassroots innovation, produced by low-income and marginalized communities.

7. The course concludes with sections on rethinking expertise, rethinking design, and rethinking policy. Can you expand on why you think we should be rethinking these core ideas, and how approaching them in a new way could benefit policymakers and scientists, and impact everyone?

Historically, we haven't thought much about equity in technology development. As a result, we need to rethink how we approach every stage of innovation, including policy. And that starts with who has relevant expertise for the conversation. It's crucial to remember that while we tend to think of scientists and engineers as the central experts in discussions about technology, we are usually trying to develop technology to help society. Therefore, we need experts in the social problems too. In sum, a much wider range of people have crucial perspectives on the development and deployment of technology. And marginalized communities in particular have crucial expertise if we want technology development that centers social equity and justice.


“Justice and Equity in Technology Policy” is available now on Michigan Online and Coursera. The course is free to University of Michigan students, faculty, staff and alumni. To learn more and to enroll, visit the course page on Michigan Online.
