On a breezy fall afternoon, philosophy professor Colin Koopman ponders a busy intersection near the UO. Cars and pedestrians stop, go and turn in a well-orchestrated dance.
"Imagine when automobiles first travelled the same dirt roads as horses and buggies," Koopman said. "No doubt there was chaos, fear and speculation about the future."
Like artificial intelligence, the automobile was a disruptive technology.
But society eventually figured out how to regulate cars and trucks. Koopman points to all the drivers and pedestrians doing what they need to do, without anyone directing traffic.
Railroads, radio, television and the internet were also disruptive, scary and exciting, Koopman said. Today, artificial intelligence sparks the same mixed emotions.
Koopman is director of the UO's New Media and Culture Certificate program, an interdisciplinary offering for graduate students. Through his teaching and research, he explores the politics of information and data with a focus on privacy and equality.
In his new book, "Data Equals: Democratic Equality and Technological Hierarchy," Koopman takes up questions about data, algorithms and equality. Technology is never neutral, Koopman asserts, and information technology often perpetuates and exacerbates social inequality.
Information technology is also driving people apart and posing existential threats to democracy, Koopman argues. During a walk through his campus neighborhood, he fielded questions from OregonNews about AI.

Q: What advice do you have for using AI chatbots?
Stick with what you know. Users with some degree of knowledge can get impressive results out of large language models. But they're not all-purpose solutions. If I'm using generative AI to piece together complex philosophical arguments, I can weed out the hallucinations (a fancy word for mistakes) because that's my specialty. But if I used it to compute rocket trajectories, I couldn't distinguish an obvious mistake from an elegant solution.
Q: So AI won't make us experts in everything?
Well-intentioned tech ambassadors are touting the promise of AI for democratizing intelligence. I don't find that promise convincing. Consider the internet, which was supposed to democratize information. It does deliver volumes of content to countless people, but it circulates misinformation and disinformation just as quickly. Those two sides of the coin cannot be separated.
Q: What's the result of all that internet misinformation and disinformation?
It's created a perfect storm for social separation and political polarization. We may have thought the internet was a force for democratization back in the 1990s, but I don't see how anyone could believe that today. The same forces are driving AI.
Q: Why does access to information divide us?
Personalization. Today the value of a search engine isn't just the content it finds but also the data about you it collects: your searches and posts. Corporations want information about searchers, not just searches. Over time, personalized results amplify small differences between users into bigger ones, because each person's searches shape the information they're shown next. Social media platforms magnify our differences the same way, walling off opposing viewpoints and dividing us. The algorithms keep feeding us more of what we react to, and the content careens toward extreme, dogmatic viewpoints.
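To make that feedback loop concrete, here is a minimal sketch in Python, assuming a toy engagement-weighted recommender; the topic names and the update rule are invented for illustration and aren't drawn from any real platform.

```python
import random

# Toy model of the loop Koopman describes: the feed serves more of
# whatever a user reacts to, so small initial differences compound.
CONTENT_POOL = ["local news", "sports", "outrage politics", "cat videos"]

def recommend(weights: dict[str, float]) -> str:
    """Pick a topic in proportion to the user's accumulated reactions."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def simulate(reacts_to: str, rounds: int = 5000) -> dict[str, float]:
    # Every topic starts with equal weight; each reaction nudges one up.
    weights = {topic: 1.0 for topic in CONTENT_POOL}
    for _ in range(rounds):
        shown = recommend(weights)
        if shown == reacts_to:        # the user engages with this topic
            weights[shown] *= 1.01    # so the algorithm feeds them more
    return weights

# Two users who differ only in what they happen to react to end up in
# very different information environments after enough iterations.
print(simulate("outrage politics"))
print(simulate("cat videos"))
```

Run long enough, the engaged-with topic crowds out everything else; the loop has no built-in brake, which is the dynamic behind the drift toward extremes.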
Q: Could AI divide us in the same ways?
Absolutely. AI falls prey to the same personalization problems the moment you log in to a chatbot. The value of a large language model to a corporation isn't just the content it spits out. As with internet search, the real value is your data: the personalized user history the company can turn into revenue. As people become more reliant on AI to do their thinking, the gulf widens, and we split off into isolated networks.
Q: Why is there suddenly so much personal data being used by AI?
It's not so sudden. It took decades for us to get wrapped up in data. We often think of data technology as something that came on the scene around 2000. Datafication, the transformation of our lives into discrete chunks of information, started much earlier. For more than a century, we've been investing ourselves in data systems. Society depends on them, but they also perpetuate inequality.
Q: Can't we opt out?
Not a practical option. Your credit score is a good example of a data point you need in our society. Want to buy a house or a car? The higher your score, the more you can borrow and the better your loan terms. That's one example of how data perpetuates inequality, and of why opting out isn't realistic: no credit score at all? The system can't even handle that. Credit scores also create barriers well beyond lending; they're used for all sorts of things, such as screening job applicants and renters or opening an account with your utility company. Any obligatory technology can reproduce inequality unless we take proactive measures to ensure it doesn't.
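As a hypothetical illustration of that last point, consider a score-tiered loan decision. The tiers and rates below are invented, but the structure shows how an obligatory data point both sorts people into better and worse terms and breaks entirely for anyone outside the system.

```python
def loan_offer(credit_score: int | None) -> str:
    """Toy score-tiered loan decision (invented tiers and rates)."""
    if credit_score is None:
        # The system assumes everyone is already "in the data":
        # having no score at all isn't a state it can process.
        raise ValueError("no score on file: application cannot be processed")
    if credit_score >= 740:
        return "approved at 6.0% APR"
    if credit_score >= 670:
        return "approved at 7.5% APR"
    if credit_score >= 580:
        return "approved at 11.0% APR"
    return "denied"

print(loan_offer(760))   # better terms for those the data already favors
print(loan_offer(600))   # costlier credit compounds existing disadvantage
print(loan_offer(None))  # raises: opting out isn't a state the system understands
```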
Q: What is the impact of AI on all that data?
Your life is limited by opaque, mysterious systems. AI analyzes vast datasets of personal information for companies, lenders and organizations, including those with good intentions. They all rely on AI to make crucial decisions, but they don't know why or how the models spit out the numbers they do. Even with the time and resources to try, it would be impossible to figure out. And there's junk in those vast, centralized datasets. I'm also interested in the data they leave out that could mitigate social inequality.
Q: Why don't we just get new data?
Producing and curating high-quality datasets requires human labor and money, and that's not how business is done anymore. In the late 1990s, when Google took internet search market leadership from Yahoo, the process shifted from huge numbers of people hand-curating webpage information to large-scale data centers and algorithms. That business model of automation has carried over into AI.
Q: What gives you hope?
Regulatory agencies can make AI technologies work for democratic purposes. We've managed to regulate other technologies, such as television and automobiles. There's no earlier-generation technology our grandparents used that remains unregulated because regulating it proved impossible.
Q: Any last words?
We all need to be aware of, and frequently remind ourselves of, the limits of AI systems. They're being designed, marketed and hyped, by the most leveraged companies in the history of capitalism, as all-purpose problem-solvers. They are not. Like any tool, though, these technologies perform well when used thoughtfully and judiciously.