As generative AI systems become more deeply woven into the fabric of modern life (drafting text, generating images, summarizing news), debates over who should profit from the technology are intensifying. A new paper in the Journal of Technology and Intellectual Property argues that the current copyright system is ill-equipped to handle a world in which machines learn from, and compete with, human creativity at unprecedented scale.
Frank Pasquale, professor of law at Cornell Tech and Cornell Law School, and co-authors Thomas W. Malone, professor of information technology at MIT Sloan School of Management, and Andrew Ting, professorial lecturer in law at George Washington University, describe a legal innovation called "learnright," a new intellectual property protection that would give creators the right to license their work specifically for use in AI training. Malone first proposed the basic idea in 2023; the new paper shows how the concept could work legally and economically.
The researchers say the idea stems from a growing imbalance. Today's largest AI systems are trained on vast datasets scraped from the internet: millions of books, articles, songs, photographs, artworks and posts. Some of the authors of those works are now suing AI companies, arguing that using their copyrighted work to train a commercial model without permission is a violation of the law.
Yet court rulings on the issue remain unsettled, especially around whether training counts as fair use. While some judges have signaled skepticism toward AI companies, others have suggested that training may indeed be lawful if considered analogous to a human reading a book.
"The ongoing legal uncertainty here creates problems for both copyright owners and technologists," Pasquale said. "Legislation that mandates bargaining between them would promote fairer sharing in AI's bounty."
The stakes are hard to ignore. Artists complain that their signature styles can now be mimicked in seconds. Journalists are seeing readers peel away as chatbots summarize the news without sending traffic back to publishers. And white-collar workers in law, design, marketing and coding now worry that the next AI upgrade could automate portions of their jobs using the very work they once produced.
The authors argue that this is not simply a legal puzzle but a moral one. From a utilitarian perspective, they say, society benefits when creative work continues to be produced, and that requires maintaining incentives for humans to keep making it. From a rights-based standpoint, they argue that tech companies vigorously protect their own intellectual property while dismissing the value of those whose work powers the models. And from the perspective of virtue ethics (which focuses on the type of character and habits that are essential to human well-being), they suggest that flourishing creative communities depend on norms of attribution and respect.
"Learnright law provides an elegant way of balancing all these competing perspectives," said Malone. "It provides compensation to the people who create the content needed for AI systems to work effectively. It removes the legal uncertainties about copyright law that AI companies face today. In short, it addresses a growing legal problem in a way that is simpler, fairer, and better for society than current copyright law."
The proposal would not replace copyright. Rather, it would add a seventh exclusive right to the six already granted to creators under U.S. copyright law (reproduction, adaptation, distribution, public performance, public display and digital audio transmission), specifically addressing machine learning. Just as Congress once added the digital audio transmission right to adapt copyright to a new technology, the authors say, it could now extend a new protection covering the submission of a work to an AI training process.
Under such a regime, companies building generative AI tools would license the right to learn from specific datasets, much as some already do with news archives or stock photo libraries. The authors say that market negotiations would naturally set fair rates, and that clearinghouses or collective licensing organizations could replicate successful models from the music industry.
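To make the clearinghouse idea concrete, here is a hypothetical sketch (not from the paper) of how a collective licensing body might split a training-license fee pro rata by how often each rightsholder's works appear in a licensed corpus, loosely analogous to the way performance-rights organizations apportion music royalties. The names and counts are purely illustrative.

```python
# Hypothetical pro-rata split of a single AI training-license fee.
# Rightsholder names and usage counts are illustrative, not from the paper.
def split_fee(fee: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Allocate `fee` to each rightsholder in proportion to how many of
    their works appear in the licensed training corpus."""
    total = sum(usage_counts.values())
    return {holder: fee * count / total for holder, count in usage_counts.items()}

payouts = split_fee(
    1_000_000.0,
    {"news_archive": 600, "photo_library": 300, "fiction_catalog": 100},
)
print(payouts)
# {'news_archive': 600000.0, 'photo_library': 300000.0, 'fiction_catalog': 100000.0}
```

A real clearinghouse would need far more nuance (negotiated rate cards, weighting by work type or length, audit rights), but the basic accounting, as in collective music licensing, is a proportional division of pooled fees.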
Critics may argue that such a right could slow innovation or burden startups. But the authors counter that unrestrained training could ultimately undermine the very creative ecosystem that AI depends on. They point to research suggesting that feeding models their own outputs over time can lead to "model collapse," reducing quality. Without a continually refreshed supply of human-generated art, journalism and scholarship, they say, AI's progress could stagnate.
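The intuition behind model collapse can be shown with a toy simulation (a minimal sketch, not drawn from the paper): a "model" that simply fits a Gaussian to its training data and is then retrained, generation after generation, only on its own samples. Because each finite sample slightly under-represents the tails of the previous distribution, the estimated spread typically drifts toward zero and the data's diversity collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-made" data, a standard normal with std = 1.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(201):
    # Fit the toy "model": estimate mean and standard deviation.
    mu, sigma = data.mean(), data.std()
    if gen % 50 == 0:
        print(f"generation {gen:3d}: std of data = {sigma:.3f}")
    # Retrain: the next generation sees only the model's own outputs.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Real generative models degrade in more complicated ways, but the mechanism the research describes is similar: successive generations of synthetic data lose the tails of the original distribution unless fresh human-created data is mixed back in.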
The paper arrives as lawmakers signal growing interest in regulating generative AI. A learnright, the authors argue, offers a clear path for policymakers: a middle ground that neither bans training nor leaves creators uncompensated.
"At present, AI firms richly compensate their own management and employees, as well as those at suppliers like NVIDIA," says Pasquale. "But the copyrighted works used as training data are also at the foundation of AI innovation. So it's time to ensure its creators are compensated as well. Learnright would be an important step in this direction."