The Digital Futures Institute presented two panels at the King's Festival of Artificial Intelligence.

The two events explored the digital futures of literature and magic, marking the return of the Digital Futures Institute's Living Well with Technology series, launched in 2024.
Digital Futures of Literature
The event, hosted by Professor Kate Devlin, featured Dr Clementine Collett, BRAID research fellow at the Minderoo Centre for Technology and Democracy at the University of Cambridge; Professor Genevieve Lively, Professor of Classics and Director of the Research Institute for Sociotechnical Cyber Security at the University of Bristol; and Dr Sarah Perry, Visiting Research Fellow in the Faculty of Arts & Humanities at King's, Booker Prize-nominated bestselling novelist and Fellow of the Royal Society of Literature.
The multidisciplinary panel tackled the promises and perils of generative AI in the literary world.

Dr Clementine Collett, who researches the impact of generative AI on writers and publishers, called for the voices of creatives to be centred in the discussion, and for channels to be created that feed back to those in positions of power. Dr Collett campaigns against the opt-out model, which the government proposed in its recent Copyright and AI Consultation.
The opt-out model means that creative works can be used to train generative AI models unless the creative has opted their piece of work out. It comes with a lot of difficulties and issues, one of them being that it's really difficult to opt out all your pieces of work. It also puts all the onus on the creatives instead of on the tech companies, who are benefiting from this work. The UK has incredible copyright law, but when we cannot see what these companies are using, we can't enforce it. The government is choosing not to be a world leader in this space, but to cosy up to the big tech companies.
Dr Clementine Collett, BRAID research fellow at the Minderoo Centre for Technology and Democracy, University of Cambridge

Professor Genevieve Lively introduced The Routledge Handbook of AI and Literature, the new book she co-edited with Dr Will Slocombe.
One of Professor Lively's chapters in the book is entitled 'Free spaces of imaginal adventure: voicing silence in AI and literature'. Speaking about how AI predicts the unwritten, Professor Lively pointed out that while AI tools are becoming increasingly advanced, they are still unable to understand silence.
AI wants to fill the silences and gaps with something. In AI research, there is a recognition that silence is under-researched. There is so much in literature and literary criticism that we can do to annotate silence, to try and expose its narrative, literary and poetic dynamics. I'm a little bit afraid that generative AI hoovers all that up and then actually learns to work with silence. But at the moment, that's still an area of significant weakness.
Professor Genevieve Lively, Professor of Classics and Director of the Research Institute for Sociotechnical Cyber Security at the University of Bristol

Dr Sarah Perry, author of The Essex Serpent, expressed her belief in readers - she thinks the use of AI will only elevate the literature that deserves its audience. 'Books are about something that lies underneath; it's about what's encoded. Until AI develops a sense of irony and metaphor, I feel quite untroubled,' Dr Perry said.
In my experience, the work that is purely generative AI, that is pure mimicry, fails. I'm quite optimistic and sanguine about the threat or not to my livelihood, because I believe in readers. I think they are not stupid. They can tell the difference between parody and mimicry, and something that actually has meaning.
Dr Sarah Perry, Visiting Research Fellow in the Faculty of Arts & Humanities at King's
Dr Sarah Perry also spoke about correctly identifying the nature of the threat that AI poses: 'I think there's an extent to which writers are trying to play both ends against the middle. On the one hand, they don't want their precious art to be replicated. But on the other hand, the only reason there's a real threat is because they're worried their income's going to go down - in which case, it's not art. We have to think really carefully about the nature of the threat, and if it's purely fiscal, then we're not talking about a threat to art.'
A recording of the Digital Futures of Literature event is available to watch online.
Magic, Minds, and the Mysteries of AI
In a blend of stagecraft and science, Dr Matt Tompkins, a professional magician and a researcher in visual cognition at Lund University, performed a live magic show that explored how we perceive and misperceive.

'A magician is just an actor playing the role of a magician,' Dr Tompkins said, referencing 19th-century illusionist Jean-Eugène Robert-Houdin. 'When you go to a modern magic show, you might see things that appear to be impossible. You might see things that violate the laws of physics, our understanding of how reality works. Things fall up, objects pop in and out of existence, and you might see things that seem like psychic powers that are outside the realms of known science. But what you are seeing is not necessarily what is occurring.'
Dr Matt Tompkins performs 'fake magic': what the viewer sees should feel impossible, even though they know it is not. In his research, he taps into principles of human psychology, ideas around illusions and the perceptual eccentricities of the human psyche. These tools can be used to understand what the future of AI might bring.
His talk featured live demonstrations of thought-reading, card tricks and perceptual illusions, all used to illustrate how easily the human mind can be misled.
Dr Tompkins explained how magic trick methodology is used in the laboratory to explore human cognition from an experimental psychology perspective, looking into sensation, perception, memory, reasoning and metacognition - the way we think about our own thinking.
In a magic trick, you have a secret method that creates an effect for the audience, and you have to hide the method. Hiding the method is called misdirection. When you do this at speed, the magician's body language, and the assumptions that you have psychologically about how hands and objects work, conspire to trick your brain into thinking that the object is in one place when it is not.
Dr Matt Tompkins, Postdoctoral Researcher at Lund University's Choice Blindness Lab
When designing his experiments, Dr Tompkins does not want to create the illusion of impossibility; rather, he wants participants to falsely attribute their experiences to a fictional algorithm. Metacognitive illusions, he says, can help researchers predict how people might respond to AI that does not yet exist.
Dr Tompkins inverts the third of British science fiction writer Arthur C. Clarke's three laws - the idea that any sufficiently advanced technology is indistinguishable from magic. In his experiments, he uses deception to convince participants that they are interacting with advanced artificial intelligence neurotechnology, always followed by dehoaxing - a debriefing in which participants learn they were misled. This creates a simulated experience of high-fidelity thought reading and mind control: seemingly real, but genuinely theatrical.
When you ask people to predict their own behaviours in future circumstances, even really mundane future circumstances, everyone, myself included, is terrible at that. If you ask them to play this kind of imaginative, fantastical game, the data you're going to get is going to be deeply flawed in terms of predictive value. We try and short-circuit that problem with lies. We create the simulation, and we use magic tricks to sell it. We create what is, for our participants, an empirical experience of some fantastical, extraordinary technology, which may or may not exist in the future. And this gives us a little window into how they might interact with such a system if it were real.
Dr Matt Tompkins, Postdoctoral Researcher at Lund University's Choice Blindness Lab
Creating regulation around AI and neurotech is really important, Dr Tompkins says. But the real challenge lies in creating frameworks, policy and regulation around imagined responses to imaginary technologies before they enter use: 'I can't make the technology real, that's not my department, but I can make the experiences of the technology real, and using that as a tool, we can get a window into how people might react to these things.'
