Last night I enjoyed a debate hosted by The Free Press and FIRE: “Will the Truth Survive Artificial Intelligence?”
I was frustrated by some bad ideas, by good ideas that went unanswered, and by issues that never came up at all. Here I want to explain how we should think about Truth and AI, and which of the ideas shared are worth preserving. A quick summary of the debate follows, then my commentary. They started with opening statements.
The Debate
Perplexity founder Aravind Srinivas, on the yay side, opened with the idea that because humans are innately curious, AI tools will be used to feed that curiosity. Fitting for the founder of a search tool.
Journalist and author Nicholas Carr, on the nay side, said that AI summaries remove the need to think. Thinking is how we find truth, and we’ll end up with passable mediocrity.
AI researcher Fei-Fei Li, on the yay side, gave examples of answering questions for children and building smart systems to prevent the elderly from falling. She is a roboticist, so this embodied intelligence is a win, but not germane to a debate about truth. She teaches top talent and sees how they successfully use AI.
Technologist and author Jaron Lanier, on the nay side, argued that our culture and business models are set up for failure here. Social media isn't inherently bad, but the culture around it makes us angry and anxious when we use it. And on the business front, the ad model means services push to sway us and bend the truth. He also urged appreciation for complex systems with unknown side effects.
Fei-Fei then argued that Nicholas was wrong because AI empowers the most thoughtful people. She also said AI isn't the problem, people are, which was nonsense, like arguing nuclear weapons aren't a concern because it's people who launch them.
Aravind pointed out that Perplexity is a subscription service at scale, so the ad-model risk of swaying attention is lower. Fei-Fei highlighted that competition means we'll get higher quality over time.
Fei-Fei also argued that the problem with generating an essay with AI isn't that the student no longer needs to think, but that the assignment needs to be updated to improve how we assess student knowledge. Motivation, she said, is the key concern.
Bari Weiss, the moderator, brought in quotes, like one from Marc Andreessen about how AI will be 1,000 times worse than centralized social media censorship because it will be one closed system. Aravind said open source will help, while Jaron correctly pointed out that even open-weight models are inscrutable today. Just today, xAI open-sourced Grok's system prompt after an employee injected disinformation.
Nicholas concluded with a plea for artists. They are in touch with truth, and their work suffers when AI synthesis is plausible, generic, and soulless. Jaron echoed this with a concern about attributing creators' work within the models and getting them paid fairly.
Aravind said early on that he didn't know how to debate and that AI had helped him prepare. He ended the evening by reading a prepared conclusion that said "the other side wants to paint a dystopian picture." Nicholas's concern about passable mediocrity was thus proven correct. Instead of listening and addressing the issues raised live, Aravind read a plausible, AI-prepared conclusion aimed at a phantom.
The yays lost the debate, measured by how much the audience swung from yay to nay. The evening started at 68-32 yay (Truth will survive) and ended at 45-55.
Going Deeper
I’m firmly in the yay camp, so let me make some arguments that Fei-Fei and Aravind didn’t.
Truth is not a state of being but a process in our society. We value truth-seeking, and many cultural institutions embody that value. To answer the question, we should look at these institutions in turn:
Education, how we train and assess the next generation
Research, the pinnacle of formal education, where we find new knowledge
Engineering, with technology, how we deploy and scale our knowledge
Security, how we protect what we value
Journalism, with media, how we explain what is going on
Art, as emotional and poetic truth
Education
The default view from educators is that a five-paragraph essay can now be produced by AI, and thus AI is cheating. Take a photo of a math problem and get the answer. Use a browser extension to do both, and let the AI take your test. Each is very bad, obviously: kids won’t learn the content or how to think. But as Fei-Fei mentioned, the assessments are at fault. In education, Bloom’s 2-sigma effect observes that students who get 1:1 tutoring perform two standard deviations better than students in a conventional classroom. That isn’t affordable at scale, but with AI we can achieve it. Our education system will adapt to deliver the best educational outcomes. Some institutions, like teachers unions or school district managers, might resist and slow this change. We spend close to $20K per student per year, but a $2K/yr, and maybe even a $200/yr, AI tutor might outperform our teachers: an AI that understands your level of knowledge, tunes its explanations, and has infinite patience, just for you.
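To make the 2-sigma claim concrete: assuming normally distributed outcomes, a median student moved up two standard deviations lands near the 98th percentile. A quick sketch:

```python
# Back-of-envelope: what Bloom's 2-sigma effect means in percentile
# terms, assuming normally distributed student outcomes.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

classroom = normal_cdf(0.0)  # median classroom student: 50th percentile
tutored = normal_cdf(2.0)    # same student after +2 sigma: ~97.7th

print(f"Classroom median: {classroom:.1%}")            # 50.0%
print(f"With 1:1 tutoring (+2 sigma): {tutored:.1%}")  # ~97.7%
```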
Fei-Fei was correct that motivation is key. It’s the intrinsic motivation to improve that will compel kids and parents to choose the best AI tools. Unfortunately this is a hard topic: do we even know what motivates or demotivates kids? Some think we need student-driven inquiry. We also need heroes, and in our age of miracles we’re getting more and more examples. Getting humans back on the Moon and to Mars. Colossal bringing back dire wolves. Boyan Slat building machines to clean our oceans and rivers. Can a personal AI tutor deliver the personal motivation to achieve the impossible?
That is hard to predict, but I already know there is a Cambrian explosion of task-specific tools in education.
Research and Engineering
What’s already very clear is that AI has an unambiguously positive impact on research and engineering. This was barely brought up in the debate. Software engineering has gone through a revolution where regular people can ship whole apps using tools like Lovable, and trained software engineers can produce and deploy code 10 times faster with Copilot, Cursor, and Codex. AlphaFold has released structure predictions for 200M proteins, which represents something like a millionfold speedup in biotech research.
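That millionfold figure is order-of-magnitude, but it survives a sanity check if you assume (my numbers, not the debate's) roughly a lab-year per experimentally solved structure versus roughly a GPU-minute per predicted one:

```python
# Rough sanity check on the "millionfold" claim, under assumed
# per-structure times: ~1 lab-year for an experimental structure
# (X-ray/cryo-EM) vs ~1 GPU-minute for an AlphaFold prediction.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

experimental_minutes = 1 * MINUTES_PER_YEAR  # ~1 lab-year per structure
predicted_minutes = 1                        # ~1 GPU-minute per structure

speedup = experimental_minutes / predicted_minutes
print(f"Rough speedup: {speedup:,.0f}x")  # ~500,000x, the same order as 1M
```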
Security
Imagine your grandparents get a video call with your voice and face saying you got in an accident while traveling out of the country and need them to wire money. Deepfakes like this are a major concern. But the underlying threat predates AI and spans cybersecurity broadly: from social engineering like this to breaking into systems and ransoming data. Security is a cat-and-mouse game, and it’s not obvious AI offense will beat AI defense. Your bank should have systems that scan for weird wires. Maybe your communication tools should have AI authenticators that detect fakes. Maybe cryptographically signing all messages will be required to verify who said what.
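A minimal sketch of what signed messaging could look like, here with Ed25519 via Python's `cryptography` package (my choice of primitive; a real system would also need key distribution and revocation):

```python
# Minimal sketch of signed messaging with Ed25519, using the
# third-party `cryptography` package (pip install cryptography).
# If every message carries a signature, a deepfaked call or text
# that can't produce a valid one fails verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sender: generate a keypair once, share the public key out of band.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"It's really me, please wire the money."
signature = private_key.sign(message)

# Receiver: verify against the sender's known public key.
try:
    public_key.verify(signature, message)
    print("Signature valid: message is authentic.")
except InvalidSignature:
    print("Signature invalid: treat as a fake.")
```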
Journalism
Before printing, people lived in a media vacuum. From the very start of printing, bias was part of the game, from the American pamphleteers to yellow journalism. It was only with monopolies, high-powered radio, massive newspaper presses, and scarce television broadcast licenses, that the news business boomed to the extent that editorial staffs could afford objectivity. Then the internet exploded this model, and we’re still trying to figure out what that means. The best media sources today have declared editorial perspectives while maintaining truth-seeking. Unfortunately, bad faith and outrage on social media mean many successful voices spout nonsense. Concerns around deepfakes are overblown because skepticism and tribalism already dominate our media. We already believe fake things that don’t use generated imagery.
What few are talking about: personal AI to direct your attention. Maybe people are thoughtless, but our distracted minds are downstream of centralized algorithmic feeds optimized for engagement and platform ad revenue. While I care about businesses efficiently reaching customers, I don’t need my feed influenced by anything other than what I want. AI is finally good enough to take a high-level description of your desires and feed you exactly that. This happens on the consuming end and can’t be blocked by platforms: if you can stream it, you can filter it. System 1 thinking is fast and intuitive, while System 2 thinking is slow and deliberate. Algorithmic feeds and push notifications thrive off System 1, but personal AI feeds will be driven by System 2 intentional desires.
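A toy sketch of the idea, filtering a streamed feed on the client against a stated intent. The keyword-overlap scorer is a hypothetical stand-in for a real LLM or embedding-based relevance judgment:

```python
# Client-side "personal feed" filter: the platform streams items, and
# filtering happens on the consuming end against a user-stated intent.
# The keyword-overlap scorer below stands in for an LLM or
# embedding-based relevance judgment.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    text: str

def relevance(item: Item, intent: str) -> float:
    """Stand-in scorer: fraction of intent words found in the item."""
    intent_words = set(intent.lower().split())
    item_words = set(f"{item.title} {item.text}".lower().split())
    return len(intent_words & item_words) / max(len(intent_words), 1)

def personal_feed(stream, intent: str, threshold: float = 0.3):
    """Yield only items matching the user's stated, System 2 intent."""
    for item in stream:
        if relevance(item, intent) >= threshold:
            yield item

stream = [
    Item("Outrage of the day", "you won't believe this take"),
    Item("Music theory basics", "chords, scales, and why they work"),
]
for item in personal_feed(stream, intent="learn music theory"):
    print(item.title)  # -> Music theory basics
```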
Concern around social media is also a current moral panic, driven by narratives that lack real data. We talk about rage-inducing tweets. What about the millions of hours of content that feed your soul? Rick Beato’s music theory. 3Blue1Brown’s math wonders. Nerdwriter’s screenwriting. Crash Course history. Mark Rober’s inspired engineering. Here Jaron is right that complexity rules: media is so large, with so many niches, that it’s very hard to argue we have a good enough grasp to judge. Humility should defeat a moral panic. We should be most concerned with what generates the thoughtfulness to seek out the gems. We need to fight brain rot, AI-generated or not.
Art
Generating images, music, stories, and movies is getting easier and easier. This is a double-edged sword: content creators may lose livelihoods while more and more people can finally create. The struggle to master a craft might be inherent. Learning music theory is just hard, no matter our best efforts to make it easier. But editing video in tools like Adobe Premiere takes many times the length of the source material, and that struggle is pure deadweight loss. Sometimes better tools just help us achieve our vision better and faster. We also have business conditions that warp what artists receive. For example, very few people make a good living off streaming music on Spotify, but everyone in tech will tell you that is downstream of the behavior of music labels, not inherent to the technology itself. Music has the most extreme ratio of emotional salience to monetization, the inverse of sports video games. And then you have trends like Marvel or Star Wars mediocrity, where studios spend every resource possible to generate dreck.
Even without AI, content volume has exploded, and that will accelerate. So the question is one of meaning and monetization. I predict more people will find personal meaning in their own creativity, and in that of others, than will lose meaning from a change in the overall caliber of content. I’d also predict more people will make a living off creative enterprises than do today.