The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission

Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years and Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels, Lehane knows how to spin. He’s now two years into what may be his most impossible assignment yet: as OpenAI’s vice president of global policy, his job is to convince the world that OpenAI really does care about democratizing AI, even as the company increasingly behaves like every other tech giant it once claimed to be different from.
I spent 20 minutes with him on stage at the Elevate conference in Toronto earlier this week, 20 minutes to get past the talking points and at the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy, or entirely successful. Lehane is really good at his job. He’s affable. He seems reasonable. He acknowledges uncertainty. He even talks about waking up at 3 a.m. worrying about whether any of this will actually benefit humanity.
But good intentions mean little when your company is subpoenaing its critics, draining water and electricity from economically depressed towns, and resurrecting dead celebrities to assert its market dominance.
The company’s Sora problem is actually at the root of everything else. The video-generation tool launched last week with copyrighted material seemingly baked right in, a bold move for a company already being sued by The New York Times, the Toronto Star, and half the publishing industry. From a business standpoint, it was also great marketing. The invite-only app has shot to the top of the App Store as people create digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman from “South Park”; and of dead celebrities like Tupac Shakur.
When I asked why OpenAI decided to launch this newest version of Sora with these characters, Lehane offered that Sora is a “general purpose technology,” like electricity or the printing press, one that democratizes creativity for people without talent or resources. Even he, a self-described “creative zero,” can now make videos, he said on stage.
What’s telling is that OpenAI initially “allowed” rights holders to opt out of having their works used in Sora, which is not how copyright typically works. Then, after OpenAI noticed how much people liked using copyrighted characters, it “evolved” toward an opt-in model. That isn’t iterating; that’s testing how much you can get away with. (And though the Motion Picture Association made some noise last week about legal action, OpenAI appears to have gotten away with quite a lot.)
Naturally, the situation brings to mind the worsening plight of publishers, who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about cutting publishers out of the economics, he cited fair use, the American legal doctrine that is supposed to balance creators’ rights against the public’s access to knowledge. He described it as the secret weapon of American technological dominance.
Maybe. But I recently interviewed Al Gore, Lehane’s old boss, and realized that someone could simply ask ChatGPT about it instead of reading my article on TechCrunch. “It’s derivative, but it’s also a replacement,” I said.
Lehane listened and, for once, dropped the talking points. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit up here on stage and say we need to figure out new economic revenue models. But I think we will.” (In short: they’re making it up as they go.)
Then there is the infrastructure question no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity, noting that those who got it last are still playing catch-up, yet OpenAI’s Stargate project appears to be choosing some of those same economically challenged places as sites for facilities with a massive appetite for water and electricity.
When asked during our sit-down whether these communities would benefit or just foot the bill, Lehane pivoted to gigawatts and geopolitics. OpenAI needs about a gigawatt of power per week, he noted; China brought 450 gigawatts online last year, along with 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a re-industrialized America with transformed power grids.
It was inspiring, but it wasn’t an answer to whether people in Lordstown and Abilene will watch their utility bills rise while OpenAI generates videos of The Notorious B.I.G. (Video generation is among the most power-hungry uses of AI yet.)
The human cost became clearer the day before our interview, when Zelda Williams logged on to Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”
When I asked how he reconciles this kind of intimate harm with the company’s mission, Lehane responded by talking about process: responsible design, testing frameworks, government partnerships. “There’s no playbook for this stuff, right?” he said.
Lehane showed flashes of vulnerability, too, saying he loses sleep every night over worries about democracy, geopolitics, and infrastructure. “There are enormous responsibilities that come with this,” he said.
Whether those moments were engineered for the audience or not, I believed him. In fact, I left Toronto thinking I had witnessed a master class in political messaging: Lehane threading an impossible needle while deflecting questions about corporate decisions that, for all I know, he doesn’t even agree with. Then news broke that complicated that picture further.
Nathan Calvin, a lawyer who works on AI policy at the nonprofit Encode AI, revealed that at the very moment I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to Calvin’s home in Washington, D.C., during dinner to serve him a subpoena. The company wanted his private messages with California legislators, college students, and former OpenAI employees.
Calvin accuses OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company used its legal battle with Elon Musk as a pretext to target critics, insinuating that Encode was secretly funded by Musk. Indeed, Calvin says he fought against OpenAI’s opposition to SB 53, an AI safety bill, and that when he saw the company claim it had “worked to improve the bill,” he “literally laughed out loud.” In a social media broadside, he went on to call Lehane, specifically, a “master of the dark political arts.”
In Washington, that might count as a compliment. At a company whose mission is “to build artificial intelligence that benefits all of humanity,” it sounds like an indictment.
What matters more is that even OpenAI’s own employees are conflicted about what the company is becoming.
As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2’s release to voice their concerns. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2: “It’s technically amazing, but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”
On Friday, Joshua Achiam, OpenAI’s head of mission alignment, tweeted something even more remarkable in light of Calvin’s accusation. Prefacing his comments as “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us a fearsome force rather than a virtuous one. We have a duty to, and a mission for, all of humanity. The bar to deviate from pursuing that duty is remarkably high.”
That’s . . . something. An OpenAI executive publicly wondering whether his company is becoming “a fearsome force rather than a virtuous one” is not the same as a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.
It was a crystallizing moment. You can be the best political operative in tech, adept at navigating impossible situations, and still end up working for a company whose actions run increasingly counter to its stated values, contradictions that will likely only intensify as OpenAI races toward artificial general intelligence.
It left me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others, including the people who work there, still believe it.
2025-10-11 06:04:00