AI

The three big unanswered questions about Sora

OpenAI is taking steps toward monetization (you can now buy products directly through ChatGPT, for example). On October 3, its CEO, Sam Altman, wrote in a blog post that the company is "going to have to somehow make money for video generation," but did not elaborate. One can imagine personalized ads and more in-app purchases.

More alarming, though, is the mountain of emissions that could result if Sora becomes widely popular. Altman has described the emissions burden of a single ChatGPT query as vanishingly small. What he doesn't specify is what that number is for a 10-second video created by Sora. It's only a matter of time before AI and climate researchers start demanding that figure.

How many lawsuits are coming?

Sora is steeped in copyrighted and trademarked characters. It allows you to easily deepfake deceased celebrities. Its videos use copyrighted music.

Last week, the Wall Street Journal reported that OpenAI had sent letters to copyright holders notifying them that they would have to opt out if they did not want their material included in Sora, which is not how these things usually work. The law around how AI companies handle copyrighted material is far from settled, and it would be reasonable to expect lawsuits challenging this opt-out approach.

In a blog post last week, Altman wrote that OpenAI is "hearing from a lot of rightsholders" who want more control over how their characters are used in Sora. He says the company plans to give those parties more granular control over their characters. However, he wrote, "there may be some edge cases of generations that get through that shouldn't."

But another problem is the ease with which you can use the likenesses of real people. Users can restrict who is allowed to use their cameo, but what limits can be placed on what those cameos do in Sora's videos?

This is an issue OpenAI is already having to answer. Sora's head, Bill Peebles, posted on October 5 that users can now restrict how their cameo is used, preventing it from appearing in political videos or from saying certain words, for example. How well will this work? Is it only a matter of time until someone's cameo is used for something outrageous, downright illegal, or at least creepy, resulting in a lawsuit claiming OpenAI is responsible?

Overall, we haven't yet seen what Sora looks like at scale (OpenAI is still providing access to the app via invite codes). When we do, I think it will be a grim test: Can AI create videos so finely tuned for endless engagement that they outcompete "real" videos for our attention? Ultimately, Sora isn't just testing OpenAI's technology; it's testing us, and how much reality we're willing to trade for an infinite array of simulations.
