AI built a new social network—starring you
It was only a matter of time before the technology came for our faces.
First, it was our creations: AI models trained on copyrighted music, books, and newspapers. And then it came for our images, creating visual deepfakes of celebrities and politicians—from the pope to President Trump to Taylor Swift.
After that, it came for our voices with audio deepfakes and our confessions in pseudo-therapy sessions with our AI companions.
And now, it’s ready to train on our faces.
With OpenAI’s new Sora app, a TikTok-like short-form video app exclusively made for AI-generated videos, our very likeness (and that of our family and children) can get shared all over the internet.
AI’s creative power can be astonishing, but it also blurs the line between expression and exploitation. In this week’s newsletter, we explore that tension, where innovation meets intrusion, through the lens of Sora, examining what it means for deepfakes, copyright, privacy, and the evolving future of social media itself.
// AI + social media + short-form video
Sora is OpenAI’s social AI video-generating app. Within five days of its most recent launch on September 30th (a previous Sora model had been available since 2024), the app reached over one million downloads.
A review of Sora in The New York Times called it “a social network in disguise; a clone of TikTok down to its user interface, algorithmic video suggestions and ability to follow and interact with friends.”
The key difference, however, is that all the videos on Sora are AI-generated. And with Sora’s “cameo” feature, users can upload videos of themselves or friends, generating an unlimited stream of memes and short videos in which they take center stage. It’s an app built for those with main character energy, or those hungry for “interactive fan fiction,” as OpenAI CEO Sam Altman described it on his blog.
A tech reviewer in The Wall Street Journal sarcastically explained the appeal: “After a week with Sora, I’ve realized that TikTok, with its millions of people, places and things, can’t get you to the ultimate payoff that AI video-generation tools can: a whole lot more you.”
More you could create more problems, from personal deepfakes and privacy issues to copyright concerns and misinformation. And because Sora is not the only social AI video app (Meta and Google both have competitors), the number of AI-generated videos will likely explode.
// What could go wrong?
The launch and meteoric rise of Sora have led to a steady stream of criticism.
Personal privacy
Individuals can choose how their own likeness is used on Sora, according to an article on OpenAI’s website. A user can specify whether their cameo can be used only by them, by people they approve, or by anyone on the internet (Altman has chosen anyone on the internet, leading to a proliferation of AI-generated images of himself).
What happens when someone creates a video of your likeness without your consent? OpenAI published a blog post, “Launching Sora Responsibly,” listing the various safeguards the company has put in place—including safeguards to ensure a person’s likeness doesn’t get used without their consent (“Only you decide who can use your cameo, and you can revoke access at any time.”), safeguards for teens designed to shield them from inappropriate content or messages, and filters for harmful content.
Normalizing deepfakes
Depending on the settings a user selects, Sora allows anyone to create a deepfake of themselves, of friends who have approved it, and of anyone who grants full permission to their likeness (or accidentally fails to restrict it), including tech leaders like Altman.
Daisy Soderberg-Rivkin, a former trust and safety manager at TikTok, sees Sora as attempting to rebrand deepfakes into something lighthearted and innocuous: "It's as if deepfakes got a publicist and a distribution deal," she said. What feels playful can quickly become perilous. As deepfakes become commonplace, it will be easier for nefarious actors to push disinformation or scam unsuspecting viewers.
After Bernice King, the daughter of Martin Luther King Jr., began receiving Sora-generated deepfakes of her father, OpenAI blocked deepfakes of Martin Luther King Jr. while it “strengthens guardrails for historical figures,” the company said in a statement.
Copyright issues
When Sora launched in September, users could create videos featuring copyrighted content unless the rightsholders explicitly opted out. This default setting of “your copyrighted material is opted-in for use” enraged Hollywood studios, which saw their copyrighted characters, voices, and scenes treated as fair game. Rob Rosenberg, a former Showtime Networks executive, told The Hollywood Reporter that OpenAI is “turning copyright on its head. They’re setting up this false bargain where they can do this unless you opt out. And if you don’t, it’s your fault. That’s not the way the law works.”
A few days later, under mounting pressure and the threat of lawsuits, OpenAI reversed course, shifting to an opt-in model that requires permission from rightsholders before copyrighted characters can appear in Sora. The copyright issues in Sora are representative of a battle between rightsholders and AI companies that continues to play out.
- Earlier this year, in separate lawsuits, Disney, Warner Bros. Discovery, and Universal sued Midjourney for allowing its users to create AI images of their copyrighted characters.
- AI companies, including OpenAI, have lobbied the federal government to allow AI models to train on copyrighted materials by clarifying the “fair use” legal doctrine, which would allow certain use of intellectual property without a license.
The real cost of producing AI videos
In the days after the launch, Altman admitted that “we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.”
Producing an AI video requires substantial, costly computing power (which is driving the rapid construction of gargantuan data centers across the country). Researchers from Hugging Face, an open-source AI platform, found that the energy demands of text-to-video generators quadruple when the length of videos doubles. This means that the power needed to produce longer AI videos increases quadratically, not linearly, further exacerbating the energy (and cost) drain of apps like Sora.
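To make the scaling concrete, here is a minimal sketch of what “quadruples when length doubles” implies (the base clip length and function here are illustrative assumptions, not the Hugging Face researchers’ methodology):

```python
# Illustrative only: if doubling a video's length quadruples its energy use,
# energy grows with the square of the length.
def relative_energy(length_seconds: float, base_seconds: float = 5.0) -> float:
    """Energy cost relative to a base-length clip, assuming E scales with length^2."""
    return (length_seconds / base_seconds) ** 2

print(relative_energy(10))  # a 10-second clip costs 4x a 5-second clip
print(relative_energy(20))  # a 20-second clip costs 16x
```

Under this assumption, modest increases in clip length compound quickly into much larger compute (and energy) bills.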
The insatiable hunger of AI video might not be a problem for a person who uses the app for free, but it could strain a company that faces real tradeoffs about how it allocates its computing power. According to Altman’s own writing, that could be a choice between curing cancer and educating every child on earth.
Some critics see Sora as a betrayal of OpenAI’s mission to create artificial general intelligence that benefits all of humanity.
One tweet put it like this:
“OpenAI in 2021: ‘we want to cure brain cancer’
OpenAI in 2025: ‘we’re becoming brain cancer’”
// The highest and best use of AI
In 2011, technology entrepreneur Peter Thiel lamented the stagnation of technology by quipping, “We wanted flying cars, instead we got 140 characters.”
His words echo in the age of Sora. Given the power and potential of artificial intelligence, is its highest and best use creating another addictive social media app?
If you take OpenAI executives at their word, the answer is yes.
- “We felt that the best way to bring [AI video tools] to the masses is through something that is somewhat social,” Rohan Sahai, OpenAI’s product lead for Sora, said.
- In a tweet (ironic, right?), Altman seemed less confident of the why behind Sora: “we do mostly need the capital for build AI that can do science, and for sure we are focused on AGI with almost all of our research effort. it is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.”
There is no clear or straightforward path to using AI to cure brain cancer or to be a benefit to all of humanity. It is far easier to do the proven thing of creating a highly addictive, algorithm-driven social feed centered on short-form video.
We wanted AI breakthroughs in science and medicine; instead, we got more AI slop and “brain rot.”
// The good news
The good news is that Sora does not represent the totality of how AI is being used today. There are AI researchers and technologists, civil society organizations and advocacy groups (many of whom are in the Project Liberty Alliance), policymakers and politicians, and parents and students who are all choosing to use AI more responsibly and ambitiously.
The AI era doesn’t necessarily have to be a repeat of the social media era. It can be rich in opportunities to advance human agency and flourishing.