Sora AI Drives Deepfake Growth, Raising Misinformation Risk

Catenaa, Thursday, October 30, 2025 – The AI video generator Sora is producing hyper-realistic deepfakes that challenge users’ ability to distinguish real footage from synthetic content, experts warn.

The app, launched in 2024 and recently upgraded to Sora 2, also includes a social media platform where all videos are AI-generated.

Sora videos are technically advanced, offering high-resolution visuals, synchronized audio, and a feature called “cameo,” which allows users to insert others’ likenesses into generated scenes.

While impressive, this capability raises concerns over misinformation and misuse, particularly for public figures. Unions such as SAG-AFTRA have urged OpenAI to strengthen guardrails to prevent exploitation.

Detecting AI content requires multiple strategies. Sora videos carry a moving watermark (a white cloud logo) that signals their AI origin. However, watermarks can be cropped out or removed with editing tools.

Metadata provides another tool: Sora videos include C2PA content credentials that reveal the creator and AI status. The Content Authenticity Initiative offers an online verification tool to examine such metadata.
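As a rough illustration of the metadata approach: C2PA content credentials are embedded in media files as JUMBF (JPEG Universal Metadata Box Format) boxes labeled "c2pa". The sketch below is a crude heuristic that only checks whether that label appears anywhere in a file's raw bytes; it does not parse the manifest or validate its cryptographic signature, and a real check should use the Content Authenticity Initiative's verification tool or an official C2PA SDK. The function name and chunked-scan design are illustrative choices, not part of any standard API.

```python
def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Crude heuristic: return True if the file's raw bytes contain the
    'c2pa' JUMBF label. Presence suggests embedded content credentials;
    absence proves nothing (metadata is often stripped on re-upload),
    and this does NOT verify the manifest's signature."""
    marker = b"c2pa"
    overlap = len(marker) - 1
    tail = b""  # carry the last few bytes so a marker split across chunks is found
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if marker in tail + chunk:
                return True
            tail = chunk[-overlap:]
    return False
```

Because platforms commonly strip metadata when videos are re-shared, a negative result from a check like this is far weaker evidence than a positive one.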

Social media platforms, including Meta’s Facebook and Instagram, TikTok, and YouTube, are experimenting with AI labeling to flag synthetic content, though these measures are not foolproof.

Experts advise users to rely on multiple indicators, including watermarks, metadata, platform labels, and visible inconsistencies such as physics-defying movements or altered text, while remaining skeptical of unverified videos.

The rise of Sora underscores the growing challenge of AI-driven misinformation and the need for vigilance in digital media consumption.