OpenAI’s New “Sora” Development is a Massive Security Risk


What does OpenAI’s newest video generator mean for security?

Welcome to the future! Instead of flying cars or practical solutions that improve everyday life, we get severe threats to our personal security, privacy, and data.

Driven by rapid advances in machine learning (and AI-generated content), OpenAI's latest release is not so much a technological revolution as a direct threat to the concept of trustworthy information itself. Why so dramatic? Let's explore.

About two years ago, AI-generated content entered the spotlight, with big names like ChatGPT demonstrating prompt-driven writing. Images came next, as companies like OpenAI scraped the internet for media to build their learning models. AI-generated images took the net by storm, bringing us to the next “step” in this technology: Sora.

No, not that Sora.

Sora is OpenAI’s name for a learning model designed to generate realistic video sequences ranging from 10 to 60 seconds. While AI-generated video is nothing new, earlier prototypes were glaringly obvious as machine-made, with often incomprehensible (sometimes horrifying) results.

But Sora marks a new milestone for this technology, and while close inspection still reveals errors and obvious machine-generated tells, that isn’t the point. What matters is the implication of this technology and what it means on a social, security, and even legal level.

Examining the Implications

The direction in AI-generated content that the “Sora” model signals is worrying, and we need to examine its implications.

Social

AI-generated content can be used to harass, slander, and even threaten people. What happens on a social scale when you’re at risk of having someone create a falsified video of you? What if someone, say, falsifies audio of your voice and sends it to your workplace, a colleague, a friend, or even a family member?

The level of misinformation and harassment something like “Sora” can cause – along with its eventual competitors and imitators – is staggering. We are leaving an era of trusting online content for one in which every bit of data and information must be second-guessed and scrutinized.

That says nothing of consent – or the lack thereof – regarding the data Sora was trained on (or any AI model, for that matter). Any generative model creates content by extrapolating from datasets; think of those datasets as gigantic caches of information. ChatGPT, for example, was built on text scraped from across the internet, giving it data to draw from when creating content for a specific prompt.

Sora is no different, drawing from video sources without any clear consent from the people who created them. Those creators likely never imagined their media ending up in the bowels of an algorithmic blender, and it is unlikelier still that OpenAI (or other AI vendors) ever asked their permission.
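To make the “scraping” part concrete, here is a minimal, hypothetical sketch of how such a pipeline works, written in Python. The URLs and output file are placeholders, and this illustrates the general technique only – not OpenAI’s actual system. Notice that nothing in this loop ever asks the original author for permission.

```python
# Hypothetical sketch: scraping pages into a training dataset.
# URLs and file names are placeholders for illustration only.
import json
import requests
from bs4 import BeautifulSoup

PAGES = [
    "https://example.com/article-1",
    "https://example.com/article-2",
]

def scrape_page(url: str) -> str:
    """Download a page and strip it down to plain text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.get_text(separator=" ", strip=True)

def build_dataset(urls, path="scraped_dataset.jsonl"):
    """Write each page's text to a JSON-lines dataset file."""
    with open(path, "w", encoding="utf-8") as f:
        for url in urls:
            text = scrape_page(url)
            f.write(json.dumps({"source": url, "text": text}) + "\n")

if __name__ == "__main__":
    build_dataset(PAGES)
```

Scale that loop up to billions of pages, images, or video clips and you have the raw material behind a modern generative model – gathered wholesale, with consent nowhere in the process.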

Security

If we manage to ignore the obscene algorithmic grinder collectively chopping up various forms of media, expression, photographs, videos, and content, we now have to contend with the security implications Sora brings to the table. And they are, in an unregulated context, nightmarish.

First, we need to understand that without regulation and enforced accountability, this technology can be used to do tremendous harm. The potential for misinformation alone is hellish.

“But surely OpenAI won’t allow for malicious use!”

Let’s take that hypothetical, even optimistic, statement as an example. Even if OpenAI were self-regulating, even if OpenAI only sourced and promoted its technology for ethical use cases (and there has been no indication this is or will ever be its intent), it doesn’t matter. A go-to rule in IT and cybersecurity is the double-edged sword: what benefits society and the legitimate tech sector ultimately benefits malicious actors too.

A perfect example: when ChatGPT and its competitors cropped up in 2023, threat actors were overjoyed to use AI-generated code for malicious purposes. Even if those initial attempts were elementary and fell short of producing working malware, they sparked a wildfire of technological misuse.

Let’s step back further. Well before big names like OpenAI entered the tech spotlight, deepfakes were already well established: perfect for misinformation, they let anyone concoct malicious, falsified media. Combine that intent with artificially generated video and the security implications become clear. The level of damaging misinformation this can cause is astronomical, even if the disinformation is later corrected.

Harassment, extortion, and political misinformation are just a few handy examples that come to mind with artificially generated videos.

Legal

There are serious, far-reaching consequences of AI-generated “content” on a legal level too. There are no regulatory rules or guidelines in place for AI – and it shows. Data scraping involves siphoning media content – visual, written, verbal – to build model datasets, again without the consent of the original sources.

This raises questions about copyright, ownership of content, and intent. What happens in harassment cases where AI is used to target a person or group? What happens when falsified media is used as “evidence” against someone? Sora has opened Pandora’s box: a technology that promises its investors it will only get better. What happens when it improves to the point that it can fabricate evidence, videos, media, or even voices?

The implications – from falsifying information and evidence to spreading misinformation – are far-reaching.

The Bottom Line for Security

OpenAI’s work on Sora signals a continued evolution in machine-learning models, without examining the dangerous consequences involved. Sure, right now the tech still contains flaws and possesses intrinsic limits. But the point is to sell a pitch and attract prospective adopters. This technology can and will improve over the next several years – but is it improvement for good?

Future Prep

AI is a contentious subject in a fascinated tech industry, precisely because of the ethics, dangers, and questions it raises. OpenAI’s “Sora” presents a whole new field of potential issues and problems we need to be ready for – and we should ask ourselves whether this is something we need at all.

For other tech questions, IT support, and cybersecurity solutions, reach out to Bytagig today.
