© 2024 TechCrunch. All rights reserved. For personal use only.
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.
This week in AI, OpenAI revealed that it’s exploring how to “responsibly” generate AI porn. Yep — you heard that right. The company floated the idea in a document meant to peel back the curtain on, and gather feedback about, its AI’s instructions; the new NSFW policy is intended to start a conversation about how — and where — OpenAI might allow explicit images and text in its AI products.
“We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other people’s rights,” Joanne Jang, a member of the product team at OpenAI, told NPR. “There are creative cases in which content involving sexuality or nudity is important to our users.”
It’s not the first time OpenAI has telegraphed a willingness to dip a toe into controversial territory. Earlier this year, Mira Murati, the company’s CTO, told The Wall Street Journal that she “wasn’t sure” if OpenAI would eventually allow its video generation tool, Sora, to be used to create adult content.
So what to make of this?
There is a future in which OpenAI opens the door to AI-generated porn and it all turns out… fine. I don’t think Jang’s wrong in saying that there are legitimate forms of adult artistic expression — expression that could be created with the help of AI-powered tools.
But I’m not sure we can trust OpenAI — or any generative AI vendor, for that matter — to get it right.
Consider the creators’ rights angle, for one. OpenAI’s models have been trained on vast amounts of public web content, some undoubtedly pornographic in nature. But OpenAI hasn’t licensed all this content — or even allowed creators to opt out of training until relatively recently (and even then, only certain forms of training).
It’s tough to make a living in adult content, and were OpenAI to take AI-generated porn mainstream, there’d be even stiffer competition facing creators — competition built on the backs of those creators’ works, not for nothing.
The other problem in my mind is the fallibility of current safeguards. OpenAI and rivals have been refining their filtering and moderation tools for years. But users constantly discover workarounds that enable them to abuse the companies’ AI models, apps and platforms.
Just in January, Microsoft was forced to make changes to its Designer image creation tool, which taps OpenAI models, after users found a way to create nude images of Taylor Swift. On the text generation side, it’s trivial to find chatbots built on top of supposedly “safe” models, such as Anthropic’s Claude 3, that readily spit out erotica.
AI has already created a new form of sexual abuse. Grade school and high school students are using AI-powered apps to “strip” photos of their classmates without those classmates’ consent; a 2021 poll conducted in the U.K., New Zealand and Australia found that 14% of respondents aged 16 to 64 had been victimized with deepfake imagery.
New laws in the U.S. and elsewhere aim to combat this. But the jury’s out on whether the justice system — a justice system that already struggles to root out most sex crimes — can regulate an industry as fast-moving as AI.
Frankly, it’s tough to imagine an approach that OpenAI might take to AI-generated porn that isn’t fraught with risk. Maybe OpenAI will reconsider its stance once again. Or maybe — against the odds — it will figure out a better way. Whatever the case ends up being, it seems we’ll find out sooner rather than later.
Here are some other AI stories of note from the past few days:
Apple’s AI plans: Apple CEO Tim Cook revealed a few tidbits about the company’s plans to move forward with AI during last week’s earnings call with investors. Sarah has the full story.
Enterprise GenAI: The CEOs of Dropbox and Figma — Drew Houston and Dylan Field, respectively — have invested in Lamini, a startup building generative AI tech along with a generative AI hosting platform aimed at enterprise organizations.
AI for customer service: Airbnb is launching a new feature that allows hosts to opt for AI-powered suggestions to reply to guests’ questions, such as sending the guests a property’s checkout guide.
Microsoft restricts AI use: Microsoft has reaffirmed its ban on U.S. police departments’ use of generative AI for facial recognition. It also barred law enforcement globally from applying facial recognition tech to body cameras and dashcams.
Money for the cloud: Alternative cloud providers such as CoreWeave are raising hundreds of millions of dollars as the generative AI boom drives the demand for low-cost hardware to train and run models.
RAG has its limits: Hallucinations are a big problem for businesses looking to integrate generative AI into their operations. Some vendors claim that they can eliminate them using a technique called RAG. But those claims are greatly exaggerated, finds yours truly.
Vogels’ meeting summarizer: Amazon’s CTO, Werner Vogels, open sourced a meeting summarizer app called Distill. As you might expect, it leans heavily on Amazon products and services.