Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
After a brief hiatus, we’re back with a few show notes on OpenAI’s DevDay.
The keynote yesterday morning in San Francisco was remarkable for its subdued tone — a contrast to the rah-rah, hypebeast-y address from CEO Sam Altman last year. This DevDay, Altman didn’t bound up onstage to pitch shiny new projects. He didn’t even make an appearance; head of platform product Olivier Godement emceed.
On the agenda for this first of several OpenAI DevDays — the next is in London this month, followed by the last in Singapore in November — were quality-of-life improvements. OpenAI released a real-time voice API, as well as vision fine-tuning, which allows developers to customize its GPT-4o model using images. And the company launched model distillation, which takes a large AI model like GPT-4o and uses it to fine-tune a smaller model.
The event’s narrow focus wasn’t unanticipated. OpenAI tempered expectations this summer, saying DevDay would focus on educating devs, not showcasing products. Nevertheless, what was omitted from Tuesday’s tight, 60-minute keynote raised questions about the progress — and status — of OpenAI’s countless AI endeavors.
We didn’t hear about what might succeed OpenAI’s nearly year-old image generator, DALL-E 3, nor did we get an update on the limited preview for Voice Engine, the company’s voice-cloning tool. There’s no launch timeline yet for OpenAI’s video generator, Sora, and mum’s the word on Media Manager, the app the company says it’s developing to let creators control how their content is used in model training.
When reached for comment, an OpenAI spokesperson told TechCrunch that OpenAI is “slowly rolling out the [Voice Engine] preview to more trusted partners” and that Media Manager is “still in development.”
But it seems clear OpenAI is stretched thin — and has been for some time.
According to recent reporting by The Wall Street Journal, the company’s teams working on GPT-4o were only given nine days to conduct safety assessments. Fortune reports that many OpenAI staff thought that o1, the company’s first “reasoning” model, wasn’t ready to be unveiled.
As it barrels toward a funding round that could bring in up to $6.5 billion, OpenAI has its fingers in many underbaked pies. DALL-E 3 underperforms image generators like Flux in many qualitative tests; Sora is reportedly so slow to generate footage that OpenAI is revamping the model; and OpenAI continues to delay the rollout of the revenue-sharing program for its bot marketplace, the GPT Store, which it initially pegged for the first quarter of this year.
I’m not surprised that OpenAI now finds itself beset with staff burnout and executive departures. When you try to be a jack-of-all-trades, you end up being a master of none — and pleasing nobody.
AI bill vetoed: California Gov. Gavin Newsom vetoed SB 1047, a high-profile bill that would’ve regulated the development of AI in the state. In a statement, Newsom called the bill “well-intentioned” but “[not] the best approach” to protecting the public from AI’s dangers.
AI bills passed: Newsom did sign other AI regulations into law — including bills dealing with AI training data disclosures, deepfake nudes, and more.
Y Combinator criticized: Startup accelerator Y Combinator is being criticized after it backed an AI venture, PearAI, whose founders admitted they basically cloned an open source project called Continue.
Copilot gets upgraded: Microsoft’s AI-powered Copilot assistant got a makeover on Tuesday. It can now read your screen, think deeply, and speak aloud to you, among other tricks.
OpenAI co-founder joins Anthropic: Durk Kingma, one of the lesser-known co-founders of OpenAI, this week announced he’ll be joining Anthropic. It’s unclear what he’ll be working on, however.
Training AI on customers’ photos: Meta’s AI-powered Ray-Bans have a camera on the front for various AR features. But the camera could turn out to be a privacy problem: the company won’t say whether it plans to train models on the images wearers capture.
Raspberry Pi’s AI camera: Raspberry Pi, the company that sells tiny, cheap, single-board computers, has released the Raspberry Pi AI Camera, an add-on with onboard AI processing.
AI coding platforms have nabbed millions of users and attracted hundreds of millions of dollars from VCs. But are they delivering on their promises to boost productivity?
Maybe not, according to a new analysis from Uplevel, an engineering analytics firm. Uplevel compared data from about 800 of its developer customers — some of whom reported using GitHub’s AI coding tool, Copilot, and some of whom didn’t. Uplevel found that devs relying on Copilot introduced 41% more bugs and weren’t any less susceptible to burnout than those who didn’t use the tool.
Developers have shown enthusiasm for AI-powered assistive coding tools despite concerns pertaining not only to security but also to copyright infringement and privacy. The vast majority of devs responding to GitHub’s latest poll said they’ve embraced AI tools in some form. Businesses are bullish too — Microsoft reported in April that Copilot had over 50,000 enterprise customers.
Liquid AI, an MIT spinoff, this week announced its first series of generative AI models: Liquid Foundation Models, or LFMs for short.
“So what?” you might ask. Models are a commodity — new ones are released practically every day. Well, LFMs use a novel model architecture and notch competitive scores on a range of industry benchmarks.
Most of today’s generative models are transformers. Proposed by a team of Google researchers back in 2017, the transformer has become by far the dominant architecture for generative AI. Transformers underpin Sora and the newest version of Stable Diffusion, as well as text-generating models like Anthropic’s Claude and Google’s Gemini.
But transformers have limitations. In particular, they’re not very efficient at processing vast amounts of data: self-attention compares every token with every other token, so compute and memory grow quadratically with the length of the input sequence.
Liquid claims its LFMs have a reduced memory footprint compared to transformer architectures, allowing them to take in larger amounts of data on the same hardware. “By efficiently compressing inputs, LFMs can process longer sequences [of data],” the company wrote in a blog post.
Liquid’s LFMs are available on a number of cloud platforms, and the team plans to continue refining the architecture with future releases.
If you blinked, you probably missed it: An AI company filed to go public this week.
Called Cerebras, the San Francisco-based startup develops hardware to run and train AI models, and it competes directly with Nvidia.
So how does Cerebras hope to compete against the chip giant, which commanded between 70% and 95% of the AI chip segment as of July? On performance, says Cerebras. The company claims that its flagship AI chip, which it sells directly and also offers as a service via its cloud, can outcompete Nvidia’s hardware.
But Cerebras has yet to translate this claimed performance advantage into profits. The firm had a net loss of $66.6 million in the first half of 2024, per filings with the SEC. And for last year, Cerebras reported a net loss of $127.2 million on revenue of $78.7 million.
Cerebras could seek to raise up to $1 billion through the IPO, according to Bloomberg. To date, the company has raised $715 million in venture capital and was valued at over $4 billion three years ago.