TechCrunch Minute: OpenAI shrinks its flagship model

© 2024 TechCrunch. All rights reserved. For personal use only.

OpenAI is launching a mini version of its latest AI model.

These small AI models are meant to be faster and more affordable than the full version — making them particularly useful for simple, high-volume tasks. That should appeal to smaller developers who don’t necessarily have a lot of money for AI costs but want to incorporate AI into their website or app in a relatively lightweight way.

In this case, OpenAI announced its latest flagship model, GPT-4o, back in May. The “o” stands for “omni,” a reference to the fact that the model should be able to understand speech and video, as well as text.

And now there’s GPT-4o mini, the pared-down version. GPT-4o mini currently supports text and images, but OpenAI says it will add video and audio capabilities in the future. This new mini model is supposed to be more than 60% cheaper than GPT-3.5 Turbo, which it’s replacing as OpenAI’s smallest model. It also scores better than competing small models on the MMLU (an industry benchmark for reasoning).

Hit play to learn more, and then let us know what you think in the comments!
