Google announces Gemma 2, a 27B-parameter version of its open model, launching in June

The standard Gemma models were only available in 2-billion- and 7-billion-parameter versions, making this quite a step up.

On Tuesday, at its annual Google I/O 2024 developer conference, Google announced a number of new additions to Gemma, its family of open (but not open source) models.

The headline-grabbing release here is Gemma 2, the next generation of Google’s Gemma models, which will launch with a 27-billion-parameter model in June.

Already available is PaliGemma, a pre-trained Gemma variant that Google describes as “the first vision language model in the Gemma family” for image captioning, image labeling and visual Q&A use cases.

Until now, the standard Gemma models, which launched earlier this year, were only available in 2-billion-parameter and 7-billion-parameter versions, making this new 27-billion-parameter model quite a step up.

In a briefing ahead of Tuesday’s announcement, Josh Woodward, Google’s VP of Google Labs, noted that the Gemma models have been downloaded “millions of times” across the various services where they’re available. He stressed that Google optimized the 27-billion-parameter model to run on Nvidia’s next-gen GPUs, a single Google Cloud TPU host and the managed Vertex AI service.

Size doesn’t matter, though, if the model isn’t any good. Google hasn’t shared a lot of data about Gemma 2 yet, so we’ll have to see how it performs once developers get their hands on it. “We’re already seeing some great quality. It’s outperforming models two times bigger than it already,” Woodward said.
