© 2024 TechCrunch. All rights reserved. For personal use only.
Google’s going all-in on AI — and it wants you to know it. During the company’s keynote at its I/O developer conference on Tuesday, Google mentioned “AI” more than 120 times. That’s a lot!
But not all of Google’s AI announcements were significant per se. Some were incremental. Others were rehashed. So to help sort the wheat from the chaff, we rounded up the top new AI products and features unveiled at Google I/O 2024.
Google plans to use generative AI to organize entire Google Search results pages.
What will AI-organized pages look like? Well, it depends on the search query. But they might show AI-generated summaries of reviews, discussions from social media sites like Reddit and AI-generated lists of suggestions, Google said.
For now, Google plans to show AI-enhanced results pages when it detects a user is looking for inspiration — for example, when they’re trip planning. Soon, it’ll also show these results when users search for dining options and recipes, with results for movies, books, hotels, ecommerce and more to come.
Image Credits: Google
Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it.
The company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.
Gemini Live — which won’t launch until later this year — can answer questions about things within view (or recently within view) of a smartphone’s camera, like which neighborhood a user might be in or the name of a part on a broken bicycle. The technical innovations driving Live stem in part from Project Astra, a new initiative within DeepMind to create AI-powered apps and “agents” for real-time, multimodal understanding.
Image Credits: Google
Google’s gunning for OpenAI’s Sora with Veo, an AI model that can create 1080p video clips around a minute long given a text prompt.
Veo can capture different visual and cinematic styles, including shots of landscapes and time lapses, and make edits and adjustments to already generated footage. The model understands camera movements and VFX reasonably well from prompts (think descriptors like “pan,” “zoom” and “explosion”). And Veo has somewhat of a grasp on physics — things like fluid dynamics and gravity — which contribute to the realism of the videos it generates.
Veo also supports masked editing for changes to specific areas of a video and can generate videos from a still image, à la generative models like Stability AI’s Stable Video. Perhaps most intriguing, given a sequence of prompts that together tell a story, Veo can generate longer videos — videos beyond a minute in length.
Image Credits: TechCrunch
Google Photos is getting an AI infusion with the launch of an experimental feature, Ask Photos, powered by Google’s Gemini family of generative AI models.
Ask Photos, which will roll out later this summer, will allow users to search across their Google Photos collection using natural language queries that leverage Gemini’s understanding of their photos’ content — and other metadata.
For instance, instead of searching for a specific thing in a photo, such as “One World Trade,” users will be able to perform much broader and more complex searches, like finding the “best photo from each of the National Parks I visited.” In that example, Gemini would use signals including lighting, blurriness and lack of background distortion to determine what makes a photo the “best” in a given set, then combine that with an understanding of geolocation info and dates to return the relevant images.
Image Credits: TechCrunch
Gmail users will soon be able to search, summarize and draft emails, courtesy of Gemini — as well as take action on emails for more complex tasks, like helping process returns.
In one demo at I/O, Google showed how a parent who wanted to catch up on what was going on at their child’s school could ask Gemini to summarize all the recent emails from the school. In addition to the body of the emails themselves, Gemini will also analyze attachments, such as PDFs, and spit out a summary with key points and action items.
From a sidebar in Gmail, users can ask Gemini to help them organize receipts from their emails and even put them in a Google Drive folder, or extract information from the receipts and paste it into a spreadsheet. If that’s something you do often — for example, as a business traveler tracking expenses — Gemini can also offer to automate the workflow for use in the future.
Google previewed an AI-powered feature to alert users to potential scams during a call.
The capability, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google’s generative AI offering, which can be run entirely on-device, to listen for “conversation patterns commonly associated with scams” in real time.
No specific release date has been set for the feature. As with many of these announcements, Google is previewing what Gemini Nano will be able to do at some point down the road. We do know, however, that the feature will be opt-in — which is a good thing. While the use of Nano means the system won’t be automatically uploading audio to the cloud, the system is still effectively listening to users’ conversations — a potential privacy risk.
Image Credits: Google
Google is enhancing its TalkBack accessibility feature for Android with a bit of generative AI magic.
Soon, TalkBack will tap Gemini Nano to create aural descriptions of objects for low-vision and blind users. For example, TalkBack might refer to an article of clothing as, “A close-up of a black and white gingham dress. The dress is short, with a collar and long sleeves. It is tied at the waist with a big bow.”
According to Google, TalkBack users encounter around 90 or so unlabeled images per day. Using Nano, the system will be able to offer insight into content — potentially forgoing the need for someone to input that information manually.