A version of this Q&A first appeared in TechCrunch’s free robotics newsletter, Actuator. Subscribe here.
We’re wrapping up our end-of-year robotics Q&A series with this entry from Deepu Talla. We spoke when I paid a visit to NVIDIA’s Bay Area headquarters back in October. For more than a decade, Talla has been the chip giant’s Vice President and General Manager of Embedded & Edge Computing. He offers a unique insight into the state of robotics in 2023 and where things are headed. Over the past several years, NVIDIA has established itself as a major platform for robotics simulation, prototyping and deployment.
What role(s) will generative AI play in the future of robotics?
We’re already seeing productivity improvements with generative AI across industries. Clearly, GenAI’s impact will be transformative across robotics from simulation to design and more.
Simulation: Models will be able to accelerate simulation development, bridging the gap between 3D technical artists and developers by building scenes, constructing environments and generating assets. These GenAI assets will see increased use in synthetic data generation, robot skills training and software testing.
Multimodal AI: Transformer-based models will improve robots’ ability to understand the world around them, allowing them to work in more environments and complete complex tasks.
Robot (re)programming: Greater ability to define tasks and functions in simple language to make robots more general/multipurpose.
Design: Novel mechanical designs for better efficiency — for example, end effectors.
What are your thoughts on the humanoid form factor?
Designing autonomous robots is hard. Humanoids are even harder. Unlike most autonomous mobile robots (AMRs), which mainly understand floor-level obstacles, humanoids are mobile manipulators that will need multimodal AI to understand more of the environment around them. An incredible amount of sensor processing, advanced control and skills execution is required.
Breakthroughs in generative AI capabilities to build foundation models are making the robot skills needed for humanoids more generalizable. In parallel, we’re seeing advances in simulation that can train the AI-based control systems as well as the perception systems.
Following manufacturing and warehouses, what is the next major category for robotics?
Markets where businesses are feeling the effects of labor shortages and demographic shifts will continue to align with corresponding robotics opportunities. This spans robotics companies working across diverse industries, from agriculture to last-mile delivery to retail and more.
A key challenge in building autonomous robots for different categories is to build the 3D virtual worlds required to simulate and test the stacks. Again, generative AI will help by allowing developers to more quickly build realistic simulation environments. The integration of AI into robotics will allow increased automation in more active and less “robot-friendly” environments.
How far out are true general-purpose robots?
We continue to see robots becoming more intelligent and capable of performing multiple tasks in a given environment. We expect to see continued focus on mission-specific problems while making them more generalizable. True general-purpose embodied autonomy is further out.
Will home robots (beyond vacuums) take off in the next decade?
We’ll see useful personal assistants, lawn mowers and robots that assist the elderly in common use.
The trade-off that has hindered home robots to date is how much someone is willing to pay for a robot versus the value that robot delivers. Robot vacuums have long delivered value at their price point, hence their popularity.
Also, as robots become smarter, intuitive user interfaces will be key to increased adoption. Robots that can map their own environment and take instructions via speech will be easier for home consumers to use than robots that require some programming.
The next category to take off will likely be focused outdoors, with autonomous lawn care as an example. Other home robots, like personal and healthcare assistants, show promise but need to address the challenges of dynamic, unstructured indoor home environments.
What important robotics story/trend isn’t getting enough coverage?
The need for a platform approach. Many robotics startups are unable to scale because they build robots that work well only for a specific task or environment. For commercial viability at scale, it’s important to develop robots that are more generalizable, meaning they can rapidly add new skills or bring existing skills to new environments.
Roboticists need platforms with the tools and libraries to train and test AI for robotics. The platform should provide simulation capabilities to train models, generate synthetic data and exercise the entire robotics software stack, with the ability to run the latest and emerging generative AI models right on the robot.
Tomorrow’s successful startups and robotics companies will be those that focus on developing new robot skills and automation tasks while leveraging end-to-end development platforms to their full extent.