Successfully putting a plant into a planter, placing snacks in containers, and sorting laundry doesn’t necessarily mean you’re going to have a humanoid robot in your home next year. But the dream of offloading home cleaning, maintenance, and maybe even cooking is getting a little more real: Google DeepMind has shown off Apptronik’s Apollo robot obeying verbal commands and completing tasks with objects it had never seen before.
In the video, Google shows robots opening Ziploc bags, inserting bread into them, sorting laundry by color, and manipulating oddly shaped real-world items that are sometimes squishy and sometimes difficult to pick up. The robots understand commands like “pick up the green block” or “sort this laundry into darks and whites,” and adapt to changes in the environment when their trainers move containers or objects they’re trying to pick up.
But they’re not fast.
“Sometimes they’re a bit clunky,” says Hannah Fry, a mathematician, broadcaster, and host of the DeepMind podcast. “But you have to remember that this idea of having a robot that can understand semantics, that can get a contextual view of the scene in front of it, that can reason through complex tasks was completely inconceivable just a few years ago.”
Google invested in Apptronik’s massive $403 million funding round earlier this year. In December last year, Apptronik announced a strategic partnership with Google DeepMind’s robotics lab to “bring together best-in-class artificial intelligence with cutting-edge hardware and embodied intelligence.”
To roughly over-generalize: Apptronik brings the robots, Google brings the brain. That brain has recently gotten a lot smarter with Gemini 3, and the robotics-specific version of Gemini, Gemini Robotics, is explicitly designed to support multiple embodiments – from dual-arm industrial robots to full humanoid form factors like Apollo – without retraining for each body.
The goal: a general purpose robot that can, essentially, do everything.
In other words, Apptronik’s Apollo is being trained to do more than pick up boxes or repeat pre-programmed basic factory motions. It’s being taught to navigate the messy, unpredictable world that we humans inhabit: packing lunches, sorting laundry, opening unknown containers, and even responding gracefully when given previously unseen objects or tasks.
The latest hardware and software are getting much better at demonstrating that promise. Figure, which has shown its humanoid robot running smoothly and gracefully, has also shown it dealing with typical home challenges: putting dishes in the dishwasher, putting groceries away, and so on. The pace of development has massively accelerated in the past two years thanks to better AI, better hardware, and cheaper components.
If this most recent lab-based demo proves durable in the real world, it signals something fairly consequential. DeepMind and Apptronik are fusing high-quality humanoid hardware and foundation-model intelligence into a general-purpose robot that can perform a broad range of everyday physical tasks with minimal retraining. The result, potentially, could be the long-imagined “universal robot worker”: a cost-effective machine that can understand instructions, plan multi-step procedures, adapt to new objects, and execute tasks with near-human dexterity.
Don’t hold your breath on that dexterity piece. Robots aren’t close yet: Google’s demo of putting a slice of bread into a Ziploc bag is all well and good, but watch closely and you’ll note that the robot never actually seals the bag. That’s an astoundingly difficult task that even humans sometimes struggle with.
Still, Apollo with Google’s DeepMind AI accomplishes at least four things:
- Dexterity: delicately manipulating non-standard items, like a bag of chips
- Generalization: correctly handling objects it had never seen before
- Natural-language control: obeying verbal commands that require a significant world model, like “put the green block in the orange tray”
- Long-horizon planning: planning out multiple steps to accomplish a task
But there’s a long way to go. Humanoid robots will need to get faster at accomplishing tasks: right now they all look like they’re moving in slow motion when handling objects and doing work. That means better hardware: joints, muscles (actuators), and control systems.
In addition, they need better training methods.
“These robots take a lot of data to learn these tasks,” says Kanishka Rao, director of robotics at Google DeepMind. “So we need a breakthrough where they can learn more efficiently with data.”
That’s interaction data and manipulation data: data that robotic brains can take and use to learn how to do tasks that they’ve never been faced with before.
And finally, of course, they need to be guaranteed safe to use in human environments, where they might encounter people – including children – and pets.
Plus, of course, grandma’s good china.