Interview With Geordie Rose - Co-Founder and CEO of Sanctuary AI

Published on: March 14, 2024
By Shauli Zacks

In a recent interview with SafetyDetectives, Geordie Rose, co-founder and CEO of Sanctuary AI, shares insights into the innovative forefront of artificial intelligence and robotics. With a rich background that spans founding the world’s first quantum computing company, D-Wave, and leading Kindred through pioneering uses of reinforcement learning, Rose’s journey is as diverse as it is impressive. At Sanctuary AI, his ambition soars towards creating human-like intelligence in general-purpose robots, aiming to revolutionize how we perceive and interact with AI and robotics in everyday life. From deploying humanoid robots in commercial settings to addressing labor shortages with advanced technology, Rose’s vision for Sanctuary AI encapsulates a future where robots and humans coexist seamlessly, enhancing efficiency and safety across various sectors.

Introduce yourself, your background, and your role at Sanctuary AI.

My name is Geordie Rose and I am a co-founder and CEO of Sanctuary AI, where I lead the company on its mission to create the world’s first human-like intelligence in general purpose robots. Prior to Sanctuary, I founded D-Wave, the world’s first quantum computing company, and was the CEO of Kindred, the world’s first robotics company to use reinforcement learning in a production environment. I have sold quantum computers and robots to Google, NASA, Lockheed Martin, Gap Inc., and several US government agencies.

I also hold a Ph.D. in theoretical physics from the University of British Columbia. I am a two-time Canadian national wrestling champion, a member of the McMaster University Hall of Fame, and was the winner of the 2010 NAGA masters white belt Brazilian jiu-jitsu world championships in both gi and no-gi categories.

What is Sanctuary AI?

Sanctuary AI is the only company on a mission to create the world’s first human-like intelligence in general purpose robots. It was founded in 2018 by some of the people who founded D-Wave and Kindred. D-Wave was the world’s first commercial supplier of quantum computers and Kindred was the first company to use reinforcement learning in a production robot.

We were the first company to deploy humanoid general-purpose robots in a commercial setting on March 7th last year. Our robots were capable of performing hundreds of tasks onsite at one of the biggest retailers in Canada.

While others are focused on developing special-purpose AI and special-purpose robots to address singular tasks or activities, we take a much more general-purpose approach to both AI and robotics. We publicly unveiled our 6th-generation humanoid robot, ‘Phoenix’, last year. Powered by a pioneering AI control system called Carbon, Phoenix robots have hands designed to mirror human dexterity and full body mobility.

Our general purpose robots are designed to help us work more safely, efficiently, and sustainably, and to address the labor challenges facing many organizations today. According to the U.S. Chamber of Commerce, there are 9.5 million job openings in the U.S. alone but only 6.5 million unemployed workers.

The Carbon-powered Phoenix robots were the only humanoids named on TIME’s Best Inventions of 2023 list, and are currently the most advanced general-purpose robots as measured by the hundreds of tasks that have been performed under teleoperation.

Can you share your insights on the current trends and advancements in artificial intelligence and robotics?

In the past year, humanoid robotics went mainstream. Public interest increased dramatically, with more companies coming out of the woodwork, and we’ve seen compelling hardware advancements across the industry. We’re particularly interested in the developments that have been made in relation to haptic feedback and robotic hands as those will play a key role in companies’ ability to create truly general purpose robots.

As people, so much of our learning comes from things other than text-based data, which is the way the majority of generative AI tools currently operate. We learn from senses like sound, sight, and touch as well. In order to build machines that understand and think like us, they need to experience moving through the world similarly to how we do. This understanding has caused more experts in the space to realize that to truly create artificial general intelligence (AGI), AI that thinks and understands like us, we must be focused on embodying the technology to give it that physical, real-world experience.

What areas within AI and robotics do you believe hold the most promise for future breakthroughs?

Historically, the robots that have been successful have been on the special purpose end of the spectrum. For example, a manufacturing robot might be designed to move a car chassis following a precise and specific pattern, or a vacuum cleaning robot might be designed to vacuum your floor. These special purpose robots are incredibly valuable but, by design, limited in scope. Neither of these robots could do the other’s job. At Sanctuary AI, we’re instead focusing our efforts on building general purpose robots that can do any work a person might reasonably be expected to do. The world, and the workplace, were designed by and for people. Thus, for a robot to be truly useful and effective in the workplace, it must have human-like intelligence, form, function, and senses. General purpose humanoid robots are designed to fit into any existing work environment, which is much easier and more cost-effective than restructuring an entire workplace around different special purpose tools.

A critical component needed to make robots truly general purpose is dexterous hands. Given that more than 98% of all work requires the dexterity of the human hand, one can’t create a truly useful humanoid robot without human-like hands. This is an area that Sanctuary AI has been intently focused on. With 19 degrees of freedom and proprietary haptic technology, Phoenix’s hands are designed to be as close as possible to human hands in their dexterity and fine manipulation capabilities. Our haptic technology provides the robots with a sense of touch, which is essential to performing tasks successfully.

Additionally, while the AI field accelerated rapidly in 2023, it’s all still locked in the digital world. In 2024, the key will be grounding AI in the physical world, including systems with touch and tactile feedback. We see “embodied AGI” as the next great opportunity in AI, and we’re getting incrementally closer to achieving it.

What role do explainability and transparency play in building trust with end-users when it comes to AI systems?

I’ve worked on this problem in different guises over the last ten years, and over this time I’ve learned that trust is impossible without explainability and transparency. We see a future where robots will work alongside people, and the only way this future can succeed is by creating a relationship between people and their robot colleagues that is grounded in trust.

We place transparency with end-users at the center of all of our public communications. We have a few different channels where we do this, such as the “Robots Doing Stuff” videos on our YouTube channel. This series shows one of our general purpose robots performing a task or series of tasks. Almost all of these videos come from one of our internal testing sessions that are conducted almost every day either in one of our labs or at one of our commercial customers’ facilities. The videos all include the timecode in the bottom right corner of the frame as a measure of full transparency from our engineering team: none of these videos are spliced together from multiple takes or sped up. They are all shown at real-time speed and from one single, continuous take.

Explainability is also a key technological component of our Carbon control system. Carbon features reasoning, task, and motion plans that are both explainable and auditable, so users can understand at a granular level what is happening in the AI system.

What advancements in natural language processing do you find most exciting, and how might they shape communication between humans and machines?

Within NLP, one of the most interesting developments of the last year has been the surge of Large Language Models (LLMs), a specialized subset of NLP. Anyone who has been following the progress of LLMs over the last ten years will attest to how fascinating it has been to watch.

Ten years ago, LLMs weren’t much more than a fun toy to play with which could, for example, generate the US constitution in the style of a nursery rhyme. This was the limit of the technology for a long time, and some people didn’t think it would ever get beyond that stage. But then, with the availability of enormous amounts of training data, progress exploded exponentially.

The next frontier will be when this starts to be applied to behavior. I think the next ‘ChatGPT moment’ will be when this happens to robot behavior and movements in the world. This is what we’re currently building, and we refer to it as ‘Large Behavioral Models’ (LBMs). This may also look like a toy at the moment – a robot that can move a block, stack a block, pick something up, or press a button.

We’re building this in a careful and considered manner at the moment, but at some point we believe that progress will see that same exponential curve that we saw with LLMs, and it will seem like robots are suddenly able to do everything seemingly out of nowhere.

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.