Special Panel | Encord x Weights & Biases x Anyscale

Brains, Bodies & Benchmarks: Physical AI Panel

Speakers

Chris Paxton

AI Innovation Lead @ Agility Robotics

Jason Ma

Co-Founder @ Dyna Robotics

Daniel Ho

Director of Evaluations @ 1X

Adrian Li-Bell

Member of Technical Staff @ Physical Intelligence

Chris Paulson

Senior Dev Rel Manager, Robotics @ NVIDIA

About this Panel

This panel brings together leading experts from Agility Robotics, Dyna Robotics, 1X, Physical Intelligence, and NVIDIA to discuss the cutting edge of Physical AI. Learn about the real-world challenges of building embodied AI systems, the role of Vision-Language-Action models (VLAs) and World Models in robotics, and how top teams validate and benchmark their systems in production.

Featuring Chris Paxton (Agility Robotics), Daniel Ho (1X), Adrian Li-Bell (Physical Intelligence), and Jason Ma (Dyna Robotics), the discussion traces the transition from lab demos to real-world reliability, and dives deep into why the next two years could be the "iPhone moment" for the robotics industry.

Brains, Bodies, and the Inseparable Loop

One of the most profound takeaways from the panel was the rejection of the "brain-in-a-vat" approach. The experts agreed that in robotics, the software (brain) and hardware (body) are fundamentally connected.

  • Data-Centric Intelligence: Jason Ma (Dyna Robotics) emphasized that "brain and body are so connected" that a simple camera adjustment can be more effective than adding complex layers to a model.
  • The VLA Debate: While Vision-Language-Action (VLA) models are the current workhorses, the panel explored whether world models—which allow robots to "dream" or predict future states—will be the key to true generalization.

The Data Flywheel: From Lab to Living Room

The shift from "internet-scale data" to "robot-scale data" is the next frontier. The panelists discussed how to build a self-sustaining cycle where robots learn from their own deployments.

  • Safety as a Foothold: Daniel Ho (1X) explained their strategy of deploying "low-energy, safe" robots in homes. By solving tasks like laundry with an 80% success rate today, they create a data funnel that drives 99% reliability tomorrow.
  • Cross-Embodiment: Adrian Li-Bell (Physical Intelligence) highlighted training models that transfer across different robot types (from N embodiments to N+1). This "multitask generalization" ensures that when hardware iterates, the AI doesn't have to start from square one.
  • Filtering the Noise: Chris Paxton (Agility Robotics) warned that as fleets scale, most data becomes "worthless." The challenge is building infrastructure to identify the specific 0.1% of "failure moments" that actually teach the model something new.

The Reliability Bar: Why "Almost" Isn't Enough

The panel drew sharp contrasts between academic success and commercial viability. In a lab, a successful demo is a win; in a factory or home, a 0.1% failure rate could mean a robot falling every single day.

The Road to 2026 and Beyond

When asked how far we are from "generalized robotics," the answers were surprisingly optimistic:

  • The "iPhone Moment": Some panelists predict that in 2026, we will see the first major wave of robots providing real utility to consumers.
  • The Labor Replacement: For a robot that can "replace all labor" or fix a leaky pipe under a sink with no intervention, the timeline extends to 10+ years.
  • Intelligence for Free: Jason Ma noted a fascinating trend: as models get smarter, safety issues (like self-collisions) often "solve themselves" without extra engineering, simply because the model understands the physical world better.

Event Highlights