
Physical Intelligence

Physical Intelligence is a robotics AI lab that ships vision-language-action foundation models (including the π family) so one policy stack can be fine-tuned across arms, mobile bases, and manipulation tasks instead of hand-built controllers per robot line.

Physical Intelligence describes its mission as bringing general-purpose AI into the physical world through foundation models and learning algorithms meant to control many robot types. Public materials emphasize vision-language-action (VLA) policies, online reinforcement learning, and memory for long tasks rather than single-purpose scripts.

In February 2025 the team released code and weights for the π0 base model under the openpi GitHub repository so researchers can run inference, fine-tune on their own platforms, and try checkpoints tuned for widely documented setups such as ALOHA and DROID (Physical Intelligence, 2025). The same post notes that internal experiments often used roughly one to twenty hours of task data for fine-tuning, with results varying by platform.
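As a rough sketch of what running the open release looks like (module paths, the config name, and the observation keys below follow the patterns in the openpi README and should be treated as assumptions to verify against the current repository), loading a published checkpoint and querying it for an action chunk takes only a few lines:

```python
# Sketch of openpi inference, following the patterns in the repository's
# README. Config name, checkpoint URI, and observation keys are assumptions;
# check the openpi docs for the exact names on your platform.
import numpy as np

from openpi.policies import policy_config
from openpi.shared import download
from openpi.training import config as openpi_config

# Pick a published config (here, a DROID-tuned checkpoint) and fetch weights.
cfg = openpi_config.get_config("pi0_fast_droid")
ckpt = download.maybe_download("s3://openpi-assets/checkpoints/pi0_fast_droid")
policy = policy_config.create_trained_policy(cfg, ckpt)

# Observations are a dict of camera frames, proprioceptive state, and a
# language prompt; dummy arrays stand in for real sensor data here.
observation = {
    "observation/exterior_image_1_left": np.zeros((224, 224, 3), np.uint8),
    "observation/wrist_image_left": np.zeros((224, 224, 3), np.uint8),
    "observation/joint_position": np.zeros(7, np.float32),
    "observation/gripper_position": np.zeros(1, np.float32),
    "prompt": "pick up the cup",
}
action_chunk = policy.infer(observation)["actions"]  # (horizon, dof) array
```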

Newer blog and research entries on the site track fast-moving model generations (for example π0.5 and π* variants), memory architectures for long-horizon behavior, and partner deployments. The company lists a San Francisco Mission District office and active hiring across research, robotics software, and build roles.

Commercial collaboration is pitched alongside the open release: the openpi page invites email contact for partnerships that customize models, co-develop features, and support additional hardware. Treat licensing, export, and safety review as part of any production rollout, especially outside research labs.

Specifications

Pricing

Freemium

Platforms

Linux

Used for

  • Generalist robot policies
  • Manipulation research
  • Fine-tuning on custom robots

Used by

  • Robotics researchers
  • ML engineers
  • Robot platform teams

Tasks

  • Policy fine-tuning
  • Vision-language-action modeling
  • Robot fleet experimentation

Pros and cons

Pros

  • Strong transparency through papers, blogs, and an open-weight π0 drop aimed at community experimentation.
  • Clear story on multi-robot generalization versus one-off scripted cells.
  • Active research cadence with dated posts useful for tracking model generations.

Cons

  • Open releases are positioned as experiments; not every hardware stack will reach production reliability without heavy validation.
  • The default tooling in public repos may favor research-oriented stacks that differ from some factory IT standards.
  • Enterprise pricing and SLAs are not published on the marketing site.

Key features

  • π family VLAs: Generalist policies described as controlling multiple robot morphologies after fine-tuning on modest task data.

  • openpi release: Public GitHub repository with π0 code and weights plus example integrations and checkpoints for common research platforms.

  • FAST action tokenization: Research posts describe a frequency-space tokenizer that compresses continuous action chunks into discrete tokens, enabling faster training and alternate inference paths (see the sketch after this list).

  • Research pipeline: Regular posts on reinforcement learning, memory, and transfer from human video to robot control.

  • Partner programs: Official pages route hardware teams and enterprises to collaboration email for tailored model work.
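For intuition about the FAST idea referenced above, here is a toy Python sketch of frequency-space action tokenization: compress an action chunk with a DCT, then quantize the coefficients into integer tokens. This is a conceptual stand-in, not Physical Intelligence's released tokenizer, which is more involved (the FAST work also applies byte-pair encoding on top of the quantized coefficients).

```python
# Toy illustration of frequency-space action tokenization in the spirit of
# FAST: DCT-compress an action chunk, then round scaled coefficients to
# integers. The released tokenizer is more involved (e.g. it adds BPE).
import numpy as np
from scipy.fft import dct, idct

def tokenize(chunk: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Map a (horizon, dof) action chunk to integer tokens."""
    coeffs = dct(chunk, axis=0, norm="ortho")  # decorrelate along time
    return np.round(coeffs * scale).astype(np.int32)

def detokenize(tokens: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Approximately invert the quantized DCT back to continuous actions."""
    return idct(tokens.astype(np.float64) / scale, axis=0, norm="ortho")

# Smooth trajectories concentrate energy in a few low-frequency coefficients,
# so most tokens are zero and the sequences compress well.
chunk = np.sin(np.linspace(0.0, np.pi, 50))[:, None] * np.ones((1, 7))
tokens = tokenize(chunk)
recon = detokenize(tokens)
print(np.abs(recon - chunk).max())  # small quantization error
```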

Pricing

openpi (model code and weights)

Free

Released for research use; verify license terms in the GitHub repository and any partner agreement for production.

Enterprise partnership (contact)

Contact for pricing

Quoted engagements for custom models and forward-deployed support; request terms from Physical Intelligence.

Frequently asked questions

What does Physical Intelligence sell?

The company builds embodied foundation models and publishes research on vision-language-action policies, reinforcement learning, and memory for robots. It distributes the π0 base model openly via the openpi repository while also inviting partners to co-develop customized deployments through commercial channels.

Is ??0 free to download?

Physical Intelligence released π0 code and weights in the openpi GitHub project in February 2025 for experimentation and fine-tuning. You still need compatible robots, compute, and legal review for real-world use; partner programs cover supported enterprise rollouts.

Which robots are supported out of the box in openpi?

Documentation highlights checkpoints and examples tied to widely shared research platforms such as ALOHA-style dual arms and DROID-style Franka setups, plus simulation hooks such as Libero. Other hardware requires building a client and fine-tuning with your own data, as sketched below.
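As a sketch of that client-building step, assuming the openpi-client package's websocket interface described in the repository (the host, port, observation keys, and control loop below are placeholders for your own hardware):

```python
# Minimal sketch of a robot-side client talking to an openpi policy server
# (scripts/serve_policy.py in the repo), assuming the openpi-client
# websocket interface. Keys and the control loop are placeholders.
import numpy as np
from openpi_client import websocket_client_policy

# The policy server typically runs on a GPU machine on your network.
policy = websocket_client_policy.WebsocketClientPolicy(host="10.0.0.5", port=8000)

for _ in range(100):  # naive fixed-length control loop
    observation = {
        "observation/image": np.zeros((224, 224, 3), np.uint8),  # camera frame
        "observation/state": np.zeros(7, np.float32),            # joint state
        "prompt": "fold the towel",
    }
    # The server returns a chunk of future actions; execute some prefix of
    # it before querying again.
    actions = policy.infer(observation)["actions"]
    # send actions to your robot controller here
```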

How much data do I need to fine-tune π0?

The open-sourcing blog cites internal experiments where roughly one to twenty hours of demonstrations often sufficed for several tasks, with the caveat that mileage varies by environment, gripper, and task difficulty.

Where is Physical Intelligence located?

Hiring pages list a San Francisco Mission District office and note that some roles are remote-friendly; confirm current location and travel expectations when applying or contracting.

How do I get commercial support?

The openpi announcement lists collaboration and research email addresses for partnership questions, technical feedback, and customization work beyond the public repository.
