ZenO Unveils Public Beta for Physical AI Data

Catenaa, Monday, February 16, 2026 - ZenO announced the launch of its public beta, opening access to a platform for collecting first-person audio, video, and image data from smart glasses and smartphones.

The initiative aims to provide real-world datasets to train physical AI systems, including robots, autonomous agents, and embodied models, using Story’s Layer-1 blockchain for secure provenance and licensing.

Physical AI systems often rely on synthetic or web-scraped data, which limits real-world performance. ZenO addresses this gap by capturing egocentric data (what humans actually see, hear, and do), enabling AI models to better perceive environments, generalize across conditions, and perform tasks more accurately in real-world scenarios.

During the 6–8-week beta, participants can capture data with ZenO smart glasses or smartphones, upload it via the app, and have it processed through automated formatting, AI-assisted quality assurance, and human review.

Sensitive information is anonymized, and structured metadata is added to describe actions and environments. Approved datasets are stored in ZenO’s secure data marketplace infrastructure.
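
The announcement does not describe the pipeline's internals. As an illustration only, the minimal Python sketch below models the described flow (upload, automated formatting, AI-assisted QA, human review, then anonymization and metadata tagging) as a sequence of stages; every name here (CaptureRecord, auto_format, and so on) is hypothetical, not ZenO's actual code.

```python
# Hypothetical sketch of the described processing flow; ZenO's actual
# pipeline is not public. All names, fields, and stage order here are
# illustrative assumptions based only on the announcement.
from dataclasses import dataclass, field

@dataclass
class CaptureRecord:
    source: str                       # e.g. "smart_glasses" or "smartphone"
    media_type: str                   # "audio", "video", or "image"
    payload: bytes
    metadata: dict = field(default_factory=dict)
    status: str = "uploaded"

def auto_format(rec: CaptureRecord) -> CaptureRecord:
    # Placeholder for automated formatting (transcoding, chunking, etc.).
    rec.status = "formatted"
    return rec

def ai_qa(rec: CaptureRecord) -> CaptureRecord:
    # Placeholder for the AI-assisted quality checks the article mentions.
    rec.status = "qa_passed"
    return rec

def human_review(rec: CaptureRecord) -> CaptureRecord:
    # Placeholder for the human review step.
    rec.status = "approved"
    return rec

def anonymize_and_tag(rec: CaptureRecord) -> CaptureRecord:
    # Sensitive information is anonymized and structured metadata is added
    # to describe actions and environments (per the announcement).
    rec.metadata.update({"pii_removed": True,
                         "action": "unlabeled",
                         "environment": "unlabeled"})
    return rec

def process(rec: CaptureRecord) -> CaptureRecord:
    for stage in (auto_format, ai_qa, human_review, anonymize_and_tag):
        rec = stage(rec)
    return rec

if __name__ == "__main__":
    record = process(CaptureRecord("smart_glasses", "video", b"\x00"))
    print(record.status, record.metadata)
```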

Contributors are incentivized through immediate XP rewards during the beta and through stablecoin revenue sharing from downstream dataset sales.

Wallet-signed onchain consent and data identifiers provide verifiable authorization and provenance. Future releases will enable programmable data rights, licensing, and automated revenue distribution via the Story blockchain.
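
The article does not specify how consent records or data identifiers are structured on Story's blockchain. Purely as a sketch of the general pattern, the snippet below shows how a content-derived data identifier and a wallet-signable consent message could look; the field names, licensing scope, and address are assumptions, and the wallet signature itself is omitted.

```python
# Illustrative only: a content-addressed data identifier plus a canonical
# consent message a contributor's wallet might sign. The Story blockchain's
# actual consent and identifier formats are not described in the article.
import hashlib
import json
import time

def data_identifier(payload: bytes) -> str:
    # Content-addressed ID: the hash changes if the data changes,
    # which is what makes provenance verifiable.
    return "0x" + hashlib.sha256(payload).hexdigest()

def consent_message(contributor: str, data_id: str) -> str:
    # A canonical message for the contributor's wallet to sign; the
    # resulting signature would prove authorization for this exact dataset.
    return json.dumps({
        "contributor": contributor,        # hypothetical wallet address
        "data_id": data_id,
        "scope": "license-for-training",   # assumed licensing scope
        "timestamp": int(time.time()),
    }, sort_keys=True)

payload = b"example egocentric capture"
message = consent_message("0xABC...", data_identifier(payload))
# The message would then be signed by the contributor's wallet (e.g. an
# ECDSA signature) and anchored onchain; signing is omitted here.
print(message)
```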

ZenO recently joined NVIDIA Inception, leveraging NVIDIA's GPU ecosystem and cloud infrastructure to scale enterprise-grade data capture.

The beta marks a step toward building a robust physical AI data network capable of supporting large-scale, rights-cleared datasets for real-world AI applications.