The Robotics Summit & Expo is where robotics innovation comes to life. Join NXP at Booth #536 to see AI in action. Experience live demos, meet our experts and discover how intelligent systems perceive, decide and act at the edge.
May 27-28, 2026, Boston Convention and Exhibition Center, Booth #536
We’re bringing AI into motion. NXP technologies enable robots to perceive, reason and act in real time at the edge.
Explore how our platforms support multimodal sensing, low-latency decision-making and scalable system design to accelerate your next robotics innovation.
Join us at Booth #536 to experience how integrated platforms bring perception, AI processing, and control together to drive real-time robotic systems.
Our demos highlight scalable architectures designed for low latency, high reliability and intelligent decision-making at the edge.
| Demo Name | Demo Description |
|---|---|
| Real-Time Dexterous Robotic Hand | Precise, synchronized control of a multi-axis robotic hand powered by i.MX RT1180 and MCX MCUs. Deterministic motion control and EtherCAT networking enable smooth, responsive, human-like movement. |
| Seamless Human-Robot Interaction | See how next-generation edge conversational AI systems enhance RAG-based pipelines by leveraging multimodality to interpret user intent and respond in real time, without relying on the cloud. |
| Physical AI: from Vision to Action | Discover how Vision-Language-Action models enable robots to understand their environment and act in real time. Accelerated at the edge, this demo shows how perception drives intelligent decision-making. |
May 28 at 1:30 PM ET
Presented by: Enzo Ruedas, Machine Learning Engineer, NXP
Explore a new front end for robot conversation, where interaction moves beyond scripts into context‑aware dialogue. This session introduces a modular, end‑to‑end framework spanning attention, perception, reflection, and action. See how multimodal signals and agentic AI help robots reason, decide, and respond intelligently at the edge.