Amazon takes ‘major step’ toward zero-touch manufacturing facilities with Nvidia AI

Amazon, arguably the world’s largest user of robotics technologies, is said to be “powering big leaps” in zero-touch manufacturing with new physical AI software solutions and digital twin technologies from Nvidia.

Deployed this month at an unnamed Amazon Devices & Services facility, the company’s innovative, simulation-first approach for zero-touch manufacturing trains robotic arms to inspect diverse devices for product-quality auditing and integrate new goods into the production line – all based on synthetic data, without requiring hardware changes.

This new technology brings together Amazon Devices-created software that simulates processes on the assembly line with products in Nvidia-powered digital twins. Using a modular, AI-powered workflow, the technology offers faster, more efficient inspections compared with the previously used audit machinery.

Simulating processes and products in digital twins eliminates the need for expensive, time-consuming physical prototyping. This eases manufacturer workflows and reduces the time it takes to get new products into consumers’ hands.

To enable zero-shot manufacturing for the robotic operations, the solution uses photorealistic, physics-enabled representations of Amazon devices and factory workstations to generate synthetic data.

This factory-specific data is then used to enhance AI model performance, both in simulation and at the real workstation, minimizing the simulation-to-real gap before deployment.
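Synthetic data of this kind is typically produced with domain randomization: each rendered image uses a different random combination of lighting, camera pose, and background so the trained model doesn't overfit to one scene. The sketch below illustrates the idea in plain Python; the parameter names and ranges are purely illustrative assumptions, not Amazon's or Nvidia's actual configuration.

```python
import random

def randomize_render_params(seed=None):
    """Sample one set of scene parameters for a synthetic training image.

    All parameter names and ranges are hypothetical, chosen only to show
    the shape of a domain-randomization step.
    """
    rng = random.Random(seed)
    return {
        "light_intensity": rng.uniform(200.0, 1500.0),     # lux
        "light_temperature": rng.uniform(2700.0, 6500.0),  # kelvin
        "camera_distance_m": rng.uniform(0.3, 1.2),
        "device_yaw_deg": rng.uniform(0.0, 360.0),
        "background_texture": rng.choice(["conveyor", "tray", "fixture"]),
    }

# One parameter set per synthetic image, matching the scale of dataset
# described in this article:
batch = [randomize_render_params(seed=i) for i in range(50_000)]
```

Seeding each sample makes the dataset reproducible, which matters when a detection model trained on it later needs to be audited or retrained.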

It’s a huge step toward generalized manufacturing: the use of automated systems and technologies to flexibly handle a wide variety of products and production processes – even without physical prototypes.

Although Amazon hasn’t named the specific facilities undergoing this transformation, Amazon Devices & Services manufactures products in multiple locations.

The company has a strong presence in the Puget Sound region of Washington state, including Seattle and Kirkland, as well as in Arlington, Virginia. It also has facilities across North America and globally.

AI, digital twins for robot understanding

By training robots in digital twins to recognize and handle new devices, Amazon Devices & Services is equipped to build faster, more modular and easily controllable manufacturing pipelines, allowing lines to change from auditing one product to another simply via software.
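Switching a line from one product to another "simply via software" amounts to loading a different configuration rather than refitting hardware. A minimal sketch of that idea follows; the product names, model identifiers, and fields are all hypothetical, invented here for illustration.

```python
# Hypothetical product profiles. In the workflow described above, changing
# which device a station audits is a software change, not a hardware one.
PRODUCT_PROFILES = {
    "smart_speaker": {"detector": "speaker_defect_v2", "grip_force_n": 8.0},
    "streaming_stick": {"detector": "stick_defect_v1", "grip_force_n": 3.5},
}

def configure_line(product_id):
    """Return the audit configuration for one product, purely from software."""
    profile = PRODUCT_PROFILES[product_id]
    return {
        "product": product_id,
        "defect_model": profile["detector"],
        "grip_force_n": profile["grip_force_n"],
    }

# Retargeting the station is one call with a different product ID:
station = configure_line("smart_speaker")
```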

Robotic actions can be configured to manufacture products purely based on training performed in simulation – including for steps involved in assembly, testing, packaging and auditing.

A suite of Nvidia Isaac technologies enables Amazon Devices & Services’ physically accurate, simulation-first approach.

When a new device is introduced, Amazon Devices & Services puts its computer-aided design (CAD) model into Nvidia Isaac Sim, an open-source robotics simulation reference application built on the Nvidia Omniverse platform.

Nvidia Isaac is used to generate over 50,000 diverse, synthetic images from the CAD models for each device, crucial for training object- and defect-detection models.

Then, Isaac Sim processes the data and taps into Nvidia Isaac ROS to generate robotic arm trajectories for handling the product.

The development of this technology was significantly accelerated by AWS: distributed AI model training on Amazon Devices’ product specifications ran on Amazon EC2 G6 instances via AWS Batch, while Nvidia Isaac Sim performed physics-based simulation and synthetic data generation on the same EC2 G6 instance family.

The solution uses Amazon Bedrock – a service for building generative AI applications and agents – to plan high-level tasks and specific audit test cases at the factory based on analyses of product-specification documents.

Amazon Bedrock AgentCore will be used for autonomous-workflow planning for multiple factory stations on the production line, with the ability to ingest multimodal product-specification inputs such as 3D designs and surface properties.

To help robots understand their environment, the solution uses Nvidia cuMotion, a CUDA-accelerated motion-planning library that can generate collision-free trajectories in a fraction of a second on the Nvidia Jetson AGX Orin module.

The nvblox library, part of Isaac ROS, generates distance fields that cuMotion uses for collision-free trajectory planning.
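A distance field answers one question for the planner: how far is a given point from the nearest obstacle? The sketch below shows that query pattern with a closed-form signed distance function standing in for a real nvblox field; the obstacle, clearance value, and waypoints are illustrative assumptions only.

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance from `point` to a spherical obstacle: negative inside,
    positive outside. nvblox builds comparable distance fields from sensor
    data; this closed-form stand-in is for illustration only."""
    return math.dist(point, center) - radius

def trajectory_is_collision_free(waypoints, obstacle_center, obstacle_radius,
                                 clearance=0.05):
    """Reject any trajectory whose waypoints come within `clearance` metres
    of the obstacle, mimicking how a planner queries a distance field."""
    return all(
        sphere_sdf(p, obstacle_center, obstacle_radius) > clearance
        for p in waypoints
    )

# A straight-line path that skirts a 10 cm-radius obstacle:
path = [(-0.5, 0.3, 0.2), (0.0, 0.3, 0.2), (0.5, 0.3, 0.2)]
safe = trajectory_is_collision_free(path, (0.0, 0.0, 0.2), 0.10)
```

A real planner like cuMotion evaluates this kind of query for thousands of candidate trajectory points in parallel on the GPU, which is what makes sub-second collision-free planning feasible.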

FoundationPose, an Nvidia foundation model trained on 5 million synthetic images for pose estimation and object tracking, helps ensure the Amazon Devices & Services robots know the accurate position and orientation of the devices.

Crucial for the new manufacturing solution, FoundationPose can generalize to entirely new objects without prior exposure, allowing seamless transitions between different products and eliminating the need to collect new data to retrain models for each change.

As part of product auditing, the new solution’s approach is used for defect detection on the manufacturing line. Its modular design allows for future integration of advanced reasoning models like Nvidia Cosmos Reason.

