By Jeremy Cook
Stretching back to the ancient Greek legend of Talos, Crete’s mythological robotic protector, humans have long dreamed of using artificially intelligent beings to make our lives easier. The invention of the transistor and the ensuing explosion of available computing power mean that this idea is now a reality in the form of autonomous mobile robots.
What are autonomous mobile robots?
The first mobile robots were laboratory experiments such as Shakey the Robot, developed in the 1960s at the Stanford Research Institute. These experimental devices gave rise to mobile robots with real-world applications, such as the mail robot featured on the FX TV show The Americans. These sometimes-problematic devices were sold beginning in the 1970s. While nominally robotic, such mailbots are considered AGVs (automated guided vehicles) because they follow a fixed movement path embedded in the floor.
AGVs are limited in their flexibility and applications. With better sensor technology and massively improved computing power, AGVs have given way to AMRs, or autonomous mobile robots. Devices in this class understand and navigate through their environment without direct human intervention. They do not need a pre-determined path: they sense the environment, create an internal model, and then calculate an optimized route to complete their tasks.
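The sense-model-plan loop can be illustrated with a toy example. Assuming the robot has already built an occupancy-grid model of its surroundings, a breadth-first search finds a shortest obstacle-free route. This is a minimal sketch with hypothetical names; production AMRs use far richer maps and planners (A*, D* Lite, and the like):

```python
from collections import deque

def plan_route(grid, start, goal):
    """Find a shortest path through a 2D occupancy grid using
    breadth-first search. grid is a list of strings where '.' is a
    free cell and '#' an obstacle; start/goal are (row, col) tuples.
    Returns the path as a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # visited set + parent pointers
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                        # goal is walled off
```

Real planners replan continuously as sensor data updates the map, which is what lets an AMR react to, say, a pallet left in its path.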
These robots can react to changing warehouse or outdoor conditions and work with other robots and even humans. Computations may be performed onboard the robot or via local computing resources at the physical building where they operate. Autonomous mobile robot systems can also take advantage of cloud computing for optimal performance.
Autonomous mobile robots and their applications
Although AGVs were niche devices in the past—you’ve likely never seen an Americans-era mailbot in person—today, we’re seeing a proliferation of autonomous mobile robotics driven by applications in warehouse management, most notably at Amazon. The warehousing and delivery behemoth has installed over 350,000 robots in its facilities since their introduction in 2012.
Automation has allowed Amazon warehouses to stock 50% more items and retrieve them three times faster than before. The cost of fulfillment has fallen by 40%, and automating repetitive tasks lets Amazon optimize procedures to reduce human injuries.
Amazon’s fleet of autonomous mobile robots primarily consists of short robots resembling heavily upgraded Roombas. These robots slide under inventory shelves, known as pods, pick them up with an integrated jack, and then cart them off to human pickers at around three miles per hour. Human workers can then retrieve items without trekking through the warehouse. While a robotic warehouse may appear chaotic to a casual observer, movements are coordinated by a central controller that allows each AMR to reach its destination.
After warehousing, the next step for AMRs is moving items to their final destination. Human drivers still handle most of this work, but new devices like Starship Technologies’ six-wheeled delivery bot and Amazon’s (now-ended) trial of the similar Scout robot could eventually let companies forgo drivers altogether. Aerial drones could be another step in this evolution.
Beyond getting products from one place to another, AMRs could be used for automated inspection, especially in areas where it is dangerous for humans to travel. In some situations, such as deep underground, RF interference makes direct human supervision difficult, so autonomy becomes essential. Automatic UV sanitization of hospitals is a similar application: a task that robots can accomplish (not simply report on) without human interaction.
How do autonomous mobile robots work?
With current advances in cloud computing and more capable edge processing, robot control has improved vastly. These robots must maintain up-to-date information about the world around them via local sensing and must control their movement through an appropriate motor and drive setup.
These technologies will be essential to the next autonomous mobile robotic revolution:
- Machine Vision – Uses one or more cameras to see in much the same manner as humans. AI, machine learning, and more capable edge processing expand what these systems can recognize.
- LIDAR – Light detection and ranging. Generates a point cloud to map the surrounding area in 3D.
- Edge AI Computing – Powerful processing modules onboard mobile robots allow for real-time decision-making without the latency inherent in accessing far-off cloud resources. NVIDIA’s Jetson line of single-board computers (SBCs) is a popular platform for experimenting with edge AI.
- Wireless Networking – Allows access to cloud and/or local (facility level) computing resources and coordinates multiple robots working as a swarm.
- Motor Driver – Directly controls robot motors.
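As a tiny illustration of how LIDAR sensing and motor control fit together, the sketch below turns a 2D LIDAR scan into per-wheel speed commands using standard differential-drive kinematics. The function names, thresholds, and wheel-base value are hypothetical; a real AMR would run a full navigation stack (e.g., ROS) rather than this reactive loop:

```python
import math

def wheel_speeds(v, omega, wheel_base=0.4):
    """Differential-drive kinematics: convert a desired forward
    velocity v (m/s) and turn rate omega (rad/s) into (left, right)
    wheel speeds (m/s) for a motor driver to execute."""
    return v - omega * wheel_base / 2, v + omega * wheel_base / 2

def avoid_obstacles(scan, safe_distance=0.5):
    """Very simple reactive controller. scan is a list of
    (angle_rad, range_m) pairs from a 2D LIDAR, with angle 0 straight
    ahead. If anything within +/-30 degrees of forward is closer than
    safe_distance, rotate in place; otherwise drive straight."""
    blocked = any(abs(angle) < math.radians(30) and dist < safe_distance
                  for angle, dist in scan)
    if blocked:
        return wheel_speeds(0.0, 1.0)   # stop and rotate in place
    return wheel_speeds(0.5, 0.0)       # cruise straight ahead
```

Even this crude loop shows why onboard (edge) processing matters: the sense-decide-actuate cycle must run many times per second, far faster than a round trip to a distant cloud server allows.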

Autonomous mobile robot design: Near limitless potential
While the current thrust of AMR technology is largely around warehouse automation, robotics for UV sanitization was a hot topic during the height of the COVID pandemic, and mobile security patrol applications are being explored today.
The NYPD recently unveiled plans to start using a 400-pound, five-foot-tall, wheeled Knightscope K5 robot. Whether the public will embrace this robot is still an open question. The department experimented with a Spot robot in 2021 before it was taken out of service after public backlash. This new Knightscope robot could be described as R2-D2 morphed with a gigantic egg, or as one commenter noted, “a Disney version of a Dalek.”
The field of human policing may be safe for now, but autonomous mobile robots will undoubtedly continue to significantly affect our lives. The question is whether they will be seen in public or hidden behind warehouse and industrial facility walls.
Autonomous mobile robot design involves several key components, such as brushless DC (BLDC) motors, sensors, and power supplies. Explore these products and more at Arrow.com.