Automated Guided Vehicles (AGVs) have been augmenting internal logistics for some time now, and the technology has proved to be extremely valuable, though not perfect, in Industry 4.0 initiatives to increase automation.
Having AGVs perform menial, repetitive warehouse tasks, for example, has brought greater efficiency where expensive or under-utilised human resources were previously deployed; and because the machines don't keep office hours, operations can be scheduled 24/7, 365 days a year.
Even so, a new technology is in town to succeed the AGV. Autonomous Mobile Robots (AMRs) may sound similar in name, but they are a leap forward both technologically and financially.
One of the more significant challenges that several industries are attempting to tackle is automation of shelf inventory.
Whether it’s in the warehouse or middle aisle of the supermarket, knowing what you have on the shelf, and where on the shelf it is, is crucial to order fulfilment and a steady stream of sales.
The traditional model, of course, sees a fairly regular cycle of troops on the ground running up and down aisles with a clipboard and pencil marking numbers in boxes.
That way of thinking is being re-envisioned under the principles of Industry 4.0 to accommodate the workforce in a more productive capacity, cut costs and quash expensive errors.
In our last blog we talked about AMRs, a technological leap towards solving the problem of automated shelf inventory. But the robot ‘body’, if you will, is only half the equation. The other half is the ‘brains’. The robot must be able to reliably determine what is on the shelf in order to calculate and report inventory, a process often termed ‘vision recognition’.
Vision recognition software must accomplish two major feats to work effectively: object detection (distinguishing an object from the background) and object recognition (identifying what that object is). At a high level, the process requires training machine learning models on large labelled image datasets. To this end, several technologies are maturing.
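Stripped to its essentials, the two feats can be sketched in a few lines of NumPy. This is a deliberately toy illustration: the fixed brightness threshold stands in for a trained detector, and the crop stands in for the input to a trained classifier.

```python
import numpy as np

# Toy 8x8 greyscale frame: dark shelf background with one bright item on it.
frame = np.zeros((8, 8))
frame[2:5, 3:6] = 1.0

# 1) Object detection: decide which pixels belong to an object rather than
#    the background. A fixed threshold stands in for a trained detector here.
mask = frame > 0.5
ys, xs = np.nonzero(mask)
bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

# 2) Object recognition: crop the detected region so a classifier can say
#    what the object is. A real system would feed this crop to a trained model.
crop = frame[bbox[0]:bbox[2] + 1, bbox[1]:bbox[3] + 1]
```

Real detectors learn both steps from data rather than relying on a brightness threshold, but the detect-then-recognise structure is the same.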
Unsurprisingly, the Big Four of Amazon, Google, IBM and Microsoft all have similar Machine Learning (ML) offerings – let's look at Amazon's as an example. Its proprietary, cloud-based Amazon SageMaker is an end-to-end environment to build, train and deploy ML models. It facilitates the use of open-source frameworks like TensorFlow and Apache MXNet and trains models through the SageMaker engine. Ground Truth, an image-labelling service capable of automation, and the DeepLens camera, a specialty camera designed for vision recognition, are two of the differentiators that perhaps make the Amazon toolset the most complete and accessible of the options for developers looking to get started in vision recognition today.
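The build/train/deploy workflow these platforms wrap ultimately boils down to fitting a model on labelled images and then serving predictions. Here is a deliberately tiny stand-in for that loop – a nearest-centroid classifier on synthetic data – which has nothing to do with SageMaker's actual API but shows the train-then-predict shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "labelled image dataset": flattened 4x4 images of two product classes.
# Class 0 images are dark on average, class 1 images bright (synthetic data).
dark = rng.normal(0.2, 0.05, size=(20, 16))
bright = rng.normal(0.8, 0.05, size=(20, 16))

# "Build/train": a nearest-centroid model learns one mean image per class.
centroids = np.stack([dark.mean(axis=0), bright.mean(axis=0)])

# "Deploy/predict": assign a new image to the class of the closest centroid.
def predict(image):
    return int(np.argmin(np.linalg.norm(centroids - image, axis=1)))
```

A production system swaps the centroids for a deep network and the synthetic arrays for labelled shelf photos, but the train-once, predict-many pattern is what the cloud platforms operationalise.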
Outside of these 'premium' first-party offerings, however, there is no shortage of open-source tools to get started with for free. Some of the most impressive, like YOLO v3, Darknet and LabelImg, to name a few, are the starting point for many of the most exciting vision recognition projects being undertaken today.
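Detectors like YOLO v3 typically output many overlapping candidate boxes per object and prune them with non-maximum suppression (NMS). A minimal, framework-free sketch of that pruning step (the box coordinates and the 0.5 overlap threshold are illustrative, not taken from any particular model):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, drop any box overlapping it too much,
    # then repeat on what remains. Returns indices of the surviving boxes.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep
```

Given two near-identical boxes around one item and a third box elsewhere, NMS keeps only the best of the pair plus the third, which is how a detector avoids counting the same product twice.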
Between the emergence of AMRs and rapidly maturing vision recognition technology, automation of shelf inventory has never been more attainable. Industry 4.0 initiatives can have a robot intelligently determining where it needs to do a stock-check next in a 20,000 sq ft warehouse, avoiding fallen boxes, forklifts and workers on its way until it finally arrives and accurately reports that there are three items left on the shelf rather than the required four. All before the order fulfilment team even realised there was any order to fulfil.