We are now in the fourth industrial revolution, which applies the latest emerging technologies to improve virtually every industry. Systems can now connect everything by sharing information among devices, machines, and the factory itself. Smart devices in a factory setting are able to manage manufacturing processes independently. Sensor data, gathered across every aspect of the manufacturing environment, is a key source of critical information, and that information is routed to a higher-level decision-making system. In this article, I will present a few of the latest applications of smart sensors, with system architecture examples that follow the IIoT approach for the smart factory.

Automated guided vehicles (AGVs)1

Materials and goods need to move around a factory floor expeditiously and arrive at the right place at the right time. In the past this was the job of humans, perhaps speeding around the factory in small electric vehicles to deliver the needed supplies. Today, we have the technology to bring these materials autonomously via the AGV.

To do this successfully and effortlessly, we need to have the factory floor properly mapped into a central computer system. One method to enable this solution uses radio frequency identification (RFID) technology embedded in the floor paths so that the AGV can make route decisions and identify the final location(s) of materials.

An Android app coupled with Bluetooth communication is one tried-and-true solution to control these AGV ‘robots.’ RFID-based route guidance is well suited to Industry 4.0 because it captures real-time data for smart factory logistics.

The AGV can be equipped with sensors and actuators, as well as processors and communication systems. Path-planning algorithms are essential in this process; one function they perform is collision avoidance, since many vehicles traverse the factory floor at the same time.
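To make the anti-collision idea concrete, here is a minimal Python sketch of one common approach: each AGV must reserve the next floor segment (identified here by its RFID tag ID) before entering it. The segment names and the reservation scheme are illustrative only and are not taken from Reference 1.

```python
# Minimal sketch of one anti-collision idea: an AGV must reserve the next
# floor segment (identified here by its RFID tag ID) before entering it.
# Segment names and the reservation scheme are illustrative only.

class SegmentReservations:
    def __init__(self):
        self.owner = {}  # tag_id -> agv_id currently holding that segment

    def request(self, agv_id: str, tag_id: str) -> bool:
        """Grant the segment if it is free or already held by this AGV."""
        holder = self.owner.get(tag_id)
        if holder is None or holder == agv_id:
            self.owner[tag_id] = agv_id
            return True
        return False  # another AGV holds it; the caller should stop and retry

    def release(self, agv_id: str, tag_id: str) -> None:
        """Free the segment once the AGV has driven past it."""
        if self.owner.get(tag_id) == agv_id:
            del self.owner[tag_id]


reservations = SegmentReservations()
print(reservations.request("AGV-1", "TAG-07"))  # True: segment granted
print(reservations.request("AGV-2", "TAG-07"))  # False: AGV-2 must wait
reservations.release("AGV-1", "TAG-07")
print(reservations.request("AGV-2", "TAG-07"))  # True: now free
```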

In this factory environment, the AGV’s ‘map,’ a kind of ‘indoor GPS,’ is RFID. Small tags are embedded in the factory floor and a tag reader is mounted on the underside of the AGV. Since these RFID tags are passive, they need no power source of any kind; the RF signal from the transmitter in the reader activates the tag, which sends its information back to the AGV. The tags do not need line-of-sight, which is very helpful in a harsh factory environment (Figure 1).


Figure 1
A simulated factory floor embedded with RFID tags (Image courtesy of Reference 1)

The test system shown above runs at 13.56 MHz. The AGVs will stop at select areas to simulate picking up or dropping off materials or products. In this case an Android phone acts as the central processor and control center.
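As a rough illustration of how a tag read could be turned into a position fix and a routing decision, here is a short Python sketch; the tag IDs, coordinates, and route table are hypothetical and do not reflect the encoding used in Reference 1.

```python
# Sketch of turning a passive RFID read into a position fix and a routing
# decision. Tag IDs, coordinates, and the route table are made-up examples.

# Floor map: tag ID -> (x, y) grid position on the factory floor
TAG_POSITIONS = {
    "04A1": (0, 0),   # loading dock
    "04A2": (0, 1),
    "04A3": (1, 1),   # assembly station
}

# Route table: (current tag, destination) -> next maneuver for the AGV
ROUTE_TABLE = {
    ("04A1", "assembly"): "forward",
    ("04A2", "assembly"): "turn_right",
    ("04A3", "assembly"): "stop",      # arrived; drop off materials
}

def on_tag_read(tag_id: str, destination: str) -> tuple:
    """Return the AGV's position and its next maneuver for this tag."""
    position = TAG_POSITIONS.get(tag_id)
    action = ROUTE_TABLE.get((tag_id, destination), "stop")
    return position, action

print(on_tag_read("04A2", "assembly"))  # ((0, 1), 'turn_right')
```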


Figure 2
The AGV and its sensors (Image courtesy of Reference 1)

A central processor (in the Android phone) manages the key decisions. For communications, an app was designed to talk to three AGVs via Bluetooth. Signals from the AGVs determine their positions on the factory floor, and the processor sends signals back to the AGVs with their next tasks. An infrared sensor on the bottom of the AGV helps it follow the track fitted with the RFID tags, and a sonar sensor on the front of the AGV detects any objects ahead to avoid collisions (Figure 2). See Reference 1 for more details.
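A simplified Python sketch of the per-cycle decision logic an AGV might run on board follows; the thresholds and sensor values are placeholders for illustration, not Reference 1’s implementation (which, as described above, is built around the Android app and Bluetooth).

```python
# Sketch of one control cycle on board the AGV: stop when the sonar sees an
# obstacle, otherwise steer to stay on the RFID-tagged track using the
# infrared line sensor. Thresholds are example values only.

SONAR_STOP_CM = 30          # stop if an object is closer than this

def control_step(sonar_distance_cm: float, ir_offset: float) -> str:
    """Return a drive command for one control cycle.

    ir_offset: AGV's lateral offset from the track center as reported by the
    IR sensor; positive means the AGV has drifted to the right of the track.
    """
    if sonar_distance_cm < SONAR_STOP_CM:
        return "stop"            # obstacle ahead: wait until it clears
    if ir_offset > 0.3:
        return "steer_left"      # drifted right, steer back toward the track
    if ir_offset < -0.3:
        return "steer_right"     # drifted left
    return "forward"

# One simulated cycle: path is clear, AGV has drifted slightly right
print(control_step(sonar_distance_cm=120.0, ir_offset=0.4))  # steer_left
```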

Robot automation with LIDAR and camera2 sensor fusion

Robots will perform various tasks in Industry 4.0 production areas and even storage facilities. Dynamic object tracking can be achieved by fusing data from a stereo camera mounted in a fixed location with data from a laser range-finder carried on the mobile platform for collision avoidance. Robots and humans need to co-exist and work together efficiently and safely.

The system needs to avoid disruption of the target-tracking process. One solution is to give the mobile robot current maps, access to surveillance feeds, and additional sensors as part of the existing infrastructure. V2X technology, now being developed for 5G and autonomous vehicles, may also become an option. For this article, we will explore a lower-cost, simpler approach using stereo-vision RGB-D (depth sensor) cameras3, whose depth sensing for object tracking is improved via sensor data fusion. LIDAR is chosen as the other half of the fusion because passive depth-sensing methods are sensitive to changes in light exposure.

Reference 2 uses a design architecture with snapshot detection and tracking where each sensor’s data gets translated into a coordinate system (Figure 3).


Figure 3
Two modules are used in this system: 3D processing for detection of the object, and Bayes filtering4 for information fusion and target tracking. (Image courtesy of Reference 2)
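As a small illustration of the first step above, translating each sensor’s measurements into one common coordinate system, here is a Python sketch using rigid-body transforms; the calibration values are made-up examples, not those of Reference 2.

```python
# Sketch of expressing each sensor's measurement in one common (world) frame
# via a 4x4 homogeneous transform. Calibration numbers are invented examples.
import numpy as np

def make_transform(yaw_rad: float, translation_xyz) -> np.ndarray:
    """4x4 homogeneous transform: rotation about z, then translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation_xyz
    return T

def to_world(T_world_sensor: np.ndarray, point_xyz) -> np.ndarray:
    """Map a point measured in the sensor frame into the world frame."""
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous
    return (T_world_sensor @ p)[:3]

# Example calibrations: LIDAR on the mobile robot, camera fixed on a wall
T_world_lidar = make_transform(np.pi / 2, [1.0, 0.5, 0.2])
T_world_camera = make_transform(0.0, [0.0, 3.0, 2.5])

# The same target seen by both sensors, expressed in each sensor's own frame,
# maps to the same world point: [1.0, 2.5, 0.2]
print(to_world(T_world_lidar, [2.0, 0.0, 0.0]))
print(to_world(T_world_camera, [1.0, -0.5, -2.3]))
```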

Further processing is necessary (Figure 4). Note that not every sensor uses all of the processing steps. Also, the stereo camera’s point cloud data needs down-sampling because the embedded computer has limited processing capacity. The system used a voxel grid (a set of tiny 3D boxes in space) method to preserve the cloud’s distribution of points.


Figure 4
Snapshot detection block diagram (Image courtesy of Reference 2)
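Here is a minimal Python sketch of the voxel-grid idea described above: points are bucketed into fixed-size 3D cells and only one representative (the centroid) per occupied cell is kept, thinning the cloud while roughly preserving its spatial distribution. The leaf size is an arbitrary example value, not the setting from Reference 2.

```python
# Voxel-grid downsampling sketch: group points by the 3D cell they fall in
# and keep the centroid of each occupied cell.
import numpy as np

def voxel_downsample(points: np.ndarray, leaf_size: float = 0.05) -> np.ndarray:
    """points: (N, 3) array in meters; returns one centroid per voxel."""
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)
    # Group points that fall into the same voxel cell
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

cloud = np.random.rand(10000, 3)           # stand-in for a stereo point cloud
small = voxel_downsample(cloud, leaf_size=0.1)
print(cloud.shape, "->", small.shape)      # far fewer points to process
```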

It was found that track losses caused by temporary blockages or interruptions in the LIDAR path were prevented by using the auxiliary stereo measurements. Although the camera is less accurate, its high mounting position and extended field of view provided a benefit. Stereo cameras were found to be both affordable and powerful, and when they are integrated into an infrastructure, they give a real-time overview in a complex robotic application.
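To show why the auxiliary camera measurements help, here is a much-simplified scalar Kalman-style sketch, not Reference 2’s Bayes filter: the noisier stereo measurement keeps the range track updated whenever the LIDAR return is missing. The noise values and constant-position motion model are illustrative assumptions.

```python
# Simplified 1D fusion sketch: when the LIDAR return is missing (occluded),
# the noisier stereo-camera measurement still updates the track, so the
# target is not lost. All values are illustrative.

LIDAR_VAR = 0.01    # LIDAR range is precise
CAMERA_VAR = 0.25   # stereo depth is noisier
PROCESS_VAR = 0.05  # target motion uncertainty per step

def kalman_update(x, p, measurement, meas_var):
    """One scalar predict-then-correct step for the target range x."""
    p = p + PROCESS_VAR                      # predict (constant-position model)
    k = p / (p + meas_var)                   # Kalman gain
    return x + k * (measurement - x), (1 - k) * p

x, p = 5.0, 1.0                              # initial range estimate (m)
# (lidar_range, camera_range); None = LIDAR beam blocked this frame
frames = [(5.1, 5.3), (None, 5.4), (None, 5.6), (5.8, 5.9)]
for lidar, camera in frames:
    if lidar is not None:
        x, p = kalman_update(x, p, lidar, LIDAR_VAR)
    else:
        x, p = kalman_update(x, p, camera, CAMERA_VAR)  # camera bridges the gap
    print(f"estimate {x:.2f} m, variance {p:.3f}")
```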

[Continue reading on EDN US: Augmented reality assembly processes]

Steve Taranovich is a senior technical editor at EDN with 45 years of experience in the electronics industry.
