Robotic vision electronics design for industry and space

Article By : Steve Taranovich

18 years into the new millennium, there are a number of exciting and evolving electronic innovations taking place. Among them is the development of ‘intelligent’ robots for industry, especially in smart factories (see The role of Sensors in the Industrial IoT (IIoT)).

The advent of 5G communications will enable factories to take data from the production floor that will improve quality and enable increased automation. 5G low latency with accelerated edge computing, coupled with fast sampling capability, will give rise to higher speeds in manufacturing and enable closed-loop inline inspection of manufactured components (Reference 8).

Hewlett Packard Enterprise says, “Edge computing is a distributed, open IT architecture that features decentralized processing power, enabling mobile computing and Internet of Things (IoT) technologies. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center” (Figure 1).

Figure 1 The push for accelerated data processing (at the edge) (Image courtesy of Reference 8)

Machine vision

A critical component in the performance of intelligent robotics is machine vision (MV) technology, which couples computers with high-speed cameras. Combining these two technologies enables digital image acquisition and analysis, so complex inspection tasks can be performed. That data can control a robotic arm, sort objects, recognize patterns, and do much more that we have not yet conceived.
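The acquire-threshold-measure-decide flow behind a basic MV inspection task can be sketched in a few lines. This is a minimal illustration using only numpy (a production pipeline would typically use a library such as OpenCV, and real segmentation is far more sophisticated than a single global threshold); the frame, threshold, and minimum-area values are hypothetical.

```python
import numpy as np

def inspect_part(image, threshold=128, min_area=50):
    """Pass/fail a grayscale frame based on the bright blob it contains.

    Sketch only: segment bright pixels, measure their area, and return a
    bounding box a robot arm could use to locate the part.
    """
    mask = image > threshold          # segment pixels brighter than background
    area = int(mask.sum())            # crude size measurement in pixels
    if area < min_area:
        return ("fail", area, None)   # part missing or undersized
    ys, xs = np.nonzero(mask)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return ("pass", area, bbox)

# Simulated 64x64 camera frame containing one bright 10x10 "part"
frame = np.zeros((64, 64), dtype=np.uint8)
frame[20:30, 20:30] = 200
print(inspect_part(frame))  # ('pass', 100, (20, 20, 29, 29))
```

The same structure scales up: swap the threshold for a trained classifier and the bounding box for a full pose estimate, and you have the skeleton of the inline inspection loop described above.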

An alliance

First, we need to begin with the Embedded Vision Alliance, which defines embedded vision as the practical use of computer vision in machines that understand their environment through visual means.

Next, let’s look at industrial applications of a robot’s vision sensing architecture. One of the best and most complex aspects of MV is 3D imaging. Cameras, combined with other support equipment, perform many tasks, including image signal processing (ISP), video transport, format conversion, compression, and analytics.
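One common 3D imaging approach, used by several of the stereo cameras discussed below, is triangulation: for a rectified stereo pair, depth Z follows from the pixel disparity d between the two views as Z = f·B/d, where f is the focal length in pixels and B is the baseline between the cameras. A minimal sketch, with hypothetical rig parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo depth from the pinhole model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 8 cm baseline.
# A 35 px disparity then corresponds to a point 1.6 m away.
print(depth_from_disparity(35.0, 700.0, 0.08))  # 1.6
```

Note the inverse relationship: distant points produce small disparities, which is why depth resolution degrades with range and why baseline and resolution are key specs when comparing 3D cameras.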

Some 3D camera imaging technologies from Microsoft, Intel, and Occipital follow.

Microsoft

Microsoft Azure has a neat computer vision offering via cloud computing, bringing the intelligent edge to robotics through this platform. See an example of how well it can analyze an image here. Its Computer Vision API is remarkable. One of Microsoft’s customers, Jabil Circuit, Inc., connected its factory floor to the cloud; see how it integrated predictive analytics with real-time manufacturing here. Jabil uses sensors, wireless, precision machines, optics, automation, and mechatronics in its manufacturing.

In late 2018, Microsoft announced that it is bringing the Robot Operating System (ROS) to Windows 10, working with Open Robotics and the ROS-Industrial Consortium (ROS-I).

Intel

Intel is taking a hardware approach to accelerating intelligent vision, combining FPGA-based accelerator technology (a result of its 2015 acquisition of Altera) with Intel CPUs for next-generation vision-based systems.

Camera sensor technology is continually improving, so there is a trend toward replacing analog cameras with smart Internet Protocol (IP) cameras. Also in the mix with IP cameras comes deep-learning-based video analytics. FPGAs are well suited to vision-based systems because they offer high performance per watt, low latency, and flexibility (Figure 2).

Figure 2 The flexibility that FPGAs have in supporting different sensor and MV interfaces (Image courtesy of Intel)

MV technology is enhanced by FPGAs because they enable MV camera designs with different image sensors and MV-specific interfaces. An FPGA can also serve as a vision-processing accelerator inside the edge computing platform, harnessing artificial intelligence (AI) deep learning to analyze the MV data outputs.

Other areas that FPGAs enhance and enable in robotic vision cameras include:

multiple GigE cameras, where one FPGA can integrate image capture, camera interface, communications, and preprocessing
a frame grabber link between MV cameras and the host PC running the algorithms
Camera Link, using TI’s Channel Link interface
USB3 Vision, CoaXPress, and Thunderbolt

Occipital

This company has Occipital Tracking technology for MV, with six-degree-of-freedom (6-DoF) positional tracking, mapping and obstacle awareness, and more. It also offers the Structure Sensor and Structure Core for robotics.

With Structure Core, Occipital has created a pocket-sized computer vision device with an onboard wide-vision camera, stereo infrared capability, an onboard DSP, and a color module. Figure 3 shows three cameras: a wide-vision camera with a 160-degree field of view and two infrared cameras. There is also an onboard inertial measurement unit (IMU) and an NU3000 processor that computes depth and includes a programmable DSP.
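The onboard IMU is what lets a device like this track its own motion between camera frames. As a simplified illustration (not Occipital’s actual fusion algorithm), a single gyro axis can be integrated over time to estimate a rotation angle, assuming ideal bias-free readings:

```python
def integrate_gyro(rates_dps, dt):
    """Integrate angular-rate samples (deg/s) into a heading angle (deg).

    Real 6-DoF trackers fuse gyro, accelerometer, and camera data to
    cancel drift; pure integration like this accumulates any gyro bias.
    """
    angle = 0.0
    for rate in rates_dps:
        angle += rate * dt  # rectangular (Euler) integration step
    return angle

# 100 samples at a constant 10 deg/s, 10 ms apart -> about 10 degrees
samples = [10.0] * 100
print(integrate_gyro(samples, 0.01))
```

This drift is exactly why 6-DoF trackers pair the IMU with visual tracking: the cameras periodically correct the integrated estimate, while the IMU fills in fast motion between frames.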

Figure 3 Occipital Structure Core is an advanced depth sensor (Image courtesy of Occipital)

[Continue reading on EDN US: Advanced MV and 3D displacement sensing]

Steve Taranovich is a senior technical editor at EDN with 45 years of experience in the electronics industry.

Want to learn more? Check out these other articles in AspenCore’s Special Project on machine-vision-guided robots:

3D vision enhances robot opportunities
Vision guided robotics (VGR) has long used 2D imaging, but the advent of cost-effective 3D is opening new application opportunities.

Open-source software meets broad needs of robot-vision developers
Robot vision applications can bring a complex set of requirements, but open-source libraries are ready to provide solutions for nearly every need. Here are some of the many open-source packages that can help developers implement image processing capabilities for robotic systems.

Applications for Vision-Guided Robots
Perhaps the most significant recent developments regarding robotics have involved the combination of high-resolution imaging, artificial intelligence, and extreme processing capabilities.

Designer’s Guide to Robot Vision Cameras
Giving a robotic system vision requires the right camera selection. Here’s a guide to get you started.

3D vision gives robots guidance
Many options exist for 3D machine vision, each addressing different application needs.
