AI and machine learning: Shaking up the space industry

Article by Ossi Saarela

Today’s increasingly ambitious mission requirements are demanding higher levels of autonomy and greater navigational precision from spacecraft, requiring more than logic-based AI.

Spacecraft autonomy has primarily been a collection of logic-based algorithms designed to respond to a set of circumstances that can be defined (or at least bounded) a priori. This type of artificial intelligence (AI) works well when the inputs to the algorithms fall within the pre-defined mission scope, allowing logic statements built ahead of time to generate the appropriate response.
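As a simple illustration, consider the kind of rule-based mode logic this describes. The conditions, thresholds, and variable names below are hypothetical, but the pattern is representative: every branch must be anticipated by the designers before launch.

```matlab
% Hypothetical rule-based mode logic: each circumstance and its
% response is enumerated by the designers before the mission flies.
if sunAngle > maxSunAngle
    mode = "SAFE";        % protect the vehicle and await ground contact
elseif range < abortRange && ~targetLockValid
    mode = "ABORT";       % back away from an unconfirmed target
else
    mode = "NOMINAL";     % continue the pre-planned sequence
end
```

Logic like this is predictable and verifiable, but it can respond only to situations its designers imagined ahead of time.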

Today’s increasingly ambitious missions, however, demand higher levels of autonomy and greater navigational precision than logic-based AI alone can deliver. High-precision space navigation to small comets and asteroids; entry, descent, and landing (EDL) on moons and planets; and rendezvous and proximity operations (RPO) with both cooperative and uncooperative targets all need the sensing and perception capabilities provided by vision-based systems. Traditionally, the development of these technologies has fallen within the domain of the public sector, but today the private sector is participating actively, driving progress in vision-based technologies such as autonomous satellite servicing and lunar landing, as well as research in vision-based AI and machine learning.


Editor’s Note: The development of reusable rockets is lowering barriers to both scientific and commercial exploration of space, stimulating increasing interest and investment in space electronics. This article is part of an AspenCore Special Project that provides designers with a look at the technologies and design practices needed for creating space-worthy electronic designs, including ICs, ASICs, flex-cable, connectors, thermal management, rad-hard techniques, space-related testing methods, and more.


Despite the increasing popularity of vision-based sensing systems, developing them has traditionally been costly and resource-intensive. The algorithms that translate a raw image into data for vehicle control are developed by a niche group of engineers with specialized domain expertise. Verification and validation of these algorithms can involve complex physical testbeds featuring robots moving on tracks toward physical scale models of approach targets such as spacecraft and asteroids. In some cases, the testbeds are even flown in orbit before the technology is deployed on its intended mission.

Once the algorithms are developed and validated by test, implementation on production hardware is complicated by the need to optimize the available on-board processing resources, which are often constrained by the limited availability of computing hardware that can survive the hostile radiation environment of space. As part of this optimization, it is common for portions of the algorithms to be distributed between FPGAs and computer processors. This split, though, can increase both design complexity and the number of engineering specializations required.

NASA’s Raven is an on-orbit testbed for developing vision-based sensing systems for relative navigation. It is shown here deployed on the International Space Station. (Image courtesy of NASA)

However, change is brewing. The ongoing private space race, which is disrupting many space-related technologies, is also driving down the cost of developing relative navigation capabilities. Competitions like the Google Lunar XPRIZE have motivated new companies to develop extraterrestrial landing technology at substantially lower cost than was previously possible.

How is this being accomplished? Companies are using higher-level languages such as MATLAB and Simulink for algorithm development. This approach lets algorithm design engineers focus on developing the high-level application rather than reinventing lower-level image processing routines, which are now available off the shelf. These higher-level languages also enable rapid prototyping of candidate algorithms, which can be integrated with existing guidance, navigation, and control models for early system-level validation. Combined with Model-Based Design, they allow software and hardware development engineers to automatically generate code for embedded deployment on both processors and FPGAs, and to create test benches for system verification.
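As a rough sketch of what this workflow can look like, the function below estimates the centroid of the brightest object in a camera frame using toolbox routines, and a single MATLAB Coder command then generates embeddable C code from it. The function name, image size, and thresholding choice are illustrative assumptions, not a flight design.

```matlab
function c = targetCentroid(img) %#codegen
% Estimate the centroid of the brightest object in a grayscale frame.
% Illustrative only; a flight algorithm would add input validation,
% outlier rejection, and temporal filtering.
bw = imbinarize(img);          % Otsu threshold (Image Processing Toolbox)
[rows, cols] = find(bw);       % coordinates of foreground pixels
c = [mean(cols), mean(rows)];  % centroid in (x, y) pixel coordinates
end
```

From the MATLAB prompt, C code for a fixed 512×512 8-bit frame could then be generated with:

```matlab
% Generate embeddable C code for a fixed-size uint8 input (MATLAB Coder)
codegen targetCentroid -args {zeros(512, 512, 'uint8')}
```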

Image processing techniques such as segmentation can be done in MATLAB without reinventing established methods. (Space vehicle image courtesy of NASA)
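A minimal segmentation sketch along the lines of the figure above might look like the following. The file name is a placeholder, and the specific toolbox functions shown are one of several reasonable ways to isolate a bright target from a dark background.

```matlab
% Segment a bright spacecraft from a dark background using
% off-the-shelf Image Processing Toolbox routines.
img  = imread('approach_target.png');    % placeholder file name
bw   = imbinarize(rgb2gray(img));        % global Otsu threshold
bw   = imfill(bw, 'holes');              % close gaps inside the silhouette
bw   = bwareafilt(bw, 1);                % keep the largest connected region
stats = regionprops(bw, 'Centroid', 'BoundingBox');

% Overlay the result for a quick visual check.
imshow(img); hold on
rectangle('Position', stats.BoundingBox, 'EdgeColor', 'y')
plot(stats.Centroid(1), stats.Centroid(2), 'r+', 'MarkerSize', 12)
```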

[Continue reading on EDN US: Learning from other industries]

Ossi Saarela is the Space Segment Manager at MathWorks.
