The Algorithm to Mars

Mars, Rover, Artificial Intelligence
Sage Lazzaro
Renee Freiha
March 30, 2021, 10:00 AM UTC

Put yourself in Curiosity’s shoes (technically, wheels).

You’ve been sent 350 million miles from home to spend nearly a decade roaming a distant planet alone, tasked with searching for signs of life and answering profound questions. You’re a machine, but more accurately a geologist and an explorer.

Really, you’re one of the most valuable assets in humanity’s most daring mission of discovery, and it’s essential that you’re equipped with a little intelligence. Amidst all the metal and mechanics of our Mars rovers, as well as our space exploration efforts at large, exists a growing presence of artificial intelligence, or AI.

The AEGIS (Autonomous Exploration for Gathering Increased Science) software that NASA first deployed to Curiosity in 2016, for example, enables the rover to use a branch of AI called computer vision to analyze images of an area, decide which rocks would be interesting to scientists, and study them autonomously. Previously, these images had to be downlinked to Earth, examined by humans, and sent back with commands. What used to take days now just happens, allowing more data to be collected, and at more flexible times.
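
To make that concrete, here is a minimal, hypothetical Python sketch of the kind of decision AEGIS automates: take candidate rocks detected in an image, score them, and pick a target without waiting on Earth. The feature names, weights, and threshold are invented for illustration and are not NASA’s actual criteria or code.

```python
# Hypothetical sketch of autonomous target selection, loosely inspired by the
# AEGIS concept described above. Feature names, weights, and the threshold are
# invented for illustration; they are not NASA's actual criteria or code.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class RockCandidate:
    candidate_id: str
    size_cm: float       # apparent size estimated from the image
    brightness: float    # 0..1 normalized pixel intensity
    edge_density: float  # 0..1 proxy for texture and veining


def science_score(rock: RockCandidate) -> float:
    """Combine simple image-derived features into a single interest score."""
    size_term = min(rock.size_cm, 30.0) / 30.0  # cap the size contribution
    return 0.5 * rock.edge_density + 0.3 * size_term + 0.2 * (1.0 - rock.brightness)


def pick_target(candidates: list[RockCandidate]) -> RockCandidate | None:
    """Return the highest-scoring candidate, or None if nothing looks worthwhile."""
    ranked = sorted(candidates, key=science_score, reverse=True)
    return ranked[0] if ranked and science_score(ranked[0]) > 0.4 else None


if __name__ == "__main__":
    field_of_view = [
        RockCandidate("r1", size_cm=12.0, brightness=0.7, edge_density=0.8),
        RockCandidate("r2", size_cm=45.0, brightness=0.3, edge_density=0.2),
    ]
    target = pick_target(field_of_view)
    print("Selected target:", target.candidate_id if target else "none")
```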

Across many industries, AI is viewed as the technology guaranteed to take us to the next level, for better or for worse. Google CEO Sundar Pichai described developments in AI as “more profound than fire or electricity.” However, the engineering challenges posed by working and operating in space are unlike anything we've encountered in Silicon Valley or on Earth for that matter. How significant a role can AI actually play in our exploration of the cosmos, and in eventually sending humans across the solar system?

There’s a lot to be enthusiastic about. The success of AEGIS, heavily used by NASA, helped pave the way for even more space-age AI software. When Perseverance recently touched down on the Red Planet, it autonomously guided its own landing using NASA’s new AI-powered TRN (Terrain Relative Navigation) technology. The rover took photos during descent and cross-referenced them with an onboard map, continuously computing its precise location and steering itself to a safe landing spot in the 28-mile-wide Jezero Crater. With steep cliffs, boulder fields, and harsh terrain, this landing location was both risky and highly desirable, and it wouldn’t have been an option had the rover relied on commands from flight engineers back on Earth.
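
The core idea behind TRN, as publicly described, is map matching: compare what the descent camera sees against a stored orbital map and work out where you are. The toy sketch below illustrates that idea with brute-force patch matching on random arrays; it is a conceptual analogy only, and every name and number in it is an assumption rather than the flight algorithm.

```python
# Toy illustration of terrain-relative localization: find where a small camera
# patch best matches a stored reference map using a brute-force sum-of-squared-
# differences search. A conceptual stand-in only, not the flight software.
import numpy as np


def locate(patch: np.ndarray, reference_map: np.ndarray) -> tuple:
    """Return the (row, col) offset in reference_map that best matches patch."""
    ph, pw = patch.shape
    mh, mw = reference_map.shape
    best_score, best_offset = float("inf"), (0, 0)
    for row in range(mh - ph + 1):
        for col in range(mw - pw + 1):
            window = reference_map[row:row + ph, col:col + pw]
            ssd = float(np.sum((window - patch) ** 2))  # dissimilarity score
            if ssd < best_score:
                best_score, best_offset = ssd, (row, col)
    return best_offset


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    orbital_map = rng.random((60, 60))              # stands in for the onboard map
    true_offset = (17, 42)
    descent_view = orbital_map[17:25, 42:50] + rng.normal(0.0, 0.01, (8, 8))
    print("Estimated offset:", locate(descent_view, orbital_map))
    print("True offset:     ", true_offset)
```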

Since the distance between the two planets creates a lag time of up to 40 minutes, autonomy was the only way.
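
That lag is plain geometry: signal delay is distance divided by the speed of light, and the Earth-Mars separation swings widely over the two planets’ orbits. A rough back-of-envelope calculation, using approximate distances, shows why waiting on a command-and-response round trip is not an option during a landing.

```python
# Back-of-envelope Earth-to-Mars signal delay. The distances are approximate;
# the real separation changes continuously as the two planets orbit the Sun.
SPEED_OF_LIGHT_KM_S = 299_792


def one_way_delay_minutes(distance_km: float) -> float:
    """Light-travel time for a one-way signal, in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60.0


for label, distance_km in [
    ("closest approach (~54.6 million km)", 54.6e6),
    ("widest separation (~401 million km)", 401e6),
]:
    one_way = one_way_delay_minutes(distance_km)
    print(f"{label}: {one_way:.1f} min one-way, {2 * one_way:.1f} min round trip")
```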

For exploration, Perseverance features an upgraded version of AEGIS, which improves the intelligent targeting software and allows mission engineers to remotely aim and control the rover’s camera. The benefit is the ability to vaporize more rocks faster using powerful lasers. An enhanced navigation system with a new path-finding algorithm is also on board, making it possible for Perseverance to autonomously drive in the area’s challenging terrain. Additionally, the navigation system carries a special onboard computer for even faster image processing, as well as a new capability the team calls “thinking while driving.”

“In the past, we’d have to sit still for a couple of minutes while the rover took and processed images. Then it would drive forward a step, maybe a meter, and repeat that same cycle,” Tara Estlin, supervisor of the NASA Jet Propulsion Laboratory’s Machine Learning and Instrument Autonomy Group, told Supercluster. “With these new software changes, this rover can process as it drives. As it takes a step, it’s already taking and processing the images for the next step.”
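
In software terms, “thinking while driving” is pipelining: rather than alternating stop, image, plan, drive, the rover overlaps planning for the next step with execution of the current one. The simplified sketch below illustrates that overlap with a thread pool; the timings, step counts, and function names are invented stand-ins, not the rover’s flight software.

```python
# Hypothetical sketch contrasting stop-and-go navigation with a pipelined
# "thinking while driving" loop. Timings, step counts, and function names are
# invented stand-ins for illustration, not the rover's flight software.
import time
from concurrent.futures import ThreadPoolExecutor


def process_images(step: int) -> str:
    time.sleep(0.2)                      # pretend vision and path planning take time
    return f"plan-for-step-{step}"


def drive(plan: str) -> None:
    time.sleep(0.2)                      # pretend the drive segment takes time


def stop_and_go(steps: int) -> float:
    start = time.perf_counter()
    for step in range(steps):
        drive(process_images(step))      # plan, then drive, strictly in sequence
    return time.perf_counter() - start


def think_while_driving(steps: int) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=1) as planner:
        next_plan = planner.submit(process_images, 0)
        for step in range(steps):
            plan = next_plan.result()    # block only if planning isn't finished yet
            if step + 1 < steps:
                next_plan = planner.submit(process_images, step + 1)
            drive(plan)                  # drive this step while the next is planned
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"stop-and-go:            {stop_and_go(5):.1f} s")
    print(f"thinking while driving: {think_while_driving(5):.1f} s")
```

Run on a laptop, the pipelined loop finishes in roughly the time of the drive segments alone, which is the whole point of overlapping the two.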

Estlin said this enhanced auto navigation technology is “critical” for achieving the mission’s level-one objectives. “It was determined we have to have this capability, or we're not going to be able to meet those goals.”

But it’s not just NASA. Space agencies across the world — including the European, Canadian, Japanese, and Russian agencies — are tapping AI. And it’s not just rovers and robotics.

Machine learning, a popular AI technique, is being used to design, build, and manage new satellites. It’s also being used to crunch massive amounts of telescope and Earth-observation data. AI can even help streamline space-related manufacturing and reduce mission costs. And for astronauts, it can power next-generation user interfaces and help with everyday tasks and research. CIMON, a head-shaped AI robot powered by IBM Watson, for example, famously helped German astronaut Alexander Gerst with a medical experiment while aboard the ISS in 2018.

With the help of a Google-trained machine learning model, NASA even managed to discover two exoplanets — Kepler-90i and Kepler-80g. And now, this July, NASA’s upcoming DART mission will launch with a secondary goal of testing the use of AI for spacecraft operations, such as docking.

Some space coverage has talked about “the AI space race,” posing the question of which agency or space-affiliated billionaire — SpaceX’s Elon Musk, Virgin Galactic’s Richard Branson, or Blue Origin’s newly focused Jeff Bezos — will crack the AI market and take the lead. But AI experts working in space don’t see it as a cure-all for the industry.

“There’s a lot of hype. That's for sure,” Estlin said. 

Kristina Libby, Chief Science Officer at Hypergiant, a Texas-based space startup that Supercluster recently profiled, echoed this sentiment, saying there’s a discrepancy between where society has positioned AI and what AI can actually do. 

In fact, AI has some significant limitations. Advanced AI systems require massive amounts of computing power, which is extremely expensive. AI models overall are also plagued with bias, with no real solution for the issue in sight. And while they’re great at pattern recognition, they still struggle with reasoning. 

The limitations only increase when you leave Earth, where AI also has a hardware problem. Any processor that flies into space, especially to somewhere like Mars, needs to be “rad-hardened,” meaning specially designed to survive in a high-radiation environment. A typical computer can’t function in the vacuum of space the same way it does on Earth. You have to give up significant computing power.

“The computers, where we run the applications, of course, are not as good as the one that you have in front of you now,” said Andrea Merlo, head of the Robotics and Mechatronics Group at the French-Italian aerospace company Thales Alenia Space. Merlo’s team is working on the European Space Agency’s ExoMars mission, which is preparing to launch a rover to Mars next year.

Rad-hardened computers are slower — several orders of magnitude slower than a typical laptop. This means an AI advancement on Earth doesn’t directly translate to something that can be deployed to space. Everything her team at NASA has done with AI so far, Estlin said, has required some creativity.

Beyond the limitations of what AI can do for space exploration, there’s also a limit to how much we need it. Everyone has their eyes on Mars, but it’s not an AI breakthrough that’s standing in the way.

Estlin, Merlo, and Libby all agreed that getting humans to Mars is more of a transport question than a question of advancing AI technology. 

“The one-way trip to Mars was already demonstrated several times,” Merlo said. “Adding astronauts just means increasing mission reliability and adding the second half of the trip, and this can still be done with ‘classical methods.’”

Libby, whose company specializes in AI and machine learning, said that “we don't need an advancement in AI to be able to get to Mars. With enough funding and the right funding of the right technologies, we could get there now.”
