A stylized 3D rendering of Earth at night overlaid with glowing purple and gold data pathways representing complex autonomous satellite networks and global connectivity.

The business of managing satellite operations is being reshaped by rapidly developing artificial intelligence applications, which are starting to work in tandem with the carefully scripted, ground-based, human-in-the-loop decision-making that assumes stable conditions and predictable timelines.

But space is getting more crowded, more competitive and less predictable by the day, making it tougher for humans to manage on their own and giving AI a more active role in spacecraft management.

The high-level goal of AI in space is automating spacecraft operations, according to Nathan Ré, principal astrodynamics and satellite navigation engineer for Advanced Space, which is working with NASA on machine learning techniques, in the form of neural networks, for onboard station keeping and maneuver planning.

“As we get more and more spacecraft, both small missions and large missions, it becomes increasingly important to automate how they operate, and can retrieve only the most valuable data, or transmit only the most valuable data, perform the most valuable operations, whatever their mission may be,” Ré said. “The basic principle is that you build in physics-based constants, like constants of momentum or constants of energy, and you use that during your AI training to kind of guide the neural network to make its output conform to known physical laws.”
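To make that principle concrete, the sketch below adds a physics-consistency term to an otherwise ordinary training loss, penalizing predicted orbital states that drift away from two-body energy conservation. It is a minimal illustration in PyTorch, not Advanced Space’s implementation; the network architecture, weighting factor and variable names are assumptions.

```python
# Minimal sketch of physics-guided training (illustrative, not Advanced Space's code):
# the loss combines an ordinary data-fit term with a penalty on predictions that
# violate two-body energy conservation. Network size and weights are assumptions.
import torch
import torch.nn as nn

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def specific_energy(states):
    """Specific orbital energy v^2/2 - mu/r for states laid out as [x, y, z, vx, vy, vz]."""
    r = torch.linalg.norm(states[:, :3], dim=1)
    v = torch.linalg.norm(states[:, 3:], dim=1)
    return 0.5 * v**2 - MU / r

model = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 6))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
PHYSICS_WEIGHT = 0.1  # assumed relative weight of the physics term

def train_step(current_states, next_states):
    """One gradient step: fit the data while nudging outputs toward energy conservation."""
    optimizer.zero_grad()
    predicted = model(current_states)
    data_loss = mse(predicted, next_states)
    # Physics term: a coasting two-body trajectory keeps its specific energy constant,
    # so the predicted state should match the energy of the input state.
    physics_loss = mse(specific_energy(predicted), specific_energy(current_states))
    loss = data_loss + PHYSICS_WEIGHT * physics_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The physics term plays the guiding role Ré describes, pulling the network toward outputs that respect a known conservation law even where training data is thin.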

AI applications in space include trajectory optimization, where AI finds fuel-efficient paths for interplanetary transfers or complex orbital maneuvers; anomaly detection, where AI identifies unusual patterns in massive datasets from telescopes, flagging rare celestial events or unexpected orbital behaviors; orbit determination and control, where AI synthesizes controllers for stabilizing spacecraft motion and managing formations; and periodic orbit discovery, where physics-informed neural networks (PINNs) find and characterize stable, repeating orbital paths.

Early Applications and the Rise of AI-First Satellites

Progress in AI satellite applications is stacking up. Satellogic, a satellite imagery company headquartered in Wilmington, Delaware, built an AI-first satellite constellation that runs onboard analytics and delivers near-real-time insights directly from orbit, reducing the latency between image capture and actionable intelligence. Akula Tech, a Melbourne, Australia-based company building AI-powered smart satellites and autonomous defense systems, launched its Nexus-01 edge AI payload in August 2025, carrying an Nvidia Jetson TX2i module with 256 CUDA cores that runs onboard machine learning models for real-time processing of hyperspectral imagery from orbit.

And there are dozens of others jumping on the AI train, including Seoul, South Korea-based TelePIX, whose 6U-class BlueBON cubesat uses AI to analyze multi-spectral images of blue carbon stored in marine ecosystems such as mangrove forests and sargassum beds. EDGX, based in Ghent, Belgium, is building specialized AI solutions for onboard autonomy and real-time processing with up to 157 trillion operations per second (TOPS) of in-orbit compute. And AIKO Space, based in Torino, Italy, is developing autonomous software for navigation, predictive maintenance and anomaly detection in spacecraft systems.

Space Traffic, Collision Avoidance, and AI’s First Operational Imperative

AI’s biggest role right now is helping to identify, locate and track objects in space. And it’s coming just in time. Space-track.org, the public portal for the U.S. Space Force’s Space Surveillance Network, lists nearly 50,000 tracked objects, including active satellites and debris, as of January 4, with more space clutter to come.

SpaceX, with more than 9,000 satellites in orbit as of late December 2025 and another 7,500 authorized by the FCC, has been using AI for collision avoidance and on-orbit navigation since 2019, making real-time decisions about orbital maneuvers without direct human input.

As SpaceX has shown, collision avoidance is one of the most immediate, actionable needs for AI assistance. According to a July status report to the FCC, SpaceX Gen2 satellites performed 84,990 propulsive maneuvers over a six-month period from December 2024 to May 2025, roughly 470 maneuvers a day across the constellation, most of them to avoid other satellites or space debris.

Elon Musk has said he plans to do more with AI, integrating AI processing units into the V3 generation of satellites to be launched at scale in 2026, with the goal of creating distributed AI computing and data centers in orbit.

The Limits of Space Qualification

Advanced Space has been diving into AI in a number of ways, according to Jeffrey Parker, a former JPL mission design and navigation specialist now working for the Westminster, Colorado, company. “One is in detecting anomalies,” he said. “So if the spacecraft telemetry string contains an anomaly, an AI tool can detect that more readily than sending the whole telemetry string down to the ground and having a ground operator sift through it. You should only have an AI response if it’s fully tested, and if you understand what those responses might be. If the spacecraft is going to execute a maneuver, you need to make sure that that AI is trained for every situation it might encounter.”
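A very simple version of that onboard screening is sketched below: flag only the telemetry samples that deviate sharply from a rolling baseline, so that just the flagged windows need to be sent to the ground rather than the whole telemetry string. It is an illustrative example, not Advanced Space’s flight software; the channel, window size and threshold are assumptions.

```python
# Illustrative onboard telemetry screening (not flight code): flag samples that
# deviate sharply from a rolling baseline so only flagged windows are downlinked.
# The channel, window size, and threshold below are assumptions for the example.
import numpy as np

def flag_anomalies(samples, window=50, threshold=4.0):
    """Return indices of samples more than `threshold` robust deviations
    from the median of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        median = np.median(history)
        mad = np.median(np.abs(history - median)) + 1e-9  # robust spread estimate
        if abs(samples[i] - median) / (1.4826 * mad) > threshold:
            flagged.append(i)
    return flagged

# Example: a slowly varying bus voltage with one injected glitch.
rng = np.random.default_rng(0)
voltage = 28.0 + 0.05 * rng.standard_normal(500)
voltage[300] = 31.5  # injected anomaly
print(flag_anomalies(voltage))  # expected: [300]
```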

In general, using AI for space traffic and collision management has been limited mainly due to the lack of real-world data on which events, of the many thousands of close approaches, would have resulted in actual collisions—information that would be needed to train an AI model.

“These are onboard processing issues that still need to be figured out and operationalized about AI use in space,” said Paul Frakes, the Aerospace Corporation project lead for strategic foresight in its Center for Space Policy and Strategy, and an author of a November 2025 policy brief on AI and the space enterprise. “One example is the cadence of decision making. Because one of the things that AI technologies enable is to reduce the time cadence of decision-making through decision cycles like the OODA loop [Observe, Orient, Decide, Act, a continuous decision-making model developed by the U.S. Air Force] or other operational models. By not embracing or including an AI technology, there’s a scalability enhancement you may be missing out on.”

Ralph Grundler, the director of space business development and R&D for Aitech, a Chatsworth, California-based aerospace and defense company developing open-architecture solutions to support multi-domain operations, said that there are two things that satellite developers are working on when it comes to AI. “One is to add more AI hardware power, and that’s directly with Nvidia or Nvidia-like hardware. The advantage of Nvidia is that it’s a generic kind of hardware platform that people can use any AI algorithm on.”

Grundler said that, for images of Earth, a satellite technician can simply load a different AI model to detect different targets. “Maybe one day you want to detect whales, the next day you want to detect ships, and the day after that, you want to detect uranium mine locations,” he said. “So you just load up a new AI algorithm and then you can detect different things.”
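The pattern Grundler describes, swapping detection tasks by uploading a new model rather than new hardware, can be sketched roughly as below. The task names, file paths and uplink mechanism are hypothetical, and a real payload would have its own model format and scheduler.

```python
# Illustrative sketch of the "load a new algorithm, detect something new" pattern.
# Task names, model files, and the retasking mechanism are hypothetical.
import numpy as np
import onnxruntime as ort

# Hypothetical catalog of detection models stored on the payload's flash.
MODEL_CATALOG = {
    "whales": "/payload/models/whale_detector.onnx",
    "ships": "/payload/models/ship_detector.onnx",
    "mines": "/payload/models/mine_site_detector.onnx",
}

_active_session = None

def retask(task_name):
    """Swap the active detector, e.g. in response to an uplinked command."""
    global _active_session
    _active_session = ort.InferenceSession(MODEL_CATALOG[task_name])

def detect(image_chip):
    """Run the currently loaded detector on one image chip (H x W x bands, float32)."""
    input_name = _active_session.get_inputs()[0].name
    return _active_session.run(None, {input_name: image_chip[np.newaxis, ...]})[0]

# One day: retask("whales"); the next: retask("ships") -- no new hardware required.
```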

Much of the AI-in-space development effort is focused on compute power. Aitech, for example, is implementing the Nvidia Orin processor, which delivers 248 TOPS, in systems bound for space. “We’re even starting to investigate the next Nvidia Blackwell (with up to 5,000 TOPS) type computers, and how to implement those and put those in space,” Grundler said. “Because it takes several years to get into space. You have to test and qualify hardware several times in order to make sure that it meets the harsh conditions, not only of the launch but the harsh conditions of space itself. That’s always the difficult part.”

Getting more compute power into satellite systems is a slow process, according to Ré. “Hardware takes a while for people to gain confidence in these things, and also they need to be proven out,” he said. “There’s a small number of spacecraft that have put more modern GPUs and things on board that are able to run AI models more capably.”

Advanced Space has demonstrated some of its smaller AI models onboard CAPSTONE (Cislunar Autonomous Positioning System Technology Operations and Navigation Experiment), a microwave-sized cubesat that has been in an elliptical orbit around the moon since November 2022. “The cubesat doesn’t have a processor that’s well optimized for AI specifically,” Ré said. “But one of the benefits of machine learning in general is that it gives us highly efficient models for arbitrary transformations of data, and it’s typically an approximate model. You can accept a little bit of loss of accuracy in an approximation if you gain something in speed or robustness.”
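The trade Ré describes, a small loss of accuracy in exchange for speed and a fixed computational footprint, can be illustrated with a surrogate model: a small neural network trained to approximate an iterative astrodynamics routine, here a Kepler’s-equation solver. The setup below is a hypothetical sketch, not the CAPSTONE flight model.

```python
# Illustrative surrogate model (not the CAPSTONE flight software): a small neural
# network learns to approximate an iterative routine -- solving Kepler's equation --
# trading a little accuracy for a cheap, fixed-cost evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def kepler_solve(mean_anomaly, eccentricity, iterations=20):
    """Reference solver: Newton iteration for the eccentric anomaly E."""
    E = mean_anomaly.copy()
    for _ in range(iterations):
        E -= (E - eccentricity * np.sin(E) - mean_anomaly) / (1.0 - eccentricity * np.cos(E))
    return E

# Training data over a range of mean anomalies and eccentricities.
rng = np.random.default_rng(1)
M = rng.uniform(0.0, 2.0 * np.pi, 10000)
e = rng.uniform(0.0, 0.7, 10000)
X = np.column_stack([M, e])
y = kepler_solve(M, e)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
surrogate.fit(X, y)

# The surrogate is approximate: report its worst-case error on held-out inputs.
M_test = rng.uniform(0.0, 2.0 * np.pi, 1000)
e_test = rng.uniform(0.0, 0.7, 1000)
X_test = np.column_stack([M_test, e_test])
error = np.max(np.abs(surrogate.predict(X_test) - kepler_solve(M_test, e_test)))
print(f"worst-case approximation error: {error:.4f} rad")
```

On a processor without AI-specific acceleration, the appeal is that the surrogate’s cost is a handful of fixed matrix multiplications rather than a variable number of solver iterations.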

Trust, Autonomy, and the Long Road Ahead

NASA is also exploring quantum approaches to machine learning and optimization through its Quantum Artificial Intelligence Laboratory (QuAIL) at Ames Research Center.

Parker said that Advanced Space is supporting a mission with the objective of demonstrating pieces of launch and forget, “with the idea being to launch a spacecraft on a unique mission such that the spacecraft executes its mission autonomously,” he said. “Now we, of course, would be supervising these missions until we truly feel like they are trustworthy, all by themselves. But for humans to get ultimately off this rock and explore the solar system, explore the cosmos more and more effectively, we really do need to sever those ties with the Earth. So we’re starting to practice what it means for a spacecraft to be autonomous.”

Machine learning and AI are moving so fast that the theoretical understanding of what actually makes these models stable is lagging behind, according to Giuseppe Borghi, who leads a lab division at the European Space Agency. The lab funded a November 2025 study that demonstrated the ability to create global, instantaneous 3D cloud reconstructions from three geostationary satellites (the European Space Agency’s MSG, Japan’s Himawari-8 and the U.S. GOES-16) using a machine learning framework, an aid to forecasting tropical cyclones.

“The results are so exciting to the community,” Borghi said. “They keep trying new things and developing new things, but the trust is still missing. I am sure that in the next decades, this trust will come, but we are still in a situation where we have a nice toy which is doing great things, but we don’t understand why it is doing this, why it is making mistakes. The trust is still okay, but let’s see.”
