Overview

In February 2026, the Department for Science, Innovation and Technology (DSIT) and the AI Security Institute (AISI) released their International AI Safety Report 2026. This report assesses that general-purpose AI systems are increasingly capable of supporting complex, multi-step tasks across technical domains. While much of the report focuses on safety, governance and misuse prevention, its findings are directly relevant to the cyber threat landscape. The report reinforces the assessment that cybercriminals and sophisticated adversaries are no longer just experimenting with AI tools but are beginning to operationalize them across meaningful portions of the cyber kill chain.

Recent reporting from government agencies, private-sector researchers and incident responders indicates that artificial intelligence is becoming a persistent force multiplier in cybercrime and state-sponsored cyber operations. Observed activity does not point to fully autonomous attacks; it reflects a more pragmatic evolution in which AI is used to accelerate development cycles, scale social engineering and reduce technical friction across attack workflows, with greater autonomy emerging as a longer-term trajectory rather than an immediate capability. These shifts carry direct implications for the commercial space sector, which relies heavily on agile software development, distributed engineering teams and cloud-based ground systems that introduce numerous non-traditional entry points. AI-assisted tradecraft enables adversaries to rapidly adapt and iterate on attack workflows tailored to these environments, increasing the likelihood of successful initial access and facilitating lateral movement from enterprise IT networks into mission-critical operational systems.

Evolution of AI-Enabled Threats

Early discussions of adversarial AI usage often centered on theoretical risks or isolated proofs of concept. By contrast, activity observed throughout 2025 and into early 2026 shows a gradual but meaningful transition toward operational use. Threat actors continue to demonstrate integration of AI into discrete tasks where it provides immediate value: code generation, vulnerability research, phishing content creation and workflow automation.

Trend Micro’s State of Criminal AI report from January 2026 underscores this shift, noting that most cybercriminals currently rely on jailbroken commercial large language models (LLMs) such as ChatGPT, Claude and Gemini rather than bespoke models. While this dependence exposes actors to provider countermeasures such as API monitoring and key revocation, it has not prevented adoption. Instead, actors are adapting their tooling and operational security to account for these limitations.

Critically, AI systems are not currently conducting end-to-end cyberattacks autonomously. Human operators remain responsible for target selection, campaign timing and key decision points. However, AI is increasingly embedded within attack workflows, enabling faster iteration and improved scalability. Additionally, the growing prevalence of agentic AI is lowering the technical barrier to malware development.

The Rise of AI-Assisted Malware Development

The most consequential development observed so far in 2026 is the emergence of AI-assisted and, in some cases, AI-authored malware frameworks. In January, researchers disclosed VoidLink, an advanced Linux-focused malware platform reportedly developed almost entirely by Chinese AI agents. VoidLink features a modular, cloud-aware architecture designed to maintain persistent access across Linux environments, with capabilities tailored for long-term operations. This focus on Linux is particularly relevant to the space sector, where operational systems frequently rely on Linux-based servers, embedded systems and containerized workloads to support command-and-control and data processing functions. As commercial space evolves with cloud-native and hybrid operational models, malware optimized for Linux environments aligns closely with the technical realities of space system operations.

While VoidLink does not represent autonomous malware evolution, its significance lies in how AI was used during development. Tasks traditionally requiring experienced malware engineers were largely automated or accelerated through agent-based AI workflows. This suggests that advanced tooling may no longer be constrained by the availability of elite human developers, particularly for well-resourced state-sponsored actors. For space-sector targets, this lowers the barrier for adversaries to tailor malware to specialized operational environments, increasing the likelihood that custom payloads can be adapted to mission-specific configurations, legacy systems or air-gapped support networks.

VoidLink builds on a trajectory observed throughout 2025. Malware families such as MalTerminal, LameHug and PROMPTLOCK demonstrated earlier approaches to embedding or querying LLMs during execution, often using static prompts or external API calls to generate code, enumerate systems or assist with payload development. Later variants, including PROMPTFLUX and PROMPTSTEAL, showed more dynamic interaction with language models during runtime, signaling a shift toward adaptive malware behavior.
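Notably, the static-prompt design of these earlier families leaves huntable artifacts: hard-coded API endpoints, key material and prompt text embedded directly in binaries. The sketch below is a minimal, purely illustrative example of hunting for such artifacts; the indicator patterns are assumptions chosen for demonstration, not validated detection rules.

```python
import re
import sys
from pathlib import Path

# Hypothetical indicators of embedded LLM usage: hard-coded API endpoints,
# key-like strings and prompt fragments. Real hunting rules would be far
# broader and tuned against false positives.
INDICATORS = [
    rb"api\.openai\.com",
    rb"generativelanguage\.googleapis\.com",
    rb"sk-[A-Za-z0-9]{20,}",           # OpenAI-style key prefix (heuristic)
    rb"You are (a|an) [a-z ]{3,40}",   # crude prompt-template heuristic
]

def scan_file(path: Path) -> list[str]:
    """Return the indicator patterns found in a file's raw bytes."""
    data = path.read_bytes()
    return [pat.decode() for pat in INDICATORS if re.search(pat, data)]

if __name__ == "__main__":
    for name in sys.argv[1:]:
        hits = scan_file(Path(name))
        if hits:
            print(f"{name}: possible embedded LLM artifacts -> {hits}")
```

The more dynamic runtime variants noted above complicate this kind of static hunting, since prompts can be generated or mutated during execution rather than stored verbatim.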

Augmenting the Cyber Kill Chain

Beyond malware development, AI is increasingly supporting multiple phases of the cyber kill chain. Researchers have demonstrated how AI-assisted tools can expedite vulnerability discovery, a finding reinforced when Microsoft used its Security Copilot to identify previously unknown flaws in open-source bootloaders such as GRUB2. Adversaries are likely to apply similar techniques, particularly against widely deployed software and cloud-native components.
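The same technique is equally available to defenders using commodity models. The sketch below, assuming the openai Python client and an illustrative model name (it does not reproduce Microsoft's Security Copilot workflow), asks a model to triage a code snippet for memory-safety issues:

```python
# Minimal sketch of LLM-assisted vulnerability triage, assuming an
# OpenAI-compatible chat API. Model choice and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUSPECT_CODE = """
int read_record(char *dst, const char *src, int len) {
    char buf[64];
    memcpy(buf, src, len);   /* no bounds check against sizeof(buf) */
    strcpy(dst, buf);
    return len;
}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a code auditor. List memory-safety issues "
                    "with line references and a one-line fix for each."},
        {"role": "user", "content": SUSPECT_CODE},
    ],
)
print(response.choices[0].message.content)
```

In practice such a query would run across many candidate functions, with findings verified and triaged by a human analyst.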

Social engineering remains one of the most immediately impacted domains. Deepfake technology is already reshaping phishing, business email compromise (BEC) and vishing campaigns, enabling threat actors to produce convincing, tailored lures at scale while rapidly adapting messaging based on victim responses.

AI is also being leveraged to automate operational overhead. Reporting highlights the use of AI agents to create and manage user accounts, rotate infrastructure and assist with reconnaissance. While these capabilities may appear incremental, they collectively reduce friction and allow actors to sustain higher operational tempo with fewer resources.

Conclusion

Observed activity through early 2026 indicates that AI will continue to reshape cyber operations in subtle but consequential ways. Rather than replacing human operators, AI is amplifying their effectiveness by compressing development timelines, enabling rapid experimentation and lowering the barrier to advanced tradecraft, benefiting well-resourced actors while gradually diffusing sophisticated capabilities across broader criminal ecosystems.

For the space sector, AI’s integration across the cyber kill chain increases risk to the interconnected systems that support satellite operations, as adversaries become more capable of rapidly tailoring malware to specialized environments and pivoting from enterprise and development networks into mission-critical operational domains. The most significant implication is not the advent of fully autonomous attacks, but the steady erosion of the time, complexity and specialization advantages that once constrained adversary activity, narrowing the gap between cyber intrusion and operational impact on space missions.