
Outer Space, Artificial Intelligence and Cyber Security: Navigating the Legal Challenges

-Ishita Das* & Kanishka Bhukya**


 

Abstract


Cyber-attacks that target space assets are on the rise due to technological developments, and with the ongoing geopolitical tensions, such threats may result in serious national security ramifications for the affected countries. Further, the increasing autonomy of space objects raises some critically important considerations regarding cyber-safety and risk. This piece aims to provide an overview of the relationship between AI and cyber security in the context of the outer space sector, while also highlighting the legal challenges in this regard.


 

I. Introduction


The bulk of the world's critical infrastructure, such as maritime trade, air transport, communication, earth observation, and defence, relies significantly on space, particularly space-based assets, to operate on a daily basis. This dependency presents a severe, yet often overlooked, security concern, particularly with regard to cyber-attacks. Currently, sophisticated technological breakthroughs such as Artificial Intelligence [“AI”] and Machine Learning [“ML”] are already reshaping the space industry. AI, in particular, has presented unparalleled opportunities for space-based operations by enabling space assets to attain autonomy from external intervention in tasks such as relative positioning, autonomous navigation, end-of-life management, and so on. AI could also be a game-changer for cyber-security, as reasoned by Jack W. Davidson, a professor at the University of Virginia who teaches a course titled ‘Defence against the Dark Arts’.


According to Davidson, to tackle the complexities of cyber-attacks on critical infrastructures, including the space sector, it is crucial to build a strong defence system that can patch breaches at cyber speed, that is, in a matter of minutes. Cyber reasoning systems could identify threats and deal with them more effectively than human programmers, who could take much longer to address the same vulnerabilities. On the other hand, AI could also be used by perpetrators to target autonomous vehicles and autonomous space assets, potentially affecting several countries and resulting in serious economic damage. A report prepared by experts associated with several universities, including the University of Oxford and the University of Cambridge, highlighted the challenges posed by the malicious use of AI. The report emphasised that AI could be used to turn autonomous assets into weapons and possibly cause crashes with other autonomous or non-autonomous assets. With regard to the outer space sector, if perpetrators use AI to turn autonomous space assets into weapons and cause collisions with other autonomous or non-autonomous satellites, the impact could be grave for the affected countries. The next section of the piece explores the interface between AI, cybersecurity, and the outer space sector.


II. Outer Space, Artificial Intelligence, and Cybersecurity


AI is predominantly employed in the space industry for activities such as remote sensing, data processing, autonomous navigation, and monitoring the health of space assets. Remote sensing, for instance, locates distant objects by employing electromagnetic radiation, and the large quantity of varied and complex data collected in the process is difficult for a human operator to handle. Here, AI algorithms come into play. Deep-learning algorithms are installed on satellites to pre-process sensory input and limit the quantity of data relayed to ground stations. If an unauthorised attacker injects false training data with the intent of corrupting the learned model, the AI algorithm begins to produce erroneous output, and the human operators at the ground stations may make decisions based on it, threatening the integrity of the entire decision-making chain.
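To make the mechanics of such an attack concrete, the following simplified Python sketch illustrates how injecting mislabelled training samples (‘label flipping’) can degrade a learned model. The data is entirely synthetic and the classifier generic; this is not a representation of any actual satellite pipeline.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
# Synthetic 2-D "sensor readings" stand in for real satellite telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic classes, e.g. "cloud" vs "clear" pixels in on-board pre-processing.
clean_X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
clean_y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    clean_X, clean_y, test_size=0.3, random_state=0)

# Baseline model trained on clean data.
baseline = LogisticRegression().fit(X_train, y_train)
print("clean accuracy:", baseline.score(X_test, y_test))

# Attacker injects mislabelled samples: class-1-looking points labelled 0.
poison_X = rng.normal(3, 1, (400, 2))
poison_y = np.zeros(400, dtype=int)

poisoned = LogisticRegression().fit(
    np.vstack([X_train, poison_X]), np.concatenate([y_train, poison_y]))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Even a modest fraction of poisoned samples can shift the decision boundary enough to corrupt the model's output systematically rather than randomly, which is precisely what makes such attacks difficult for ground operators to notice.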


To further illustrate, AI is also employed for spacecraft health monitoring. Anomaly and fault detection are critical components in guaranteeing the spacecraft's safety in hostile space conditions. In most circumstances, it is difficult to repair a spacecraft once it has begun its journey, and considerable attention must be paid to fault identification and diagnosis. Traditional approaches rely on pre-programmed checks that are executed to guarantee that the system is operational. These approaches, however, are incapable of identifying new and unknown errors that have not been previously programmed, and it is in this context that AI may be effective. Here, a cyber-attack on spacecraft health monitoring systems might result in the false identification of system failures. Due to the lack of transparency and accountability in AI decision-making processes, this might result in several errors going undetected, jeopardising the space mission’s objectives.
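The following sketch, again on purely synthetic telemetry, illustrates why a learned anomaly detector can flag faults that a fixed, pre-programmed limit check misses, and, by the same token, why its verdicts are harder to audit than a hard-coded rule:

```python
# A simplified, hypothetical sketch: learned vs pre-programmed fault detection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Nominal telemetry: temperature and bus voltage drift together.
temp = rng.normal(20.0, 0.5, 1000)
volt = 28.0 + 0.1 * (temp - 20.0) + rng.normal(0, 0.05, 1000)
nominal = np.column_stack([temp, volt])

detector = IsolationForest(random_state=1).fit(nominal)

# A novel fault: each channel is individually within range, but the usual
# correlation between the two channels is broken.
fault = np.array([[20.0, 27.0]])
print("limit check passes:", 18 < fault[0, 0] < 22 and 26 < fault[0, 1] < 30)  # True
print("learned detector verdict:", detector.predict(fault))  # -1 means anomaly
```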


To that end, a cursory glance at these cyber-security threats posed to AI-based space technologies would demonstrate that, at this developmental stage, AI is nowhere near intelligent, committing mistakes that no human would commit. Such mistakes might be unforeseen and difficult to correct. In certain circumstances, the consequences might seem amusing or illogical. However, when AI-based space technologies are utilised for military purposes, these concerns become far more severe, with a higher likelihood of errors or unanticipated emergent behaviours as the degree of complexity escalates and a situation surpasses the predicted parameters of an algorithm.


Military forces all across the globe have been increasingly using AI and satellites to speed up attacks on potential targets. These forces hope to employ AI to detect targets in satellite data and then transmit that targeting data to the battlefield through communication satellites, allowing army personnel to strike military targets. Thanks to sensor suites and powerful machine-learning and deep-learning algorithms, these weapons can detect a target, turn raw data into meaningful and usable targeting data, generate engagement decisions, and guide a weapon to the target without human intervention or command, all in a matter of seconds. It is also anticipated that there will be a larger degree of autonomy in military applications of AI due to the sheer speed necessary in some operations, such as air and missile defence. Therefore, the deployment of these AI-enabled lethal autonomous weapon systems may pose a number of operational risks, such as simple malfunctions, software errors, unexpected environmental interactions, and, most importantly, the threat of adversaries developing measures that intentionally undermine or interfere with these autonomous systems (for example, spoofing or behavioural hacking) in an attempt to distort data or target the algorithm itself.


For instance, let us assume nation X initiates a malicious cyber-attack to spoof nation Y's AI-enabled automatic target recognition systems, causing the weapon system to misinterpret civilian objects as military installations. Based on this incorrect information, and given the incapacity of human supervisors to discover the spoofed images in time to take remedial action, nation Y might cause fratricide, civilian deaths, or even an inadvertent escalation of a conflict. Such a spoofing attack on the weapon system's algorithm is usually carried out in such a manner that the manipulated image appears to the target recognition system as indistinguishable from a legitimate military target, even while the perturbation is too subtle to be noticed by, let alone mislead, the human eye.
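A toy sketch of such an adversarial perturbation follows, in the spirit of the fast gradient sign method. A simple linear classifier stands in for a target recognition model; the weights, inputs, and ‘civilian vs military’ labels are all hypothetical.

```python
# A toy, hypothetical sketch of an adversarial "spoofing" perturbation (FGSM-style).
import numpy as np

rng = np.random.default_rng(2)

# Pretend weights of a trained "civilian (0) vs military (1)" classifier.
w = rng.normal(0, 1, 100)
b = 0.0

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # P(military)

x = -0.1 * np.sign(w) + rng.normal(0, 0.01, 100)  # a clearly "civilian" input
print("clean score:", predict(x))                 # well below 0.5

# FGSM step: nudge each feature in the direction that raises the score.
eps = 0.15                                        # a small perturbation budget
x_adv = x + eps * np.sign(w)                      # sign of the logit's gradient
print("spoofed score:", predict(x_adv))           # pushed well above 0.5
```

The perturbation is small in every individual feature, yet its cumulative effect flips the model's decision, which is the essence of why such spoofing is invisible to human supervisors reviewing the same imagery.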


Moreover, the explainability problem that attends the use of AI may worsen these risks. An inadequate understanding of how an AI algorithm arrives at a specific decision may make it difficult to identify whether an erroneous output was attributable to the AI's mathematical model failing to accurately categorise the military target, for example, owing to environmental boundary conditions, or to the training dataset having been subjected to data poisoning by a malicious actor. Not only that, but unless the AI's machine-learning process is halted, it may learn things that it was not designed to learn or carry out tasks that humans did not anticipate it would perform. Therefore, since so much is at stake, it is critical that we address the emerging cyber threats that affect AI-enabled space systems and missions, particularly in a military setting, and that we define and oversee an AI system's degree of autonomy in space, as well as its interface with human operators.


Cyber-attacks that target space assets can have a severe impact on the affected country’s space capabilities. The situation becomes more complex if AI is used to cause crashes or collisions between autonomous space assets, which could also damage non-autonomous satellites. While AI can be useful to detect and prevent collisions, as in the automated collision avoidance system designed by the European Space Agency [“ESA”], if used maliciously, AI can be extremely detrimental to the physical integrity of space assets in particular and the safety of the outer space environment in general. While cyber-attacks on space assets have thus far not involved AI, there is a strong possibility of such attacks taking place in the near future, especially with regard to deep space activities that might rely entirely upon autonomous systems. The next section of the piece explores the legal challenges concerning the impact of cyber-attacks on autonomous space assets.


III. Legal Challenges


The interface between AI and the outer space sector raises some serious legal questions. The most important challenge concerns the determination of liability. While the Outer Space Treaty deals with the concepts of ‘international responsibility’ and ‘international liability’ under Articles VI and VII, respectively, and the Liability Convention expands on the notion of liability through Articles II, III, and IV, it is essential to bear in mind that, as these instruments were created in the 1960s-70s, the technological advancements of the contemporary setting might not fit the original legal imagination. However, there is a possibility of reimagining the provisions of the Outer Space Treaty and the Liability Convention in light of the same. For instance, Article VI of the Outer Space Treaty deals with international responsibility, whereby States Parties bear responsibility for ‘national activities in outer space’, whether carried on by governmental or non-governmental entities.


Article VII deals with international liability and imposes liability on the launching state for the damage caused to another state, including its natural or juridical persons. It is pertinent to note that international responsibility imposes international legal obligations on states as regards supervision of national space activities, while international liability imposes liability on the launching states for the damage caused by their space objects. Therefore, while the Outer Space Treaty maintains a distinction between the two concepts, international responsibility and international liability are closely related: the launching state is both responsible for the space activities and liable for any damage arising from them. A ‘launching state’ is either the state that launches the space object, the state that procures the launch, or the state from whose territory or facility the launch takes place [Article I(c), Liability Convention]. A ‘space object’ includes its component parts as well as its launch vehicle and parts thereof [Article I(d), Liability Convention]. Therefore, the term as it appears in the Liability Convention is wide enough to cover autonomous space assets within its fold, as AI capabilities form part of the software component of such space assets.


The Liability Convention lays down two categories of liability based on where the damage occurs. Under Article II, it imposes absolute liability if the damage is caused on the surface of the Earth or to aircraft in flight. This is based on the notion of the extraordinary risk posed to the safety of human lives and property. Under Article III, it imposes fault liability: if a space object causes damage to the space object of another launching state elsewhere than on the surface of the Earth, the launching state of the space object that causes the damage is liable only if the damage is due to its fault or the fault of persons for whom it is responsible. Therefore, if an autonomous space asset causes damage on the surface of the Earth or to aircraft in flight, the launching state of that space object would be held absolutely liable.


In such a scenario, there is no question of probing into the intelligence of the space object and ascertaining whether it acted of its own accord. However, one could be concerned about the applicability of ‘gross negligence’ under Article VI of the Liability Convention, which deals with exoneration from absolute liability. As the term has not been defined under the Convention and is generally understood to refer to a ‘standard of care’ attributable to human actions, the absence of human conduct as regards autonomous space assets might make it very difficult for the launching state to invoke exoneration from liability. Further, there is a problem with reimagining the fault-based liability system under the Liability Convention: fault liability might not be capable of being invoked even where the autonomous space asset caused the damage, because fault is generally associated with human actions and behaviour, and intelligent space assets might not fit the traditional contours of this idea.


Therefore, due to the innate lacunae in the Liability Convention, which is focused on human actions, omissions, or intent, it might be difficult to invoke fault-based liability if the damage is caused elsewhere than on the surface of the Earth. For instance, if an autonomous space asset is made to collide with another autonomous space object in outer space through a cyber-attack, it would be difficult to attribute fault and establish liability. The next section of the piece provides the concluding remarks and suggestions.


IV. Conclusion


It is pertinent to note that the international community is taking cognisance of the importance of adopting a normative approach to deal with the contemporary threats to outer space. For instance, the United Nations [“UN”] General Assembly Resolution 75/36, adopted on 7 December 2020, emphasises that outer space should continue to be maintained as a peaceful, stable, secure, and sustainable environment for exploration and the ‘benefit of all’. It reaffirms the core goals of the Outer Space Treaty and asserts that the use of outer space should be guided by the principles of cooperation and mutual assistance. The document also stresses the role of the Conference on Disarmament in the prevention of an arms race in outer space and in dealing with issues concerning the ‘weaponization of outer space and threats from capabilities on Earth’. While there have been several attempts at laying down clear guidelines regarding the prevention of an arms race in outer space, such congregations have not necessarily produced the desired outcomes.


For example, a Group of Governmental Experts [“GGE”] was established in 2017 by General Assembly Resolution 72/250; however, despite meeting in 2018 and 2019, the GGE failed to produce a consensus report. While the GGE noted some important focus areas in both meetings, it is crucial that its work yield practical results in collaboration with the UN Committee on the Peaceful Uses of Outer Space [“UNCOPUOS”], the UN Office for Outer Space Affairs [“UNOOSA”], the UN Office for Disarmament Affairs, and the UN Institute for Disarmament Research. The use of autonomous space assets can make space exploration easier for humankind; however, the dangers related to the interface of AI, cybersecurity, and space activities should not be underestimated. The international space law instruments, including the Outer Space Treaty and the Liability Convention, were not drafted with the vision of including autonomous space objects within their fold.


Therefore, there is a need to reimagine the core UN treaties concerning outer space in a new light. The liability regime governed by a combined reading of Article VII of the Outer Space Treaty and the relevant provisions of the Liability Convention does not provide legal solutions to the challenges associated with the use of autonomous space assets. However, there is a possibility of invoking Article VI of the Outer Space Treaty to hold the state accountable as an alternative to the rather sketchy liability system. While appropriate amendments to the Liability Convention are desirable, they may not be practically feasible, as is evident from the rather unsatisfactory GGE experience of the past. The international community may also come up with a specific framework that may pave the path for addressing the legal challenges as regards the interface of AI, cybersecurity, and the outer space sector.


 

*Ishita Das is a legal professional with two years of industry experience and about three years of academic experience. Her areas of specialisation include International Space Law and Cyber Law. She may be contacted at ishita.das@nalsar.ac.in. (Assistant Professor (Law), NALSAR University of Law, Hyderabad).


**Kanishka Bhukya is a third-year law student at the National Law School of India University in Bangalore. He may be contacted at kanishkabhukya@nls.ac.in.


