  • From Europe to Europa: implications of the European AI Act for the space industry

    Paper number

    IAC-24,E7,3,12,x86944

    Author

    Mr. Thomas Graham, Swinburne University of Technology, Australia

    Coauthor

    Mr. Giovanni Tricco, Alma Mater Studiorum - University of Bologna, Italy

    Coauthor

    Mr. Francesco Casaril, IMT, Belgium

    Year

    2024

    Abstract
    Artificial Intelligence (AI) is increasingly being incorporated into space systems to undertake tasks ranging from spacecraft autonomy to data processing and analysis. However, the development and deployment of AI in the space domain raises regulatory questions regarding, among other things, risk assessment, safety certification, and accountability. As one of the first comprehensive attempts to regulate AI, the European Union’s forthcoming ‘AI Act’ proposes a risk-based approach with stricter requirements for “high-risk” applications. The United Nations Space Treaties do not explicitly address AI as a set of technologies, and similarly, emerging regulatory frameworks for AI lack specific provisions addressing its application in the space domain. The absence of explicit regulations leaves significant room for interpretation and raises uncertainties regarding the consequences for those employing AI in the space industry, creating a regulatory gap. This paper examines the implications of the European AI Act for the global space industry, focusing specifically on whether space infrastructure may be designated as ‘critical infrastructure’ under EU law, thereby falling under the ‘high-risk’ classification and attracting the various controls and obligations associated with that classification.
    
    The paper analyzes provisions of the EU AI Act relevant to the space domain, examining exceptions to the Act, primarily those concerning defense and scientific research, to identify their relevance to use cases for AI in the space industry. It explores ambiguities in the regulations regarding the potential inclusion of space infrastructure as critical infrastructure and discusses the additional compliance obligations this would place on organizations in the space sector. It notes the lack of specific considerations related to the space industry across the entire Act. Proceeding from this analysis, the paper explores the need to evolve risk-based, context- and use-case-specific assessment frameworks for AI in the space sector. The paper contends that a ‘high-risk’ categorization should carefully evaluate the criticality of space infrastructure based on specific information, rather than relying on ambiguous definitions.
    
    In summary, this paper explores the necessity for nuanced regulation and improved dialogue between space actors, regulators and experts to promote responsible AI in outer space.
    Abstract document

    IAC-24,E7,3,12,x86944.brief.pdf

    Manuscript document

    IAC-24,E7,3,12,x86944.pdf (authorized access only)

    To get the manuscript, please contact IAF Secretariat.