
The Alan Turing Institute stresses AI's vital role in UK national security

Apr 25, 2024 Hi-network.com

A recent report from the Alan Turing Institute's Centre for Emerging Technology and Security (CETaS), commissioned by the UK government, sheds light on the pivotal role of artificial intelligence (AI) in shaping national security strategy. Released amid growing recognition of AI's potential, the report outlines both the technology's benefits and its risks, and highlights the need for responsible implementation in decision-making processes.

According to the report, AI is considered a valuable tool to help senior officials in government and intelligence make decisions, facilitating faster and more accurate data analysis to enhance national security. However, it also warns of the potential amplification of uncertainty, as AI introduces new dimensions to intelligence analysis. Consequently, decision-makers are urged to undergo additional training to comprehend the nuances and uncertainties introduced by AI-enriched intelligence, fostering trust in the technology.
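To make that point concrete: the report stresses that AI outputs are probabilistic calculations rather than certainties, and that skewed data degrades them. The following minimal, self-contained Python sketch (toy numbers throughout, not drawn from the report) simulates a detector whose stated confidence is trustworthy only when reality matches the assumptions baked into it. When threats are rarer than the model implicitly assumes, the same confidence score systematically overstates the true threat rate.

```python
import math
import random

random.seed(42)

NOISE_SD = 0.15        # spread of the observed signal around each class mean
MEANS = (0.3, 0.7)     # average signal for (benign event, genuine threat)

def detector_confidence(signal: float) -> float:
    """Toy 'AI' threat score: the Bayes posterior for the signal, assuming
    threats and benign events are equally common (a 50/50 prior is baked in)."""
    slope = (MEANS[1] - MEANS[0]) / NOISE_SD ** 2   # ~17.8 for these constants
    midpoint = sum(MEANS) / 2                        # 0.5
    return 1.0 / (1.0 + math.exp(-slope * (signal - midpoint)))

def simulate(n: int, threat_rate: float):
    """Return (stated confidence, actual outcome) pairs; threat_rate controls
    how skewed reality is relative to the detector's built-in 50/50 assumption."""
    pairs = []
    for _ in range(n):
        is_threat = random.random() < threat_rate
        signal = MEANS[is_threat] + random.gauss(0.0, NOISE_SD)
        pairs.append((detector_confidence(signal), is_threat))
    return pairs

def calibration_report(pairs, label: str) -> None:
    """Bucket stated confidence and compare it with the observed threat rate."""
    buckets = {}
    for conf, outcome in pairs:
        b = min(int(conf * 10), 9)   # buckets: 0-10%, 10-20%, ..., 90-100%
        buckets.setdefault(b, []).append(outcome)
    print(label)
    for b in sorted(buckets):
        outcomes = buckets[b]
        print(f"  stated {10 * b:3d}-{10 * (b + 1):3d}%: "
              f"observed threat rate {100 * sum(outcomes) / len(outcomes):5.1f}% "
              f"(n={len(outcomes)})")

# When reality matches the detector's assumptions, stated confidence tracks
# the observed threat rate closely.
calibration_report(simulate(20_000, threat_rate=0.50), "Balanced data (50% threats):")
# When threats are rare, the same stated confidence overstates the real threat
# rate -- the kind of uncertainty analysts must caveat for decision-makers.
calibration_report(simulate(20_000, threat_rate=0.05), "Skewed data (5% threats):")
```

In the skewed run, the high-confidence buckets report strong confidence while the observed threat rate sits noticeably lower: exactly the gap that the assurance processes and caveated communication discussed below are meant to surface.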

The research considered:
  1. 'Whether national security decision-makers are sufficiently equipped to assess the limitations and uncertainty inherent in assessments informed by AI-enriched intelligence.'
  2. 'When and how the limitations of AI-enriched intelligence should be communicated to national security decision-makers to ensure a balance is struck between accessibility and technical detail.'
  3. 'Whether further governance, guidelines, or upskilling may be required to enable national security decision-makers to make high-stakes decisions based on AI-enriched insights.'
Key findings from the research are as follows:
  1. 'AI is a valuable analytical tool for all-source intelligence analysts. AI systems can process volumes of data far beyond the capacity of human analysts, identifying trends and anomalies that may otherwise go unnoticed. Choosing not to make use of AI for intelligence purposes therefore risks contravening the principle of comprehensive coverage in intelligence assessment, set out in the Professional Head of Intelligence Assessment Common Analytical Standards. Further, if key patterns and connections are missed, the failure to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government.'
  2. 'However, the use of AI exacerbates dimensions of uncertainty inherent in intelligence assessment and decision-making processes. The outputs of AI systems are probabilistic calculations (not certainties) and are currently prone to inaccuracies when presented with incomplete or skewed data. The opaque nature of many AI systems also makes it difficult to understand how AI-derived conclusions have been reached.'
  3. 'There is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems used in intelligence analysis and assessment to mitigate the risk of amplifying bias and errors.'
  4. 'The intelligence function producing the assessment product remains ultimately responsible for evaluating relevant technical metrics (such as accuracy and error rates) in AI methods used for intelligence analysis and assessment, and all-source intelligence analysts must take into account any limitations and uncertainties when producing their conclusions and judgements.'
  5. 'National security decision-makers currently require a high level of assurance relating to AI system performance and security to make decisions based on AI-enriched intelligence.'
  6. 'In the absence of a robust assurance process for AI systems, national security decision-makers generally exhibited greater confidence in the ability of AI to identify events and occurrences than the ability of AI to determine causality. Decision-makers were more prepared to trust AI-enriched intelligence insights when they were corroborated by non-AI, interpretable intelligence sources.'
  7. 'Technical knowledge of AI systems varied greatly among decision-makers. Research participants repeatedly suggested that a baseline understanding of the fundamentals of AI, current capabilities, and corresponding assurance processes, would be necessary for decision-makers to make load-bearing decisions based on AI-enriched intelligence.'
The report recommends the following actions to embed best practice when communicating AI-enriched intelligence to strategic decision-makers:
  1. 'The Professional Head of Intelligence Assessment (PHIA) should develop guidance for communicating uncertainty within AI-enriched intelligence in all-source assessment. This guidance should outline standardised terminology to be used if articulating AI-related limitations and caveats to decision-makers. Guidance should also be provided on the threshold at which assessments should communicate the use of AI-enriched intelligence to decision-makers.'
  2. 'A layered approach should be taken by the assessment community when presenting technical information to strategic decision-makers. Assessments in a final intelligence product presented to decision-makers should always remain interpretable to non-technical audiences. However, additional information on system performance and limitations should be available on request for those with more technical expertise.'
  3. 'The UK Intelligence Assessment Academy should complete a Training Needs Analysis on behalf of the all-source assessment community to identify the requirement for training for new and existing analysts. The Academy should work with all-source assessment organisations to develop appropriate training in response to the Analysis.'
  4. 'Training should be offered to national security decision-makers (and their staff) to build their trust in assessments informed by AI-enriched intelligence. Decision-makers should be given basic briefings on the fundamentals of AI and corresponding assurance processes.'
  5. 'Short, optional expert briefings should be offered immediately prior to high-stakes national security decision-making sessions where AI-enriched intelligence underpins load-bearing decisions. These sessions should brief decision-makers on key technical details and limitations, and ensure they are given advanced opportunity to consider confidence ratings. These briefings should be jointly coordinated by the JIO and National Security Secretariat and should draw from cross-governmental expertise from the network of Chief Scientific Advisers and relevant Scientific Advisory Councils. Guidance on when to offer briefings should be produced, and the need for briefings should be continuously assessed; as decision-makers become more comfortable with consuming AI-enriched intelligence, the level of desired assurance may reduce, and briefings may eventually become unnecessary.'
  6. 'A formal accreditation programme should be developed for AI systems used in intelligence analysis and assessment to ensure models meet minimum policy requirements of robustness, security, transparency, and a record of inherent bias and mitigation. Technical assurance for the application of a system to a specific problem should be devolved to relevant organisations, and each organisation's assurance process should be accredited. This programme will require dedicated resourcing, bringing together understanding of intelligence assessment standards and processes with technical expertise. PHIA should assist in developing principles and requirements, while technical expertise for accreditation and testing should be drawn from technical authorities in the intelligence community and across government.'

In response to the report's findings, the government has taken proactive steps towards AI adoption across the public sector. Initiatives like the Generative AI Framework for HMG aim to ensure safe and secure AI usage. Deputy Prime Minister Oliver Dowden has pledged to carefully consider the report's recommendations to inform national security decisions, emphasizing the importance of harnessing AI effectively. Anne Keast-Butler, Director of GCHQ, underscores the critical role of AI in identifying threats amidst a rapidly evolving global landscape, stressing the need for ongoing efforts in AI safety and security.

Dr. Alexander Babuta, Director of The Alan Turing Institute's Centre for Emerging Technology and Security, reaffirms the institute's dedication to supporting the UK intelligence community with evidence-based research. The aim is to maximize AI's potential in safeguarding the nation, ensuring that advancements in technology are met with responsible and informed decision-making.


Tags: Artificial Intelligence
