Guardians of the Future: Ensuring AI Safety

In this blog, QA's Director of Cyber Security, Richard Beck, delves into the latest developments in AI safety and governance, and their impact on businesses.

Artificial intelligence (AI) is rarely out of the news. This week is no exception, with the UK Government announcing the setup of an AI Safety Institute, and with leading AI companies starting to share their AI technology safety strategies. Meanwhile, the US has issued an Executive Order on ‘Safe, Secure and Trustworthy AI’, focused in particular on new standards for AI safety and security.

Whilst the order is not yet permanent US law, it places a high emphasis on the National Institute of Standards and Technology's (NIST) efforts to enhance existing guidelines for AI risk management, and on ‘red team’ testing to identify potential vulnerabilities. Skills are not overlooked in the executive order either: the US administration notes that individuals with AI skills can seek opportunities with the federal government via AI.gov. The significance of AI is growing across all professions, as the UK Government announces its AI skills package. The World Economic Forum argues that, in considering the AI skills gap, we should all prepare our workforce for emerging roles by recognising commonalities between current and future skill requirements.

There has been a surge in AI governance initiatives at various levels, encompassing national and international government support alongside multi-industry cooperation, which represents a logical extension of the swift adoption of AI and the industry's realignment around it. These established measures have laid the foundation for participation in the Group of Seven's (G7's) recently released Guiding Principles and Code of Conduct on Artificial Intelligence.

The executive order also requires “the development of a National Security Memorandum that directs further actions on AI and security.” This is likely to support international views, announced later this week, on the ethical use cases and risks associated with ‘Frontier AI’ across government, military, and law enforcement.

The European Union (EU) is nearing the end of negotiations on its AI Act, and it's interesting to note how closely aligned its goals and objectives are with the US Executive Order. Both call out the need for testing and enhanced safety and security measures, along with privacy and transparency protections for consumers. However, a significant distinction exists: the EU AI Act is a legal framework with proposed penalties, while the Executive Order will need cross-party support and influence in the US federal government to become legislation.

In partnership with the IAPP, QA has launched an AI governance course aimed at people looking to start using AI in their business, helping them understand the governance, safety, security, privacy, and risk challenges around it. It’s not a generative AI engineering technical course, so it’s much more widely accessible for all audiences seeking to use and understand AI in their roles.

The Certified Artificial Intelligence Governance Professional (AIGP) curriculum provides an overview of AI technology, a survey of current law, international government AI safety guidance and cooperation, and strategies for risk management, security and safety considerations, privacy protection, bias and trustworthiness, and other topics. Designed to ensure AI is used in a safe way, the training equips an AIGP-certified professional to implement, and effectively communicate across teams, the emerging best practices and rules for responsible management of the AI ecosystem.

In summary, an AI governance professional takes responsibility for steering the adoption and implementation of AI in a way that minimises risk, enhances business growth and opportunity, and ensures safety while instilling trust.