Assessing AI Risks & Controls (Blog Article by Jeff Hare)
Written by Fred Roth, CISA, Sr. Adjunct Lecturer at ERP Risk Advisors
Artificial Intelligence (AI) is fast-moving, complex, and seemingly limitless. Its risks and rewards are in the news daily, and as with any new technology, security and controls lag behind technological growth.
Who will assess the security and controls of this innovative technology for your enterprise?
“…with AI being one of the latest hot topics, I do feel it is critical for auditors to be able to address its risks and to articulate these to management,” said Maria Mendes, CISA, CIA, CRMA, CCSA, an Internal Audit and Risk Consultant, at an ISACA training seminar in Hartford, Conn. this past spring (2023).
A significant risk is inaccurate AI results and the flawed management decisions that follow. AI results are only as good as the integrity of the AI data, the reliability of the AI application logic and algorithms, the thoroughness of the AI system’s “training,” and the integrity and security of the human interface. Equally important is the reliability of the underlying AI technology, which includes machine learning algorithms, neural networks, and Robotic Process Automation (RPA) components.
An important aspect of AI data governance is effective monitoring and oversight, with AI roles and responsibilities clearly defined, including ownership of AI applications. Management also needs to address liability considerations should AI produce inaccurate results that adversely impact external clients or customers.
An inherent complication when using third-party AI solutions is assessing key controls to determine whether the AI application is delivering accurate results. Invoking a right-to-audit clause with the third party will most likely yield a SOC 2 report, which may not provide adequate control assurance.
New laws and regulations are being passed, the most significant to date being the European Union AI Act. Those in violation of the EU AI Act face fines of up to 30 million Euros or 6% of global annual turnover, whichever is higher. For Microsoft, which backs OpenAI, the maker of ChatGPT, that could mean a fine of over $10 billion if it were found in violation of the EU AI Act.
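The “whichever is higher” penalty rule above can be sketched as a simple calculation. This is an illustrative sketch only: the turnover figures below are hypothetical round numbers, not any company’s actual financials, and the thresholds reflect the fine structure as described in this article.

```python
def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch of the EU AI Act's top-tier penalty:
    the greater of a fixed EUR 30 million or 6% of global annual turnover."""
    FIXED_FLOOR_EUR = 30_000_000
    TURNOVER_RATE = 0.06
    return max(FIXED_FLOOR_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# A large enterprise with EUR 200 billion turnover: 6% dominates (EUR 12 billion).
print(max_eu_ai_act_fine(200e9))   # 12000000000.0
# A smaller firm with EUR 100 million turnover: the EUR 30 million floor applies.
print(max_eu_ai_act_fine(100e6))   # 30000000.0
```

For a company with roughly EUR 200 billion in turnover, the 6% tier works out to about EUR 12 billion, which is consistent with the article’s “over $10 billion” estimate.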
We asked ChatGPT, “Why should I be concerned with AI?” Here is its response (ChatGPT provided more depth for each bullet, which is omitted here):
“Being concerned about AI is a reasonable and responsible approach, as the technology continues to advance rapidly, affecting various aspects of our lives. While AI has the potential to bring numerous benefits and improvements, there are also significant challenges and potential risks associated with its development and deployment. Here are some reasons why you should be concerned about AI:
- Job Displacement
- Bias and Fairness
- Privacy and Security
- Lack of Accountability
- Inaccurate AI Results
- Ethical Concerns
- Unemployment and Economic Inequality
- Lack of Regulation and Standards
While AI offers numerous benefits and has the potential to solve some of humanity’s most pressing modern-day challenges, it is essential to approach its development and implementation with careful consideration of the potential risks and societal impacts. This calls for collaboration between policymakers, technologists, ethicists, and the public to ensure that AI is developed and used responsibly, transparently, and in a way that benefits all of society.”
** End of ChatGPT AI Dialogue **
Our new seminar, “Assessing AI Risks & Controls,” is an 8-CPE on-demand seminar that gives audit and risk management professionals the opportunity to influence AI security and controls as the technology evolves. The seminar includes a chapter written entirely by ChatGPT.
We will NOT be addressing the risk of AI destroying humanity or starting World War III. Instead, we will focus on the risks and controls that pertain to your enterprise developing AI applications or using third-party AI services. The controls discussed are relevant to all AI applications, whether developed in-house or provided by a third party.