
Foundational artificial intelligence risk assessment guideline

Document type:
Guideline
Version:
v1.0.0
Status:
Current, Non-mandated
Owner:
QGCDG
Effective:
September 2024–current
Security classification:
OFFICIAL-Public

Purpose

A Queensland Government Enterprise Architecture (QGEA) guideline provides information for Queensland Government agencies on the recommended practices for a given topic area. Guidelines are generally for information only and agencies are not required to comply. They are intended to help agencies understand the appropriate approach to addressing a particular issue or doing a particular task.

This document provides guidance to agencies on the considerations and issues to be addressed when assessing risks during an artificial intelligence (AI) lifecycle in a Queensland Government context. The purpose of the Foundational AI risk assessment (FAIRA) presented in this guideline is to promote a consistent approach to identifying, evaluating, communicating, and managing risks associated with the AI lifecycle.

The FAIRA framework (or equivalent) can assist agencies with meeting their mandated requirement to maintain a consistent and evidence-based process to evaluate AI under the Artificial intelligence governance policy. For further information on what agencies must do regarding the governance of AI, please see the AI governance policy.

The intent of this document is to establish a point of reference from which agencies can formally develop specific policies, standards, and procedures relating to the lifecycle of AI and the management of risks throughout the AI lifecycle.

Audience

This document is intended for:

  • Senior executives
  • Chief Information Officers
  • Risk managers
  • Project team members
  • Business users
  • Procurement officers.

Scope

This document sets out the considerations for identifying and documenting risks specific to AI solutions. It is intended to complement (rather than replace) any risk management frameworks currently being used by agencies. The use of AI products and services for Queensland Government is governed by the same responsibilities, obligations, and policies for the use of other digital products or services.

This document is provided as guidance only and does not seek to create new regulation governing AI within the Queensland Government or to provide legal, ethical, or implementation advice on the use of any specific product or service.

Benefits

The benefits of conducting a FAIRA (or equivalent) include:

  • promotes a common understanding of AI risk: Identifies risk features to establish common controls for AI solutions in line with existing Queensland Government legislation, values, policies, requirements, processes, and frameworks
  • supports sector-specific frameworks: Provides the basis for more detailed application- or domain-specific criteria evaluation for specific government sectors
  • supports initial risk assessment: Provides a foundational understanding of AI risk and relevant controls that can be incorporated into broader agency risk assessment frameworks
  • supports ongoing risk management: Helps inform related work on mitigation, compliance, and enforcement throughout the AI solutions lifecycle, including actions for ongoing evaluation and monitoring.

Background

The FAIRA framework is a transparency, accountability, and risk identification tool for Queensland Government agencies involved in the development, procurement, use, or evaluation of artificial intelligence (AI) solutions. The FAIRA (or equivalent) aims to help stakeholders identify risks and potential mitigation actions specific to the lifecycle of AI.  FAIRA is ‘foundational’ because stakeholders can use it to describe an AI solution in terms of technical components and their associated impacts as a foundation for action in existing compliance and risk management processes.

Agencies can use FAIRA (or equivalent) as the basis for communicating AI risks and mitigations with stakeholders and to inform other existing impact evaluation frameworks such as privacy or human rights. FAIRA (or equivalent) can clarify the requirements, implementation, and operation of an AI solution and in doing so strengthen public trust in how Government manages AI.

What is AI and when to use the FAIRA (or equivalent)

Given the broad range of definitions of AI, the suggestions below provide additional guidance to agency teams when identifying an AI solution. A technology is an AI system if it meets one or more of the following criteria:

  1. It meets the OECD definition of an AI System found in the National Framework for the Assurance of AI in Government:
    ‘A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’
  2. It is classified as AI under ISO 22989 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology.
  3. It is classified as AI under the QGEA technology classification framework V.5.0 August 2024 when:
    1. a team identifies that the project, product, or service uses AI
    2. a vendor describes its product or service as using AI
    3. users, the public or other stakeholders believe the project, product or service uses AI.
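The "one or more criteria" test above could be recorded as a simple checklist. The following sketch is purely illustrative and not part of the guideline; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AICriteria:
    """Hypothetical record of the three identification criteria (illustrative only)."""
    meets_oecd_definition: bool = False       # criterion 1: OECD definition of an AI system
    classified_under_iso_22989: bool = False  # criterion 2: classified as AI under ISO 22989
    classified_under_qgea: bool = False       # criterion 3: classified under the QGEA framework
                                              # (team, vendor, or stakeholder identification)

    def is_ai_system(self) -> bool:
        # A technology is treated as an AI system if it meets ONE OR MORE criteria.
        return any((self.meets_oecd_definition,
                    self.classified_under_iso_22989,
                    self.classified_under_qgea))

# Example: a vendor describes its product as using AI (criterion 3).
assessment = AICriteria(classified_under_qgea=True)
```

Recording the assessment this way also creates a simple audit trail for the "When is a FAIRA needed" decision.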

Lack of agreement on definitions of AI should not prevent the identification of risks by teams or the ability to communicate risk to stakeholders through FAIRA (or equivalent). Agency teams may wish to adopt a specific AI definition that describes their classification of AI solutions. Products that carry some ambiguity as to whether AI is integrated can still be assessed by FAIRA (or equivalent) to identify risks surrounding the human-machine interface, as many of these risks may overlap with risks relating to non-AI automated ICT and decision-making software. See the definitions section for further clarification on key terms. If an agency is unsure whether their project, solution, or service includes AI, they may wish to seek guidance from the Data and Artificial Intelligence team within the Queensland Government Customer and Digital Group (dai@chde.qld.gov.au).

A risk assessment should be proportionate to the risk inherent in a business process and the type(s) of AI functions involved (see ISO 22989 for guidance on types of AI functions). It is recommended that a FAIRA (or equivalent) is prepared whenever a business process incorporates AI functions for reasoning or decision-making, or when AI functions of reasoning or decision-making within a business process change. An agency should develop detailed assessment criteria and risk profiles within the FAIRA framework suitable to the subject matter and commensurate to the level of complexity of the reasoning or decision making evaluated.

If an agency is unclear about whether a FAIRA is required, or they have determined that it is not necessary, it is recommended that the “When is a FAIRA needed” template (Appendix A) is completed as a record of the decision.

FAIRA (or equivalent) and existing agency risk management [1]

Agencies are required [2] to establish and maintain appropriate systems of internal control and risk management and should already have well established risk management frameworks in place. The Australian Standard AS/NZS ISO 31000:2009: Risk Management – Principles and Guidelines typically forms the basis for agency risk management frameworks. The figure below depicts the key process steps.

Figure 1: Overview of the risk assessment framework

It is important to involve a wide range of stakeholders from different disciplines within the agency (such as business, finance, security, business continuity planning, legal, and IT) and to ensure that the business owners of the information assets, applications, and associated technologies are included during the process and at final sign-off on conclusion.

The aim of this guideline is to assist agencies in developing a risk assessment when considering AI use. It outlines the AI considerations and risks that agencies should address as part of their existing risk management framework processes.

Establish the context

The purpose of this phase of the risk assessment framework (Figure 1) is to define the parameters within which risks will be managed and set the scope for the rest of the process. This phase is concerned with developing an understanding of the internal and external context within which the department or business area operates and the factors that may influence the achievement of objectives. It also establishes the risk management context (i.e. the organisation and parameters of the risk management task itself) and scope of the target system being assessed.

Figure 2: Risk assessment framework - Establish the context

Understand the internal and external environment

Understanding the internal and external environment is part of a broader scanning activity and provides the platform for building strategic, business, and operational objectives and understanding how the agency operates.

The primary influences on the external environment relate to the social, cultural, political, legal, regulatory, financial, technological, and economic environments within which the agency operates. Agencies should consider what external factors are relevant to their situation, and factor these into their risk assessment process. Some examples include:

  • Queensland Government information standards, policies, and frameworks
  • state and federal statutory/legislative requirements, e.g. the Public Records Act 2002 and the Information Privacy Act 2009
  • foreign laws and potential jurisdictional access to information
  • the expectations and strategic direction of the Queensland Government
  • community and industry expectations
  • product roadmaps and the stability of the vendor marketplace and offerings.

Influences on the internal environment may include:

  • the agency's governance and accountability structures
  • policies, standards, and guidelines
  • resource availability within the agency (for example, information systems, staffing, and funding)
  • organisational readiness
  • nature and extent of contractual relationships
  • the agency culture, including the security culture
  • existing risk management expertise and practices
  • budget/financial/timing constraints
  • ICT architecture and technical constraints.

Risk management context

The risk management context refers to the organisation and parameters of the risk management task itself. Key considerations include:

  • risk appetite
  • risk tolerance
  • risk impact and likelihood
  • risk matrix and responsibilities
  • risk rating responses
  • risk management maturity.

The agency’s risk management framework will outline the preferred treatment/tools in these areas.

Other considerations

Information assets

AI can assist agencies create and acquire information assets (see the Information asset lifecycle guideline). AI’s involvement in the creation or acquisition of an information asset can also raise a range of data management, data quality, and security risks that ought to be addressed consistent with an agency’s application of relevant policies. Agencies should classify their information and information assets according to business impact and implement appropriate controls according to the classification (see the Queensland Government Information Security Classification Framework).

For further information on the challenges of using AI solutions see the Use of Generative AI in Queensland Government guideline.

Automated decision making

Some AI use cases may constitute automated administrative decision making. The Commonwealth Ombudsman’s Automated Decision Making Better Practice Guide (2023) (‘the Guide’) outlines important considerations to determine whether the use of AI solutions qualifies as engaging in automated decision making. The Guide defines an automated system as “a computer system that automates part or all of an administrative decision-making process… without the direct involvement by a human being at the time of decision” (p.5).

The Guide explains that automated systems can be used in different ways in administrative decision-making. It provides the following examples:

  • they can make a decision
  • they can recommend a decision to the decision-maker
  • they can guide a decision-maker through relevant facts, legislation and policy, closing off irrelevant paths as they go
  • they may include decision-support systems, such as useful commentary about relevant legislation, case law and policy for the decision-maker at relevant points in the decision-making process
  • they can provide preliminary assessments for individuals or internal decision-makers
  • they can automate aspects of the fact-finding process which may influence subsequent decisions, for example by applying data:
    • from other sources (e.g. data matching information)
    • directly entered or uploaded to the system by an individual.

Agencies need to consider whether their use of AI constitutes automated decision making and manage any associated risks.

AI risk identification framework

Government agencies managing the lifecycle of an AI solution should use the FAIRA framework (or equivalent) to communicate risk with stakeholders. The FAIRA framework includes an AI component analysis (Part A), a values assessment (Part B), and a list of common controls (Part C) to assist teams with identification of actions that could be taken to reduce risk identified using the FAIRA.

A FAIRA (or equivalent) should be initiated at the earliest opportunity when an AI solution is under consideration for procurement, development, use, or discontinuance, and throughout its deployment and operation. Stakeholder consultation should be conducted and answers to any gaps should be sought from relevant experts to assist with clarification and confirmation (refer to Domains of AI Risk in FAIRA framework).

It is necessary to identify the boundaries/scope of the AI solution/system being targeted (e.g. what it contains and what it entails, and its integration points with associated upstream and downstream systems). It is also important to ensure that the context identifies what is not part of the scope of the evaluation.

Action

An AI assessment team should ensure risks identified during a FAIRA (or equivalent) are communicated to appropriate stakeholders for evaluation and management through the existing risk management processes in their agency. The framework contains a list of general controls that an agency can use to mitigate risks during an AI lifecycle. The FAIRA (or equivalent) should be updated and versioned when the system is updated as this may change any associated risks. A new FAIRA (or equivalent) should be completed if an AI solution is deployed for a different purpose, in a different domain, or in a different context of use. Risk analysis should proceed within an agency’s risk management framework with similar processes to those listed below.

Risk analysis, evaluation and treatment

The risk analysis, evaluation and treatment steps are not typically considered separately. They are interrelated processes which need to be considered by the agency simultaneously.

Risk analysis

Risk analysis is about developing an understanding of the risk in order to determine the level of risk and make decisions about how the risk should be treated. Risk analysis will result in determining the risk level or risk rating for each identified risk. It involves developing an understanding of each risk, its consequences and the likelihood of the risk occurring. The risk analysis will inform the evaluation of risks, whether risks need to be treated and the selection of the most appropriate risk treatment strategy.

Figure 3: Risk assessment framework - Risk analysis

Agencies will need to assess the likelihood and consequence of each risk occurring (taking into account existing controls). The process for analysing risk will differ from agency to agency. All agencies will use some sort of risk matrix mapping and ‘dashboard’ representation similar to that depicted below to identify a rating (e.g. low, medium, high, extreme) for each risk.

Figure 4: Example risk matrix (likelihood vs consequence)

Agencies may use different categories for likelihood/consequence, have differing criteria or thresholds for each category, or even have different risk ratings than those shown above. These variations do not matter. The point is that agencies will arrive at a per-risk assessment as follows:

Risk | Likelihood | Consequence | Rating | Risk owner
Risk 1 | ... | ... | ... | ...
Risk 2 | ... | ... | ... | ...
...etc | ... | ... | ... | ...

Agencies will often expand on the table above to outline the variation in likelihood/consequence based on inherent risk versus residual risk (refer to your agency’s risk management framework to determine if this approach is applicable).
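The matrix mapping described above can be encoded as a simple lookup. The category names, matrix cells, and function below are illustrative assumptions only; your agency's risk management framework defines the actual categories and thresholds.

```python
# Illustrative 5x5 risk matrix; rows = likelihood, columns = consequence.
# All names and cell values are examples, not prescribed by this guideline.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "severe"]

MATRIX = [
    ["low",    "low",    "low",    "medium",  "medium"],
    ["low",    "low",    "medium", "medium",  "high"],
    ["low",    "medium", "medium", "high",    "high"],
    ["medium", "medium", "high",   "high",    "extreme"],
    ["medium", "high",   "high",   "extreme", "extreme"],
]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Map a likelihood/consequence pair to a rating via the matrix."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

# e.g. risk_rating("likely", "moderate") -> "high"
```

The same lookup works regardless of how many categories an agency uses, provided the matrix dimensions match the category lists.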

Risk evaluation/treatment

The purpose of risk evaluation is to make decisions based on the outcomes of risk analysis about which risks are acceptable, which risks need treatment and the treatment priorities. The highest priority should be given to those risks that are evaluated as being the least acceptable. To treat unacceptable risks, agencies may improve existing controls or develop and implement new controls. The risk evaluation stage involves the following key steps:

  1. determine treatment actions using risk rating responses (refer to your agency risk management framework for details)
  2. determine the risk target (refer to your agency risk management framework for details)
  3. determine the treatment decision.

Figure 5: Risk assessment framework - Risk evaluation and treatment

The decision about how to treat a risk is based on the relationship between the current risk rating and the target risk rating.

  • Where the current risk rating is higher than the target risk rating, risk treatment options should be undertaken to reduce the risk to the required target.
  • Where the current risk rating is the same or lower than the target risk rating, the risk can be accepted and monitored.
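The comparison in the two points above can be sketched as a small decision rule. The rating scale and return labels are illustrative assumptions; the actual scale and responses come from your agency's risk management framework.

```python
# Illustrative ordering of risk ratings from lowest to highest severity.
RATING_ORDER = ["low", "medium", "high", "extreme"]

def treatment_decision(current_rating: str, target_rating: str) -> str:
    """Compare current vs target rating and return the indicated action."""
    if RATING_ORDER.index(current_rating) > RATING_ORDER.index(target_rating):
        # Current rating exceeds the target: treat to reduce to the target.
        return "treat"
    # Current rating is the same as or lower than the target: accept and monitor.
    return "accept and monitor"
```

In practice this comparison is done per risk, and the result feeds into the treatment options described below.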

It is important that risks are treated appropriately to reduce the risk to a level that is tolerable to the agency. It is also important that mitigation efforts are focussed on priority risk areas. In some instances, the risk target may be high despite the risk tolerance of the agency. This could occur in situations where no amount of reasonable mitigation treatment will effectively reduce the risk to a normally tolerable level.

When determining the treatment decision consider:

  • the causes of the risk and whether they are within the agency’s ability to manage
  • the effectiveness of existing controls to manage the causes of the risk
  • the resources required to implement treatment actions and the expected change to the risk level
  • the cost of implementing each treatment option against the benefits derived from it
  • the impact should the risk still occur despite the treatments applied
  • the gap between the current risk rating and the risk target.

The following treatment options are possible:

Treatment | Definition

Reduce: The agency applies risk treatments/mitigations that reduce either the likelihood or consequence of the risk(s) occurring.

Avoid: The agency makes an informed decision not to proceed with deployment of a particular solution/architecture in order to not be exposed to a particular risk. There are numerous possible ‘avoid’ scenarios depending on the context and outcome of the evaluation. Note: in practice, agencies may undertake a risk analysis for several potential options simultaneously as part of an overall options analysis (rather than assessing one option at a time, finding it unsuitable, and then starting over).

Share/transfer: The agency distributes risk with other parties. Potential options include:

  • in certain circumstances, and for certain risk types, sharing risk at a whole-of-government level may be acceptable where doing so at an agency level has been deemed unacceptable
  • shifting/sharing risk with the service provider may be an option for certain risk types; however, this approach is more likely to reduce risk only, since government agencies cannot ‘outsource’ responsibility for their regulatory/statutory requirements and remain ultimately accountable.

Accept: The agency determines that it can tolerate the risks introduced by the solution.

There may be a mixture of risk treatments applied – for example a combination of reduce, share and accept treatments could be applied across the range of individual risks to achieve an overall acceptable level of risk for an AI solution.
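A treatment plan that mixes options across individual risks, as described above, could be recorded along these lines. The enum labels and example plan are illustrative assumptions, not part of the guideline.

```python
from enum import Enum

class Treatment(Enum):
    """Illustrative encoding of the four treatment options described above."""
    REDUCE = "reduce likelihood or consequence via mitigations"
    AVOID = "do not proceed, to avoid exposure to the risk"
    SHARE_TRANSFER = "distribute risk with other parties"
    ACCEPT = "tolerate the risks introduced by the solution"

# A mixture of treatments applied across individual risks to reach an
# overall acceptable level of risk for an AI solution (hypothetical example):
plan = {
    "Risk 1": Treatment.REDUCE,
    "Risk 2": Treatment.SHARE_TRANSFER,
    "Risk 3": Treatment.ACCEPT,
}
```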

Reporting requirements

An agency should consider reporting its AI investments and risks in several ways:

  • ICT Profiling
  • Assurance reviews
  • FAIRA (or equivalent) registration on Digital Tools and Resources
  • Privacy
  • Cybersecurity
  • Risk

ICT profiling

Annual ICT Profile reporting will include reporting on AI use for Queensland Government agencies included in the AI governance policy’s applicability statement.

Assurance

Any initiatives that use AI submitted for review under the Digital Investment Governance Framework should include a FAIRA or equivalent in their submission.

Digital tools and resources

Completed FAIRA (or equivalent) reports can be submitted to QGCDG for inclusion in the Digital tools and resources web site.

Privacy

Agencies should manage and report AI risks to their adherence to the Queensland Privacy Principles through their standard information privacy management processes.

Cybersecurity

Agencies should manage and report AI risks related to cybersecurity through their standard cybersecurity risk management processes.

Risk

Agencies should manage and report AI risks through their standard risk management processes.

Advice

Agencies should ensure existing governance frameworks and bodies (such as audit and risk committees) are aware of initiatives that use AI technology and that related initiatives have been assessed and documented against the FAIRA framework or a suitable alternative framework.

Alignment

FAIRA (or equivalent) contributes to compliance with 6.1.2 AI Risk Assessment in ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system as a repeatable method to ensure that risk assessments are valid, consistent, and comparable.

Conducting a FAIRA (or equivalent) will help development or procurement teams identify the objectives of the AI solution and the risks of the AI being unable to achieve those objectives. Completing a FAIRA (or equivalent) enables an assessment of the potential consequences to Queensland Government stakeholders if risks were realised.

To determine if you should complete a FAIRA, see the checklist in Appendix A.

A FAIRA (or equivalent) should be completed in addition to Queensland Government AI guidelines and ICT risk management.

Further resources

AI policies

National Government

References


[1] Generic (i.e. non-cloud) content in this section is extracted, for the most part, from the Risk Management Guideline and A Guide to Risk Management developed by Queensland Treasury.

[2] Under the Financial Accountability Act 2009.