The recent release by the Australian, state and territory governments of the national framework for the assurance of artificial intelligence in government (AI Assurance Framework) provides much-needed guidance for public sector organisations seeking to harness AI effectively and responsibly.
A statement issued within the framework acknowledges the importance of government leadership in the safe and responsible use of AI, with careful oversight of legal, privacy, security, and ethical risks such as bias and fairness. “To gain public confidence and trust, we commit to being exemplars in the safe and responsible use of AI,” the governments said. “This requires a lawful, ethical approach that places the rights, well-being and interests of people first.”
The development of the framework can help overcome public scepticism and boost trust in AI, paving the way for government and public sector organisations to realise its transformative potential.
The Digital Transformation Agency has since moved to mandate the responsible use of AI by Commonwealth agencies through a new policy, effective from 1 September. This policy sets out how the Australian Public Service will:
- embrace the benefits of AI by engaging with it confidently, safely and responsibly
- strengthen public trust through enhanced transparency, governance and risk assurance
- adapt over time by embedding a forward-learning approach to changes in both technology and policy environments.
The agency has also released standards for officials accountable for AI within their agencies and plans to launch a standard that sets out the information agencies should release about their use of AI.
Smart public policy key to harnessing AI opportunities
Workday applauds these efforts by the Australian government. At Workday, we firmly believe that smart public policies such as the AI Assurance Framework and the policy for the responsible use of AI in government (Government AI Policy) are crucial to harnessing the opportunities presented by AI and minimising its risks. Our advocacy for smart AI regulation and frameworks is grounded in more than a decade of technical expertise in developing AI, and in a robust Responsible AI program that brings together principles, practices, and people to ensure our AI technologies are developed thoughtfully and responsibly.
The Australian governments’ principled AI Assurance Framework and Government AI Policy are welcome steps that complement recent regional guidance on the responsible deployment and use of AI across the public and private sectors. These publications, which include the recently released ASEAN Guide on AI Governance and Ethics, provide a starting point for addressing some of the biggest challenges organisations face in AI development and deployment.
Only when AI is developed and deployed in a trustworthy manner and supported by smart public policy can we close the AI trust gap. Frameworks and policies like these provide critical reference points and rules for departments and agencies planning to deploy AI. By integrating relevant elements of these approaches into their AI strategies and implementations, organisations can bring organisational leaders and employees onto the same page and unleash the full transformative potential of AI.
Highlights of the AI Assurance Framework
We welcome the decision by Australian governments to base the AI Assurance Framework on the existing AI Ethics Principles, which provide a useful signpost for businesses and governments on the safe, secure, and reliable development of AI technologies. Workday anchors its Responsible AI program in a similar set of principles focused on developing AI solutions that amplify human potential, positively impact society, champion transparency and fairness, and deliver data privacy and protection.
The AI Assurance Framework outlines five cornerstones of AI assurance that we think are critical to establishing a practical, pro-innovation AI framework, and that mirror many of Workday’s existing practices:
- AI Governance: The framework identifies the need for cross-functional expertise, strong leadership commitment, and an AI risk-sensitive culture, as well as staff training and resources to understand and implement AI governance effectively. At Workday, our Responsible AI program starts with commitment at the very top, with senior executives across the company forming a Responsible AI Advisory Board. We also operate a cross-functional network of internal champions drawn from the product and engineering, legal, public policy and privacy, and ethics and compliance teams.
- Data Governance: We welcome the government’s acknowledgement that data governance is a key element in AI governance. At Workday, we embrace data stewardship, governance processes, and privacy-by-design principles that give our customers control over how their data is used.
- Risk-based Approach: As strong advocates of risk-based AI governance, we applaud the Framework’s recommendation to assess and manage the use of AI on a case-by-case basis throughout the AI system lifecycle, ensuring safe and responsible development, procurement and deployment in high-risk settings. We also strongly agree that risk should be managed in proportion to the potential impact of each use case.
- Technical Standards: Technical standards play an essential role in AI assurance because they help ensure regulatory requirements can be implemented effectively in practice. We agree that governments should align their approaches to AI with emerging global standards so that the technology is implemented safely and responsibly, in Australia and consistently around the world.
- AI Procurement: The Framework recommends careful consideration of vendor relationships, with clearly established accountabilities, access to relevant information assets, and proof of performance testing throughout an AI system’s lifecycle.
Not unlike privacy, getting the policy specifics on roles and responsibilities right is essential to any successful governance approach that builds trust in AI, and AI governance is a shared responsibility between AI developers and deployers. As an AI developer, we provide our customers with AI fact sheets for the AI tools we offer, giving them insight into how our AI offerings are built, tested and trained. We also closely follow existing and emerging regulations and best practices, and have built responsible-AI-by-design and risk mitigation frameworks that align with this dynamic regulatory environment.
Similarly, AI deployers have key responsibilities to fulfil in ensuring that AI technologies are implemented in a safe and responsible manner. Only when all parties in the AI value chain understand their respective responsibilities and commit to working together can we ensure that these technologies are used to amplify human potential and positively impact society.
Continuing the conversation about responsible AI
The newly released AI Assurance Framework and Government AI Policy represent significant advancements in Australia's approach to ethical AI adoption in the public sector. The five cornerstones of the AI Assurance Framework and the transparency obligations under the Government AI Policy provide a foundational and adaptable roadmap for responsible AI implementation.
As Australian governments continue to develop their AI regulatory approaches, an effective partnership between industry and government can help ensure the responsible, safe, and secure development and implementation of AI.
At Workday, we believe that meaningful and workable AI regulation can close the AI trust gap, and we remain committed to playing a constructive role to advance policies that build trust and drive responsible innovation.