AI transparency statement

The policy for the responsible use of AI in government sets mandatory requirements for departments and agencies relating to accountable officials and transparency statements. This page details the DTA's implementation of these policy requirements.

Accountable Officials 

The DTA has two accountable officials under the policy: the Chief Technology Officer (CTO), Andrew Morrison, and the Chief Operating Officer (COO), Tom Gilmartin.

The CTO has primary responsibility for the following areas of the AI policy: 

  • facilitating our involvement in cross-government coordination and collaboration 
  • encouraging the implementation of further actions suggested in the policy 

The COO has primary responsibility for the following areas of the AI policy: 

  • developing a policy implementation plan for the DTA 
  • monitoring and measuring the implementation of each policy requirement 
  • strongly encouraging additional training for staff in consideration of their role and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems 

The following areas have been identified as joint responsibilities of both accountable officials: 

  • embedding a culture that fairly balances AI risk management and innovation 
  • uplifting governance of AI adoption in the DTA 
  • development and implementation of AI fundamentals training for all staff  
  • encouraging the development or alignment of a DTA-specific AI policy  
  • reviewing our policy implementation regularly and providing feedback to the DTA's AI policy team 
  • enhancing the response and adaptation to AI policy changes in the DTA 

DTA’s approach to AI adoption and use 

The DTA is trialling the adoption of AI as part of the Australian Government's commitment to digital innovation. For more information, see the section on adopting emerging technologies in the Data and Digital Government Strategy.

The DTA is committed to demonstrating, encouraging and supporting the safe and responsible adoption of AI within the Australian Public Service, and in digital and ICT investments, systems and digital services. As part of this commitment, we will implement AI fundamentals training for all staff, regardless of their role. 

How the DTA uses AI 

At this time, we are not using AI in any way that members of the public may directly interact with, or be significantly impacted by, without a human intermediary or intervention. The DTA is using AI in the Corporate and Enabling domain, under the Workplace Productivity usage pattern. 

From 1 January 2024 to 30 June 2024, the DTA both coordinated and participated in the Australian Government's trial of a generative AI service, Microsoft 365 Copilot. The DTA continues to make Copilot available to staff. As a prerequisite to using Copilot, DTA staff are required to complete internal training on the use of generative AI. 

We also have a policy on the use of AI tools by staff. Staff are required to acknowledge that they are familiar with this policy before accessing generative AI tools online. The policy encourages and assists staff to: 

  • not rely on the authenticity or veracity of content generated by AI without external verification 
  • restrict the distribution of sensitive material to third parties, for example by copying and pasting sensitive content 

The DTA is participating in the Pilot Australian Government AI assurance framework. Through our participation in this pilot, we are exploring the potential for AI to be used by our staff and by our ICT systems.  

AI safety and governance 

Within the DTA, each ICT system has an identified system owner who is accountable for the system, and each AI use case has an identified executive sponsor. All AI use cases are recorded in an internal register to track their progress and status. For new and emerging potential uses of AI, it is the responsibility of the system owner to apply the Pilot Australian Government AI assurance framework, and to identify an appropriate executive sponsor. The ICT system owner and the AI use case executive sponsor are together responsible for: 

  • ensuring that any AI is implemented safely and responsibly 
  • monitoring the effectiveness of the deployed AI system 
  • legal and regulatory compliance of the ICT system 
  • identifying potential negative impacts of the AI use case 
  • implementing measures to mitigate potential harms from AI 

ICT system owners and AI use case executive sponsors are accountable to the Executive Board. For more information about the purpose and operation of the Executive Board, see the DTA's annual report.

This transparency statement was last updated on 4 December 2024. It will be updated as our approach to AI changes, and at least every twelve months. 

Update publication date: 1 November 2024 
Update comment: 
  • initial version 
  • designation of Accountable Officials 

Update publication date: 4 December 2024 
Update comment: 
  • full transparency statement 
  • brings the DTA up to date with the requirements of version 1.1 of the policy 

For further information or enquiries about the DTA's adoption of artificial intelligence, you can contact us directly at info@dta.gov.au.