Doorstop on funding an AI Safety Institute
SENATOR TIM AYRES, MINISTER FOR INDUSTRY AND INNOVATION AND MINISTER FOR SCIENCE: Good morning everyone. It’s great to be here with the Assistant Minister for Science, Technology and the Digital Economy.
As we indicated earlier in the year, the Albanese Government will be releasing our broader plan and strategy for artificial intelligence by the end of the year. That plan will be focused on making sure that Australia captures the economic opportunities of artificial intelligence; that we spread the benefits from the CBDs through the suburbs, to Australians in big workplaces and small ones; and it will also be focused on making sure that we're keeping Australia and Australians safe in a world where artificial intelligence offers enormous opportunities, but some risks for our firms, for individual Australians and for communities.
Today, I'm very happy to announce that, as part of that plan, we will be funding an AI Safety Institute that will operate from my department. It will do all of the work that is required as artificial intelligence evolves, as the opportunities and the challenges evolve, advising government on regulatory reform or on capability gaps that may emerge within government. It will test and monitor artificial intelligence models and AI platforms. And it will work with the research community to make sure that government is getting the very best advice, not just in 2025, but as an enduring commitment to matching this evolving technology with government capability and regulatory capability to keep Australians safe.
It will, of course, work across the global network of AI safety institutes that Australia signed up to just 18 months or so ago, not only to deliver our own AI Safety Institute but also to cooperate with AI safety institutes around the world, making sure that Australia is leading the development of global norms and global approaches to regulating and managing the threats and opportunities that come with AI technology.
This is, as I say, an enormous opportunity for Australia. I want to see Australia develop a pragmatic approach to artificial intelligence. That's an Australian approach that's in Australia's interest. And the AI Safety Institute will be an important part of making sure that we do just that. I'll hand over to my colleague Andrew Charlton, then happy to take a few questions.
DR ANDREW CHARLTON MP, ASSISTANT MINISTER FOR SCIENCE, TECHNOLOGY AND THE DIGITAL ECONOMY: Well, thank you. I want to begin by acknowledging the leadership of Minister Tim Ayres on this issue, which is so important for Australia.
In 2024, at the Seoul AI Summit, the Australian government committed to establishing an AI Safety Institute, and today the government is delivering on that commitment. AI, in the right hands, can be a force for good, but in the wrong hands, it can be a force for harm. The AI Safety Institute that the government is announcing today will be one of the ways the Australian government is seeking to mitigate those harms and keep Australians safe.
The way that the AI Safety Institute will work is that it will scan the horizon for emerging risks and threats and then work consultatively across the government to identify those threats and develop the best mitigations to help keep Australians safe, working with departments and agencies to provide them with the best advice on an ongoing basis.
AI is changing so quickly, and that's why this capability is so important. Any rules that we put in place today will unquestionably be out of date in six or twelve months' time. But this is an evolving capability, one that will continue to learn over time, establish best practices and support a dynamic approach to AI safety in Australia. Thanks.
JOURNALIST: When are you expecting it to be fully up and running?
AYRES: I expect this to be up and running early in 2026. This is, as Andrew says, a commitment that Australia signed up to in Australia's interest in Seoul in May 2024. There is, of course, work that has been undertaken over the course of the last few months to make sure that we've got that capability right. It'll be funded and announced as part of the government's approach to MYEFO. So, you'll see that in the MYEFO announcement, and it'll be fully funded and operating in 2026.
JOURNALIST: Can you give us an idea of the budget for this Institute? And also, on a separate issue, can I ask you, please, Minister, when you expect to announce a plan to save Tomago?
AYRES: So, firstly, on the budget for the AI Safety Institute. I've learned as a new minister that where there are MYEFO and funding announcements, it's best to leave those to the Treasurer and the Finance Minister in MYEFO. This will be funded to do its job properly. It will have all the capabilities that it needs. The AI Safety Institute will have a very big job in front of it: building capability over a short time and making sure that we're recruiting the best and brightest into this work, so that we're getting the advice and the capability that Australia needs. But that announcement will be made over the coming weeks when MYEFO is released.
In terms of the Tomago aluminium smelter, I have indicated consistently that we will leave no stone unturned in our approach to securing this important facility. There is no certain outcome in relation to the future of Tomago. We understand its importance, not just to the Hunter Valley and the industrial economy of the Hunter Valley, but also as a vital Australian aluminium asset. It is facing a tough global trading environment, and the company, with its three owners, has been fighting hard to secure access to sufficient renewable energy at the right prices. We'll continue to work through that over the coming days and weeks.
JOURNALIST: Is there a timeline for a decision, though, on a plan to save the plant?
AYRES: Well, while I'd like the timeline to be today, we are just at the table, working away. I want to see a quality outcome here, and we're just going to keep grinding away at this question.
JOURNALIST: There's a new survey out from the Finance Sector Union about how workers are feeling about the AI rollout in their sector. I understand some of them are quite concerned, saying it's being rolled out without any sort of guardrails. What are you looking to do to protect workers and their ability in the face of AI and its ever-evolving changes?
AYRES: There are certainly contradictions here in the way that Australians are approaching artificial intelligence. Some Australians are apprehensive about artificial intelligence technology in the workplace, and I think some of that Australian scepticism is, of course, shaped by the experience that Australians have had with other waves of technology, social media in particular. I think that frames the way that Australians think about these questions.
But the contradiction is, of course, that in our work lives, our home lives and in the community, Australians are big adopters of artificial intelligence technology. These challenges are going to work their way through Australian workplaces. I've got to say, as a former trade union official in the manufacturing sector, I saw the impact of waves of technology as robotics, automation and digital technology were adopted across Australian workplaces, with Australians really rising to the challenge and cooperating at work. And it's certainly going to be a feature of our industrial relations system, not just in 2025 and 2026 but in the decades to come.
JOURNALIST: Just another technology-related question as well. Yesterday, federal staff and politicians here in Canberra were sent an email regarding some security concerns with the Chinese officials' visit. Can you talk us through what some of those are, and why staff were instructed to shut their doors and close their blinds if they were along that pathway?
AYRES: It's not my habit to discuss the security advice that may or may not be given to staff and to parliamentarians as we do our normal work. There are proper security arrangements for the building and for all of us in our work to be diligent and capable and smart about the way that we approach security questions.
Anybody else got anything else for me? Thank you very much. See you soon.
