Australians' attitudes to AI, trust and regulation

Parramatta NSW
E&OE

I want to begin by thanking Zoe and Johanna for the incredible work they do with the Tech Policy Design Institute. They have built, in a short time, a reputation for incredibly high-quality and impactful research on some of the most important questions in our economy and of our time.

This is detailed, challenging work in a new area of the economy where the government is always looking for partners to help us with expertise from the frontier — to make the right decisions in difficult circumstances, in fast-moving environments. So to have partners like Johanna and Zoe helping us think through these challenges is extremely welcome, and we're grateful for the hard work you do and for the supporters who make TPDi possible.

I also want to acknowledge that we are at the Western Sydney Startup Hub in this incredibly beautiful building. As Zoe said, we are in the heart of what is called the Female Factory here in Parramatta — an institution that used to incarcerate women and put them to work making textiles and other manufactured goods in the early part of the 19th century.

This is also the site where what is widely described as one of Australia's first industrial actions took place — where those self-same women rose up, complaining about their lack of rations and poor conditions, and started a riot here. And we celebrate that riot in Parramatta every year on Riot Day. Your adventurous, bold policy-making is a wonderful ancestor of that same spirit.

I want to acknowledge Shuoyan Zhu, who looks after this facility for the people of Parramatta. She's downstairs keeping busy — she does a terrific job for us here. At this hub, dozens and dozens of Western Sydney startups come through, meet other founders, get connected with capital and ideas, and create the businesses that are going to drive our region forward.

I also want to acknowledge another incredible institution that calls this building home — which is WSTI, the Western Sydney Tech Innovators. Lailei is a friend of mine, and just an extraordinary individual. Out of his own passion and interest and curiosity — and community spirit, and a sense that something was happening around AI that was going to change all of our lives and jobs and our world — Lailei created WSTI.

It started with just a small group of people coming together, meeting in the Parramatta library, learning about AI — a few people, a couple of young people, a couple of school kids, a couple of older people, people from all different walks of life. And that community has grown from half a dozen to a few hundred, to a thousand, to now more than 3,000 members.

An entirely grassroots organisation of people coming together to learn, to share, to recognise that this technology is important — but that the most important thing about this technology is that everybody understands it and has access to it, and nobody is excluded from it. That is the mission, the message of WSTI, and I want to thank Lailei for everything he does in our community to make that a reality. It's a model that I hope is copied right across the country.

This research comes at just the right time, and it has some really powerful messages in it. This research tells us that Australians want the benefits of AI. They don't want to be left behind. They know that it's going to be important for their careers, for their businesses, for their competitiveness. They want to adopt it — but they want to adopt it safely. They want to know that they can have trust in it.

And more importantly, this report makes the connection between adoption and trust — and it shows that these things are not in tension with each other. We don't have to face a choice between: do we make AI safe, or do we make AI accessible and used by all? Those are not alternatives. They are not a trade-off. They are something that we can achieve best together.

And the report says very clearly that by making AI safe, by providing trust, we will enhance adoption and acceptance, and drive those benefits further. I think that's an incredibly important message, and certainly one that the government has put at the heart of our approach to making sure that Australia benefits from AI.

Our approach in the Albanese government has been very simple: we want to make sure that technology works for Australians, not the other way around. And in our AI Plan released last year, we had three objectives:

First — to make sure that Australia captures the benefits of AI. To make sure that we have the companies here, the infrastructure here, the investment here, to put us at the forefront — to enable us to remain competitive, to be on the leading edge of the global digital economy.

Secondly — we don't just want to see Australia capture those benefits. We want to make sure those benefits are shared widely, and that every Australian sees something in this for them. This isn't something which is just a benefit for the big end of town, or the startup scene, or the universities. This is the technology from which every Australian, every Australian worker, sees a benefit. The benefits of increased productivity flow through to their wages. The benefits of better government services are going to be seen in their schools, in lower wait times at their hospitals, in more time available for nurses to spend with them as patients — real, tangible benefits for every person in Australia.

And the third thing that is important for us — the third pillar of our AI Plan — is to make sure we keep Australians safe. That means mitigating the harms, giving people trust and confidence in this technology.

At the centre of our plan to deliver those three objectives is a regulatory approach which we think is the best approach to keep Australians safe. 

That approach is not an approach to build a central AI authority. We looked at that approach and we thought: if we try and build a central AI authority, we would end up duplicating a lot of the existing regulatory functions across our economy and across our society.

We don't think that AI is a vertical that can be treated as one discrete thing in our government or our economy. It is a horizontal embedded in every single part of our economy. And therefore, our approach is not to create an AI regulator — it's to make every single Australian regulator recognise that AI is their job, it is their core responsibility, and the harms in their area related to AI need to be at the top of their agenda.

That's our approach: to take the great work that Australian regulators do right across the board and put AI as a core part of their business. Now, to make that regulatory approach operational, we need two things. Firstly, we need to make sure that all of those regulators have clear boundaries, roles, and responsibilities. And so that's why we set up the AI Safety Institute.

The job of the AI Safety Institute is to help identify risks that are coming down the road towards us in AI — then to allocate those risks to existing agencies, make sure that everybody's clear about who has responsibility for mitigating what risks — and then to provide the technical capability, when required, to make sure that those agencies can mitigate those risks.

So our approach is to have every single Australian agency realise that they can't abdicate their responsibility to another agency. We're not duplicating that responsibility in another part of government — the responsibility is vested with them. But they have clear responsibility lines and accountability, and they have the support that they need. We think that is the best way to keep Australians safe.

In terms of regulation and legislation, our approach is to act when we see real risks. We have already acted in areas like privacy and child safety — for example, to ban deepfake nudify apps. And where we see AI risks that require a new legislative response, we have acted and we will continue to act.

We think this approach — of making AI core business for all of our agencies, of acting on specific threats when they come up — is the best way to keep Australians safe, and the best way, as this report identifies, to make sure that Australians have trust in AI.

The final thing I want to say — and I've been banging on about this a little bit this week — is that I think another element of Australians having trust in AI is Australians feeling like we have sovereign AI. Not only do we have the best technology from around the world being used here in Australian businesses — but in addition to that, we also have Australian businesses developing their own AI. Australian businesses that are able to create value here, export that value to the world.

This sovereign Australian AI industry is in its infancy. We have a lot of capability — more than 1,500 startups, incredible researchers across our universities and government science institutions. But we're in a critical window right now, where those capabilities, where those young companies, will either be adopted and embraced by the Australian community, by Australian organisations and corporates — or not.

And we think that if they are, then we build a dynamic, sovereign, domestic Australian AI industry that will deliver prosperity and trust for a long time to come. If we fail to seize that opportunity, we run the risk of being a renter of foreign intelligence — a perennial importer of the technology of the 21st century.

So we are motivated as a government to keep Australians safe. We are motivated as a government to support domestic AI companies — great Australian companies, including the kinds of companies represented in this room, as well as many of the organisations coming up through institutions like this building right here. And we're motivated to make sure they are backed by the Australian government and by Australian organisations. 

It's terrific to be here. Thank you all so much for coming. Thank you for your support for TPDi. This is the first report this year — we look forward to the rest of the work and contributions that TPDi will make. Thank you.