Keynote speech to the HTI Symposium on Regulatory Certainty for AI Adoption in Australia

Sydney
E&OE

I begin by acknowledging the traditional custodians of the land on which we meet today, the Gadigal people of the Eora Nation, and pay my respects to their elders, past and present.

I pay my respects to any First Nations people in the audience today.

I’d also like to acknowledge:

  • Professor Nicholas Davis, HTI Co-Director
  • Professor Ed Santow, HTI Co-Director
  • Professor Sally Cripps, HTI Co-Director
  • My esteemed fellow panellists

Opening - TECHNOLOGY NEEDS TO WORK FOR PEOPLE, NOT THE OTHER WAY AROUND

Thank you for that introduction.

I want to acknowledge the Human Technology Institute for providing an opportunity to speak today.

HTI has a reputation for asking searching questions about AI: what it should do, who it should serve, and how our institutions will need to evolve. And HTI does this by bringing a focus on the human dimension –

How to make technology work for people, and not the other way around.

As an elected Member of Parliament with the privilege of serving in a Labor Government, that work matters enormously to me.

Labor governments exist to advance the interests of working people.

And throughout our history, technological progress has been one of the vectors of that advancement. Technological progress creates the economic surplus that can be transformed into social progress.

For that reason, Labor governments see tremendous value in technological progress. And for that reason, we see great promise in artificial intelligence.

But the nexus between technological progress and social progress is not always direct, is rarely self-executing and is never guaranteed.

Too often the gains from technological progress are concentrated among the few, and the costs of that progress are borne by the many.

In these instances it has required governments, workers, businesses and society to step in and actively work to ensure that technological progress delivers broad benefits to people.

This historical pattern informs and shapes our approach to artificial intelligence. We are determined to make it work for people.

We are – now beyond doubt – living through something genuinely new.

In the last few years, large language models have moved from single-task tools to systems that reason across domains, solve scientific problems, write code, and increasingly act with autonomy in the world.

The pace of improvement has surprised even the people building them – and transformed the nature of the challenge facing governments.

There are very real impacts that AI is already having on the workforce. From conversations with unions and workers, I’ve heard stories about:

  • call centre staff being replaced by chatbots
  • gig economy drivers being fired automatically by AI algorithms
  • warehouse workers monitored by AI surveillance systems that track every movement and set punishing targets
  • young people locked out of careers because some entry-level jobs are disappearing.

But I’ve also heard workers talk about AI’s potential:

  • Teachers who see the potential for AI to provide free multi-modal tutoring for disadvantaged students
  • Health workers and professionals gaining access to AI tools that enable them to see more patients and work more productively.

We need a clear-eyed understanding of both benefits and risks.

Naivety, blind optimism or unrelenting moral panic – or some combination thereof – can be our own worst enemy.

My son is 12. He’s been studying the Industrial Revolution at school. As we talk about it over dinner, three things stand out for him.

The first lesson is positive: the technologies of that period brought unprecedented progress. For the first time, humanity broke free from the Malthusian trap, in which most people went hungry and few lived beyond their 30s.

The second lesson is less positive: technological change was brutal. In the early decades of the Industrial Revolution, thousands of artisans lost their livelihoods, families were driven into long shifts for meagre pay in grim conditions, and life expectancy went backwards. Social progress eventually came, but it took a heavy human toll.

The third, and most important, lesson is this: the transition from the injustice of the early Industrial Revolution to the progress that followed was not automatic. It required action. Governments stepped in. Unions fought for working people. Together, through legislation and organisation, they turned exploitation into progress.

This same story repeats through history, right up to our lifetimes. In the early 20th century, railways and combustion engines displaced jobs but built national economies. In the middle of the century, mass electrification put lamplighters out of work, but ultimately gave us reliable power, safer streets and modern homes.

As the work of the Human Technology Institute and others has shown, moving too slowly is not the biggest risk.

The deeper risk is moving unpredictably – from enthusiasm to backlash, from light touch to over-correction, from passivity to prohibition.

One of the important insurance policies we have is regulatory certainty, underpinned by clear principles with broad buy-in.

National AI Plan

The central instrument the Albanese Government has for delivering that certainty is the National AI Plan, which we released before Christmas.

The Plan is a clear signal that Australia is serious about widespread adoption and equally serious about effective regulation.

Responsible innovation goes hand-in-hand with our economic and social policy objectives, and that is why the plan is underpinned by three core principles:

Capturing the opportunities of AI; spreading the benefits; and keeping Australians safe.

The first principle of our plan is to capture the opportunities of AI for Australians.

Australia captures the benefits of AI by becoming both a maker and a taker of these technologies.

We have many Australians jumping in to be ‘makers’ of AI, building AI companies that are creating value for our country.

One of those companies – Harrison.ai, which the government has invested in through the National Reconstruction Fund – uses AI to review CT scans and X-rays to support the detection and diagnosis of medical conditions.

But Australia can also benefit from AI through adoption as well as generation.

For example, radiologists using Harrison.ai’s technology have seen a more than 45 per cent increase in diagnostic accuracy.

In his book, Technology and the Rise of Great Powers, Jeffrey Ding makes one central argument: technological leadership is not just about inventing new technologies, it’s about how societies absorb, diffuse, and apply those technologies across their economy and institutions.

Like electricity and the internet, AI is a general-purpose technology that has the potential to lift productivity across almost every sector.

AI also has immense potential to contribute to the goals we’ve set out in Future Made in Australia, the government’s $22.7 billion commitment to restore our industrial capability and secure the critical industries our prosperity and security will rely on in the decades to come.

It can make factories smarter, supply chains more resilient, lead times shorter, and help firms scale up more quickly.

The second principle of the National AI Plan is sharing the benefits.

This is a core value for Labor governments – access, equity, opportunity. Ensuring we can grow the pie and have all Australians benefit from it.

Our engagement with unions, workers and businesses is a central plank of building a culture that recognises technology is here to work for people, and not the other way around.

If the economic benefits of AI are concentrated among the biggest firms, or a handful of CEOs, then we’ll have failed.

Whether it is increasing productivity, improving service delivery, supercharging science and research, or helping Australians who have been left behind – every lever of our policy energy must work towards creating benefits that the Australian people experience and value.

That’s why the Australian Government is focused on building an AI-ready workforce: supporting workers to upskill and helping all communities access AI skills and training.

It’s why we’ve tasked the Future Skills Organisation (FSO) to undertake an economy-wide consultation on AI skills.

This will build on the research undertaken by Jobs and Skills Australia, and support a fairer, stronger Australia where every person benefits from this technological change and the dignity of work.

The third principle of the National AI Plan is safety – which brings me to the regulatory approach the Government has decided upon, centred on the AI Safety Institute.

AI Safety Institute

Minister Ayres and I are often asked why the government has chosen, up to this point, not to introduce omnibus AI legislation or establish a central AI authority.

The first reason is that we believe the best way to keep Australians safe is not to create a single agency responsible for AI, but to empower every existing agency across government to take responsibility for AI.

There is not a single part of government or our economy that won’t be touched by AI. If we attempted to create a separate authority that would play in the space of every regulator, we would risk duplication or abdication of responsibilities by existing agencies. 

Many of those agencies already have strong frameworks for privacy, consumer protection, anti-discrimination and other harms.

Instead of creating a new source of authority, we want every existing agency to be responsible and empowered to address the risks and opportunities of AI. This is a whole-of-government approach.

Our second concern is agility. AI is moving fast. A single AI Act could be out of date the moment it gets passed. Instead we want a dynamic capability that can constantly scan the horizon and identify risks and opportunities.

But this system wide approach requires us to build new capabilities and linkages within government.

That is why the Albanese Government is investing in permanent institutional capability through the establishment of the AI Safety Institute.

The role of the AI Safety Institute is to support the system-wide approach by providing crucial capability to each responsible agency across government. The AI Safety Institute will identify risks on the horizon, engage with departments and regulators, and provide technical capability to support them to manage risks – as well as adapt where gaps have been identified.

We believe this combination of new technical capability at the centre of government and confirmed responsibilities for every agency across government is the best way to keep Australians safe.

One of the unique value propositions of the Safety Institute will be its ability to tap into expertise – whether from international counterparts, technology experts, unions and worker representatives, or consumer advocates – and provide well-informed insights to relevant parts of the system.

The design of the institute and the regulatory architecture we have chosen will go hand in hand to support this.

Closing

I’d like to end where I began.

The choice before Australia is not whether we will have AI. We will.

Nor is it a choice about whether AI will shape our economy. It already is.

The real choice is how we make it compatible with the national interest.

The Human Technology Institute put it well when it wrote that:

'The National AI Plan doesn’t just set a vision; it commits the Government to action. Success will be judged on how quickly the Government fulfils its commitments.'[1]

I agree. That’s the standard against which our approach should be judged.

Through the National AI Plan, the National AI Centre, and the AI Safety Institute, the Albanese Government is building a regulatory architecture that promotes stability and predictability.

One that supports innovation as a default, prevents harm where possible, adapts where we need to, and balances risk in a way that allows Australian workers and businesses to proceed with confidence.

Our partners and our critics will all play a role in shaping public understanding and public trust.

The Human Technology Institute will be central to that important task.

Thank you.

 

[1] https://www.uts.edu.au/news/2025/12/australias-national-ai-plan-time-to-act