Release details
Release type
Related ministers and contacts
The Hon Richard Marles MP
Deputy Prime Minister
Minister for Defence
Media contact
The Hon Ed Husic MP
Minister for Industry and Science
Ministerial contact
Minister Husic's office - 02 6277 7070
Release content
3 November 2023
SUBJECT: AI Safety Summit.
RICHARD MARLES, DEPUTY PRIME MINISTER: It's fantastic to be here at the AI Safety Summit and to be here with Australia's Minister for Industry and Science, Ed Husic. It's been a really important Summit and we want to thank the UK Government for convening it. Artificial intelligence is an incredible technology and one which offers so many benefits to humanity: improving diagnoses for rare conditions, improving education, and giving much better modelling around weather, and therefore dealing with issues around climate change. But what comes with the development of any technology is the need to make sure that it is developed in a way which is safe, which is why this Summit has been so important in terms of having that conversation really early on around how we can make sure that there is safe regulation and good conversation between industry and governments around the world. And we very much welcome the declaration which has come out of the Summit. We welcome the establishment by the UK and the United States of their respective AI safety institutes – we look forward to the work that they will do, and we also look forward, from an Australian point of view, to working with them.
ED HUSIC, MINISTER FOR INDUSTRY AND SCIENCE: Thank you. Well, I think what's really important here, and what Richard and I have certainly witnessed, is the reality that with a technology that spreads across all corners of the globe, and where there's previously been a much more hands-off approach to regulation, we've seen this week, as part of the backdrop to the Summit, a seismic shift in the way that the world thinks about the building and the application of technology, particularly around artificial intelligence. It gives countries across the world a greater chance to work together to make sure we get the balance right. As Richard rightly pointed out, there are a lot of great things that come from using AI that can churn through data a lot quicker, a lot more efficiently, give us information about possible trends and things that we can work on, and improve health, education, and the way our companies work. But there are other downsides that have to be acknowledged and, by acknowledging them, dealt with as well. And being able to have a global framework that gives us some sense of how we test and evaluate AI systems before they're released – you've seen in the US Executive Order a big push for companies to be able to say directly what they anticipate will happen with the models that they're using. These are big things that are happening, and Australia is particularly keen in our own country to ensure that we've got a safe and responsible framework for the operation of AI. And that has now been helped by this broader global approach. It's not about the companies setting all the rules, but rather that you've got the companies and civil society that help test and push some of the thinking, plus governments being able to set up a framework of regulations to give people assurance and comfort and, importantly, trust about how technology works. So it has been a big week.
And it's been important that Australia is here, that we can not only have our voice heard in a forum like this, but that it can also inform the type of work that we need to do back home in Australia.
JOURNALIST: Deputy Prime Minister, the fact that you're here suggests that Australia is taking this somewhat seriously. And this summit wouldn't be happening if there weren't concerns about the risks of AI. So can you tell Australians what those risks are to our country?
MARLES: I might let Ed have a go at this as well. I mean, there are risks that people often articulate in terms of long-term, existential risks around the development of AI. It's obviously important that we are thinking about that early on in the development of the technology. There are short-term risks as well, and I think they're really the issues that are focusing people's attention: ways in which people's data can be used, privacy can be breached, and bad actors can be more sophisticated in the way in which they phish for that. And so we need to be looking at ways in which we are dealing with those near-term risks, as well as thinking about the bigger question. But I think what's really important about this week is that as we move forward with what is a defining technology of our age, we are doing so with a clear focus on how it can be regulated in a way which ensures safety – and not just safety, but fairness in terms of the way in which artificial intelligence is applied, and also inclusiveness, to make sure that the global (inaudible) gets access to artificial intelligence and the benefits that come from that, including a lot of countries that surround Australia.
HUSIC: And I think that's right, Richard, in terms of the existential, the focus on what might happen, as I've described it elsewhere, if the technology gets ahead of itself, particularly around automated decision-making processes. You know, in a lot of the consultations we had in Australia, we had a lot of people within industry and civil society saying, what's the circuit breaker? What's the handbrake that stops the technology when it's working in a way that is working against our interests? And some of that discussion has happened. But I think it's really important that while you do recognise that, there are still near-term challenges, not the least of which is how the technology operates off a data set that might be biased or encourage discrimination. How does it, for example, impact on workforces, on workers and their jobs and the way that they operate? What does it do for consumers, particularly if those data sets are biased and decisions are made around who gets health care or health insurance, for instance? And those are things that, again, you have to be alive to, alert to, and be able to respond to.
JOURNALIST: Minister, in your Defence capacity, I mean, how do you ensure that, you know, if our adversaries or potential adversaries or strategic competitors are using a rapidly developed– deploying AI and their military capabilities, how do we as a country not respond to that by doing likewise and get involved in some kind of AI arms race that gets ahead of the kind of regulation you’ve been talking about here today?
MARLES: Well, what's come out of this week as well is an important statement in relation to the use of artificial intelligence in defence. I mean, there is a role for the use of artificial intelligence when it comes to defence capability. But what is fundamentally important is that the rules that we have in place around the way in which we engage in warfare apply in terms of the way in which artificial intelligence is used. In other words, those rules still need to be there. And it's critically important that artificial intelligence is not deployed in a defence context in a way which undermines the obligations that we have under a range of international treaties. And I think that is the focus of how we are looking at the engagement of artificial intelligence when it comes to defence. It has a place. But it can't be a place which corrodes the very important international architecture which sits around the way defence operates.
JOURNALIST: How do you stop a robot though, deciding what the rules are?
MARLES: Well, again, it's important that there is a human-centred way in which we proceed with this, so that the various obligations that we have under a range of treaties to which we are a party in the defence space are able to be maintained. And making sure that there is that human-centred focus is fundamental, so that the use of artificial intelligence, as useful and important as it can be, does not erode the obligations that we have.
JOURNALIST: On balance, are you more excited or concerned about AI?
MARLES: We should both answer this question. Excited. I mean, artificial intelligence has a huge potential, as I said earlier, in terms of diagnosing diseases that we don't have a diagnosis for, by rapidly interrogating large data sets. It has huge implications for the way in which we can engage with education, the way in which we predict and work with the climate. I mean, this is a technology which can really benefit humanity. It's very important with this technology that we are walking down this path in a manner which is safe. And you know, a lot of the conversation over the course of the day has been comparing this to other technologies that we've had: aviation, pharmaceuticals, car transport – all of which have had dangers associated with them, all of which have provided enormous benefits to humanity. But making sure that there is safety built into the way in which those technologies operate is profoundly important, and given the complexity of artificial intelligence, it's really important that that's done at an early stage.
HUSIC: And I completely agree. I mean, again, it's about the light and shade. You know, I think of Cochlear using AI in its software to build better bionic ears that deliver the gift of sound to people previously denied it. I think of some of the AI that's been used to predict bushfires not three days but three months in advance, potentially 30 years, and enable us to better plan the way that we live, the way we build homes, and the way that we also protect people against hazards. There's also the big challenge that we faced with the pandemic: the fact that we were in the middle of a lockdown, a global pandemic, and able to use AI as a tool to fast-track the development of a vaccine that normally would take decades. We were able to do that in a concentrated timeframe, save lives, and also improve the health of economies that were suffering under lockdown. These are big things. But, you know, to the other point that's been raised – the shade – it is particularly with generative AI models that are creating all this data and all this info and all these images and texts, and the concern around disinformation, and how that might be used to make decisions based on falsehoods. This is a really big issue in terms of disinformation that needs to be tackled. I'm not worried about robots taking over; I'm worried about the prospect that AI-generated disinformation might. And one of the things that's come out of this Summit is we need to be able to help detect, and also be able to convey to the public, what is synthetic or artificially generated information and what's the real deal, and how we protect ourselves against misinformation. And getting that balance right is really important. And this is another big shift in the way that we've talked about technology.
We've gone from being completely euphoric about it – thinking that technology is great, that we should never touch it and everything will always be sunny days – to probably getting a bit on the negative side; getting the balance right will be important. But it has been a big shift in the way that we've contemplated technology and regulation, and also in understanding that just because you regulate doesn't stop you from being able to innovate. We can do both.