Businesses have much to gain from embracing artificial intelligence (AI), although issues surrounding the technology are real and warrant concern, experts said at an Ai Group webinar last week.
AI’s reputation for using copyrighted material, providing false information, creating deepfakes and causing discrimination has left many businesses wary.
However, organisations are not alone in navigating the AI journey.
“Help is available,” Kaaren Koomen AM, Director Government and Regulatory Affairs, IBM, said at the webinar.
“From a business point of view, AI offers enormous opportunities for increased efficiencies and product development,” Ms Koomen said.
“It’s not just about writing a report with ChatGPT.
“There are trillions of dollars — depending on what you read — of GDP growth that might flow from it.
“If you get it right and put in place a framework that works and is transparent and trustworthy, you can enhance your reputation at the same time as grow your business.”
Fellow webinar guest speaker, Martin Ripple, CEO of ANCA Group — a market leader in CNC grinding machines and CNC systems — said he believed AI was a tipping point for productivity.
“It’s an exciting future and we’re only at the beginning of the AI journey,” he said.
“Innovation is accelerating around us, and productivity gains are what the future has in store.”
But there are real issues that need to be addressed.
“The way the AI algorithms work means the decisions they make are not always easily understood,” Ms Koomen said.
“They may not operate fairly or neutrally. They may even create hallucinations.
“For example, you may ask an AI tool to give you some financial offerings from 10 Australian banks so you can consider the best solution for your financial arrangements.
“You might get nine great responses, but the way in which the algorithm predicted words and names means you may be presented with a solution based on an entity that just does not exist.
“So, that’s a major error. You would be none the wiser unless you knew ‘X’ bank was not a real bank.
“These are the sorts of things you need to be aware of.”
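The hallucinated-bank scenario Ms Koomen describes points to a simple guardrail: check every entity an AI tool names against a list you trust before acting on it. A minimal Python sketch, in which the bank names and reference list are purely illustrative placeholders:

```python
# A minimal "grounding" check: validate every entity an AI tool names
# against an authoritative list you control before acting on it.
# The bank names below are illustrative placeholders, not real data.

KNOWN_BANKS = {"Bank A", "Bank B", "Bank C"}  # hypothetical reference list

def validate_suggestions(suggestions):
    """Split AI output into verified entities and possible hallucinations."""
    verified = [s for s in suggestions if s in KNOWN_BANKS]
    suspect = [s for s in suggestions if s not in KNOWN_BANKS]
    return verified, suspect

ai_output = ["Bank A", "Bank C", "Bank X"]  # "Bank X" does not exist
verified, suspect = validate_suggestions(ai_output)
# verified -> ['Bank A', 'Bank C']; suspect -> ['Bank X'], flagged for review
```

The point is not the code itself but the pattern: any AI-generated entity that cannot be matched to a source you control goes to a human for review.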
AI can also use copyrighted material, including music, artworks and code, and can generate deepfake images and voices.
“If you think about the way in which we have voice recognition in lieu of passwords to access services, it can be quite significant if your voice can be used without your knowledge or consent,” Ms Koomen said.
There is a wide range of existing laws and regulations that apply to AI, including industry-specific laws.
“From an HR point of view, for example, if you were to discriminate in terms of employment, it doesn't matter whether it’s a person or AI that’s discriminating,” Ms Koomen said.
“It's the act itself that is discriminatory, and if it can be proven, then that is in breach of Australian law.”
Improvements to laws and regulations are ongoing.
“As with the introduction of any new technology, there are gaps, and the Australian government and governments elsewhere are looking at ways in which those gaps can be filled so there can be a higher level of trust and transparency across the industry,” Ms Koomen said.
“The world standards bodies are working very much in this space.
“There are standards to help explain AI concepts, terminology and issues such as how to treat unwanted bias, how to run risk assessment processes, how to set up evaluations, how to manage data across the AI lifecycle and how to conduct impact assessments.
“Standards Australia has been exceptionally active and is doing a terrific job of ensuring Australian industry has the opportunity to input into these voluntary standards.”
Ms Koomen represents Ai Group on Australia’s standards body on AI. She and webinar host Louise McGrath, Ai Group’s Head of Industry Development and Policy, both sit on CSIRO’s Responsible Use of AI at Scale Thinktank, which explores the experience of consumers and how companies can use AI responsibly.
Developing an ethical and transparent framework for AI in the workplace is vital.
“There are good reasons why you need to think about implementing a regime for AI ethics to get the benefit of AI technologies already available,” Ms Koomen said.
“Think about your company values and reputation. You don't want to be a company known for being discriminatory, ripping off people's copyright or not being transparent to your customers when you're using AI.
“Not only that, but organisations that focus on the ethical use of generative AI are 27 per cent more likely to outperform on revenue growth than those that don’t.
“There are real opportunities and huge efficiencies for those who get it right.”
Ms Koomen outlined what organisations need to think about:
“So, you have the governance at the higher level — setting the principles and how it's supposed to be implemented — and then you have AI development and/or deployment within the business applying these strategies in practical terms.
“You also need clear and transparent reporting mechanisms and checkpoints to ensure there's compliance against the AI policy.
“Then you need to present your findings to the board or leadership team to show the framework is being implemented.
“You need to continually test for factors such as bias, explainability, transparency and privacy protections, etc.”
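One of the checks Ms Koomen lists, testing for bias, can start very simply: compare outcome rates across groups and flag large gaps. A minimal sketch in Python using synthetic data and a basic "demographic parity" style measure (the data and any threshold are invented for illustration; real audits use richer metrics):

```python
# A minimal bias check on synthetic decision data: compare approval rates
# across groups and measure the largest gap ("demographic parity").

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> per-group approval rate."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Synthetic example: group A approved 2 of 3 times, group B 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = parity_gap(rates)  # about 0.33; a large gap warrants investigation
```

A check like this is only a starting point, but it gives the reporting mechanisms Ms Koomen describes something concrete to track at each checkpoint.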
Proactive upskilling of the workforce is needed for optimum AI adoption, Mr Ripple said.
“This is not something that is going to happen automatically,” he said.
“It’s estimated about 50 per cent of your staff will have to be retrained or upskilled to use AI.
“And to stay ahead of your competition, you'll need to seek out more than just commercially available tools — you will need your own data sets, algorithms and models.”
ANCA has a three-step approach to AI.
“We've set ourselves objectives for next year to use standard commercial tools internally: partly to automate core processes, partly to apply predictive analysis to our own manufacturing, and partly to use AI to review our internal procedures so we can simplify, automate and accelerate.
“We're also looking at using intelligent distraction-reduction (delaying email notifications etc) to increase efficiencies and the productivity of our employees.”
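The notification-delaying idea can be sketched as a simple batching rule: hold each message until the next scheduled delivery window rather than delivering it immediately. A minimal illustration, where times are plain hours-of-day numbers and the windows are invented for the example:

```python
# A sketch of "delayed notifications": hold each message until the next
# scheduled batch window instead of delivering it immediately.
# The delivery windows below are hypothetical.

BATCH_HOURS = [9, 12, 16]  # hypothetical delivery windows (hours of the day)

def next_delivery(arrival_hour):
    """Return the batch hour at or after arrival, rolling to the next day."""
    for h in BATCH_HOURS:
        if arrival_hour <= h:
            return h
    return BATCH_HOURS[0] + 24  # after the last window: first window tomorrow

# An email arriving at 10:30 is held until the 12:00 batch;
# one arriving at 17:00 waits for 09:00 the next day (hour 33).
```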
“In 2025, we will work with employees who have shown an interest in AI and productivity,” Mr Ripple said.
“We will also look to see how far we can develop proprietary tools and commercial tools to improve existing products.
“It could be a customised offering. It could be after-sales outreach. We're looking at ‘predictive everything’: predicting the temperature of our machines, the vibrations, the sounds, the usage and so on, so we can tell our customers when a machine could potentially break down and what can be done to extend its lifetime until the spare part is available.
“Supply chain optimisation is where AI can make a real difference.”
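The "predictive everything" approach Mr Ripple describes, watching temperature, vibration and sound for early warning signs, can be illustrated with a very simple drift check: flag any reading that jumps well outside the range of its recent history. A minimal sketch on synthetic vibration data (a production system would use far richer models and thresholds):

```python
# A minimal predictive-maintenance style drift check: flag readings that
# exceed the mean of a recent window by k standard deviations.
# The vibration values below are synthetic, for illustration only.
from statistics import mean, stdev

def drift_alerts(readings, window=5, k=3.0):
    """Return indices where a reading exceeds mean + k*stdev of the prior window."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and readings[i] > mu + k * sigma:
            alerts.append(i)
    return alerts

vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 5.0]  # final reading spikes
alerts = drift_alerts(vibration)  # flags the spike at index 6
```

Spotting a spike is the easy half; the value comes from the response Mr Ripple describes: telling the customer before the failure and planning around the spare part.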
From 2025, ANCA will explore how AI can be used to redesign its products and offerings to the market.
“If you let loose a couple of interested youngsters on what ideas they could come up with using AI, I think you'll be amazed,” Mr Ripple said.
“We’ve already put those people in the room. We have challenged them, from a board and leadership perspective, to come up with three game changers, asking: ‘Where could this lead us in 2026?’
“It’s early days, but we have seen some very powerful and interesting ideas.
“The adoption of these tools is a journey. It takes time and effort to uncover the full potential, along with investment in training.”
AI raises serious issues for business and society, so ethics, safety, responsibility and fairness are critical considerations and guardrails are needed.
“This is a technology that has incredible power for good and for ill, and it needs to be harnessed responsibly,” Ms Koomen said.
“However, you don't have to reinvent the wheel.
“Many open-source tools are freely available to help you develop AI tools in-house, ensure your business can get the benefit of responsible and safe AI and enhance your reputation.
“The business has a key role to play, but you don’t have to feel isolated or feel you have to be an expert in all these areas.”
Consider the risks carefully, but also consider the rewards and the opportunities.
“AI will not replace humans,” Mr Ripple said.
“Rather, individuals who leverage AI and human intelligence through the tools at their disposal will ultimately replace you.
“The purpose of AI is to augment, not replace, human intelligence.
“Leadership teams, as well as boards, should drive this.”
Embrace the challenge, Ms Koomen said.
“Fear is your enemy.”
This webinar was held to shine a light on the National AI Centre’s (NAIC) inaugural AI Month (November 15 to December 15), which aims to champion the responsible creation and adoption of AI technologies in Australia. NAIC and Ai Group are also presenting an information day, AI in Industry Day, tomorrow. Click here for more information and to register.
Ai Group encourages businesses to use AI and embrace the opportunities it brings. Our Innovation team plays an active role in shaping the regulation of AI in Australia and is part of NAIC’s Responsible AI Network.
Gradient Institute is also helping to ensure AI is being rolled out responsibly.
Wendy Larter is Communications Manager at the Australian Industry Group. She has more than 20 years’ experience as a reporter, features writer, contributor and sub-editor for newspapers and magazines including The Courier-Mail in Brisbane and Metro, the News of the World, The Times and Elle in the UK.