One in three people is still at the starting line of their AI journey. 

This was the finding of a poll of 120 attendees surveyed during an Australian Industry Group webinar on Friday. 

Another 38% said AI was used on an ad hoc basis by their staff, while 20% said the technology was used by only a limited number of employees. 

Ten per cent of respondents said their organisation had an informal AI approach, while only 1% had a formal AI strategy. 

“There's a lot of hype about how artificial intelligence is going to take away jobs or make our working lives super-efficient, but that’s not quite the case,” Samantha Lazarus, webinar co-presenter and Australian Industry Group’s National Cloud Applications Specialist, said. 

Two Danish studies totalling 25,000 workers from 7000 workplaces across 11 occupations found AI saved workers about 13 minutes a day — just over an hour a week. 

Fellow presenter Mark Schmidt, Australian Industry Group’s Senior Cyber Consultant, said: “An hour a week may not seem much, but it’s still a saving.  

“However, it’s not as dramatic as some of the hype, so bear that in mind when you consider investing in this technology.” 

Here are some key takeaways from the webinar — Implementing AI with confidence: a two-part series for business leaders. 

Different types of AI 

There are several types of AI.  

The type most people encounter is Generative AI (Gen AI), with tools such as ChatGPT and Microsoft Copilot increasingly popular in the workplace. 

Other types of AI, such as machine learning and deep learning, have been around for at least 15 years but are ‘hidden’ in the different systems and solutions we use. 

It’s all about the prompt 

Gen AI can create new content like text, images and code by learning patterns from existing data.  

It uses algorithms to identify these patterns and generates new outputs based on the input. 

When using tools such as ChatGPT or Microsoft Copilot, it’s important to craft detailed prompts to get the best output response. 

“A prompt is just a list of instructions, so think carefully about what you want and how you want it structured,” Ms Lazarus said. 

“Be as clear as you can. Do you want something short and to the point or do you want a response that’s long and technical — or long and non-technical? 

“There's no such thing as the perfect prompt, and you're almost certainly never going to get it right the first time. 

“You’ll be able to refine it as you go, based on what you've already been given as a result.”  
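
For readers who want to go a step further than the chat window, the same advice applies when a prompt is sent to a model from code. The snippet below is a minimal sketch only: it assumes the OpenAI Python library and an API key, and the model name, prompt wording and example notes are all illustrative rather than recommendations from the webinar. The structured prompt itself works just as well pasted straight into ChatGPT or Copilot.

```python
# Minimal sketch: assumes `pip install openai` and an OPENAI_API_KEY
# environment variable. Model name and content are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# The prompt states the task, audience, format and length before the content.
prompt = (
    "Summarise the meeting notes below in three short, plain-English bullet "
    "points for a non-technical audience. Keep each point under 20 words.\n\n"
    "NOTES:\n"
    "- Sales were flat in the September quarter.\n"
    "- The new rostering system goes live in November.\n"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

If the first answer misses the mark, the refinement Ms Lazarus describes is simply a follow-up instruction, for example "make the second point more specific", rather than a perfect prompt written up front.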

Don’t believe everything you read 

“Be careful about what is being returned, and check citations you receive for accuracy,” Ms Lazarus said. 

If AI doesn’t have the information you’re seeking, it’s not above making things up. 

There’s even a term for it: hallucinations. 

“It just wants to predict something for you — to please you,” Mr Schmidt said. 

“Sometimes, I’ll add an instruction in my prompt along the lines of: ‘If you don’t know, just say the answer is unknown’.

"OpenAI has published a paper indicating that a third of the time, the model is giving you some kind of error.

"That should be a wake-up call for people relying heavily on AI. Other studies show it could be as high as 65% of the time. 

“The models can't be fully trusted,” Mr Schmidt said. 

Bias 

As much as we might like to think ChatGPT and the like understand what we’re asking and what they’re giving back, they don’t. 

“AI doesn't understand morals, ethics or right or wrong,” Mr Schmidt said. 

“It doesn't have any feelings, and it doesn't understand our feelings.  

“It’s just predicting. Sometimes, even for summarisation, it just picks out key sentences. It doesn’t read through and go ‘well, there was an argument, and they finally ended up at this point’.  

“No, it'll just pick out the key sentences and give them to you. 

“It’s bound to have bias, based on its training and prompt data.  

“We’ve got to have our wits about us as to what it’s giving us, and double-check it. 

“We don't know how the models are trained, so as they evolve, we've got to be on our guard.” 

Dip your toe in, but don’t overshare  

“It’s worth setting up free accounts and having a play, but be super careful about what information you share in your prompts,” Ms Lazarus said. 

“The main players all have free versions they want you to use to become loyal users of their product. 

“If you're using a free version, expect that your data is being used to train and enhance the model. 

“The paid versions normally don’t harvest your information.” 

Ms Lazarus suggests using AI tools to save time when performing simple, repetitive tasks such as summarising documents, organising online meetings, writing policies or putting together a PowerPoint slide deck. 

“They all have pros and cons, and the models keep evolving as they undergo further testing.” 

Implement AI wisely 

It's important to consider how you implement AI in your organisation. 

“Be strategic about it,” Mr Schmidt said. 

“Build a holistic vision of where you want to head, and bring your staff along on that journey.  

“Fear will proliferate without this focus: in one corner, you’ll have employees playing with AI and singing its praises and in another, their colleagues will be concerned about their jobs if they don’t have that understanding. 

“It’s critical to consider your strategy.” 

Join Mark and Samantha on Friday, July 4, for part 2 of this member-exclusive webinar series: Putting AI into practice – a secure implementation roadmap.  

This webinar will provide a step-by-step approach to implementing AI in your organisation and will include demos showing how to build and use an AI Agent.

Register here.

Wendy Larter

Wendy Larter is Communications Manager at the Australian Industry Group. She has more than 20 years’ experience as a reporter, features writer, contributor and sub-editor for newspapers and magazines including The Courier-Mail in Brisbane and Metro, the News of the World, The Times and Elle in the UK.