Australia has no national strategy for regulating artificial intelligence in the workplace — and a new report from the John Curtin Research Centre, backed by the SDA union, warns the country risks repeating the regulatory failures it made with social media unless it acts fast. The report landed on the same day Workplace Relations Minister Amanda Rishworth convened the first meeting of the AI Employment and Workplaces Forum in Adelaide — a tripartite body bringing government, unions, and employer groups to the table for the first time on AI.
The timing matters. This isn't an abstract policy discussion. The Productivity Commission's Danielle Wood told the AFR Workforce Summit that roughly 4% of Australian jobs — about 600,000 positions — could be fully eliminated by AI. Jobs and Skills Australia Commissioner Barney Glover went further, warning the actual figure could exceed both that 4% automation threshold and the 30% augmentation estimate. Meanwhile, 97% of Australian hiring managers now expect new hires to demonstrate AI proficiency — but 88% say they can't find candidates who have it.
The gap between how fast AI is moving and how slowly regulation is following is becoming untenable. For business owners, the question is no longer whether workplace AI rules are coming, but how much confusion you'll navigate before they arrive.
What the report actually says
The John Curtin Research Centre's report, titled For All of Us, argues that Australia's patchwork of state and federal employment laws leaves workers exposed as AI embeds itself into daily working life. The report's co-author, Dominic Meagher, drew a pointed comparison to the social media era: "AI is so much more powerful than social media," he told the ABC. "We do not have the luxury of getting it wrong this time."
The recommendations are concrete. The report calls for:
- A national AI taskforce to coordinate a federal strategy
- A review of the Fair Work Act to specifically address AI-related workplace risks
- An AI expert advisory panel within the Fair Work Commission to assess AI-related disputes
- Mandatory human oversight wherever AI is deployed in workplace settings
- Mandatory consultation with workers and unions before AI tools are introduced
- Universal access to AI education and upskilling
Workplace relations lawyer Shannon Chapman, a partner at Lander & Rogers, confirmed the regulatory maze is real. Asked about implementing biometric data scanners, she said the answer would be "jurisdiction specific" — dependent on the type of data gathered, how it's stored, and how it might be used. Federal anti-discrimination, human rights, and Fair Work legislation all intersect in ways that remain untested for AI-driven decision-making.
This is the practical problem. If you're a business operating across multiple states, the compliance landscape is genuinely uncertain — and any new AI-specific legislation could add further layers of complexity rather than simplify it.
The government's response — a forum, not a framework
Minister Rishworth's answer is dialogue, not legislation. The AI Employment and Workplaces Forum brings together the ACTU secretary, Business Council of Australia CEO Bran Black, the Australian Industry Group, and other peak bodies. The group will meet at least three times, structured around five themes: trust, capability, transparency, safety, and productivity.
Critically, Rishworth ruled out a union veto — the forum is consultative, not legislative. "Tripartism should not involve a right of veto," she stated at the AFR Workforce Summit.
The government is also conducting a "gap analysis" — initiated at Treasurer Jim Chalmers' economic roundtable in August 2025 — to determine whether existing workplace institutions and legislative frameworks are fit for purpose. Preliminary data suggests AI hasn't accelerated the overall pace of compositional change in the jobs market, though occupations most exposed to AI, such as filing clerks and keyboard operators, are showing a "slight softening" in growth.
That framing — gradual, manageable change — sits uneasily alongside the Productivity Commission's 600,000 job-loss estimate and Commissioner Glover's warning that the real numbers will be higher. The government appears to be building monitoring capability. Whether it's building it fast enough is the open question.
The battle lines between unions and business
The forum opened amid sharp disagreement. Australian Services Union national secretary Emeline Gaske told the AFR her members in IT and administrative roles were already experiencing "AI-driven productivity demands, after-hours messaging, and the threat of digital surveillance." She rejected the government's characterisation of limited job impact: "We do not agree that AI is not affecting jobs. The best time to regulate to make sure it's done in a fair way is before the horse has bolted."
Finance Sector Union national assistant secretary Nicole McPherson went further, claiming some organisations were deliberately outsourcing roles before deploying AI — allowing them to argue no direct displacement had occurred.
The Business Council of Australia pushed back firmly. CEO Bran Black pointed to the EU and Canada as cautionary examples of jurisdictions where early regulation deterred investment: "Both are starting to roll back their position because they've realised they missed out on investments, they missed out on opportunities."
This tension — regulate now or risk stifling growth — is the defining fault line of Australian AI policy. And it's one that previous Heygentic coverage has tracked closely: Australia already leads the world on responsible AI governance but ranks last on productivity gains, according to KPMG's Global AI Pulse survey. The risk of over-indexing on caution is real — but so is the risk of letting workers absorb harms that clearer rules could have prevented.
What this means if you run a business
The practical takeaway is uncomfortable: you're operating in a regulatory grey zone, and it's going to stay grey for a while.
Minister Rishworth's immediate focus isn't job displacement — it's work intensification. "I'm not 100% sure that the recent adoption has led to people sitting around twiddling their thumbs," she said. "My mind is more focused on making sure we don't have cognitive burnout." Safe Work Australia is currently conducting an occupational health and safety review into AI-linked work intensification and psychosocial risk — a signal that even without new legislation, enforcement pressure is building through existing WHS frameworks.
If you're deploying AI tools today, the report and the surrounding policy signals point to several practical steps:
- Get your AI policy in writing. Shannon Chapman's advice is blunt: if you don't have a policy covering which uses of AI are and aren't appropriate, what consequences apply for breaches, and how employees are trained, you're exposed. This applies whether new legislation arrives or not.
- Audit your monitoring tools. Digital forensics expert Matt O'Kane warned that international AI monitoring tools entering Australia — tracking on-screen activity, keystrokes, and more — were developed in jurisdictions with different workplace privacy expectations. Test whether your tools are reasonable by Australian standards.
- Involve your workers in deployment decisions. The evidence supports this beyond ethics. As Dr Meagher noted, "companies where they are working with their workforce, where it's actually integrating it in their workflow, those companies are able to turn AI adoption into more profit." That aligns with CSIRO research showing AI-adopting firms hire 36% more workers, not fewer — suggesting collaborative adoption drives growth.
- Don't forget the December deadline. The workplace AI debate is happening alongside Australia's Privacy Act amendments requiring automated decision-making transparency by 10 December 2026. If your AI makes decisions about people — recruitment, performance, rostering — you'll need disclosure and human-review processes in place.
What to watch
The forum is scheduled to meet at least three times. Watch for whether those sessions produce agreed outcomes or just communiqués. The government's labour market gap analysis report is due to be finalised soon — its findings will shape whether the current "monitor and convene" approach holds, or whether the Albanese government is forced toward legislation before the next election.
NSW has already moved independently with the Work Health and Safety Amendment (Digital Work Systems) Act 2026, formally extending employer WHS obligations to cover AI and digital systems. If other states follow, the patchwork problem the John Curtin Research Centre report identifies will only deepen — adding pressure for a federal response.
The deeper pattern is clear: Australia is now running three parallel regulatory tracks on AI — workplace rights through Fair Work, privacy through the December 2026 automated decision-making rules, and safety through WHS. None of them are coordinated. For a country where just 1% of employers are driving two-thirds of AI hiring, the question isn't whether regulation is needed. It's whether it arrives in time to be coherent.
Sources
- Australia lacks national strategy to regulate AI spread in workplace, report states — ABC News
- Australia urged to act now on workplace AI before rules become unworkable — Human Resources Director
- Canberra rules out workplace AI union veto as Minister Amanda Rishworth establishes AI Employment and Workplaces Forum — Australian Financial Review
- Inside the government's plan to manage AI's effect on jobs — SmartCompany
- AI expected to wipe out 4% of the nation's jobs — Human Resources Director
- AI skills expected in new hires; generative AI making it harder for Australian employers to assess candidates — Robert Half
- AI in the Workplace: New WHS Duties Employers Can't Ignore in 2026 — Zenergy Group
