INTHEBLACK February / March 2026 - Magazine - Page 42
FEATURE
Global ethicist Dr Catriona Wallace
defines responsible AI as technology
that does not cause harm or unintended
consequences. “That’s the baseline,” she
says. “But who is responsible for that?
That’s where we still see some disconnect.
From investors to boards and executive
teams, right down to frontline employees,
there needs to be a company-wide
understanding of what ethical AI is and
a commitment to create a responsible
AI-first strategy, which is needed for
sustainable innovation.”
The difficulty for businesses is knowing
how to build such a strategy.
AI GOVERNANCE IN AUSTRALIA
While the Australian Government introduced
a framework in 2024 to steer the safe and
responsible use of AI in the public sector, no
such AI technology-specific laws exist for the
private sector. Currently, voluntary guidelines
and existing laws are the main drivers of
responsible AI in private business.
The AI Ethics Principles, released in
2019, outline eight voluntary principles for
responsible AI design and use.
In 2024, the National AI Centre
(NAIC) Voluntary AI Safety Standard
added 10 guardrails covering areas
such as testing, transparency and
accountability. That same year, the
government proposed mandatory
guardrails for AI in high-risk
settings, though it has yet to confirm whether these will be enacted.
In August 2025, the
Productivity Commission
recommended that AI regulation be built on
existing legal frameworks, rather than new
AI-specific laws.
“Over-regulation of AI will likely stifle
innovation and investment in it,” says
Gavan Ord, business investment and
international lead at CPA Australia.
“While AI is evolving faster than regulation,
many existing laws already address
inappropriate uses. CPA Australia supports
implementing risk-based AI regulations only
where material gaps in existing legislation
are identified.”
In an October 2025 speech, Senator the
Hon Tim Ayres, Minister for Industry
and Innovation, and Minister for Science,
reaffirmed that Australia already has “strong,
adaptive laws” that support consumer rights,
privacy and fair competition. These include
the Australian Consumer Law, as well
as the Privacy Act 1988, plus criminal,
anti-discrimination, online safety, defamation
and intellectual property laws. He added
that good AI adoption requires thoughtful
conversations in each workplace, rather than
“new national bureaucracies that duplicate
what we already have”.
“It is up to businesses to ensure AI is used
ethically, effectively and democratically in
workplaces, and responsibly in their goods
and services,” he said.
INNOVATION TIGHTROPE
AI is built on data and deployed through
software systems, making it vulnerable to
risks common to other digital technologies,
including cybersecurity breaches, privacy
violations and data misuse. Other hazards
include algorithmic bias, discrimination
and non-compliance with existing laws.
Businesses also face risks from AI
performance or system failures, such
as hallucinations or adversarial attacks.
CPA Australia’s business technology
report found the most frequently cited
negative impacts of AI were data/privacy
concerns and dependence on AI with
reduced oversight. “Human oversight
remains essential, especially in finance
functions requiring accuracy, assurance
and verification,” says Ord.
He points to several frameworks that
can help leaders ensure ethical, secure