PODCAST
AI and cybersecurity:
Penetration tester
reveals key dangers
Words Lisa Uhlman
As organisations race to adopt artificial
intelligence (AI), many overlook a crucial step:
securing it. While this technology can unlock
powerful new capabilities, it also brings risks
that current cybersecurity practices cannot
fully address.
In this episode of the INTHEBLACK podcast,
Miranda R., an offensive security team member
at Malware Security and an AI vulnerability
researcher at Mileva Security Labs, urges
businesses to treat AI like any other critical
system: testing for weaknesses, training
staff and validating outputs.
But they should also recognise how AI
differs from traditional IT systems, which follow
strict rules.
AI systems are inherently probabilistic and
involve uncertain outcomes, explains Miranda,
who has worked on the Australian Signals
Directorate’s Cyber Hygiene Improvement
Programs (CHIPs) team, scanning and reporting
on the cybersecurity of government and critical
infrastructure.
“That uncertainty is what makes AI so
powerful and so good at what it does,” she says.
“But it also makes it really vulnerable, because
the uncertainty also leads to it being prone
to errors and to being biased, unpredictable
and manipulable.”
“DISRUPT, DECEIVE, DISCLOSE”
Despite their different capabilities,
all AI systems use the same underlying process,
which attackers exploit through the “three D’s”
of adversarial machine learning: disrupting
models, deceiving them into performing
unintended tasks, and making them disclose
information they shouldn’t.
Machine learning models “follow the same
lifecycle of data gathering, data pre-processing,
model training and then deployment,” Miranda
explains. “All of those systems can most
definitely be exploited to access sensitive data
throughout any of the stages in that lifecycle.”
Many organisations rush AI systems into
production without ensuring proper security
measures are in place, or even understanding
the risks.
“Everyone wants to capitalise on this AI hype,
so they rush to push an AI system to production
for their customers, but they don’t consider
the security implications,” she says.
HOW TO MITIGATE RISK
AI systems should undergo high-scrutiny testing
and risk profiling and be built with secure-by-design
coding practices, she says. Validating outputs
and keeping humans in the loop are also important.
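As a concrete illustration of output validation (hypothetical, not from the episode), the sketch below checks a model's proposed action against an allowlist and a policy limit before executing it, and routes anything unexpected to a human reviewer. The action names and limits are invented for the example.

```python
# Illustrative guardrail: never act on raw model output.
# Validate it first; escalate anything unexpected to a human.
ALLOWED_ACTIONS = {"refund", "escalate", "close_ticket"}
MAX_REFUND = 100.0  # assumed policy limit for the example

def validate(output: dict) -> bool:
    """Accept only well-formed outputs within policy limits."""
    if output.get("action") not in ALLOWED_ACTIONS:
        return False
    if output["action"] == "refund":
        amount = output.get("amount")
        return isinstance(amount, (int, float)) and 0 < amount <= MAX_REFUND
    return True

def handle(output: dict) -> str:
    if validate(output):
        return f"executing: {output['action']}"
    return "blocked: sent for human review"  # human oversight step

print(handle({"action": "refund", "amount": 25.0}))  # executing
print(handle({"action": "refund", "amount": 9999}))  # blocked
```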
Miranda also stresses the need for strong
training and organisational policies regarding
AI. Companies should be aware of regulatory
and policy responses, which tend to lag behind
technological advances.
“Knowing what is coming into play and
at what times will help organisations move
through that space.”