Australian technology leaders are losing sleep over growing regulatory complexity, rising damage from ransomware, and the challenges of near-ubiquitous artificial intelligence (AI) and deepfakes, a new survey by cybersecurity body ISACA has found.
Generative AI (genAI) and large language models will drive the agenda in 2026, with 64% of Oceania respondents to ISACA's 2026 Tech Trends & Priorities Pulse Poll – which surveyed nearly 3,000 global security professionals – naming them as key.
As a transformative force, that places genAI ahead of AI and machine learning (60%), data privacy and sovereignty (34%), and supply chain risk (34%).
Yet for all its promise, genAI has these risk and security professionals worried – with 67% saying that AI-driven cyber threats and deepfakes will keep them up at night in 2026, and only 8% saying they are very prepared to manage its risks.
Some 45% worry most about the "irreparable harm" if they fail to detect or respond to a major breach, while 41% worry about supply chain vulnerabilities like those that hit the likes of Qantas, Dymocks, and British Airways.
Technical issues such as cloud misconfigurations and shadow IT (named by 38% of respondents) are also causing security executives to toss and turn, as are fears that regulatory complexity (36%) will put increasing pressure on security practices.
To address this, respondents named regulatory compliance (58%), business continuity and resilience (52%), and cloud migration and security (48%) as top focus areas – with three-quarters expecting cyber regulations will boost digital trust.
Security leaders "are dealing with constant AI-driven threats, tighter regulation and rising expectations from executives, all while struggling to find and keep the right people," ISACA Board vice chair Jamie Norton said.
"It's a perfect storm that demands stronger leadership focus on capability, wellbeing and risk management."
The monster under the bed
For an industry that was already highly stressful, the new threats posed by genAI have only made things worse – ratcheting up the pressure on chief information security officers (CISOs) who were already feeling the strain long before it emerged.
ISACA's findings corroborate recent surveys such as Proofpoint's 2025 Voice of the CISO survey of 1,600 CISOs, which found 76% of Australian CISOs have dealt with the material loss of sensitive information over the past 12 months.
With 80% of Australian CISOs feeling that they are held personally accountable when a cybersecurity incident happens – well above the global average of 67% – genAI is only exacerbating what was already a significant source of stress.
It "adds to the pressure on CISOs to secure their organisations in the face of a rapidly changing threat and technological landscape," Proofpoint found, with "expectations high and growing numbers feeling the strain and experiencing burnout."
Accounts of the heart attack suffered by former SolarWinds CISO Tim Brown – who not only struggled to clean up the major 2020 SolarWinds breach but was charged with fraud by the US SEC – have highlighted just how big a human toll the pressure is taking.
The 2026 agenda
AI services and infrastructure are driving a global surge in ICT spending, Gartner recently said, predicting spending will grow 9.8% next year and pass $9 trillion ($US6 trillion) for the first time – much of it driven by genAI technologies.
ISACA Oceania ambassador and ACS Fellow Jo Stewart-Rattray said only 8% of tech leaders feel prepared for the risks of genAI. Image: Supplied/Information Age
Yet for all their companies' spending plans, executives see managing these technologies as a major part of the challenge, with many respondents to the ISACA survey worried they won't be able to find the staff to help them do so properly.
Some 37% of Australian organisations expect to grow their hiring next year compared to this year, ISACA found – but the third who plan to hire audit, risk and cybersecurity professionals next year expect to have problems finding the right people.
This disconnect was identified in the recent major OECD Science, Technology and Innovation Outlook report, which noted that the security and resilience elements of Australia's science, technology and innovation policies "are comparatively less apparent".
This, then, is the crux of the fears that ISACA survey respondents carry with them – with compliance seen as crucial, but 30 per cent saying they are not very, or not at all, prepared to actually deliver the oversight they need.
"With only 8% saying they feel very prepared for generative AI's risks," ISACA Oceania ambassador and ACS Fellow Jo Stewart-Rattray said,
"there's an urgent need to balance experimentation and usage with robust oversight."
- This story first appeared on Information Age. You can read the original here.