
Over the past three years, generative AI (GenAI) has dominated conversations around enterprise technology. Today, however, the spotlight has shifted to agentic AI, an innovation that promises to deliver even greater efficiency and automation.
The potential of agentic AI
Unlike traditional software that executes predefined instructions, agentic systems make adaptive, autonomous decisions grounded in reasoning. As a result, agentic AI can automate complex business systems, interact directly with customers, and even learn from data to adapt to changing information.
“With agentic AI, you’re creating more autonomy. You can build systems and assign tasks where these agentic systems can take action; they can plan, reason, and execute complex workflows,” explains Leigh Bates, Global Risk AI Leader, Partner, PwC UK.
The challenges of scaling agentic AI
However, agentic AI’s promise of greater automation and efficiency can only be realised if it is deployed responsibly, with careful attention to regulatory and ethical obligations, as well as the need for robust cybersecurity.
In its new whitepaper, “Secure, governed, and business-ready: scaling agentic AI with trust and confidence,” PwC argues that agentic AI must operate within a clearly defined ethical and governance framework. It notes several key considerations:
- Data governance. Autonomy without strong data governance creates new risks. High-quality data, combined with embedded guardrails and safeguards, is essential to ensure the safe and resilient adoption of AI agents.
- Skills. Enterprises need skilled staff who understand where agentic systems are best deployed, when to hand workloads over to humans, and the ethical requirements for responsible deployment.
- Compliance. AI tools can make mistakes, and there is also a risk of leaking sensitive data, especially if businesses connect agentic systems to data sources without adequate controls. Increasingly, these controls will be mandated through regulation, such as the EU’s AI Act and DORA.
- Cybersecurity. Cyber threat actors are already using agentic AI to scale and sharpen their operations. For instance, in 2025, social engineering, particularly for initial access, saw widespread use of AI-generated emails, voice, and video by ransomware and BEC groups.[1]
- Resilience. When agents aren’t tightly constrained, they may access tools beyond their remit or initiate unintended actions. Even small misconfigurations can cascade into significant downstream consequences that threaten operational resilience.
Building trust at scale
To overcome these challenges, CISOs should first examine their governance frameworks, including AI policies, to establish whether they are ready for agentic AI.
“Security and governance for AI agents aren’t optional. They must be built in from the start,” recommends Narayan Kumar Gupta, Global Microsoft Security Alliance Lead, Senior Manager, PwC Ireland.
This includes introducing agentic technology through low-risk proofs of concept, using guardrails to control how AI agents operate, especially when handling sensitive data, and ongoing monitoring with a human in the loop.
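The guardrail pattern described above can be sketched in a few lines of code. This is a minimal illustration under assumed names (`Action`, `ALLOWED_TOOLS`, `SENSITIVE_SOURCES`, `review` are all hypothetical and not part of any PwC or Microsoft toolset): each proposed agent action names a tool and the data sources it touches, and a policy layer decides whether to allow it, deny it, or escalate it to a human.

```python
# Illustrative guardrail layer for an AI agent. All names here are
# assumptions for the sketch, not a real product API.
from dataclasses import dataclass

# Hypothetical policy: tools the agent may call, and data sources
# sensitive enough to require human sign-off.
ALLOWED_TOOLS = {"search_kb", "draft_email", "read_crm"}
SENSITIVE_SOURCES = {"customer_pii", "payroll"}

@dataclass(frozen=True)
class Action:
    tool: str
    data_sources: frozenset

def review(action: Action) -> str:
    """Return 'deny', 'escalate' (route to a human), or 'allow'."""
    if action.tool not in ALLOWED_TOOLS:
        return "deny"        # the agent may not reach beyond its remit
    if action.data_sources & SENSITIVE_SOURCES:
        return "escalate"    # sensitive data: keep a human in the loop
    return "allow"
```

For example, `review(Action("draft_email", frozenset({"customer_pii"})))` returns `"escalate"`, forcing human approval before the agent touches customer data, while an out-of-remit tool call is denied outright.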
CISOs can explore the potential of Microsoft’s AI and security ecosystem to help secure agentic deployments. This includes AutoGen for orchestrating agent behaviour and Azure AI Agent Service in Azure AI Foundry for managed experimentation, among other tools.
Harnessing agentic AI for growth
Agentic AI is accelerating digital transformation, empowering trailblazers to reshape their industries and scale customer impact. But success won’t come from innovation alone. It will also depend on preparedness. Some groundwork is already in place, through toolsets and frameworks from partners like PwC and Microsoft, but businesses should move swiftly to ensure their teams are fully equipped to unlock agentic AI’s potential while consistently adhering to best practice in security and governance.
Download PwC’s new whitepaper to learn more about scaling agentic AI with trust and confidence.
PwC: This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.
© 2025 PwC. All rights reserved.
[1] PwC, “Cyber Threats 2024: A Year in Retrospect,” 2024