Tenable Warns No-Code Agentic AI Can Enable Financial Fraud and Data Leaks
Tenable warned that similar vulnerabilities in real-world deployments could lead to serious consequences, including:
- Data breaches and regulatory exposure, particularly involving payment card information (PCI)
- Revenue loss and financial fraud caused by unauthorised changes to pricing and transactions
- Loss of trust due to compromised customer data and automated decision-making failures
“AI agent builders democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud,” said Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable, in an official press release received by Avnmedia.id.
“That power can quickly become a real and tangible security risk.”
AI Governance Is Critical Before Deployment
Tenable stressed that organisations must prioritise governance and enforcement before deploying agentic AI tools across business operations.
To reduce the risk of data leakage and misuse, Tenable recommends:
- Preemptive visibility into which systems and data an AI agent can access
- Least-privilege access, limiting permissions strictly to essential functions
- Active monitoring to detect abnormal behaviour or deviations from intended workflows
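To give a sense of how the least-privilege and monitoring recommendations might look in practice, the minimal Python sketch below gates an agent's tool calls behind an explicit allowlist and logs every attempt. The tool names, the ALLOWED_TOOLS registry and the call_tool gate are hypothetical illustrations, not drawn from Tenable's guidance or from any specific agent-builder platform.

```python
# Illustrative sketch only: a hypothetical least-privilege wrapper around an
# AI agent's tool calls. ALLOWED_TOOLS and call_tool are assumed names, not
# part of any real product or agent-builder platform.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

# Least privilege: the agent may only invoke explicitly approved tools,
# each limited to the narrowest scope it needs.
ALLOWED_TOOLS = {
    "read_order_status": {"scope": "read-only"},
    "send_customer_email": {"scope": "templated-only"},
    # Note: no pricing or payment tools are exposed to the agent at all.
}

def call_tool(tool_name: str, **kwargs):
    """Gate every agent-initiated tool call and log it for monitoring."""
    if tool_name not in ALLOWED_TOOLS:
        # Deviation from the intended workflow: block and alert.
        log.warning("Blocked unapproved tool call: %s %s", tool_name, kwargs)
        raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
    log.info("Approved tool call: %s (scope=%s)",
             tool_name, ALLOWED_TOOLS[tool_name]["scope"])
    # ... dispatch to the real tool implementation here ...
    return {"tool": tool_name, "status": "ok"}

if __name__ == "__main__":
    call_tool("read_order_status", order_id="12345")      # permitted
    try:
        call_tool("update_price", sku="A1", price=0.01)   # blocked: not allowlisted
    except PermissionError as exc:
        log.error("%s", exc)
```

In this kind of design, a pricing change attempted through prompt manipulation simply has no tool to call, while the log trail gives security teams the visibility needed to spot the attempt.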
As enterprises continue to scale AI-driven automation, Tenable cautions that security must evolve just as quickly, or organisations may unintentionally hand over control of sensitive systems to manipulable AI agents. (jas)



