Tenable Warns No-Code Agentic AI Can Enable Financial Fraud and Data Leaks

LOGO - Tenable-Logo2021-Reversed/ Tenable

AVNMEDIA.ID -  Cybersecurity firm Tenable has revealed new research showing how no-code agentic AI tools, such as Microsoft Copilot Studio, can be exploited to commit financial fraud and hijack business workflows if deployed without strict governance.

The findings highlight a growing enterprise risk as organisations increasingly adopt no-code AI platforms to improve efficiency by allowing non-technical employees to build autonomous AI agents.

AI Democratisation Comes With Hidden Risks

No-code AI tools are designed to simplify automation without the need for software developers.

However, Tenable warns that this convenience can unintentionally expose organisations to severe security threats when governance and access controls are overlooked.

According to the research, AI agents often operate with broad permissions that are not fully understood by the users who create them, creating opportunities for abuse.

Tenable Successfully Jailbreaks Microsoft Copilot Studio

To demonstrate the risk, Tenable Research built an AI-powered travel agent using Microsoft Copilot Studio.

The agent was designed to manage customer travel reservations, including creating and modifying bookings without human oversight.

The AI agent was supplied with demo customer data, including names, contact details, and credit card information, and was instructed to verify customer identities before sharing data or making changes.

Using a technique known as prompt injection, Tenable researchers were able to override those safeguards.

 

Sensitive Data Leaked, Financial Controls Bypassed

Through workflow manipulation, researchers successfully extracted sensitive payment card information and forced the AI agent to bypass identity verification protocols.

More critically, the agent’s permissions allowed researchers to modify financial fields.

By exploiting this access, they changed a trip’s cost to $0, effectively granting unauthorised free services.

Major Business and Regulatory Implications

Tenable warned that similar vulnerabilities in real-world deployments could lead to serious consequences, including:

  • Data breaches and regulatory exposure, particularly involving payment card information (PCI)
  • Revenue loss and financial fraud caused by unauthorised changes to pricing and transactions
  • Loss of trust due to compromised customer data and automated decision-making failures

“AI agent builders democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud,” said Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable on their official press release received by Avnmedia.id

“That power can quickly become a real and tangible security risk.”

AI Governance Is Critical Before Deployment

Tenable stressed that organisations must prioritise governance and enforcement before deploying agentic AI tools across business operations.

To reduce the risk of data leakage and misuse, Tenable recommends:

  • Preemptive visibility into which systems and data an AI agent can access
  • Least-privilege access, limiting permissions strictly to essential functions
  • Active monitoring to detect abnormal behaviour or deviations from intended workflows

As enterprises continue to scale AI-driven automation, Tenable cautions that security must evolve just as quickly, or organisations may unintentionally hand over control of sensitive systems to manipulable AI agents. (jas)

Related News
Recent News
image
Techno Palo Alto Networks: Krisis Kepercayaan Data Jadi Tantangan Utama Keamanan AI Indonesia pada 2026
by Adrian Jasman2025-12-15 12:32:13

Palo Alto Networks prediksi 2026 jadi fase krusial AI, dengan krisis kepercayaan data.

image
Techno Xiaomi 12.12 Year End Festival: Diskon hingga Rp1,5 Juta, Awali 2026 dengan Smart Home
by Adrian Jasman2025-12-11 12:28:29

Xiaomi 12.12 Year End Festival: Diskon hingga Rp1,5 juta untuk ekosistem pintar & smart home!