The cost of creating convincing synthetic media has collapsed – and so has society's ability to distinguish between real and fabricated content. So what are Europe's rising startups doing about it?
According to the latest study by Dutch cybersecurity startup Surfshark, reported losses linked to deepfakes have now surpassed €1.3 billion, with €860 million stolen in 2025 alone, up €500 million year-on-year.
As Oliver Quie, CEO of British cybersecurity startup Innerworks, commented: "We're facing AI-powered deception that can mimic legitimate users with horrifying accuracy. Existing security companies have become obsolete because they assume threats will behave differently than legitimate users."
Just a few years ago, producing a one-minute deepfake video could cost anywhere between €257 and €17k, depending on quality. With the arrival of widely accessible AI video tools such as Veo 3 and Sora 2, that same minute can now be generated for just a few euros.
This dramatic price collapse has made deception cheaper to run and far easier to scale.
Scalable deception and "lost pet" scams
As costs fall, new categories of fraud have emerged. One striking example is the lost pet scam – fraudsters now generate AI-made images of supposedly found pets, tricking anxious owners into paying small 'recovery fees', typically around €43, in the hope of being reunited with their animals.
"As the cost of fabricating realistic images and videos approaches zero, scammers are industrialising deception," said Miguel Fornes, Information Security Manager at Surfshark. "The lost-pet scam is a clear example: it exploits emotion for small sums, making victims less suspicious and far less likely to pursue legal action. For criminals, that's an ideal model for mass-scale fraud." (Translated)
Fornes adds that such small-ticket scams are only part of the picture. The bigger threat comes from deepfake-enabled investment schemes and identity spoofing.
Deepfakes have also been used in corporate recruitment processes to bypass background checks, including one case in which a cybersecurity company unwittingly hired a North Korean hacker who successfully faked his video interview and credentials.
A wave of European startups fights back
So how is Europe fighting back?
The surge in AI-driven deception has triggered a corresponding wave of innovation – and investment – across Europe. So far this year, EU-Startups has reported on a number of funding rounds targeting the detection and prevention of deepfake-enabled fraud.
- Acoru (Madrid, Spain) – Just today they raised €10 million (Series A) to help banks predict and prevent AI-powered fraud and money laundering before transactions take place. Its platform monitors pre-fraud intent signals and uses consortium-based intelligence sharing to stop scams at the source.
- IdentifAI (Cesena, Italy) – secured €5 million in July 2025 to expand its deepfake detection platform, which analyses images, video and voice to authenticate content and flag AI-generated material. The startup reports rising demand from newsroom and law-enforcement clients since early 2024.
- Trustfull (Milan, Italy) – raised €6 million in July 2025 to expand its fraud-prevention suite to cover "deepfake scams and large-scale phishing campaigns." The company said this is a pivotal moment for the global fraud detection and prevention market, which is projected to nearly triple from €28.4 billion in 2024 to €77.4 billion by 2030.
- Innerworks (London, UK) – raised €3.7 million in August 2025 to expand its AI-powered platform for stopping synthetic-identity and deepfake-driven fraud. The company reported that fraud attempts using deepfakes have risen by over 2,000% since 2022.
- Keyless (London, UK) – closed a €1.9 million round in January 2025 to strengthen its privacy-preserving biometric technology, designed to thwart injection attacks and deepfake-based identity spoofing. It reports that its clients have seen a 73% reduction in Account Takeover (ATO) fraud and an 81% reduction in help desk costs.
Together, these startups reflect a continental effort to counteract a new layer of cyber-risk. Italy in particular stands out, with two active ventures in the space, suggesting a growing national cluster around biometric and deepfake-detection innovation.
Regulatory backdrop: EU policies tighten in 2025
The growing economic impact of AI-enabled deception has coincided with new EU-level measures aimed at increasing accountability and transparency in artificial intelligence and digital services.
In February 2025, key provisions of the EU Artificial Intelligence Act began to apply. It requires clear labelling of AI-generated content and transparency when people interact with AI systems. These rules directly target the misuse of generative tools for manipulation or fraud, including deepfakes and voice cloning.
AI systems that deceive or exploit vulnerable users can now be classified as posing an "unacceptable risk," making them illegal within the EU market.
Meanwhile, under the Digital Services Act (DSA), large online platforms are now obliged to assess and mitigate systemic risks arising from manipulative or fraudulent content – a framework that extends to deepfake media used in phishing and impersonation scams.
The financial sector has also been addressed. In July 2025, the European Banking Authority (EBA) issued an opinion highlighting how AI is being exploited for money laundering and fraud, including through fabricated identities and deepfake documents. It urged financial institutions to adapt anti-money-laundering (AML) systems to account for AI-enabled risks – a move that aligns closely with the missions of startups such as Acoru and Trustfull.
Taken together, these policy developments show that 2025 is shaping up as the year Europe tightened its legal net around AI misuse – introducing a compliance-driven incentive for startups fighting fraud and synthetic deception.
What can you do?
"AI has changed the face of fraud and money laundering. You simply can't expect technology built in 2010 to combat fraud happening in 2025," – Pablo de la Riva Ferrezuelo, Co-founder and CEO of Acoru.
His words reflect a wider sentiment among Europe's cybersecurity founders: that a new generation of technology – built for an era of synthetic media and AI-powered deception – is urgently needed.
This perspective is echoed across the sector. Founders and investors alike describe 2025 as a turning point, with the convergence of regulation, awareness, and financial backing fuelling an arms race between scammers and defenders.
While fraudsters exploit generative AI to manipulate voices, identities and video, Europe's new breed of startups is deploying AI to expose, verify and block malicious activity before it reaches victims.
At the individual level, industry experts still stress the importance of vigilance and education as the first line of defence:
- Use robust cybersecurity and identity-verification tools and ensure regular staff training.
- Verify unexpected requests – especially those involving money or sensitive data – through trusted channels before acting.
- Scrutinise media for subtle inconsistencies: overly clean audio, slight lip-sync errors, or unnatural hand movement.
- Resist urgency and emotional hooks – common tactics in scams.
- Implement multifactor checks and require out-of-band confirmations for payments or account changes.
- Equip high-risk teams (finance, HR, customer support) with in-person verification protocols.
- Treat contacts from new domains or virtual numbers as potential red flags.
The Surfshark study used data from the AI Incident Database and Resemble.AI to create a combined dataset of deepfake-related incidents from 2017 to September 2025. Only cases involving falsified video, image or audio content reported in the media were included. Fraud-related incidents with a clearly quantified financial loss were further classified into 12 specific sub-categories.
For the full research material, visit Surfshark's research hub.