A help desk agent picks up a call. The voice introduces itself as Sara's, sounds rushed, mentions a recent incident, cites internal projects, and adopts the familiar expressions Sara usually uses. She requests a simple password reset, and nothing seems unusual. Yet it's not Sara on the line.
Spoofing remains the central mechanism of social engineering: nearly 60% of such attacks rely on impersonating an employee, a service provider, or a line manager. What has changed is the speed and precision with which these impersonations can be prepared. Attackers use AI tools to aggregate voice samples, language patterns, behaviors, and internal references. They don't always need a voice clone; they just need to be believable, in an exchange designed to sound ordinary.
Recent data sheds light on this shift. The 2024/25 Data Breach Investigations Report (DBIR) indicates that 68% of breaches involve a non-malicious human factor, and that around 17% of confirmed breaches rely on social engineering. Attackers target interactions more than systems, exploiting the fragmentation of collaborative environments.
We now work simultaneously across email, Slack, Teams, Zoom, WhatsApp, and internal ticketing systems, and the markers of trust are scattered among these spaces: a Slack DM invoking an emergency, a Teams call with the camera off, or a message mixing internal references with a familiar tone can pass checks that, technically, have detected no anomaly. The DBIR also notes that nearly 30% of incidents linked to third parties or supply chains now emerge in these collaborative tools rather than in email.
Existing defenses struggle to detect these attacks because they were designed to analyze content (links, attachments, malicious payloads), not the identity coherence of an interaction. Communication platforms do not integrate robust mechanisms for verifying the real identity of their users, so compromised accounts, aliases close to existing identities, hijacked sessions, and newly created accounts can all blend into normal traffic. Current attacks often chain several vectors: automated information gathering, impersonation of an employee, then impersonation of the IT department to get that employee to install a remote access tool.
Nor is the voice a reliable benchmark anymore: imitators reproduce its rhythm, its hesitations, even fragments of collected audio, and analysis of the audio stream alone can no longer distinguish an authentic interaction from a forged one.
This fragmentation of the trust signal should lead us to ask not "is this message suspicious?" but "can this conversation be considered reliable?". That question guides the approach of Imper.ai, an Israeli startup. The company aims to provide a real-time risk signal based on hard-to-falsify indicators: device fingerprints, network dynamics, and behavioral consistency.
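To make the idea concrete, here is a minimal sketch of how such indicators might be combined into a single risk value. The signal names, weights, and threshold are purely illustrative assumptions for this example, not Imper.ai's actual model.

```python
from dataclasses import dataclass

@dataclass
class ConversationSignals:
    """Hypothetical hard-to-falsify indicators for one conversation."""
    device_known: bool        # device fingerprint already seen for this identity
    network_consistent: bool  # network profile matches past sessions
    behavior_score: float     # 0.0 (atypical) .. 1.0 (typical) timing/writing patterns

def risk_score(s: ConversationSignals) -> float:
    """Combine the indicators into a 0..1 risk value (weights are invented)."""
    risk = 0.0
    if not s.device_known:
        risk += 0.4  # unknown device weighs heaviest in this toy model
    if not s.network_consistent:
        risk += 0.3
    risk += 0.3 * (1.0 - s.behavior_score)
    return min(risk, 1.0)

# A session from an unknown device with atypical behavior scores high:
suspicious = ConversationSignals(device_known=False,
                                 network_consistent=True,
                                 behavior_score=0.2)
print(round(risk_score(suspicious), 2))  # 0.64
```

The point of scoring the whole conversation, rather than any single message, is that each indicator is cheap to fake in isolation but expensive to fake consistently at the same time.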
Imper.ai has announced an €18.7 million ($22 million) Series A round led by Redpoint Ventures and Battery Ventures, with participation from Maple VC, Vesey Ventures and Cerca Partners. The company had raised €5.5 million ($6.5 million) six months earlier. Founded in 2024 by Noam Awadish, Anatoly Blighovsky and Rom Dudkiewicz, all alumni of Unit 8200, Imper.ai develops a real-time spoofing prevention platform based on contextual, behavioral and network signals.