A new study reveals how scammers use cloned voices and emotional manipulation to defraud victims, with attacks expected to reach record highs this year.
NEW YORK, NY / ACCESS Newswire / July 2, 2025 / A new data study from ReversePhone reveals that AI deepfake voice scams have reached unprecedented levels in 2025. As artificial intelligence becomes more accessible and sophisticated, scammers are leveraging it to clone voices, impersonate loved ones, and emotionally manipulate victims into handing over money or sensitive information. The report, based on more than 1,000 user-submitted scam reports, projects that 2025 will see the highest number of AI voice scams on record.
The study identifies family emergency and kidnapping scams as the most common and alarming trend. These scams use AI to mimic the voice of a child, spouse, or other loved one, often calling in apparent distress and demanding money immediately. Another tactic gaining traction is the "hello and silence" scam. In this scheme, a caller prompts the recipient to speak, often by asking "Can you hear me?", and records their voice for later use in impersonation or to bypass voice authentication systems.
Romance scams have also evolved. What once relied on deceptive messaging now features AI-generated voice and video to build fake emotional connections on dating apps and social platforms. Victims are persuaded to send money or personal information to people they believe they've formed real relationships with, only to discover they've been interacting with an AI-driven scammer.
The groups most frequently targeted include the elderly, who are often contacted with fake medical or Medicare-related emergencies. Parents are another vulnerable demographic, especially when scammers impersonate their children and claim they are in danger. The study also notes a sharp rise in scams targeting social media users and speakers of Spanish or French, indicating that AI is now being used to tailor scams by language and cultural familiarity.
Scammers are deploying advanced tactics to build trust. They spoof local phone numbers, harvest voicemail greetings as source audio for voice cloning, and use repeated callbacks to keep victims engaged. Many victims report receiving multiple attempts over weeks or months, especially following major news about AI or security breaches.
The study urges consumers to let unknown calls go to voicemail, avoid responding to unfamiliar numbers, and confirm any reported emergency through a secondary channel. ReversePhone's reverse lookup tool can help users quickly verify suspicious numbers and stay informed through community reports.
Contact Information
Press Contact
press@reversephone.com
SOURCE: ReversePhone
View the original press release on ACCESS Newswire