‘Hi Mum, it’s me’: how text message scammers learnt to prey on our emotions

Sophie* had just moved house in the UK, so when her mother received a message from her daughter asking for money to cover an urgent bill, it seemed entirely plausible. “Hi Mum it’s me,” said the message, explaining that she had broken her phone and was using a different one. “She dropped everything to sort it out,” says Sophie of her mother, who had begun the process of transferring £3,000. But the message was not to be trusted. The transfer was destined for a scammer. At that moment, Sophie – the real Sophie – happened to text her mother with an update on her new kitten, which they had picked up together a few days earlier.

Sophie’s mother rang her “in blind panic, convinced she had just been talking to me.” Fortunately, she alerted her bank and Action Fraud, the UK’s national reporting centre for fraud and cybercrime; although she hadn’t yet sent any money, the scammer now had her account details. In the end, no money was lost, but the embarrassment lingered. “It made me angry that afterwards she felt embarrassed by it,” says Sophie, “when the whole point of the scam is to exploit a parent’s instinct.”

Not so long ago, online fraud seemed most often to take the form of an email from a Nigerian prince. He’d ask in broken English for a loan that would enable him to unlock his fortune, with your help entitling you to a large share of the princely proceeds. It was a simpler time, the era of the temporarily embarrassed Nigerian royal, and it was, at least in terms of online fraud, a safer one. You skimmed the email, dismissed it, and carried on with your business. You didn’t quite shake the hand of the scammer as you parted ways, but nor (by and large) did you get conned.

Today, though, online fraud is resurgent. The princes have been deposed. Modern scams include the parcel you need to pay a delivery fee for; the job offer sent via WhatsApp; the “Hi Mum” sent from an unfamiliar number.

All in all, human-initiated fraud attacks rose 92 per cent last year, an analysis found. Email-based fraud has largely had its day, explains security awareness advocate Javvad Malik, because it has been hampered by spam filters and improved public awareness of email scams. Meanwhile, we have come to use instant messaging near-constantly, making our SMS and WhatsApp inboxes the destination of choice for modern scammers.

Hence the inundation. “Even if you’re using it for work purposes,” says Malik, who represents the security awareness company KnowBe4, “your phone still feels like a personal thing. So people are more likely to respond [to fraudulent messages], especially in the context of multitasking.”

This means that scammers catch people when they’ve just woken up, when they’re in a rush, when their mind’s on other things – times, in short, when we’re not as vigilant as we ought to be. Those suspicious messages you receive – those in which scammers pretend that they’re holding a parcel for you, or that they’re offering you a job, or that they’re the taxman, or eFlow, or a potential romantic partner, or your child, stranded abroad and short of a few grand – are all attempts to catch you off-guard. Naturally, it’s always urgent; the requests flood you with panic, giving you little chance to think things over carefully.

And as far as the scammers are concerned, the wider the net they cast, the better their chances of success. “On my phone, I use the ‘Report’ feature a lot,” says Malik, referring to the reporting option on WhatsApp, “so that blocks it. But I will get these bursts where sometimes I might even get two or three a day, for a few weeks at a time.”

The messages are likely to come from call centres run by criminals in the developing world. These centres seem to be increasingly active, but it’s not just the frequency of these scam attempts that has changed; it’s their emotional sophistication. “That’s a result of the technologies becoming more mature,” says Malik. “To directly bypass security controls is really difficult. So criminals have spent a lot more time focusing on how they bypass the human.”

In the old days, says Richard de Vere, that was as simple as sending an email saying, “Hi, I work in the tech department. Can I have your password please?”

That is a method that has often worked for de Vere, who plays the role of scammer in his work at the business IT company Ultima, which tests companies’ security with methods used by fraudsters. These days, employees don’t fall for the “Can I have your password?” trick. But they still fall for more subtle cons. “A classic one that almost never fails is: ‘Hey, I’m just wondering what company device you have. We’re doing some upgrades and we have some iPhone 15s available.’”

Because iPhones are widely desired, “it’s a great way of getting under the skin of people”. And it’s illustrative of de Vere’s theory of effective scamming. He shows me a slide of Maslow’s hierarchy of needs, a pyramid with our most basic needs – food, shelter – at the bottom, and our most profound – morality, creativity, confidence, respect for others – at the top. Scammers might not be aware of Maslow’s work, says de Vere, but their most effective work enacts it. People on lower incomes might be more likely to be targeted with claims that, for example, a direct debit isn’t going to be met. Those whose most basic material needs are met might be more likely to fall for scams that fit into the “Love and Belonging” tier of the Maslow pyramid. These are scams of the “Hi Mum” variety that swindled an elderly woman in the UK out of almost £3,500 when she fell for a four-day impersonation of her daughter conducted via WhatsApp. Bank of Ireland has also been warning Irish consumers about a huge rise in this type of scam text here this year.

As for society’s very wealthiest, look to the top of the pyramid. “If you think Jeff Bezos is getting an email about his kid’s last £100,” says de Vere, “forget it. Jeff Bezos is going to be targeted with something like, ‘Be an ambassador for our charity.’”

The broad-brush psychological savviness of digital fraud is being supplemented by technological progress. Imagine the horror you’d feel if you received a call in which your daughter, screaming, told you she was being kidnapped. This is the reality of a particularly pernicious new scam, which uses artificial intelligence to mimic real voices.

Hearing a loved one in distress, says Sabrina Gross, triggers an immediate reaction. “You won’t think. You will spring into action and try to save them.”

Gross, who spent 15 years investigating fraud with European law enforcement agencies, and who now works at the biometrics firm Veridas, says that we need to wise up to these new methods of scamming. Just as we now check email addresses, and treat package-related messages with scepticism, we should, in cases where we receive a panic-inducing call, take a deep breath. We should call the person back; we should call the police before we hand over money or personal details. The task of not being scammed requires constant scepticism. Sophie recalls another attempted scam: one in which someone impersonated her CEO on WhatsApp, to ask her to do an odd job. “It seemed strange,” says Sophie, “but I was eager to make a good impression in my new job and keen to make my new bosses happy. I showed colleagues and they agreed it would not be a great look to ignore it.”

Only when the criminal explained that the odd job was buying “eBay or Apple cards” did Sophie realise it was an attempted scam. “It was another creepy example of a scammer trying to create panic to exploit a moment of vulnerability.”

This increasing inventiveness might necessitate other precautions. “As an individual,” says Gross, “of course I can’t have a voice biometric system” – a system that conducts a computational analysis of voices in order to check that they’re genuine – “installed in my day-to-day-life, or at least, not yet. But I can have a safe word, and we can establish some ground rules of how we can safeguard ourselves, especially if there are kids involved.”

So you might agree with your family that, in times of genuine peril, you confirm your identity by using a pre-agreed safe word. (“Avocado,” suggests Gross.) And you tell your children that if they receive a horrifying, pressing call from you, they hang up and call you back. In terms of email, says Gross, “all of us have been through this learning curve. And now we need to learn that this” – voice mimicry – “is happening, as well.”

Nick France, of the digital certification firm Sectigo, recounts the use of this technique in a scam in which an energy firm executive, convinced he had just been instructed to do so by his boss, sent £200,000 to what turned out to be a scammer. “In the next five years,” says France, “AI is only going to increase the sophistication of these kinds of deepfake scams, making the technology more democratised, easier to perform and more accurate to the persona they are mimicking.”

This technology, says Gross, isn’t just “Tom Cruise dancing on TikTok”. It’s a powerful technology that is “progressing constantly, very very rapidly.” We will see an improvement in our ability to label such content as spoofed, and we can hope that businesses “actually provide these tools to people”. Yet we will always be playing catch-up: “We’re constantly chasing after the scam of the month.”

What, then, is the scam du jour this month? Or next month? Hester Abrams, of Stop Scams UK, says that “if it’s in the news, it’s going to be a scam”. These topical scams, like the evergreen ones, combine vulnerability – a threat to something you care about – with urgency.

“It may have been about payday loans,” says Simon Miller, a colleague of Abrams. “It may have been the war in Ukraine, and it is currently about the conflict in Israel and Gaza.” These scams might take the form of a purported charity appeal, like those that sprang up in the Covid pandemic, or even the pretence that a family member is in trouble.

Don’t think that you’re too canny to fall for fraudsters’ swindles. Many of the experts I spoke to confessed to falling for scams. “Everyone’s vulnerable,” says Abrams, “and nobody’s excluded.” Young people are probably better able to spot suspect messages, but they’re more likely to be over-trusting of fraudsters claiming to be vendors on platforms such as Facebook Marketplace.

Some cons, such as those that claim postage fees, are lower-stakes than others, but they are widespread enough to make significant contributions to the overall fraud figures. Research by the University of Portsmouth suggests that in Britain, residents could be losing as much as £8.3bn per year to individually directed fraud: more than £1,000 per person. This money doesn’t go to hard-up Nigerian princes. Sometimes it goes to opportunists, but the schemes generally originate within organised crime. “People are working in call centres,” says Miller, “a lot of them in Pakistan at the moment, in places like Karachi, but also across southeast Asia, in horrific conditions where they are being forced to use all of these different technology platforms to seek to defraud people in the UK. And I think it’s tragic.”

The stolen money, says Miller, funds human trafficking, modern slavery, drug smuggling, and “a whole host of hostile actors”; Malik cites North Korea as one of the entities that benefit from cybercrime.

To fall for a scam, says Miller, causes “internalised shame”; we feel greedy, or stupid. “And actually, that’s not the case. We’ve had an incredibly sophisticated crime perpetrated against us, and we can empower ourselves by reporting it, because the more loopholes and gateways that are closed down, the safer we will be.”

Source: The Irish Independent