Clickbait written by AI proves effective, but is mostly unregulated, according to researchers from Singapore’s Government Technology Agency. (Source: Storyblocks)

AI Uses Natural Language Processing for Successful Spearphishing Tests

If you spend any time on Facebook, you have seen plenty of phishing posts: the ones that ask for your favorite teacher’s name or your first car. You might be tempted to answer these silly little questions. Don’t.

These posts are planted by people harvesting small details about your life, details that often double as answers to security questions, so they can exploit your data. In extreme cases, they may even buy a home in your name without putting you on the title. But there is an even larger, more intricate effort to use your data. In security parlance it is called “spearphishing,” and it means finding and using your personal data to make big money or even drive big societal changes.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor-intensive to compose, though. That’s where NLP may come in surprisingly handy, according to a story on Wired.com.

“At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails they crafted themselves and others generated by an AI-as-a-service platform to 200 of their colleagues. Both messages contained links that were not actually malicious but simply reported back clickthrough rates to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than the human-written ones—by a significant margin.”

Lily Hay Newman wrote the Wired story, which describes NLP’s recent contest with human writers, one the AI won handily. The Singapore team that presented its results at the security conferences suggested that this will become a major problem.

“Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model,” says Eugene Lim, a Government Technology Agency cybersecurity specialist. “But once you put it on AI-as-a-service it costs a couple of cents and it’s really easy to use—just text in, text out. You don’t even have to run code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly every single email on a mass scale can be personalized for each recipient.”

Let’s not kid ourselves. Similar techniques have been used in recent political campaigns and have proven very successful at moving groups of voters. Personalizing messages, advertisements, or political policies to an individual makes that person feel important, part of an “insider” group.

The recent experiment by Singapore’s Government Technology Agency showed how frighteningly effective NLP can be when put to nefarious purposes.

“The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to be able to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.”
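The pipeline described above, target profile in, refined text out, chained through several AI-as-a-service products, can be sketched in a few lines. This is a hypothetical illustration only: the stage functions below are stand-in stubs, not the actual services, models, or APIs the researchers used.

```python
# A minimal sketch of a "text in, text out" pipeline: a target profile
# flows through several stages, each refining the draft before it is sent.
# The stage functions here are hypothetical stand-ins for illustration.

from typing import Callable, List

Stage = Callable[[str], str]


def run_pipeline(profile: str, stages: List[Stage]) -> str:
    """Feed the input through each stage in order, passing text along."""
    text = profile
    for stage in stages:
        text = stage(text)
    return text


# Hypothetical stages (assumptions, not the researchers' actual tooling):
def draft_email(profile: str) -> str:
    # In the experiment, a large language model produced this step.
    return f"Dear colleague, regarding {profile}: please review the attached link."


def personalize_tone(draft: str) -> str:
    # A personality-analysis service might adjust wording to fit the target.
    return draft.replace("Dear colleague", "Hi there")


if __name__ == "__main__":
    email = run_pipeline("the quarterly compliance review", [draft_email, personalize_tone])
    print(email)
```

The point of the structure, as the quote notes, is that each service only needs text in and text out, so stages from different vendors can be chained with almost no engineering effort.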

What works in Singapore will work everywhere, and it could deeply affect the way you live your life, no matter where you call home.

read more at wired.com