
Generative AI: Making It Easier for Scammers and Thwarting Them at the Same Time

May 17, 2023


Before generative AI was publicly available, running an effective disinformation campaign required substantial resources. This new technology has made it easier to create fake news stories, social media posts and other types of disinformation quickly and at a much lower cost. Generative AI can produce content that is almost indistinguishable from human-created content, making it difficult for people to recognize when they are being exposed to false information. A fabricated story that looks and reads like a legitimate news article can spread quickly and easily through social media, reaching millions of people within just a few hours.

Generative AI has also opened new possibilities for fraud and social engineering scams, allowing scammers to program chatbots that convincingly mimic human interaction. These chatbots can analyze the messages they receive and generate human-like responses free of the tell-tale language and grammar errors associated with older chat scams. AI itself can be an effective way to counter these threats. It can power identity verification and authentication using multimodal biometrics, confirming that users are genuine humans and are who they claim to be. While no technology can truly prevent someone from falling for a generative AI scam and giving away personal information, it can help limit how that information is used.
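To make the idea of multimodal biometric verification more concrete, the sketch below fuses hypothetical face-match, voice-match and liveness scores into a single verification decision. The score names, weights and thresholds are illustrative assumptions for this example, not the API or tuning of any particular product.

```python
# Illustrative sketch: combining two biometric modalities plus a liveness check
# into a verification decision. All scores, weights and thresholds are
# placeholder assumptions, not values from a real biometric system.
from dataclasses import dataclass


@dataclass
class BiometricScores:
    face_match: float   # similarity between live selfie and ID photo, 0.0-1.0
    voice_match: float  # similarity between live speech and enrolled voiceprint, 0.0-1.0
    liveness: float     # confidence the sample came from a live person, 0.0-1.0


def verify_identity(scores: BiometricScores,
                    face_weight: float = 0.5,
                    voice_weight: float = 0.5,
                    liveness_floor: float = 0.8,
                    decision_threshold: float = 0.75) -> bool:
    """Return True only if liveness passes and the fused match score clears the threshold."""
    # Reject outright if the liveness check fails, regardless of match quality.
    if scores.liveness < liveness_floor:
        return False
    # Simple weighted fusion of the two modality scores.
    fused = face_weight * scores.face_match + voice_weight * scores.voice_match
    return fused >= decision_threshold


if __name__ == "__main__":
    sample = BiometricScores(face_match=0.91, voice_match=0.84, liveness=0.95)
    print("verified" if verify_identity(sample) else "rejected")
```

The point of requiring both a liveness floor and a fused score is that a stolen photo or a cloned voice alone should not be enough to pass; real deployments use more sophisticated fusion and anti-spoofing, but the layered-checks principle is the same.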

