ChatGPT has captured people’s attention, changing their workflows and altering how they get information online. Even those who haven’t tried it are curious about how artificial intelligence (AI) chatbots will impact the future.
Cybercriminals have explored how they could capitalize on the phenomenon. FraudGPT is one recent example of that.
FraudGPT is a product sold on the dark web and Telegram that works similarly to ChatGPT but creates content to facilitate cyberattacks. Members of the threat research team at Netenrich first identified it being advertised in July 2023. A major selling point was that FraudGPT lacks the built-in controls and limitations that prevent ChatGPT from fulfilling inappropriate requests.
Further details indicate the tool is updated every one to two weeks and uses different AI models under the hood. FraudGPT is sold on a subscription basis: people can pay $200 to use it monthly or $1,700 for a year.
The Netenrich team purchased and tested FraudGPT. The interface looks similar to ChatGPT, with a record of the user’s previous requests in the left sidebar and the chat window taking up most of the screen. People just need to type in the Ask a Question window and press Enter to generate the response.
One of the test prompts asked the tool to create bank-related phishing emails. Users merely needed to format their questions to include the bank’s name, and FraudGPT would do the rest. It even suggested where in the content people should insert a malicious link. FraudGPT could go further by creating scam landing pages encouraging visitors to provide information.
Other prompts asked FraudGPT to list the most targeted or used sites or services. That information could help hackers plan future attacks. A dark web advertisement for the product mentioned that it could create malicious code, build undetectable malware, find vulnerabilities, identify targets, and more.
The Netenrich team also identified FraudGPT’s seller as someone previously offering hacker-for-hire services. Additionally, they linked the same individual to a similar tool called WormGPT.
Researchers from SlashNext said the tool’s algorithms were trained on large quantities of malware data. That team said WormGPT had particular potential for crafting realistic phishing and business email compromise messages.
Evidence of these tools proves that cybercriminals keep evolving to make their attacks increasingly successful. What can tech professionals and enthusiasts do to stay safe?
The investigation into FraudGPT highlighted the need to stay vigilant. These tools are new, so it’s too soon to say when hackers might use them to create never-before-seen threats — or if they already have. However, FraudGPT and other products used for malicious purposes could help hackers save time. They could write phishing emails in seconds or develop entire landing pages almost as quickly.
That means people must continue following cybersecurity best practices, including always being suspicious of requests for personal information. People in cybersecurity roles should update their threat-detection tools and know that bad actors may use tools like FraudGPT to directly target and infiltrate online infrastructures.
More workers are using ChatGPT in their jobs, but that’s not necessarily good for cybersecurity. Employees could unintentionally compromise confidential company information by pasting it into ChatGPT. Companies, including Apple and Samsung, have already restricted how workers can use the tool in their roles.
One study found that 72% of small businesses close within two years of losing data. Many people think of data loss only in terms of criminal activity. However, forward-thinking individuals also recognize the risk of pasting confidential or proprietary data into ChatGPT.
Those fears are not unfounded. A March 2023 ChatGPT bug leaked the payment details of paid subscribers who had used the tool during a nine-hour window. Moreover, future versions of ChatGPT are trained on information entered by previous users. It’s easy to imagine the issues if confidential details became part of the data that teaches the algorithms to work. Users can opt out of having their prompts used for training, but that’s not the default setting.
Problems could also occur if workers assume that whatever information they get from ChatGPT is correct. People using the tool for programming and coding tasks have warned that it provides inaccurate responses that less-experienced professionals may treat as factual.
An August 2023 research paper from Purdue University confirmed that assertion by testing ChatGPT on programming questions. The startling conclusion: the tool gave incorrect answers in 52% of cases and verbose responses 77% of the time. If ChatGPT also gets cybersecurity-related prompts wrong, it could pose challenges for IT teams trying to teach staff members how to prevent hacks.
A critical thing to realize is that hackers can still do extraordinary damage without paying for products like FraudGPT. Cybersecurity researchers have already pointed out that the free version of ChatGPT allows them to do many of the same things. That tool’s built-in safeguards may make it harder to get the desired results immediately. However, criminals know how to be creative, which may include manipulating ChatGPT to make it work how they want.
AI could ultimately widen cybercriminals’ reach and help them orchestrate attacks faster. Conversely, many cybersecurity professionals use AI to increase threat awareness and speed remediation. Thus, the technology can both strengthen and erode protective measures. It’s no surprise that a June 2023 survey showed 81% of respondents had safety and security concerns about ChatGPT.
Another possibility is that people could download what they believe is the genuine ChatGPT app and receive malware instead. It didn’t take long for numerous applications resembling the tool to appear in app stores. Some only worked similarly and did not appear to purposefully trick users. However, others had sound-alike names — such as “Chat GBT” — that could easily fool unsuspecting individuals.
Hackers commonly embed malware in seemingly legitimate apps. People should expect them to take advantage of ChatGPT’s popularity that way, too.
The research into FraudGPT is a memorable reminder of how cybercriminals will keep changing their techniques for maximum impact. However, freely available tools pose cybersecurity risks, too. Anyone using the internet or working to secure online infrastructures must stay abreast of newer technologies and their risks. The key is to use tools like ChatGPT responsibly while remaining aware of the potential harm.
This article was originally published by Zac Amos on Hackernoon.