Gabi Power 31 January 2023 11:46:37 AEDT 16 MIN READ

ChatGPT: The Good, the Bad, and the Ugly

When it was first released in late November 2022, ChatGPT became an overnight sensation. Within days, social media was flooded with posts and videos highlighting the chatbot’s amazing features, and the hype hasn’t died down yet. In just two months, the application has amassed 75 million unique monthly users, a number that is likely to continue growing.

Put simply, ChatGPT’s capabilities include search, code generation, text summarisation and chatbot functionality. In practice, its uses seem almost limitless: from more complex requests such as creating a family meal plan complete with recipes and a shopping list, writing an essay with references, or even explaining quantum physics, to answering mundane questions such as “is it okay for dogs to eat blueberries?”. (It is, by the way.)

[Screenshot: ChatGPT confirming that blueberries are safe for dogs to eat]

ChatGPT’s launch left people questioning whether this could be the application that finally knocks Google off its throne, with many preferring to have an almost human-like chatbot answer their questions (and any follow-ups) rather than wading through search results looking for the right answer. And although this certainly isn’t the first application of its kind, it is the first with a dataset of this size. ChatGPT is built on OpenAI’s GPT-3.5 series, a refinement of GPT-3, which was trained on 570 gigabytes of text and has 175 billion parameters, making it “able to perform tasks it was not explicitly trained on, like translating sentences from English to French, with few to no training examples”.

The technology is so promising that, in the midst of laying off almost 5% of its workforce, Microsoft announced it was investing $10 billion into OpenAI. With CEO Satya Nadella having referred to AI as “the next major wave of computing” just a week prior, “the new deal is a clear indication of the importance of OpenAI’s technology to the future of Microsoft and its competition with other big tech companies like Google, Meta and Apple.”

Beyond this investment, OpenAI, the artificial intelligence research laboratory behind ChatGPT, has also been quick to monetise the service in an attempt to cover the estimated $100,000 per day it costs to run. For $42 a month, users can access a premium account offering exclusive benefits such as faster response speeds, priority access to new features, and no downtime. At a minimum, a premium account will prevent you from seeing messages like this:

[Screenshot: a ChatGPT downtime notice]

Despite ChatGPT’s overwhelming benefits, in the two months since its launch industry professionals have been quick to point out its flaws. Of particular concern are the potential negative repercussions it may have on the cybersecurity industry.

So, how exactly will the emergence of ChatGPT impact the cybersecurity sector?

The Good: 

By far one of the greatest benefits of ChatGPT from a cybersecurity perspective is its threat detection capabilities. Security teams can use the service to analyse large amounts of data, such as log files and network traffic, to identify unusual behaviour or communication patterns that may indicate a cyberattack. This even extends to spam/scam classification. For example, the email below was taken from a ‘Junk’ inbox. When we asked ChatGPT whether it was a scam or safe, it produced the following answer:  
  

[Screenshot: ChatGPT’s assessment of the junk email, identifying it as a likely scam]

Not only does the response verify that it is indeed likely to be a scam, helping the user to avoid the immediate risk, but it also provides information on what to look out for in the future, thus improving their cyber awareness and reducing the chance of them falling victim to such scams.   
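As a rough illustration (not a recommendation), the same kind of check could be scripted against OpenAI’s API rather than pasted into the chat window. The sketch below assumes the OpenAI Python SDK (v1.x), an API key in the OPENAI_API_KEY environment variable, and an assumed model name (“gpt-3.5-turbo”) that may differ from what ChatGPT itself runs; treat it as a starting point, not a finished detection tool.

```python
# Illustrative sketch only: ask the model for a scam / likely-safe verdict on an
# email, similar to the check shown in the screenshot above, but via OpenAI's API.
# Assumes the OpenAI Python SDK (v1.x) is installed, the OPENAI_API_KEY
# environment variable is set, and that the model name below is available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security analyst. Given the raw text of an email, reply with "
    "'SCAM' or 'LIKELY SAFE' on the first line, then briefly list the "
    "indicators that informed your verdict."
)

def classify_email(email_text: str) -> str:
    """Return the model's scam/safe verdict and its reasoning."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        temperature=0,          # keep the verdict as repeatable as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    junk_mail = (
        "Your parcel is being held at our depot. Pay the outstanding fee of "
        "$2.99 within 24 hours via the link below or the item will be returned."
    )
    print(classify_email(junk_mail))
```

As with the chat interface, the verdict is only one signal. As we’ll see below, the model can return confident but incorrect answers, so it shouldn’t replace a dedicated detection layer.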

To reduce the potential for cybercriminals to abuse the service, ChatGPT has strict content policies that prohibit individuals from using the chatbot to generate malicious content, and it will refuse such requests outright, as shown below:

[Screenshot: ChatGPT refusing a request to generate malicious content]

On top of this, the chatbot can be a valuable tool for businesses to educate non-technical stakeholders about complex security problems and concepts. Because it can generate explanations in a human-like manner, it can “translate” tech jargon into everyday language, helping to bridge the gap between technical and non-technical team members and allowing for better communication and understanding of potential risks.

This is especially useful for companies with a large number of non-technical employees, such as those in the healthcare sector, which has become a common target for cyberattacks over the past year. The clear and concise explanations that ChatGPT produces can help ensure that everyone in the organisation is aware of potential security risks and understands how to mitigate them, which ultimately leads to a more secure business.
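As a similarly rough sketch of how that might be automated, the snippet below asks the model to rewrite a technical advisory as a short, jargon-free notice for staff. It makes the same assumptions as the earlier example (OpenAI Python SDK v1.x, OPENAI_API_KEY set, an assumed model name), and the advisory text is invented for illustration.

```python
# Illustrative sketch: rewrite a technical security advisory in plain language
# for non-technical employees. Same assumptions as the earlier example; the
# advisory text is made up for demonstration purposes.
from openai import OpenAI

client = OpenAI()

def explain_for_staff(advisory: str) -> str:
    """Return a short, jargon-free staff notice based on a technical advisory."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following security advisory for non-technical "
                    "employees: avoid jargon, keep it under 120 words, and end "
                    "with one clear action they should take."
                ),
            },
            {"role": "user", "content": advisory},
        ],
    )
    return response.choices[0].message.content

print(explain_for_staff(
    "A critical remote code execution vulnerability affects the corporate VPN "
    "gateway. All users must update the client to version 9.1.2 and rotate "
    "their credentials immediately."
))
```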

If it all sounds too good to be true, that’s because it is (kind of).  

The Bad: 

Some of the loudest critics of the app so far have been schools and educators, concerned about the negative impact ChatGPT could have on children’s learning. At the start of January, New York City’s education department banned the use of ChatGPT on all school devices and networks amid concerns that it could be used to cheat on assessments or plagiarise work. Jenna Lyle, a department spokesperson, stated, “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success”. Seattle Public Schools quickly followed suit, banning and blocking the app and citing its requirement for “original thought and work from students”. Many other school systems, including those in a number of Australian states, have since done the same.

The concerns about plagiarism are not unfounded, with ChatGPT itself confirming that it’s a real possibility: 

 

[Screenshot: ChatGPT acknowledging that its output could be used to plagiarise]

In response to these apprehensions, OpenAI CEO Sam Altman has said the company will develop ways to help schools detect AI plagiarism, though he warned that full detection isn’t guaranteed.

Questions have also been raised about the accuracy of the app’s output. Because ChatGPT is a machine learning model, its performance is highly dependent on the diversity and quality of the data it’s trained on. For day-to-day personal use, the occasional inaccuracy may not matter much, but in cybersecurity these inaccuracies can be problematic. For example, if a business relies on the software for incident detection, it may receive false positives or false negatives, leading to security incidents being wrongly flagged or missed altogether.

The inaccuracies also flow into the code that ChatGPT produces, which can introduce vulnerabilities and open businesses up to attacks. For this reason, the coding question-and-answer website Stack Overflow introduced a temporary policy in December banning posts and answers that include ChatGPT-generated content, stating: “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce…The posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.” Despite OpenAI releasing an update in mid-January to reduce these inaccuracies, Stack Overflow’s ban remains in place.

Further problems lie in the app’s potential for bias. As ChatGPT itself puts it:

“Like any other AI models, ChatGPT can be trained on biased data, which can lead to biased output. This can be particularly problematic in the cybersecurity industry, where decisions based on biased data can have severe consequences. 

“Another concern is that biases can lead to discrimination against certain groups of people or organizations. For example, if a security system is trained on biased data, it may be more likely to flag or block legitimate traffic from certain demographics, such as women or people of colour, while allowing malicious traffic from other groups to pass through undetected.” 

The masterminds behind ChatGPT have made it apparent that they’re aware of the app’s limitations, and it’s encouraging to see the steps they’re taking to actively address these concerns.

The Ugly: 

As the saying goes, every rose has its thorn. ChatGPT is already being used by professionals across a wide range of industries to streamline their work, and cybercriminals (and would-be cybercriminals) are no exception. Because the barrier to entry is so low and the service is currently free, even non-technical individuals can use it to create and deploy malicious content.

Recognising the potential dangers of the service, MailGuard CEO Craig McDonald turned to his LinkedIn network to ask: “What do you think the main cybersecurity risks of using ChatGPT and other similar applications are?”

[Screenshot: results of Craig McDonald’s LinkedIn poll on the main cybersecurity risks of ChatGPT]

  • The majority of voters (62%) nominated ‘Advanced phishing attacks’ as the biggest cybersecurity risk, 
  • One-third (33%) voted for ‘Creation of malicious code’, and 
  • A mere 5% voted for ‘Business email compromise’. 

As mentioned earlier, ChatGPT’s content policy strictly prohibits using the service to generate malicious material, and the chatbot will refuse to output anything it deems a potential violation. However, articles, blogs and forums have quickly sprung up explaining ways to “jailbreak” the policy. For a scammer, this could be as simple as asking it to write an email from DHL requiring payment of a holding fee, rather than outright requesting phishing material.

[Screenshot: ChatGPT drafting a DHL-themed email requesting payment of a holding fee]

Craig’s network was right to be concerned about ChatGPT’s ability to create sophisticated phishing emails. In just seconds, a chatbot can write a grammatically correct, natural-sounding email, eliminating the tell-tale signs that typically give away a phishing attempt, such as spelling errors or unusual phrasing. This makes phishing emails much harder to identify, particularly those crafted by attackers based in countries where English is not the primary language.

As shown in the poll, the use of chatbots in business email compromise (BEC) attacks is also a growing concern. With the use of AI-powered chatbots, attackers can impersonate individuals with a high degree of accuracy and natural-sounding language, making it difficult for employees to detect the scam. Given that chatbots can be trained to mimic the writing style, tone, and even the specific language of a target, the chances of a successful attack are greatly increased.  

Beyond emails, AI-powered chatbots can also be used to produce code and copy for websites that mimic those belonging to legitimate companies. As the capabilities of these chatbots continue to improve, it's likely that we'll see a sharp increase in the number of phishing and BEC attacks, as well as in the number of victims. 

To combat these threats, it’s imperative that businesses invest in robust email security solutions that can detect and block suspicious messages. This is especially important now that it’s no longer feasible to rely on individuals to identify these attacks from grammar and other standard cues. If you’ve been hesitant to add an email security solution to your security stack, we encourage you to take another look.

Another point of concern for cybersecurity professionals is the potential for attackers to use ChatGPT to write malicious code. Although this too goes against the content policy, researchers recently discovered that by using an authoritative tone, they were able to bypass the policy to create polymorphic malware, which is capable of changing its code to evade detection by antivirus programs. “They could create a polymorphic program by continuous creation and mutation of injectors. This program was highly evasive and hard to detect…By using ChatGPT’s capability of generating different persistence techniques, malicious payloads, and anti-VM modules, attackers can develop a vast range of malware.”  

In some circumstances, individuals have been able to evade the chatbot’s restrictions by writing a part of the malicious code themselves, and then having ChatGPT complete it and check for errors.  

Altman himself has acknowledged that AI poses a significant cybersecurity risk. While he hasn’t yet commented on the service being used to produce malware, it’s reasonable to expect that, as ChatGPT’s capabilities continue to evolve and more updates are released, it will become better at detecting and rejecting requests for malicious code.

We'd love to know your thoughts. Do you think the benefits of ChatGPT and other similar chatbots outweigh the potential negatives?  

Fortify your defences

No one vendor can stop all threats, so don’t leave your business exposed. If you are using Microsoft 365 or G Suite, you should also have third-party solutions in place to mitigate your risk; for example, a specialist cloud email security solution like MailGuard to enhance your Microsoft 365 security stack.

For more information about how MailGuard can help defend your inboxes, reach out to our team at expert@mailguard.com.au.      
