If you’re plugged into the blossoming world of artificial intelligence, you’ve probably heard of (or chatted with) ChatGPT. And if you’re curious how AI feels about AI when it comes to cybersecurity, look no further.

Following a handful of barbs regarding Asimov’s Three Laws of Robotics, I asked ChatGPT if it could “write me an article about the role of AI in cybersecurity.” Here is what it said:

The Role of Artificial Intelligence in Cybersecurity

Artificial intelligence (AI) has the potential to revolutionize the field of cybersecurity, and it is already being used in a variety of ways to improve the security and integrity of computer systems and networks. Some of the ways in which AI is being used to enhance cybersecurity include:

Threat detection and prevention: AI can be used to analyze large amounts of data, including network traffic and user behavior, to identify and alert on potential threats. By using machine learning algorithms and other techniques, AI can help to identify patterns and anomalies that may indicate the presence of a security threat, and it can take automated actions to prevent or mitigate these threats.

Vulnerability management: AI can be used to identify and prioritize vulnerabilities in a network, and to suggest patches or other remediation measures to address them. By continuously monitoring networks and systems for vulnerabilities, AI can help organizations to stay ahead of potential security threats and to ensure that their systems are secure and compliant.

Password management: AI can be used to generate strong passwords and to detect and alert on suspicious login activity. By using advanced algorithms and machine learning techniques, AI can help to ensure that passwords are secure and that unauthorized access to systems is detected and prevented.

Phishing and spam detection: AI can be used to analyze emails and other communications for signs of phishing or spam, and to flag or block these messages. By using natural language processing and other techniques, AI can help to identify and filter out malicious communications, and to protect users from falling victim to these scams.

In addition to these specific applications, AI can also be used to improve the efficiency and effectiveness of cybersecurity operations more broadly. By automating tasks and providing actionable insights and recommendations to security professionals, AI can help to streamline and optimize security processes, and to free up human analysts to focus on more complex tasks.

While AI has the potential to significantly improve cybersecurity, it is important to carefully consider the risks and limitations of these technologies. There is a potential for AI to be misused or to introduce new security vulnerabilities, and it is important to ensure that these technologies are used in a responsible and ethical manner.

Overall, the role of AI in cybersecurity is likely to continue to evolve and expand in the coming years, and it will be an important tool in the fight against cyber threats. By leveraging the power of AI, organizations can improve the security of their systems and networks, and better protect themselves and their customers from cyberattacks.

###
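The bot’s bullets are generic, but the “patterns and anomalies” language in its threat-detection point does map onto real tooling. As a rough illustration (entirely mine, not the bot’s, and assuming scikit-learn with made-up network-flow features), an unsupervised anomaly detector over traffic records might look something like this:

# Minimal sketch of ML-based anomaly detection on network flows.
# The feature set, sample values, and parameters are illustrative assumptions,
# not drawn from any specific product or from ChatGPT's article.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one flow: [bytes_sent, bytes_received, duration_sec, dest_port]
baseline_flows = np.array([
    [1200, 3400, 0.8, 443],
    [900, 2100, 0.5, 443],
    [1500, 4000, 1.1, 80],
    [1100, 2800, 0.7, 443],
])

# Fit on traffic collected during a known-quiet period.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline_flows)

# Score new flows; a prediction of -1 means the model flags the flow as anomalous.
new_flows = np.array([
    [1300, 3100, 0.9, 443],     # resembles the baseline
    [98000, 200, 45.0, 4444],   # large upload to an unusual port
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    print("ANOMALY" if label == -1 else "ok", flow.tolist())

In practice the interesting work is in choosing the features and establishing the baseline, not the model call itself, which is roughly what the bot’s “large amounts of data” phrasing glosses over.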

Obviously, as an editor by trade, I prodded the bot about its pro-AI bias, for which it apologized profusely and stated that it didn’t intend to offend.

Apart from this discussion, it was able to offer some insights on Zero Trust in government, but its primary shortfall is that it’s unable to draw on any knowledge of current events, as its training cutoff is in 2021. Thus, any discussion of recent cyberattacks or up-and-coming methods is less than ideal. Per the bot, it draws on data from published sources and uses algorithms and other data to offer information in a conversational tone. It’s quick to disclaim any possible opinions, and it won’t just do whatever the user inputs. For example, I asked the bot to write about the Colonial Pipeline attack in the style of a children’s story, to which it chided me, saying it would be inappropriate and irreverent to discuss a serious topic in such a manner.

As a matter of cybersecurity, ChatGPT certainly represents a significant threat in terms of the ability to use AI for rampant disinformation, an issue that was widely reported toward the end of 2022. In an article on eSecurityPlanet, Robert Blumofe, CTO and EVP at Akamai Technologies, is quoted on the threat this technology poses in phishing.

“The technology will enable attackers to efficiently combine the volume of generic phishing with the high yield of spear phishing,” he said in the article.  “On the one hand, generic phishing works at a massive scale, sending out millions of lures in the form of emails, text messages, and social media postings. But these lures are generic and easy to spot, resulting in low yield. On the other hand and at the other extreme, spear phishing uses social engineering to create highly targeted and customized lures with much higher yield. But spear phishing requires a lot of manual work and therefore operates at low scale. Now, with ChatGPT generating lures, attackers have the best of both worlds.”
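The defensive flip side is the natural-language-processing filtering the bot’s own article mentioned under phishing and spam detection. As a minimal sketch (again my illustration, assuming scikit-learn and a tiny, fabricated set of labeled emails), a bag-of-words classifier of the kind such filters build on might look like this:

# Toy phishing classifier: bag-of-words features plus Naive Bayes.
# The training emails and labels below are fabricated for illustration only;
# a real filter needs far larger and more representative training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been locked, click here to verify your password now",
    "Urgent: confirm your payment details to avoid suspension",
    "Lunch meeting moved to 1pm, see the updated calendar invite",
    "Here are the slides from yesterday's planning session",
]
labels = ["phish", "phish", "ham", "ham"]

# Vectorize the text and train the classifier in one pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new message; on this toy data it comes back as 'phish'.
print(model.predict(["Please verify your password immediately or your account will be closed"]))

Whether filters like this can keep up once attackers use language models to generate lures that read like the “ham” examples above is exactly the concern Blumofe raises.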

What may be most interesting from a security perspective is that the developers are now concerned about those who may use the AI for nefarious purposes. For example, when I asked the bot to generate a password reset email for a Twitter user, it cheerfully complied, albeit with a content warning.

While there’s currently no telling where this technology might take cybersecurity, it’s clear that the developers are beginning to take the potential for abuse seriously.

For anyone interested in delving deeper, the bot is available here. Share your chat highlights in the comments.
