
Role of LLMs and Advanced AI in Cybersecurity

Linsey Knerl | Reading time: 7 minutes
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) reported that in 2023, it remediated 14 million known exploited vulnerabilities and blocked over 900 million malicious DNS requests. These attacks targeted some of the nation's most critical infrastructure, including schools, public utilities, and transportation networks.
More recently, government and private entities have been working together to stop further threats using large language models (LLMs) and other artificial intelligence (AI) technology. Understanding how each works can help prepare you for the changes happening in cybersecurity.

Understanding LLMs and Advanced AI

What are LLMs?

Large language models are a form of “generative AI” that can recognize text and produce new text based on patterns learned from past examples. They use a statistical model to estimate the relationships and likelihood of words, then generate text from those probabilities.
Built on machine learning (ML), most LLMs get more precise over time, both as new text is added to the training data and through human feedback. Telling an LLM that an output was accurate, for example, reinforces the model and helps direct future output.
Because LLMs draw on huge collections of language, language examples, and user feedback, they can be used for a wide range of language tasks. Notable examples include OpenAI's GPT-4, but several proprietary models are being used for private and government work today.
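To make the “predict the next word” idea concrete, here's a minimal Python sketch using a toy bigram model. It's an assumption-laden stand-in: real LLMs use neural networks with billions of parameters, not word counts, but the core statistical intuition is the same.

```python
from collections import Counter, defaultdict
import random

# Toy bigram model: a sketch of the statistical idea behind LLMs, not how
# production models are built (those use neural networks, not word counts).
corpus = ("block malicious traffic block malicious requests "
          "flag malicious traffic").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # count how often nxt follows prev

def next_word(prev):
    """Sample the next word in proportion to how often it followed prev."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

print(next_word("malicious"))  # usually "traffic", sometimes "requests"
```

Scale that same idea up to vast text corpora and far richer statistical machinery, and you have the essence of what an LLM does.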

What is advanced AI?

Not surprisingly, advanced AI includes LLMs, and tools like Bard or ChatGPT are often the first things people think of when talking about AI. However, AI goes much further than those applications and offers many advantages to the security field.
One use of advanced AI is detecting threats before attacks happen by scanning and analyzing large volumes of data for suspicious trends. AI also maximizes resources so that threats can be addressed quickly and with fewer negative consequences. AI workflows can also free up experts to work on more complicated problems.
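As a rough illustration of what “scanning large volumes of data for trends” can look like in code, here's a sketch using scikit-learn's IsolationForest. The feature names and numbers are invented for the example, not a production detection pipeline.

```python
# A sketch of AI-style anomaly detection on network telemetry using
# scikit-learn. The features and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-host features: [requests/min, failed logins, MB uploaded]
normal = rng.normal(loc=[120, 1, 5], scale=[20, 1, 2], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspect = np.array([[450, 30, 80]])   # request burst, failed logins, big upload
print(model.predict(suspect))         # -1 means "anomalous", 1 means "normal"
```

The point isn't the specific algorithm; it's that a model trained on normal behavior can flag the outliers for humans to investigate, at a volume no analyst team could review by hand.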

Applications in cybersecurity

LLMs and AI have been integrated into almost every industry, but they are producing some interesting use cases in threat detection and response.
Notable examples include:
  • Using LLMs and data from past password breaches to create stronger passwords. Useful for both consumers and enterprises, this encourages stronger passwords, more frequent updates, and better password hygiene overall.
  • Creating deceptive scenarios to bait attackers into giving up their position or information about future attacks. Instead of using deepfakes to create chaos, these decoys can draw out bad actors before they can do harm.
  • Using AI to develop new software tools and more secure or innovative solutions than past versions. (GitHub's Copilot, for example, helped developers complete tasks up to 55% faster than those who didn't use it.)
  • Patch management, driven by AI insights, can identify, prioritize, and fix vulnerabilities far faster than manual processes. This reduces the time vulnerabilities sit unresolved while attackers can exploit them (a toy prioritization sketch follows this list).
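To show how AI-assisted patch prioritization can boil down to a scoring problem, here's a hypothetical sketch. The fields (cvss, exploited_in_wild, exposed_hosts) and the weights are illustrative assumptions, not a real vulnerability-feed schema.

```python
# Hypothetical patch-prioritization sketch: rank vulnerabilities by a simple
# risk score. Field names and weights are invented for illustration.
vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "exploited_in_wild": True,  "exposed_hosts": 40},
    {"cve": "CVE-2024-0002", "cvss": 6.5, "exploited_in_wild": False, "exposed_hosts": 300},
    {"cve": "CVE-2024-0003", "cvss": 8.1, "exploited_in_wild": True,  "exposed_hosts": 5},
]

def risk_score(v):
    # Weight active exploitation heavily, then severity, then exposure.
    return (2.0 if v["exploited_in_wild"] else 1.0) * v["cvss"] * (1 + v["exposed_hosts"] / 100)

for v in sorted(vulns, key=risk_score, reverse=True):
    print(f'{v["cve"]}: score {risk_score(v):.1f}')
```

A real system would pull these signals from live threat feeds and asset inventories, but the ranking logic, exploitation first, severity second, exposure third, is the part AI insights feed into.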

Enhancing Threat Intelligence and Response

LLMs help refine threat detection in a number of ways. While the technology is still in its early stages, it's already being put to work in these areas:

Conversion of raw data

What used to take analysts hundreds of hours is now part of a day's work for LLMs. The technology can take pieces of data that don't even share a format or naming convention, process them, and convert them into usable, recognizable data for reporting purposes.
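Here's a simplified picture of that conversion. In practice an LLM can map fields it has never seen before; this rule-based sketch (with invented field names) only shows the goal of many formats in, one schema out.

```python
# A rule-based sketch of the "many formats in, one schema out" idea. The
# source field names are invented for illustration.
from datetime import datetime, timezone

raw_alerts = [
    {"src_ip": "10.0.0.5", "ts": "2024-01-05T10:12:00Z", "sev": "HIGH"},
    {"ipAddress": "10.0.0.9", "time": 1704449520, "severity": 3},
]

SEV_MAP = {"HIGH": 3, "MEDIUM": 2, "LOW": 1}

def normalize(alert):
    """Map a vendor-specific alert into one common reporting schema."""
    ip = alert.get("src_ip") or alert.get("ipAddress")
    ts = alert.get("ts") or datetime.fromtimestamp(
        alert["time"], tz=timezone.utc).isoformat()
    sev = alert.get("severity") or SEV_MAP.get(alert.get("sev"), 0)
    return {"ip": ip, "timestamp": ts, "severity": sev}

for a in raw_alerts:
    print(normalize(a))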

Breaking down of data silos

Collecting data hasn't been the challenge; connecting it has. LLMs and AI make it easier to integrate disparate data sources into a larger data ecosystem, stitching together records from all over. This makes threats more likely to be identified, because each piece of evidence is correlated across tools, helping teams collaborate.
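A tiny sketch of that stitching, with hypothetical sources and records: the same IP address misbehaving in two silos is a stronger signal than either alert alone.

```python
# Minimal sketch of correlating indicators across siloed sources.
# The source names and records are hypothetical.
from collections import defaultdict

firewall_logs  = [{"ip": "203.0.113.7", "event": "port_scan"}]
email_gateway  = [{"ip": "203.0.113.7", "event": "phishing_link"}]
endpoint_agent = [{"ip": "198.51.100.2", "event": "malware_blocked"}]

by_ip = defaultdict(list)
for source, records in [("firewall", firewall_logs),
                        ("email", email_gateway),
                        ("endpoint", endpoint_agent)]:
    for r in records:
        by_ip[r["ip"]].append((source, r["event"]))

# An IP seen misbehaving in multiple silos outranks any single alert.
for ip, events in by_ip.items():
    if len(events) > 1:
        print(f"Correlated activity from {ip}: {events}")
```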

Expanding coverage

LLMs can work anywhere text exists, making the most of natural language processing (NLP). That includes message boards, social media platforms, and the dark web. While it's not practical to have agents and analysts monitoring these channels around the clock, AI technology certainly can, and it may even catch things humans would miss.
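Here's a toy version of that always-on monitoring. A real deployment would use an NLP classifier or LLM rather than this simple keyword filter, and the watchlist terms and posts are purely illustrative.

```python
# Toy always-on monitor. A real deployment would use an NLP model or LLM to
# classify posts; this keyword filter only shows the continuous-scanning idea.
WATCHLIST = {"zero-day", "credential dump", "exploit kit"}  # illustrative terms

posts = [
    "selling fresh credential dump, DM for samples",   # hypothetical post
    "anyone tried the new framework release?",
]

for post in posts:
    hits = [term for term in WATCHLIST if term in post.lower()]
    if hits:
        print(f"Flag for analyst review {hits}: {post!r}")
```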

Finding new threats

Finally, one of the more exciting aspects of the technology is the discovery of new threats. Typically, we would have to wait until a new type of attack appeared and then adjust our response to account for it. Now, LLMs and AI can be tipped off by data patterns that, based on previous attacks, seem likely to precede one.
Or, the technology can watch one attack in progress and share that information across systems in real time. The machine learning aspects help it keep up with what's happening now instead of forcing a cybersecurity post-mortem.
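As a simplified illustration, the sketch below watches an event stream for a made-up precursor sequence and raises the alarm the moment it completes, rather than after the damage is done. The event names are assumptions for the example, not a real attack signature.

```python
# Illustrative precursor matcher. The event names and the "known" sequence
# are invented for the example, not a real attack signature.
from collections import deque

PRECURSOR = ("recon_scan", "login_failure", "login_failure", "privilege_change")

window = deque(maxlen=len(PRECURSOR))
stream = ["recon_scan", "login_failure", "login_failure",
          "privilege_change", "data_upload"]

for event in stream:
    window.append(event)
    if tuple(window) == PRECURSOR:
        print("Precursor pattern matched; alert other systems now,"
              " before exfiltration completes")
```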

Challenges and ethical considerations

In a recent CISA strategic report, the agency recognized that AI tools are adept at protecting against traditional and emerging cyber threats. However, it also acknowledged that AI software systems themselves need monitoring, protection, and safeguarding to prevent them from being used in dangerous ways. In short, AI is a perfect example of the phrase “with great power comes great responsibility.”
Here are just a few examples of how this technology can be misused:
  • Leakage of sensitive data from the data stores and data lakes used by AI tools
  • Creation of misinformation, deepfakes, or other false narratives to influence social and political outcomes
  • Shadow AI: unauthorized AI systems left to run without adequate human or regulatory control
  • Adversarial machine learning: the weaponization of machine learning models by malicious individuals to manipulate data, analysis, and outcomes
AI is increasingly under moral and ethical scrutiny as well. While humans will most likely make the major decisions regarding war, commerce, or the education of children, the data that informs those decisions could be heavily influenced by AI algorithms and analysis. Studies have shown AI can carry bias, since the datasets it runs on can contain human bias.
It's also worth considering whether AI could breed complacency in small decisions that add up to larger outcomes. We've seen how generative AI can make mistakes. Without the vetting and validation of datasets and analysis that can only come from humans, AI shouldn't be treated as a source of truth.

Future trends and developments

2023 was definitely the year of Gen-AI, with all eyes on how ChatGPT and other LLMs changed the way we learn and work. These technologies will continue to evolve and bring about new opportunities to protect against cybercriminals.
We may see interesting developments in the cybersecurity job market in the next year or so, with higher demand for these positions. According to the U.S. Bureau of Labor Statistics (BLS), the number of information security analyst jobs is expected to grow 32% over the next decade, much faster than the 3% average projected across all occupations.
And while there's no substitute for educated and experienced humans, LLMs and AI can help fill labor gaps by providing much-needed capacity. The technology is already supplementing human call center agents tasked with helping victims of cybercrime; it can screen cases, assign them to the appropriate professional, and even suggest the best ways to resolve issues.
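As a hypothetical illustration of that screening step, the sketch below routes cases with simple keyword rules. A production system would use an LLM classifier, and the queue names here are invented.

```python
# Hypothetical case-triage sketch: route incoming reports to a queue with
# simple keyword rules. A production system would use an LLM classifier;
# the keywords and queue names are invented.
ROUTES = {
    "ransomware": "incident_response",
    "phishing": "email_security",
    "identity theft": "fraud_team",
}

def assign(case_text: str) -> str:
    text = case_text.lower()
    for keyword, queue in ROUTES.items():
        if keyword in text:
            return queue
    return "general_support"   # fall back to a human dispatcher

print(assign("Caller reports a phishing email asking for bank credentials"))
# -> email_security
```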
Technology will supplement people's expertise so they can focus on the most important work: keeping us safe.
We will also see either the start of legislation around appropriate AI use or clarification of executive orders like the one issued by the Biden administration last fall. Expect private companies to share their concerns about new rules promoting responsible AI development. Whether guardrails come more in the form of explicit rules or as a larger set of industry best practices remains to be seen. However, 2024 is ripe for big changes, as shared in CISA's latest Roadmap for Artificial Intelligence.

Advanced AI in cybersecurity: A summary

The rapid pace of AI development means capabilities that seemed futuristic only recently will likely arrive very soon. Not only are the humans who create AI tools learning how to improve these solutions quickly, but the machines themselves are also gaining knowledge and churning out more informed outcomes.
One takeaway to remember when looking at these advancements is that every good piece of tech can also be used for bad. Some of the very technologies that keep us safe can be exploited to do incredible harm, and cybercriminals count on us to drop our guard at some point.
What’s the solution? As with most industries using AI, the focus should always be on humans. What can they do uniquely? That’s what they should focus on, with AI and LLMs available to fill in gaps and make them more productive. As long as we have talented and diligent humans at the helm of these technologies, we can create more secure systems than ever that improve our lives while mitigating harm.

About the Author

Linsey Knerl is a contributing writer for HP Tech Takes.

Disclosure: Our site may get a share of revenue from the sale of the products featured on this page.