Cybersecurity and AI: The Evolving Security Landscape

In an Era of Rapid Change, We Must Find Ways to Systematically Apply Security Best Practices

Cybersecurity and cyberattacks cost hundreds of billions of dollars annually.1 Rapid progress in AI will dramatically increase the stakes.

In the worst case, AI may greatly reduce the effort required to unleash a devastating attack on critical infrastructure systems. New AI models are already being used in the wild to increase the scope and reduce the cost of attacks,2 a trend which will only accelerate.3

So far, attacks on critical infrastructure have been limited, largely due to restraint on the part of state-level actors. However, AI may soon enable unaccountable non-state actors to carry out large-scale, sophisticated attacks.

AI will also enable better cyberdefense. However, due to structural issues in the way systems are developed, deployed, and managed, we frequently fail to take advantage of existing best practices, let alone the rapidly evolving capabilities that AI will provide. To navigate the coming wave, the technology and regulatory sectors must coordinate to address these issues. In the best case, AI can serve as both an enabling technology to improve defenses, and a wake-up call to address long-standing deficiencies in our approach to security.

The Cyberattack Overhang over Critical Infrastructure

It is hardly necessary to review the scale of the cybersecurity challenge. While many attacks go unreported, there is no lack of well-known incidents, from the Chinese breach of detailed personnel records for millions of Americans working in sensitive positions,4 to the NotPetya attack (with damages estimated at over ten billion dollars5), to the Equifax data breach of records covering over 160 million Americans and British citizens.6 A recent attack on the US’ largest health care payment system is currently imperiling the finances of medical practices across the country.7

However, this is potentially just the tip of the iceberg. A vast swath of critical infrastructure – the electrical grid, communications systems, water treatment facilities, air traffic control, port facilities,8 military systems, and much more – relies on vulnerable systems. In 2021, a ransomware attack resulted in a major East Coast oil pipeline being shut down for several days,9 causing panic buying and long lines at gas stations in multiple states. In December 2022, Southwest Airlines’ crew scheduling system collapsed for several days,10 resulting in over 15,000 flight cancellations. Numerous government reports detail the extent of vulnerabilities in critical infrastructure systems.11 As the world becomes increasingly dependent on software systems (including increasing use of AI), a worst-case cyberattack could have severe consequences.

Disconcertingly, our lack of routine infrastructure failures seems to stem more from the reticence of potential attackers than from the inherent security of our systems. In other words, we are living under a "cyberattack overhang".

Indeed, the US NSA and other agencies have reported on “Volt Typhoon”, an extensive Chinese effort to “preposition themselves on IT networks for disruptive or destructive cyberattacks against U.S. critical infrastructure”.12 Targets included ports, energy and water controls,13 often near military bases. Increasingly capable AI will expand the potential scope of such attacks and empower non-state actors who might be less deterred by the consequences of an infrastructure attack. Given the difficulty of identifying the ultimate perpetrator of a cyberattack, the results could be destabilizing.

AI Impacts on Cyber Offense and Defense

AI has many foreseeable applications to cybersecurity. Some capabilities will primarily help attackers, while others will help defenders.

On the offensive side, AI will soon plausibly be able to automate the entire attack chain,14 from intelligence gathering (analyzing public information to identify the software used by a target), to invoking a known exploit against the target software, to analyzing data in the target system to determine next steps.15 Google projects that within the year, AI will also be used to scale “social engineering” attacks16 (tricking personnel into revealing information or otherwise assisting an attacker); given the facility of large language models at crafting persuasive prose, this could have a large impact.

Conversely, AI should be able to automate many aspects of defense, such as detecting and correcting unpatched or misconfigured software. AI techniques are already used for “anomaly detection” – identifying unusual behavior that might indicate an attack.17 Coding assistants may eventually be able to automatically detect more bugs, as well as help “harden” software by rewriting it to use more secure languages, libraries, and techniques, potentially including mathematical proofs of correctness.
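To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest, trained on examples of normal activity and asked to flag outliers. The event features, data, and threshold are illustrative assumptions, not a production design.

```python
# A minimal sketch of ML-based anomaly detection on access events.
# Assumed illustrative features per event: hour of day, megabytes
# transferred, and number of distinct hosts contacted.
import numpy as np
from sklearn.ensemble import IsolationForest

normal_events = np.array([
    [9, 12.0, 3], [10, 8.5, 2], [14, 20.1, 4], [11, 9.9, 2],
    [16, 15.3, 3], [13, 11.2, 3], [15, 18.7, 4], [10, 7.4, 2],
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# An event at 3 AM moving far more data to many more hosts should be
# flagged: predict() returns -1 for anomalies, 1 for normal points.
print(detector.predict(np.array([[3, 900.0, 40]])))  # expect [-1]
print(detector.predict(normal_events[:1]))           # expect [1]
```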

In principle, progress might, on balance, favor defense. A system designed and operated by an ideal defender would have no vulnerabilities, leaving even an ideal attacker unable to break in.18 Also, AI works best when given large amounts of data to work with, and defenders generally have access to more data.19 However, absent substantial changes to cyber practices, we are likely to see many dramatic AI-enabled incidents.

The primary concern is that advances in defensive techniques are of no help if defenders are not keeping up to date. Despite decades of effort, it is well known that important systems are often misconfigured and/or running out-of-date software.20 For instance, a sensitive application operated by credit report provider Equifax was found in 2017 to be accessible to anyone on the Internet, simply by typing “admin” into the login and password fields.21 A recent report from CISA (the US Cybersecurity and Infrastructure Security Agency) notes that the agency often needs to resort to subpoenas merely to identify the owners of vulnerable infrastructure systems, and that most issues it detects are not remediated in the same year.

As AI enables increasingly sophisticated, large-scale, fast-moving attacks, defenders will need to move faster than ever to keep up. However, millions of individuals are in some fashion responsible for the security of one digital system or another, and experience shows that we cannot rely on all of them to consistently follow best practices, especially given the many practical difficulties that such practices often entail.22 To prevent AI from enabling a tidal wave of cyberattacks, and in particular to have any practical hope of securing our many critical infrastructure systems against increasingly capable attackers, we must find ways to shift the playing field.

Mitigation

As cyberattacks become even more prevalent and sophisticated, it will be necessary to apply defenses in a more systematic fashion. In this section, we briefly present some potential approaches, with an emphasis on approaches that can leverage AI, asymmetrically benefit defenders, and reduce the burden on individual system operators.

Stronger Foundations

The best remedy for a security flaw is to prevent the flaw from existing in the first place. Modern coding practices23 can help reduce the number of exploitable software bugs, but these practices may involve additional effort and/or require rewriting older software. AI-based tools can assist with this work.24
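As one small example of such a practice, parameterized queries eliminate an entire class of exploitable bugs (SQL injection) by construction. The sketch below uses Python's built-in sqlite3 module; the table and inputs are purely illustrative.

```python
# Contrasting an injection-prone query with a parameterized one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe pattern: attacker-controlled input is spliced into the query text.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # [('admin',)] -- data leaks

# Safe pattern: the driver keeps the value separate from the query text.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no match
```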

Many security breaches owe as much to user error as to software bugs. New technologies such as passkeys eliminate the possibility of users employing weak passwords or being tricked (“phished”) into revealing their password to an attacker.
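To illustrate why passkeys are phishing-resistant, here is a heavily simplified sketch of the underlying challenge-response idea, using the Python cryptography package. It omits most of the real WebAuthn protocol (origin binding, attestation, signature counters); all names are hypothetical.

```python
# The server stores only a public key, so there is no password to steal.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the user's device generates a key pair and registers
# only the public key with the server.
device_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it (after a local unlock, e.g. biometrics)...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature. No reusable secret is
# ever typed in or transmitted, so there is nothing to phish.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```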

Systematic Defense

Rather than relying entirely on overburdened system operators to individually maintain the highest standards of security, we should look for opportunities to supplement security with systematic approaches.

Attackers perform “vulnerability scans” to locate misconfigured or out-of-date servers, or sensitive information that has accidentally been placed in public view. If an attacker can find a vulnerability, a defender should be able to find it first.25 In particular, we should enable “good guys” to systematically scan the Internet to identify and remediate vulnerabilities. Realizing this in practice will require addressing a number of practical, organizational, and legal challenges,26 but companies such as Google are already performing similar activities as a public service.27 In a related practice, “dark web monitoring” systematically watches for leaked data appearing on the dark web.28
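As a rough sketch of what a defensive scan can look like in its simplest form, the following uses only the Python standard library to read a server's version banner and check it against a list of versions with known vulnerabilities. The host name and version list are hypothetical, and such scans should only be run against systems one is authorized to test.

```python
# A minimal banner-grabbing vulnerability check (illustrative only).
import socket

KNOWN_VULNERABLE = {"SSH-2.0-OpenSSH_7.2"}  # placeholder entry

def grab_banner(host: str, port: int = 22, timeout: float = 3.0) -> str:
    """Connect to a server and read the version banner it announces."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.recv(128).decode(errors="replace").strip()

def check(host: str) -> None:
    banner = grab_banner(host)
    status = "VULNERABLE" if banner in KNOWN_VULNERABLE else "ok"
    print(f"{host}: {banner!r} -> {status}")

check("server.example.internal")  # hypothetical in-scope host
```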

Because it is difficult to maintain systems in a state of perfect security, operators often use firewalls (which limit access to potentially vulnerable systems), signature scanning (watching for known malicious software,29 as well as known vulnerabilities) and anomaly detection software (which looks for unusual access patterns that might indicate a security breach).30 Cloud computing platforms, software-as-a-service providers, and networking equipment could provide more such functionality. This would facilitate additional, constantly-updated, professionally managed security by default.31 The introduction of firewalls into PCs and Internet service providers (ISPs) is one of the main reasons that Internet worms like Conficker are no longer prevalent.32
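The signature-scanning component mentioned above can be sketched in a few lines: hash each file and compare against a feed of known-malicious hashes. Real products layer on pattern rules and the behavioral analysis discussed in the footnotes; the hash entry and scan path below are placeholders.

```python
# A minimal sketch of hash-based signature scanning.
import hashlib
from pathlib import Path

# Placeholder signature database; real scanners subscribe to feeds
# of known-malicious file hashes.
KNOWN_BAD_SHA256 = {"f" * 64}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> None:
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"ALERT: known-malicious file at {path}")

scan("/tmp/downloads")  # hypothetical scan target
```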

Facilitating Security Patches

In a world where the ability of attackers to identify and exploit ever-more-subtle bugs is continually advancing, it is critical that security patches be applied in a timely manner.33 Unfortunately, as mentioned earlier, this is not always possible. Even if the bug fix itself is small, an entirely new version of the software must be created, tested, and installed – a significant burden for both the software provider and the user, and possibly subject to regulatory hurdles.

We must seek out new approaches to software design and distribution that facilitate the application of security patches,34 and regulations must be updated to streamline such updates. Solutions must encompass scenarios where a system manufacturer (or one of their suppliers) has gone out of business.

Safety Culture

Cybersecurity suffers from a lack of transparency. It is difficult to tell which organizations follow good practices; security lapses are often not reported, unless they lead to an outright breach which impacts customers. This reduces the impetus to prioritize cybersecurity, especially because the impact of a breach often falls heavily on third parties. (For instance, when a company fails to secure customer information, it is the customers who are vulnerable to identity theft.)

Contrast this with the airline industry, where a strict safety culture – including stringent reporting requirements and blameless investigation practices that cover near-misses and procedural failures as well as full-blown accidents – has yielded a remarkable level of safety despite the inherent complexity of air travel.

Cybersecurity can be enhanced through strict requirements for reporting security lapses, along with whistleblower provisions to ensure compliance. As in the airline industry, the focus should be on learning from breaches and near-misses, rather than assigning blame.

Other forms of transparency can create social pressure to adhere to good security practices, including the other measures described here. For instance, rating agencies could be established to evaluate software providers on timeliness of security updates, and cloud operators according to the quality of their internal security and degree of assistance with customer security. Whenever possible, ratings should be based on outcome metrics, rather than adherence to specific practices that might not always correlate with actual security.

Responsible Release of Dangerous Capabilities

No matter how much we manage to strengthen defenses, we should still attempt to minimize the potential for increasingly advanced AI to assist attackers.

Many capabilities are “dual-use”, i.e. of value to both attackers and defenders. The release of new tools that advance such capabilities should follow responsible disclosure policies, allowing time for vulnerabilities to be patched before attackers gain access. Research into automatic patching of vulnerabilities is also called for.

General-purpose AIs should be designed to refuse requests to assist in cyberattacks.35 However, it is difficult to impose such restrictions in a robust manner, especially for open-source models, which is why responsible disclosure is always important.

Advanced models should be rigorously evaluated to determine whether they provide new capabilities for attackers. Release of such a model should be delayed until defensive measures can be updated (for example, release of advances in bug detection should be delayed until they can be applied in private to widely used software packages, and any issues found have been fixed and the patches deployed widely).

Conclusion

Virtually every aspect of modern life, from the operations of corporations large and small to critical energy, transportation, water, and other infrastructure, relies on systems that are vulnerable to cyberattack. By default, progress will leave us open to devastating attacks, as attackers will quickly make use of new AI capabilities while defenders often lag behind. Our current sense of relative security depends in part on the reluctance of state-level actors to cause visible damage, but as AI provides leverage to smaller actors, this “cyberattack overhang” could translate into startling consequences.

To mitigate this danger, we must regulate development of, and access to, AIs with dangerous capabilities. However, this cannot be our only line of defense. Legacy software must migrate to modern, safe coding practices; we must shift responsibility for security from individual system operators to professional organizations; we must move away from the assumption that all systems can be frequently updated with security patches. This will require coordinated efforts across the entire technology sector. The advent of advanced AI must be our wake-up call to finally address long-standing issues in our approach to cybersecurity.

Guest author Steve Newman, a co-founder of eight startups including Google Docs (née Writely), is now searching for ways to reduce the tension between progress and safety by building a more robust world. His blog is Am I Stronger Yet?

Thanks to Brendan Dolan-Gavitt, Dan Hendrycks, Fish Wang, Ido Yariv, Mark Bailey, Massimiliano Poletto, Michael Chen, Nathaniel Li, Will Hodgkins, and Yan Shoshitaishvili for contributions and feedback. No endorsement is implied.

Footnotes

1. Trustworthy figures are hard to come by. Near-future projections ranging into the tens of trillions of dollars are widely circulated. Plausible estimates of spending on cybersecurity solutions are in the low hundreds of billions, and this does not include in-house costs or the damage caused by attacks.

2. Microsoft and OpenAI recently reported that they have detected “state affiliated adversaries … using LLMs to augment cyberoperations”.

3. The UK’s National Cyber Security Centre projects that “Artificial intelligence (AI) will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.” A recent paper, LLM Agents can Autonomously Hack Websites, examines practical capabilities of current large language models. Another paper demonstrates an LLM being adapted to identify vulnerabilities by examining software source code.

4. Office of Personnel Management data breach - Wikipedia

5. The Untold Story of NotPetya, the Most Devastating Cyberattack in History | WIRED

6. 2017 Equifax data breach - Wikipedia

7. Cyberattack Paralyzes the Largest U.S. Health Care Payment System; With Cyberattack Fix Weeks Away, Health Providers Slam United.

More than two weeks after a cyberattack, financially strapped doctors, hospitals and medical providers on Friday sharply criticized UnitedHealth Group’s latest estimate that it would take weeks longer to fully restore a digital network that funnels hundreds of millions of dollars in insurance payments every day.

UnitedHealth said that it would be at least two weeks more to test and establish a steady flow of payments for bills that have mounted since hackers effectively shut down Change Healthcare, the nation’s largest billing and payment clearinghouse, on Feb. 21.

But desperate providers that have been borrowing money to cover expenses and employee payrolls expressed skepticism at that estimate, worrying that it could be months before the logjam of claims and payments cleared up.

8. Biden Hardens Protection Against Cybersecurity Threats to Ports - The New York Times

9. Cyberattack Forces a Shutdown of a Top U.S. Pipeline - The New York Times

10. See 2022 Southwest Airlines scheduling crisis - Wikipedia. This incident was triggered by disruption due to a blizzard, rather than a cyberattack, but it illustrates how dependent our transportation systems are on complex computer systems.

11. See, for example, Cybersecurity High-Risk Series: Challenges in Protecting Cyber Critical Infrastructure and Critical Infrastructure: Actions Needed to Better Secure Internet-Connected Devices.

12. PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure

13. China had "persistent" access to U.S. critical infrastructure

14. A certain degree of cyberattack automation does not require AI, but generative AI promises to greatly increase the sophistication and scope.

15. As noted in footnotes to the second paragraph.

16. See Google warns of surge in generative AI-enhanced attacks, zero-day exploit use in 2024 and Brace Yourself for a Tidal Wave of ChatGPT Email Scams.

17. Anomaly detection’s usefulness today can be limited due to false alarms. Advances in AI might help distinguish genuine attacks from noise.

Conversely, AI may also help attackers to camouflage their behavior by imitating normal usage. The Volt Typhoon report notes that “actors may have abstained from using compromised credentials outside of normal working hours to avoid triggering security alerts on abnormal account activities”; in the future, AI models could be trained to better imitate legitimate activities.

18. For instance, formal verification methods could be used to ensure that a system precisely meets its specification, with no bugs. Formal verification of complex systems is impractical today, but AI may eventually change this.

Of course, it is an oversimplification to say that no attacker could ever possibly breach a formally verified system. There could be flaws in the specification, in which case formal verification merely ensures that the system faithfully follows the incorrect specification. The system could be subverted by an authorized user, who might be coerced or tricked into taking inappropriate actions. There are side-channel attacks that lie outside the scope of formal verification, and so forth. Still, it seems plausible that powerful AI, if vigorously applied to improve security, could shift the balance in favor of defense. For instance, AIs could help review specifications and detect suspicious behavior by authorized users.

19. This idea is inspired by a comment made by Trail of Bits CEO Dan Guido during a presentation at the Technical Advisory Committee of the Commodity Futures Trading Commission.

Defenders have access to large amounts of internal monitoring data. They are also in a position to share data across organizations. In particular, providers of security tools may be able to leverage information they gather across their customer base.

It is worth noting that in the case of intelligence gathering, attackers also have access to large amounts of data.

20. For instance, here are some reasons that software might not be updated with security patches:

  • The system may be “forgotten”, at least by personnel responsible for cybersecurity.
  • The manufacturer has gone out of business and is not supplying software updates.
  • Regulatory requirements make it difficult to apply software updates – for instance, for medical devices.
  • Applying updates may be burdensome, and hence not done frequently. Updated software needs to be tested; compatibility requirements may require updating multiple pieces of software at once.

21. https://www.bbc.com/news/technology-41257576

22. See a previous footnote that outlines reasons that software might not be regularly updated.

23. Such as eliminating use of older, “unsafe” languages like C and C++, or using formal verification techniques to prove that critical software is implemented according to its specification.

24. See, for example, security.googleblog.com/2024/01/scaling-security-with-ai-from-detection.html.

25. Especially because defenders may have access to additional information, such as application source code and configuration files.

Other examples of vulnerability scanning might include monitoring for open ports, default passwords, and other visible misconfigurations; or use of out-of-date software with known vulnerabilities.

This might be dubbed “reverse dual-use”, repurposing an offensive tool (vulnerability scanning) for defensive use.

26. The fundamental issue is that no organization exists that is responsible for such a systematic “white hat” security scan across the entire Internet. Other barriers:

  • When an insecure system has been identified, there is no systematic way of identifying and notifying that system’s owner.
  • Even if the owner could be contacted, they might not have the expertise or motivation to successfully repair the issue.
  • Performing security scans, even with positive intent, can result in legal liability. Security researchers have been targeted for finding and ethically reporting flaws; see https://github.com/disclose/research-threats.

All of these challenges are amenable to solutions, but it would take a large, multi-faceted effort.

27. Google’s OSS-Fuzz project continuously analyzes hundreds of critical open-source projects for vulnerabilities. GitHub, a service used by many organizations to manage software source code, routinely scans for sensitive information (“API keys”) that may have been absent-mindedly pasted into public repositories.

28. What is Dark Web Monitoring? [Beginner's Guide] - CrowdStrike

29. Detection must shift from fixed signatures to behavioral analysis in order to identify obfuscated payloads; large-scale unsupervised ML methods could also learn to model binaries and flag malicious payloads directly.

30. Researchers could also create ML systems that model software behavior and detect whether programs are sending packets when they should not.

31. This is not to say that deploying anomaly detection, signature scanning, and other security measures at the service provider and networking hardware level would be a simple matter. There are many complicating factors, such as the potential for accidentally blocking legitimate usage. However, there are success stories, such as Cloudflare’s cybersecurity features or the Windows Defender firewall enabled by default on computers running Microsoft Windows.

32. Note that firewalls are by no means a panacea. Network traffic increasingly uses encryption, which is a very good thing, but makes it harder for firewalls (and anomaly detection systems) to have a clear picture. Also, firewalls are often placed only at the periphery of a network, leaving the interior systems vulnerable if an attacker is able to gain access to a single weak-link system somewhere inside.

Thus, it is important to find ways to deploy these tools so that they are able to observe and protect activity at a detailed level, including internal communications within an organization’s network. At the same time, such tools then represent an extremely juicy target for hackers; extreme care must be taken to ensure that the security tools themselves are not vulnerable to breaches, and also to ensure that the sensitive information they observe is not collected or logged in ways that would make it vulnerable to privacy breaches or abuse by central authorities.

33. One benefit of cloud offerings, such as software-as-a-service and infrastructure-as-a-service, is that software upgrades are generally not the responsibility of individual customers.

34. See, for instance, Rapidly Patching Legacy Software Vulnerabilities in Mission-Critical Systems.

35. At least, outside of verified legitimate use cases, such as penetration testing.