The Role of LLMs in Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks
Abstract
The rapid advancement of artificial intelligence has significantly influenced the cybersecurity landscape. Large Language Models (LLMs), originally developed to support natural language processing tasks are increasingly being examined from a security perspective due to their potential misuse by malicious actors. While LLMs provide numerous benefits for automation, information analysis and user interaction, they may also be leveraged to facilitate various forms of cyberattacks.
One area of growing concern is the potential role of LLMs in the development and execution of denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. These attacks aim to disrupt the availability of services by overwhelming systems with excessive requests or computational workloads. The ability of LLMs to generate automated scripts, coordinate complex attack strategies and produce highly variable request patterns introduces new challenges for traditional detection and mitigation mechanisms.
This study examines the relationship between Large Language Models and cyberattacks with a primary focus on their potential use in DoS and DDoS attack scenarios. The research explores how LLMs may assist attackers in automating malicious traffic generation, enhancing social engineering capabilities and targeting AI-powered services such as chatbot systems. Additionally, the study discusses emerging risks related to AI-driven botnets, computational denial-of-service attacks targeting AI infrastructures and adaptive attack strategies enabled by artificial intelligence.
Finally, the study presents mitigation strategies and defensive approaches that organizations can implement to protect AI-enabled services and critical infrastructure from emerging threats. Understanding these risks is essential for developing secure AI deployments and strengthening cybersecurity defences in an increasingly AI-driven digital environment.
Keywords:
Large Language Models, Cybersecurity, Artificial Intelligence, Denial of Service, Distributed Denial of Service, AI-driven cyberattacks, LLM security, AI chatbot attacks, Botnets, Economic Denial of Service
CONTENTS
1.Introduction.
2.Large Language Models (LLMs).
3.Denial of Service (DoS) and Distributed Denial of Service (DDoS).
4.Denial of Service Attacks Against LLM Systems.
5.Real-World DDoS Trends and Statistics (2024–2026).
6.Prompt-Based Resource Exhaustion Attacks.
7.Application-Layer and Economic Denial of Service.
8.AI-Assisted DDoS and Agentic Attack Frameworks.
9.Traffic Evasion and AI-Generated Request.
10.Exploiting AI Chatbots and Web Assistants.
11.Exploiting Autonomous AI Agents.
12.AI-Driven DDoS Attack Lifecycle.
13.Emerging Research Metrics.
14.Architecture of an AI-Assisted DDoS Attack Pipeline.
15.Defensive Strategies.
16.Case Study: Hypothetical Denial-of-Service Attack Against an AI Customer Support Chatbot
17.Conclusion.
18.References.
1.Introduction
Artificial intelligence technologies are becoming a larger part of modern computing systems, cloud services, and digital platforms. One of the most significant developments in recent years is the rise of Large Language Models (LLMs). These models can produce human-like text, which helps with programming tasks and analysing complex data.
These technologies offer major benefits in automation and productivity, but they also create new security issues. Cybercriminals are increasingly looking at how generative AI tools can assist in cyberattacks. This includes phishing campaigns, malware creation, discovering vulnerabilities, and network attacks.
Among the most disruptive cyberattacks are Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. The goal of these attacks is to interrupt the availability of systems and services by using up computational resources or overwhelming network infrastructure.
As LLM systems become common as cloud-based APIs and integrated application services, researchers are starting to study how these models can be targets of DoS attacks. They are also looking at how LLMs can help attackers launch more advanced denial-of-service campaigns. This study explores the link between LLM technologies and denial-of-service attacks, with a focus on new research about AI-assisted DDoS operations, application-layer resource exhaustion, and agent-based cyberattack frameworks.
2.Large Language Models (LLMs)
Large Language Models are powerful artificial intelligence systems designed to process and generate natural language. These models usually train on large datasets that include books, articles, websites, and other text sources. [1]
Most modern LLMs rely on the transformer architecture, which helps them understand relationships between words and tokens in extensive text sequences. This structure allows the models to produce coherent and contextually relevant responses for various tasks.
LLMs can perform tasks such as:
-Natural language understanding.
-Automated text generation.
-Programming help and code generation.
-Document summarization.
-Question answering.
-Conversational interaction.
Because these models can interpret complex technical information and generate executable code, they are increasingly used in software development, data analysis, cybersecurity research, and automation in businesses.
However, the same capabilities can also be misused by bad actors. LLMs may help attackers analyse technical documentation, create attack scripts, or automate parts of cyberattack workflows. [3]
3.Denial of Service (DoS) and Distributed Denial of Service (DDoS)
A Denial of Service (DoS) attack aims to make a system, service or network unavailable to legitimate users. This is typically achieved by overwhelming the target with requests or exhausting critical resources such as CPU processing power, memory or network bandwidth. [3]
A Distributed Denial of Service (DDoS) attack is an extension of this concept. Instead of a single attacking system, a DDoS attack uses multiple compromised devices, often organized into a botnet to generate traffic simultaneously.
These botnets may consist of:
-Compromised personal computers
-Infected servers
-Internet-of-Things devices
-Misconfigured cloud infrastructure
DDoS attacks typically exploit several technical methods, including:
-TCP SYN flood
-UDP flood
-HTTP request floods
-Amplification attacks using DNS, NTP or Memcached services
-Application-layer request flooding
These attacks can disrupt online services such as websites, financial platforms, gaming infrastructure and government systems.
4.Denial of Service Attacks Against LLM Systems
Large Language Models themselves can also become targets of denial-of-service attacks. Because LLM inference requires significant computational resources, attackers may attempt to exhaust these resources through carefully crafted interactions.
According to the OWASP Top 10 for Large Language Model Applications, this threat category is known as Model Denial of Service (LLM04). In this scenario, an attacker deliberately interacts with an LLM in a way that consumes excessive computational resources, resulting in degraded performance or service disruption. [4]
LLM systems are particularly vulnerable to this type of attack because they process large token sequences and often require heavy GPU computations. If attackers repeatedly submit computationally intensive prompts, the system may experience increased latency, degraded service quality, or complete unavailability. [5]
Additionally, attackers may exploit the context window, which represents the maximum amount of text an LLM can process at once. By repeatedly submitting inputs that approach or exceed this limit, attackers can force the model to process unusually large amounts of data, leading to resource exhaustion. [4]
5.Real-World DDoS Trends and Statistics (2024–2026)
Although AI-assisted denial-of-service attacks are still an emerging research topic, traditional distributed denial-of-service attacks have continued to grow in both scale and frequency in recent years. Several large internet infrastructure providers publish periodic reports on global DDoS activity.
According to the Cloudflare DDoS Threat Report, the number of distributed denial-of-service attacks increased significantly between 2023 and 2025, with attackers increasingly targeting application-layer services rather than only network bandwidth. Cloudflare reported that many modern attacks focus on HTTP request floods designed to overwhelm backend application logic rather than simply saturating network links.
Similarly, the Akamai Technologies State of the Internet report identified a growing trend toward Layer-7 attacks, where attackers target web applications directly through high-volume HTTP requests.
In addition to the growing frequency of attacks, the scale of distributed denial-of-service attacks has also increased dramatically. Some of the largest attacks recorded in recent years have exceeded multiple terabits per second (Tbps) of traffic.
In 2023, Google researchers reported one of the largest HTTP-based DDoS attacks ever observed, reaching approximately 398 million HTTP requests per second. This attack exploited the HTTP/2 protocol to amplify the number of requests generated by relatively small numbers of machines.[6]
Security researchers have also observed a growing shift toward short-duration but extremely high-intensity DDoS attacks, which can overwhelm infrastructure within seconds before mitigation systems fully activate.
These trends highlight an important shift in attacker strategy:
-Attacks are becoming more automated.
-Attacks increasingly target application infrastructure.
-Attackers rely more heavily on cloud infrastructure and distributed services.
As artificial intelligence technologies continue to mature, researchers anticipate that attackers may increasingly integrate AI tools into DDoS operations to automate reconnaissance, generate traffic variations, and dynamically adapt attack strategies.
6.Prompt-Based Resource Exhaustion Attacks
One emerging form of denial-of-service attack against LLMs involves specially designed prompts that trigger excessive workloads.
Researchers have demonstrated that adversarial prompts can cause models to generate extremely long responses or enter extended reasoning loops. In such scenarios, the model may continue producing tokens until reaching system limits, significantly increasing processing time and resource consumption.
For example, the research framework ThinkTrap demonstrates how carefully optimized prompts can degrade the performance of large language model services, reducing system throughput and potentially causing service failure even under typical rate limits.
Similarly, other studies show that adversarial prompts can intentionally suppress termination signals in LLM outputs, forcing models to generate thousands of tokens and increasing latency and computational costs.
These attacks highlight how LLM systems can be manipulated through input design alone, without requiring high volumes of network traffic.[1]
7. Application-Layer and Economic Denial of Service
Traditional DDoS attacks focus primarily on network bandwidth saturation. However, many modern applications rely on computationally intensive backend processes, including AI inference systems.
This has led to the emergence of application-layer denial-of-service attacks, which target the internal logic of an application rather than network capacity.
In the context of AI services, attackers may target endpoints such as:
-Chatbot APIs.
-AI-powered search engines.
-Document analysis services.
-Recommendation systems.
These attacks often exploit computational asymmetry, where generating a request is relatively inexpensive for the attacker, but processing that request requires significant computational resources from the target system.
OWASP notes that LLM denial-of-service attacks can lead not only to service disruption but also to substantial increases in operational costs due to the high computational requirements of model inference.
Because many AI services are billed based on token usage or compute time, repeated high-complexity prompts may significantly increase infrastructure costs.[3]
8.AI-Assisted DDoS and Agentic Attack Frameworks
Another emerging research topic involves the potential use of LLMs to assist in planning and coordinating denial-of-service attacks.
While LLMs do not directly generate network traffic, they may assist attackers in several phases of the cyberattack lifecycle, including:
-Reconnaissance of target systems.
-Generation of attack scripts.
-Automation of infrastructure deployment.
-Dynamic modification of attack requests.
Researchers have proposed the concept of agentic cyberattack systems, where autonomous AI agents analyze system responses and adapt attack strategies in real time.
For example, an AI-driven attack controller could theoretically monitor server response codes, firewall rejection patterns, or rate-limiting mechanisms and modify request patterns accordingly.
This creates a feedback-driven attack model, in which the attack continuously evolves based on observations of the target environment.[2]
9.Traffic Evasion and AI-Generated Request.
Traditional DDoS detection systems often rely on identifying repetitive or anomalous traffic patterns. However, LLMs may allow attackers to generate large volumes of unique and contextually realistic requests.
These requests may include:
-Unique HTTP headers.
-Varied user-agent strings.
-Dynamically generated query parameters.
-Natural language queries that resemble human activity.
Because the requests are highly variable, they may be more difficult for traditional filtering systems to detect and block.
This concept is sometimes referred to as AI-mimicry traffic, where malicious traffic intentionally imitates legitimate user behavior.[1]
10.Exploiting AI Chatbots and Web Assistants
Many modern websites deploy AI-powered chatbots to assist users with tasks such as answering frequently asked questions or providing customer support.
These chatbots typically rely on LLM APIs to process user queries and generate responses.
Attackers may exploit these systems by repeatedly sending complex queries designed to trigger computationally expensive operations. Examples include:
-Large multi-step reasoning tasks.
-Extremely long prompts.
-Repeated document analysis requests.
Because each interaction may require GPU-based inference, even moderate numbers of such requests could degrade system performance or significantly increase operational costs.[1]
11.Exploiting Autonomous AI Agents
Another emerging threat involves autonomous AI agents that browse the internet to gather information or perform tasks.
Researchers have identified a vulnerability known as indirect prompt injection, where malicious instructions embedded within web content may manipulate AI agents that later process the content.
In a theoretical scenario:
1.An attacker embeds malicious instructions within a webpage.
2.An AI agent visits the webpage during browsing.
3.The embedded instructions influence the behavior of the agent.
Although large-scale exploitation of this technique has not yet been widely observed, it represents a potential future attack vector.[3]
12.AI-Driven DDoS Attack Lifecycle
AI-assisted denial-of-service attacks may follow a structured lifecycle like other cyber operations.
Reconnaissance:
Attackers may use LLMs to analyze API documentation, web interfaces, or system configurations to identify high-cost computational endpoints.
Payload Generation:
The LLM generates large numbers of unique requests designed to bypass pattern-based detection mechanisms.
Autonomous Execution:
An automated controller coordinates the attack by adjusting traffic volume, rotating proxies, or modifying request structures.
Adaptive Feedback:
The system analyzes which requests successfully bypass defenses and prioritizes those patterns
in subsequent attack waves.[2]
13.Emerging Research Metrics.
Traditional DDoS attacks are typically measured using metrics such as:
-Gigabits per second (Gbps).
-Packets per second (pps).
However, attacks targeting AI services may require new metrics that measure computational impact rather than network volume.
Researchers increasingly analyze:
-GPU resource utilization.
-Inference latency.
-Token generation cost.
-Economic impact per request.
These metrics reflect the shift toward computational and economic denial-of-service attacks, where attackers exploit asymmetries between the cost of generating requests and the cost of processing them.
14.Architecture of an AI-Assisted DDoS Attack Pipeline
To understand how Large Language Models could be integrated into denial-of-service operations, researchers often describe a conceptual AI-assisted DDoS pipeline. In such a model, artificial intelligence is used primarily for analysis, automation, and adaptation, while the actual network traffic is generated by traditional botnet infrastructure.
A simplified architecture of an AI-assisted attack system may include several components.
Reconnaissance Module
The reconnaissance stage involves gathering information about the target system.
In an AI-assisted scenario, an LLM may analyze publicly available information such as:
-API documentation.
-Web application endpoints.
-System response behavior.
-Authentication mechanisms.
By analyzing this information, the system may identify computationally expensive endpoints such as search functions, AI chatbots, or document processing services.
These endpoints may represent ideal targets for application-layer denial-of-service attacks.
Payload Generation Engine
Once potential targets are identified, the next stage involves generating the attack payload.
Instead of using static scripts, an LLM can generate large volumes of unique requests with varying structures. These requests may include:
-Different HTTP headers.
-Randomized parameters.
-Variable query structures.
-Natural language queries.
The purpose of this variation is to reduce the effectiveness of pattern-based filtering mechanisms such as Web Application Firewalls (WAFs).
Autonomous Command-and-Control
Traditional botnets rely on centralized command-and-control servers that distribute instructions to infected devices.
In an AI-assisted attack model, this component may include an autonomous controller capable of analyzing responses from the target system.
The controller may monitor factors such as:
-Response codes.
-Latency changes.
-Rate-limiting behavior.
-Firewall rejections.
Based on this information, the controller may adjust attack parameters dynamically.
Examples of adaptive behavior include:
-Increasing request frequency.
-Modifying request headers.
-Rotating proxy servers.
-Switching attack vectors.
This creates a feedback-driven attack loop where the attack strategy evolves over time.
Distributed Execution Infrastructure
The actual network traffic in a distributed denial-of-service attack is typically generated by a network of compromised systems.
These systems may include:
-Infected IoT devices.
-Compromised cloud servers.
-Hijacked virtual machines.
-Proxy networks.
The distributed infrastructure sends requests generated by the payload engine to the target system.
Because requests originate from many different sources, it becomes difficult for the target system to block them without affecting legitimate users.
Adaptive Feedback Loop
One of the most significant differences between traditional botnets and AI-assisted attack frameworks is the presence of a continuous feedback mechanism.
In this stage, the AI system analyzes the success of different request types and prioritizes those that bypass defensive systems.
For example, if certain request patterns successfully evade firewall filtering, the system may increase the use of those patterns in subsequent attack waves.
This creates a dynamic attack model where the system continuously adapts to defensive measures.[1][2]
15.Defensive Strategies.
Mitigating denial-of-service attacks against LLM systems requires both traditional network security techniques and AI-specific protections.
Recommended defensive strategies include:
-API rate limiting.
-Input length restrictions.
-Token usage quotas.
-Anomaly detection systems.
-Monitoring of computational resource usage.
Security frameworks such as the OWASP Top 10 for LLM Applications emphasize the importance of limiting input sizes, enforcing request quotas, and monitoring system resource usage to mitigate denial-of-service attacks. [4]
16.REDACTED
17.Conclusion
Large Language Models have introduced powerful capabilities that are transforming many areas of technology and cybersecurity. However, these systems also introduce new attack surfaces and potential vulnerabilities.
In the context of denial-of-service attacks, LLMs may serve both as targets of resource exhaustion attacks and as tools that assist attackers in planning and automating cyber operations.
As AI technologies continue to integrate into web services, organizations must develop new security approaches that address both traditional network attacks and emerging AI-driven threats.
18. References
[1] PortSwigger Ltd., “Large language model attacks.” Available: https://portswigger.net/web-security/llm-attacks
[2] Deep Instinct, “The rise of AI-driven cyber attacks: How LLMs are reshaping the threat landscape.” Available: https://www.deepinstinct.com/blog/the-rise-of-ai-driven-cyber-attacks-how-llms-are-reshaping-the-threat-landscape
[3] I-Tracing, “LLM agents and cybersecurity.” Available: https://i-tracing.com/blog/llm-agents-cybersecurity/
[4] I-Tracing, “OWASP Top 10 cyberattacks for LLM applications.” Available: https://i-tracing.com/blog/owasp-top-ten-cyberattacks-llm/
[5] OWASP Foundation, “OWASP Top 10 for Large Language Model Applications.” Available: https://owasp.org/www-project-top-10-for-large-language-model-applications/
[6]Cloudflare Radar 2025 Q4 DDoS report:https://radar.cloudflare.com/reports/ddos-2025-q4