History of Stacks and Queues:
In this section, we explore the history of stacks and queues and see how these data structures became fundamental both to computing and to the broader society we live in.
Origins of the Stack:

The origins of the stack are closely related to the material you will learn in CS101!
In the early 20th century, mathematicians began exploring the relationship between humans and machines through automata theory. The goal of automata theory was to describe and analyze the behavior of computer systems and logic using abstract models of machines known as automata. These automata serve as fundamental tools for understanding how machines function, determining whether a function is computable and whether a problem is decidable (Aziz 2004).
The invention of the Turing machine by Alan Turing in 1936 was a significant result of automata theory, as it was the first formal model of computation capable of simulating any algorithmic process. The Turing machine demonstrated that certain problems could be classified as decidable—those for which an algorithmic solution exists—while others, such as the Halting problem, were proven to be undecidable (Aziz 2004).

Between 1955 and 1970, researchers sought to develop machines that were more powerful than simple automata but still specialized in certain tasks. This led to the creation of the pushdown automaton (PDA). Formally introduced by Noam Chomsky and George Miller in the 1960s, PDAs were designed to recognize patterns found in structured languages, such as those used in programming. Unlike basic automata, which have limited memory, PDAs introduced a dedicated storage structure—what would later be known as the stack—to keep track of information as they processed input. They read input from left to right, one symbol at a time, while using this stack to store and retrieve data as needed. The stack’s role in PDAs was pivotal, as it provided a way to manage hierarchical and recursive structures efficiently (Henriksson 2009).
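To make the PDA's use of its stack concrete, here is a minimal Python sketch (our illustration, not drawn from the cited sources) of a simple pushdown automaton recognizing the classic context-free language aⁿbⁿ, where every 'a' must be matched by a later 'b':

```python
def accepts_anbn(s):
    """Simulate a simple PDA for the language a^n b^n (n >= 0)."""
    stack = []
    seen_b = False
    for symbol in s:
        if symbol == 'a':
            if seen_b:              # an 'a' after a 'b' breaks the pattern
                return False
            stack.append('A')       # push one marker per 'a'
        elif symbol == 'b':
            seen_b = True
            if not stack:           # more b's than a's so far
                return False
            stack.pop()             # pop one marker per 'b'
        else:
            return False            # symbol outside the alphabet {a, b}
    return not stack                # accept iff every 'a' was matched
```

For example, accepts_anbn("aabb") returns True while accepts_anbn("aab") returns False. A finite automaton with no stack cannot recognize this language, and it is precisely this kind of hierarchical matching (nested brackets, nested blocks in code) that motivated giving automata a stack.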

Over time, the concept of a stack extended beyond formal automata theory and became a fundamental data structure in computing. In 1957, German computer scientists Friedrich L. Bauer and Klaus Samelson at the Technical University of Munich proposed and patented the stack. Their work was motivated by the development of ALGOL 58, a pioneering programming language that laid the foundation for modern imperative languages such as C and Java. The stack was a crucial innovation for ALGOL’s design and execution, as it enabled efficient handling of function calls, recursion, and expression evaluation. This early implementation of the stack as a practical computing tool solidified its importance in both programming language design and computer architecture, ultimately shaping how modern computers process and manage data (Henriksson 2009).
Origins of the Queue:

The origin of the queue can be traced back to the mathematical study of waiting in lines, known as queueing theory. This concept was introduced in 1909 by Agner Krarup Erlang, a Danish engineer who wanted to improve telephone systems. He discovered that if phone calls arrive at random, the number of calls over time follows a predictable pattern, now called the Poisson distribution, and the time between calls follows another pattern, called the exponential distribution. His work showed how queues can be used when there are more callers than available lines. Just like people waiting in line at a store, a queue ensures that the first person to enter is the first to be served. By analyzing these queues, Erlang created a scientific way to predict wait times, manage congestion, and improve efficiency, laying the foundation for modern queueing systems used in everything from customer service lines to internet traffic and computer networks (Telenor 2012, pp. 41–49).
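In modern notation (our rendering, not taken from Erlang's original papers), the two patterns Erlang identified can be written as follows, where λ is the average arrival rate of calls:

```latex
% Poisson arrivals: probability of exactly k calls in an interval of length t
P\{N(t) = k\} = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \qquad k = 0, 1, 2, \ldots

% Exponential interarrival times: density of the gap T between consecutive calls
f_T(t) = \lambda e^{-\lambda t}, \qquad t \ge 0
```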

The development of queueing theory, however, did not gain widespread attention until after World War II, when it became an area of active research within the mathematical community. Many mathematicians, working on wartime problems such as logistics, communication networks, and resource allocation, encountered queueing-related challenges that required mathematical modeling. This period also saw the emergence of operations research (OR)—a field that applied mathematical methods to optimize decision-making in complex systems (Kingman 2009).
One of the key figures in expanding queueing theory beyond telecommunications was David Kendall, a British mathematician who initially intended to pursue astronomy but was drawn into the field due to wartime priorities. In 1948, during the Berlin Airlift, Kendall became fascinated by the problem of managing the steady stream of aircraft delivering supplies to West Berlin. He realized that the issues of congestion, wait times, and service rates could be analyzed mathematically. Kendall’s 1951 paper, presented at a Royal Statistical Society meeting, played a crucial role in introducing queueing theory to a broader audience. His work formalized key mathematical models, including the M/G/1 queue, which describes a system with a single server handling randomly arriving tasks with general service times. The discussion at the meeting also led to the development of Lindley’s equation, a fundamental recurrence relation that models waiting times in single-server queues. This equation later became the basis for studying more advanced queueing systems, which sparked an extensive body of research (Kingman 2009).
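In one common modern notation (our rendering, not quoted from Kingman's account), Lindley's equation relates the waiting times of consecutive customers in a single-server FIFO queue:

```latex
% Lindley's recurrence:
%   W_n = waiting time of customer n
%   S_n = service time of customer n
%   T_n = time between the arrivals of customers n and n+1
W_{n+1} = \max(0,\; W_n + S_n - T_n)
```

Intuitively, customer n+1 must wait for whatever work remains when they arrive: the previous customer's wait plus that customer's service time, minus the gap between the two arrivals, floored at zero.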
By the late 1950s, researchers such as Félix Pollaczek, Walter Smith, and Frank Spitzer further refined these models, integrating techniques from probability theory and stochastic processes. Their work connected queueing problems to well-established mathematical theorems, such as the law of large numbers, the central limit theorem, and large deviation theory, making queueing theory a foundational tool in modern operations research, computer science, and network optimization (Kingman 2009).
With the rise of computer science, the principles of queueing theory found a natural place in computing. The queue, initially a mathematical tool for analyzing waiting times, evolved into a formal data structure used to manage tasks in operating systems, scheduling algorithms, and network protocols. Job scheduling in early computers, where processes had to be executed in order, mirrored real-world queues, reinforcing the First-In-First-Out (FIFO) principle. This made queues essential for CPU scheduling, print spooling, and memory buffering (Kingman 2009).
By the 1970s, queues became a standard abstract data type (ADT) in programming languages, with implementations in linked lists, arrays, and circular buffers. Today, queues are fundamental in areas such as multithreading, message passing (e.g., queues in distributed computing), and task scheduling in operating systems. The transformation of queueing theory from a mathematical study of waiting lines to a core data structure in computer science highlights its enduring impact on both theoretical and practical aspects of computing (Kingman 2009).
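As a concrete sketch of one of those implementations, here is a minimal fixed-capacity circular-buffer queue in Python (our illustration; the class and method names are hypothetical, not from any particular library):

```python
class CircularQueue:
    """A fixed-capacity FIFO queue backed by a circular buffer."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0     # index of the oldest element
        self._size = 0     # number of elements currently stored

    def enqueue(self, item):
        if self._size == len(self._buf):
            raise OverflowError("queue is full")
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item          # write just past the newest element
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        item = self._buf[self._head]
        self._buf[self._head] = None    # release the reference
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item
```

Because the head and tail indices simply wrap around the array, both operations run in O(1) time with no element shifting, which is one reason circular buffers are a common choice for the buffering uses mentioned above.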
Contemporary Uses of the Stack and Queue:
Stacks and queues are instrumental in many real-world applications.
Stacks have numerous applications across computer science and beyond. They are essential in expression evaluation, ensuring the correct order of operations in arithmetic calculations. In function call management, stacks store return addresses and local variables, allowing programs to resume execution properly after a function completes. Many text editors use stacks for undo functionality, where each change is pushed onto the stack and can be reverted by popping the most recent action. Stacks are also crucial in backtracking algorithms, such as maze-solving or puzzles, where they track paths and allow for retracing steps when needed. Web browsers use stacks to manage browsing history, enabling users to navigate back to previously visited pages. Additionally, syntax parsing in compilers relies on stacks to validate programming language syntax, ensuring code correctness. These applications highlight the versatility and efficiency of stacks in managing sequential operations and memory.
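As one small illustration, here is a sketch of stack-based undo in Python (our example; the function names are hypothetical):

```python
history = []  # stack of (position, inserted_text) edits

def insert_text(document, position, text):
    """Apply an insertion and push it onto the undo stack."""
    history.append((position, text))
    return document[:position] + text + document[position:]

def undo(document):
    """Pop the most recent edit and reverse it."""
    if not history:
        return document                 # nothing to undo
    position, text = history.pop()
    return document[:position] + document[position + len(text):]

doc = ""
doc = insert_text(doc, 0, "hello")
doc = insert_text(doc, 5, " world")     # doc is now "hello world"
doc = undo(doc)                         # doc is back to "hello"
```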
Queues play a crucial role in solving real-world problems by efficiently managing tasks in a First In, First Out (FIFO) manner. In operating systems, queues ensure fair CPU time distribution among processes, preventing system overload and optimizing performance. Network routers use queues to manage data packets, temporarily storing them during high traffic to prevent packet loss and reduce latency. Similarly, printers rely on queues to process print jobs in the order they were submitted, ensuring fair and conflict-free execution. In event-driven programming, queues manage asynchronous event handling, enabling responsive applications by executing tasks in sequence. Real-time systems, such as air traffic control, use queues for task scheduling to prioritize critical operations that impact system stability and safety. Additionally, customer service systems employ queues to handle service requests on a first-come, first-served basis, ensuring fair and efficient resolution. These applications highlight the importance of queues in maintaining order and efficiency in various domains.
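The print-spooling case can be sketched in a few lines of Python using the standard library's deque (the job names here are made up):

```python
from collections import deque

print_queue = deque()

# Jobs are enqueued at the back in the order they are submitted...
print_queue.append("report.pdf")
print_queue.append("photo.png")
print_queue.append("slides.pdf")

# ...and dequeued from the front, preserving FIFO order.
while print_queue:
    job = print_queue.popleft()
    print(f"printing {job}")   # report.pdf, then photo.png, then slides.pdf
```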
The Power and Vulnerabilities of the Stack and Queue:
Stacks and queues are essential tools in computing, shaping everything from how operating systems manage tasks to how web browsers handle user history. Their structured nature makes them powerful but also introduces significant security risks. As future programmers, you must not only understand how to use these data structures efficiently but also recognize their vulnerabilities and the responsibility that comes with their implementation.
The paper “Stack and Queue Integrity on Hostile Platforms” highlights critical concerns regarding the security of these data structures in adversarial environments. The authors examine how violations of stack and queue integrity can lead to unauthorized modifications and data corruption, potentially compromising entire systems. One of their key findings is that attackers can exploit predictable stack and queue behaviors to manipulate program execution, often with minimal resource requirements. The study emphasizes the need for integrity verification mechanisms to detect and prevent these security breaches (Devanbu 2002).
Modern cyberattacks frequently exploit these vulnerabilities. Stack-based buffer overflow attacks—a concept you will learn how to address in CS105—remain a major threat, where attackers overwrite critical memory locations to hijack control of a system. This technique has been responsible for numerous historical breaches, affecting everything from early web applications to modern cloud services. Similarly, queue-based denial-of-service (DoS) attacks can flood a system with excessive requests, overwhelming processing queues and rendering essential services unusable. These attacks continue to affect network security, financial transactions, and cloud computing platforms, making stack and queue security an urgent issue (Devanbu 2002).
Stacks and queues are not just abstract concepts in a data structures course—they are foundational to modern computing and cybersecurity. Writing efficient and scalable code is important, but writing secure code is equally critical. By thinking critically about the power and risks of stacks and queues, you can develop a security-conscious mindset that balances innovation with responsibility.
Discussion Questions:
- Think of a real-world system that relies on either a stack or a queue. How would the system behave differently if the data structure were swapped? For example, what would happen if a queue managed function calls instead of a stack?
- Consider some modern applications of queues: hospital emergency rooms, airport security lines, or customer support centers. Should these systems always operate on a FIFO queue, or should some people be given priority? How do we balance fairness with efficiency in such systems?