What Replaced CGI Scripts?

CGI scripts were replaced by a succession of technologies: mod_perl (1996) embedded Perl in Apache for persistent processes; PHP (1997–1998) made server-side code part of HTML files; Java Servlets (1997) used a persistent JVM; ASP (1996) brought server scripting to Windows; FastCGI kept CGI processes running as daemons; Ruby on Rails (2004) and Django (2005) introduced convention-based frameworks; Node.js (2009) made JavaScript a server language; and serverless functions (2014+) eliminated the server concept entirely.

CGI (Common Gateway Interface) was the standard for dynamic web content from 1993 to roughly 2003. No single technology replaced it — instead, a series of innovations each solved specific CGI limitations. The transition happened across more than a decade, and the story involves nearly every major web programming language and platform. Here is the complete timeline.

In Brief

CGI was replaced by mod_perl, PHP, Java Servlets, Rails, Node.js, and serverless — each solving the previous generation's biggest limitation. The common thread: every replacement found a way to avoid spawning a new OS process for each HTTP request.

Why CGI Needed Replacing

To understand what replaced CGI, you first need to understand why it needed replacing. CGI was a brilliantly simple idea — pass an HTTP request to an external program and return its output — but that simplicity carried fundamental performance and architectural costs that became unbearable as the web grew.

Process-Per-Request: The Core Problem

Every HTTP request to a CGI script triggered the operating system to fork a new process. The web server called fork(), then exec() to launch the script's interpreter. For a Perl CGI script, this meant loading the Perl interpreter binary into memory, parsing the script file from disk, compiling it to Perl's internal bytecode, loading any required modules (CGI.pm alone pulled in dozens of sub-modules), executing the code, writing the response to stdout, and then destroying the entire process. All of that happened for every single page view.
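The cost profile described above can be sketched with a toy comparison: spawning a fresh interpreter per request (the CGI model) versus calling code that is already loaded (the persistent model). This is a Python sketch rather than period Perl, and absolute numbers depend entirely on the machine — the shape of the gap is the point.

```python
import subprocess
import sys
import time

# The "script" a request would run: print headers, blank line, body.
REQUEST_SCRIPT = 'print("Content-Type: text/plain\\n\\nhello")'

def handle_cgi_style() -> str:
    # CGI model: fork+exec a brand-new interpreter, run the script, tear it down.
    result = subprocess.run(
        [sys.executable, "-c", REQUEST_SCRIPT],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def handle_in_process() -> str:
    # Persistent model: the code is already compiled and resident; just call it.
    return "Content-Type: text/plain\n\nhello\n"

start = time.perf_counter()
cgi_out = handle_cgi_style()
cgi_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
mem_out = handle_in_process()
in_proc_ms = (time.perf_counter() - start) * 1000

print(f"fresh process: {cgi_ms:.1f} ms, in-process: {in_proc_ms:.4f} ms")
```

Even on modern hardware, where process creation is far cheaper than in 1998, the fresh-process path is typically thousands of times slower than the in-process call.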

The core problem: CGI created a new operating system process for every single HTTP request — loading the interpreter, parsing the script, and destroying the process each time. At 100 requests/second, that meant 100 simultaneous processes consuming up to 1.5 GB of RAM.

On a quiet personal homepage receiving ten visitors per hour, this was perfectly fine. On a mid-traffic commercial site receiving 100 requests per second, it was catastrophic. Each Perl process consumed 5–15 MB of resident memory. One hundred concurrent CGI processes meant 500 MB to 1.5 GB of RAM dedicated solely to process overhead — on servers that, in 1998, typically had 128–512 MB of total RAM. The math simply did not work.

A concrete benchmark illustrates the gap: a simple hit counter script (counter.pl) running as standard CGI on typical late-1990s shared hosting took approximately 300–500 milliseconds per request. The same logic implemented as a PHP script running through mod_php took approximately 30–50 milliseconds. The same logic as a static file served by Apache took under 1 millisecond. CGI's overhead was not a percentage penalty — it was an order of magnitude.

No Persistent State

Because each request spawned a fresh process, CGI scripts had no way to maintain state between requests within the program itself. Database connections could not be pooled — every request opened a new connection to MySQL or PostgreSQL, authenticated, ran the query, received results, and closed the connection. On a database-heavy application, connection overhead could exceed query time. In-memory caches were impossible; every request started cold. Configuration files, template files, and shared libraries had to be read from disk on every execution.
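The connection-per-request tax can be made concrete with a toy model — a fake "database" that counts how many times its expensive handshake runs (the class and names below are invented for illustration):

```python
# Toy model: a fake database that counts connection handshakes, standing in
# for the TCP setup + authentication a real MySQL/PostgreSQL connection costs.

class FakeDatabase:
    def __init__(self):
        self.handshakes = 0

    def connect(self):
        self.handshakes += 1          # the expensive part
        return lambda sql: "result"   # a trivial "connection" that runs queries

def serve_cgi_style(db, n_requests):
    # CGI: each request is a fresh process, so each one must reconnect.
    for _ in range(n_requests):
        query = db.connect()
        query("SELECT ...")

def serve_persistent(db, n_requests):
    # Persistent process: connect once, reuse the connection for every request.
    query = db.connect()
    for _ in range(n_requests):
        query("SELECT ...")

cgi_db, app_db = FakeDatabase(), FakeDatabase()
serve_cgi_style(cgi_db, 100)
serve_persistent(app_db, 100)
print(cgi_db.handshakes, app_db.handshakes)  # 100 vs 1
```

One hundred requests cost one hundred handshakes under CGI and exactly one under any persistent model — which is why connection pooling appears in every technology that follows.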

Security by Convention, Not by Design

CGI placed executable files in a web-accessible directory. The security model relied on Unix file permissions (chmod 755), correct ScriptAlias configuration, and the developer's discipline. On shared hosting — where dozens or hundreds of customers had cgi-bin access on the same server — one poorly written script could compromise the entire machine. The Apache suEXEC wrapper was created specifically to mitigate this risk by running each user's CGI scripts under their own Unix account rather than the web server's account.

FormMail, one of the most widely deployed CGI scripts, became a case study in CGI security problems. Early versions could be exploited as open email relays, a vulnerability documented by CERT in 2002 that affected millions of websites. The security evolution of FormMail mirrors the broader challenges of the CGI era.

Configuration Friction

Deploying a CGI script required several manual steps that modern developers would find astonishing. You had to upload the file to the correct directory (typically /cgi-bin/). You had to set the execute permission (chmod 755). You had to ensure the shebang line (#!/usr/bin/perl) pointed to the correct interpreter path on that specific server. You had to verify that the script used the correct line endings (Unix \n, not Windows \r\n — a common source of the dreaded "500 Internal Server Error"). And if the script sent email, you had to configure the path to sendmail. Every hosting provider had slightly different paths, and there was no package manager, no dependency resolver, and no deployment tool.
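For concreteness, here is the shape of such a script — a Python stand-in for the Perl originals, following the same conventions (shebang line, request data in environment variables, response written to stdout); the markup and wording are illustrative.

```python
#!/usr/bin/env python3
# Sketch of a classic CGI script (a Python stand-in for the Perl originals).
# The deployment ritual: upload to /cgi-bin/, chmod 755, verify the shebang.
import os

def render() -> str:
    # CGI passes request data through environment variables set by the server.
    method = os.environ.get("REQUEST_METHOD", "GET")
    query = os.environ.get("QUERY_STRING", "")
    body = f"<html><body>You sent a {method} with query '{query}'</body></html>"
    # The blank line after the headers is mandatory; omitting it was a
    # classic cause of the dreaded "500 Internal Server Error".
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    # Everything written to stdout becomes the HTTP response.
    print(render(), end="")
```

Note that nothing here is a function the server calls — the whole program runs top to bottom once per request, which is precisely the model every successor technology abandoned.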

The Timeline: Every Technology That Replaced CGI

No single technology killed CGI. Instead, a series of innovations — each addressing specific CGI weaknesses — gradually made it obsolete. Here is each one, in chronological order, with the problem it solved and the problems it left behind.

FastCGI (1996) — Open Market

What it solved: FastCGI was the most conservative replacement for CGI. Developed by Mark Brown at Open Market, Inc., it kept the basic CGI model — a separate process handles requests — but made one critical change: the process stayed running between requests. Instead of the web server forking a new process for each request, it sent requests to a long-running FastCGI daemon over a Unix socket or TCP connection.

This meant the interpreter loaded once, modules loaded once, database connections could persist, and the per-request overhead dropped from hundreds of milliseconds to single-digit milliseconds. Existing CGI scripts could often be converted to FastCGI with minimal code changes.
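A toy version of the idea, with Python's `socketpair` standing in for the Unix socket and newline-delimited text standing in for FastCGI's binary record protocol:

```python
import socket
import threading

def worker(conn):
    # In a real FastCGI daemon, the startup cost (interpreter, modules,
    # database connections) is paid once here, then amortized over requests.
    with conn:
        buf = b""
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                conn.sendall(b"200 OK for " + line + b"\n")

def recv_line(sock):
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(1024)
    return buf.decode().strip()

server_side, client_side = socket.socketpair()
threading.Thread(target=worker, args=(server_side,), daemon=True).start()

# The "web server" side: forward three requests to the long-running worker.
responses = []
for path in [b"/a", b"/b", b"/c"]:
    client_side.sendall(path + b"\n")
    responses.append(recv_line(client_side))
client_side.close()
print(responses)
```

Three requests, one worker process, zero forks — the essential FastCGI trade: slightly more moving parts in exchange for paying startup costs once.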

What remained unsolved: FastCGI still required a separate process (or pool of processes) to be managed alongside the web server. Configuration was more complex than CGI. The protocol was more obscure than simply writing to stdout. But FastCGI proved remarkably durable. PHP-FPM (FastCGI Process Manager), released in 2004 and included in PHP core since 5.3.3 (2010), is the standard way to run PHP with Nginx today. Every WordPress site running on Nginx uses FastCGI. Given PHP's market share and FPM's ubiquity, FastCGI arguably carries more web traffic in 2026 than any other CGI successor.

mod_perl (1996) — Doug MacEachern

What it solved: Doug MacEachern's mod_perl took a radically different approach from FastCGI. Instead of keeping a separate process running, it embedded the entire Perl interpreter directly inside the Apache web server process. When Apache started, it loaded Perl. When a request came in for a Perl handler, there was no fork, no exec, no process creation at all — the Perl code ran inside the same process that was handling the HTTP connection.

The performance improvement was dramatic: 10x to 100x faster than standard CGI for the same Perl code. Perl scripts were compiled once on first request and cached in memory as bytecode. Database connections persisted across requests via Apache::DBI. Developers could write Apache request handlers, authentication modules, and content filters entirely in Perl.

mod_perl powered some of the highest-traffic Perl sites of the late 1990s and 2000s, including early versions of Amazon.com, Ticketmaster, and ValueClick (later Conversant). The mod_perl project demonstrated that Perl could handle enterprise-scale traffic when freed from CGI's process-per-request constraint.

What remained unsolved: mod_perl was complex to configure and deploy. Because Perl ran inside Apache, a memory leak in any Perl script leaked memory in the web server itself. Each Apache child process that loaded mod_perl consumed significantly more memory than a plain Apache process. And mod_perl was tightly coupled to Apache — when Nginx began gaining market share in the late 2000s, mod_perl could not follow.

ASP — Active Server Pages (1996) — Microsoft

What it solved: Microsoft released Active Server Pages with IIS 3.0 in December 1996. ASP brought server-side scripting to the Windows ecosystem using VBScript or JScript (Microsoft's JavaScript implementation) embedded directly in HTML files with <% %> delimiters. For Windows-based organizations — which constituted a significant portion of corporate IT — ASP provided a familiar development environment integrated with Microsoft's tools: Visual InterDev, SQL Server, and Windows NT domain authentication.

ASP ran inside the IIS process, eliminating CGI's process-per-request overhead. It supported session state, application-level variables, and COM object integration. For developers already working in the Microsoft ecosystem, ASP was dramatically easier than setting up Perl CGI on a Unix server.

What remained unsolved: Vendor lock-in. ASP ran exclusively on Windows with IIS. VBScript was a limited scripting language compared to Perl or even early PHP. Microsoft eventually replaced classic ASP with ASP.NET (2002), which itself evolved through Web Forms, MVC, Web API, and eventually ASP.NET Core (2016, cross-platform). Classic ASP pages with .asp extensions still run on legacy corporate intranets worldwide.

PHP (1995–1998) — Rasmus Lerdorf, Zeev Suraski, Andi Gutmans

What it solved: PHP solved CGI's biggest problem for the average webmaster: complexity. Rasmus Lerdorf created PHP/FI ("Personal Home Page / Forms Interpreter") in 1995 as a set of C programs to track visits to his online resume. By 1997, Zeev Suraski and Andi Gutmans had rewritten the parser from scratch, creating PHP 3 — a real programming language that could be embedded directly in HTML files.

The key innovation was radical simplicity. A PHP file was an HTML file with code blocks (<?php ... ?>) inserted wherever dynamic content was needed. You placed the .php file anywhere in the web root — no cgi-bin directory, no chmod 755, no shebang line, no sendmail path configuration. The web server (Apache with mod_php) automatically recognized and processed it. For webmasters who had struggled with CGI's configuration rituals, PHP felt like magic.

PHP's learning curve was essentially zero for anyone who already knew HTML. You could start with a single line of dynamic code inside an otherwise static page and gradually add more logic. This bottom-up approach to programming — starting from the HTML template rather than from a script that generated HTML — was the opposite of CGI's top-down model and proved enormously more accessible.
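The code-islands-in-HTML model can be illustrated with a toy processor — `<?py ... ?>` is an invented delimiter for this sketch (PHP's real one is `<?php ... ?>`), and each island is evaluated as a Python expression:

```python
import re

# Toy illustration of the PHP model: an HTML file with small code islands,
# processed in place. The page stays a page; the code is the exception.

def render(template: str, context: dict) -> str:
    def evaluate(match):
        # Evaluate the island as an expression against the page's variables.
        return str(eval(match.group(1), {}, context))
    return re.sub(r"<\?py\s+(.+?)\s+\?>", evaluate, template)

page = "<html><body>Hello, <?py name.title() ?>! Visits: <?py visits + 1 ?></body></html>"
print(render(page, {"name": "ada", "visits": 41}))
# -> <html><body>Hello, Ada! Visits: 42</body></html>
```

Contrast this with the CGI model, where the program is primary and the HTML is string output — PHP inverted that relationship, which is exactly what made it approachable for people who started from the page.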

By 2000, PHP powered more websites than Perl CGI. By 2004, PHP 5 introduced proper object-oriented programming and the PDO database abstraction layer. As of 2026, PHP runs approximately 77% of all websites with a known server-side language (W3Techs data), driven largely by WordPress, which itself powers over 40% of all websites. PHP did not just replace CGI — it became the most successful server-side web language in history.

What remained unsolved: PHP's early versions had serious security issues. The register_globals directive (enabled by default until PHP 4.2.0 in 2002) automatically created variables from GET/POST parameters, making variable injection attacks trivial. PHP's type coercion led to comparison bugs. The language's "just make it work" philosophy sometimes prioritized convenience over correctness. Many of these issues were addressed in PHP 7 (2015) and PHP 8 (2020), which brought strict typing, JIT compilation, and significant performance improvements.

Java Servlets (1997) — Sun Microsystems

What it solved: Sun Microsystems introduced the Java Servlet API in June 1997 with Java Web Server 1.0. Servlets were Java classes that ran inside a persistent Java Virtual Machine (JVM) container — initially Java Web Server, later Apache Tomcat (1999), JBoss (1999), and WebLogic. The JVM started once and stayed running indefinitely. Each HTTP request was handled by a new thread within the JVM, not a new process. Thread creation was orders of magnitude cheaper than process creation.

Servlets brought enterprise-grade features that CGI could never offer: connection pooling, session management, security constraints, and a well-defined lifecycle (init, service, destroy). Java's static type system caught errors at compile time rather than at runtime in production. The Java ecosystem provided standardized APIs for everything: JDBC for databases, JNDI for naming services, JMS for message queues.

Servlets evolved into JavaServer Pages (JSP), then frameworks like Struts (2000), Spring MVC (2002), and eventually Spring Boot (2014). The enterprise Java ecosystem became the backbone of banking, insurance, telecommunications, and government systems worldwide. If CGI was the technology of hobbyist webmasters, Java Servlets were the technology of Fortune 500 IT departments.

What remained unsolved: Complexity. Deploying a Java web application required configuring a servlet container, creating WAR files, writing XML deployment descriptors (until Servlet 3.0 brought annotation-based configuration in 2009), and understanding a deep stack of abstractions. The "Hello World" in Java Servlets required a class definition, method overrides, and build configuration — compared to a single-line Perl CGI script. This overhead was justified for large enterprise applications but prohibitive for small websites and individual developers.

Ruby on Rails (2004) — David Heinemeier Hansson

What it solved: David Heinemeier Hansson (DHH) extracted Ruby on Rails from Basecamp and released it in July 2004. Rails did not introduce a new execution model — early deployments ran on FastCGI or WEBrick, later on application servers like Mongrel, Unicorn, and Puma behind a reverse proxy. Its revolution was developer productivity.

Rails introduced "convention over configuration" to mainstream web development. A single command (rails generate scaffold Post title:string body:text) generated a complete CRUD interface with database migration, model, controller, views, routes, and tests. Where a Perl CGI developer spent hours writing SQL, form handling, and HTML generation by hand, a Rails developer had a working prototype in minutes.

Rails also introduced patterns that became industry standard: MVC architecture enforced by directory structure, database migrations (versioned schema changes), RESTful routing, Active Record ORM, and an integrated testing framework. These patterns were not invented by Rails — most came from the Smalltalk and Java communities — but Rails made them accessible and practical.

The impact extended beyond Ruby. Django (Python, 2005) adopted similar patterns. Laravel (PHP, 2011) brought Rails-style conventions to PHP. Express.js (Node.js, 2010) adopted middleware pipelines. Rails demonstrated that framework design mattered as much as language performance.

What remained unsolved: Ruby was significantly slower than C, Java, or even PHP for CPU-intensive tasks. Rails applications consumed substantial memory. The "magic" of convention over configuration sometimes made debugging difficult when conventions were violated. But for the typical web application where most time was spent waiting for database queries and network I/O, Ruby's CPU performance was rarely the bottleneck.

Django and Flask (2005–2010) — Python Web Frameworks

What it solved: Adrian Holovaty and Simon Willison released Django in July 2005, extracted from the Lawrence Journal-World newspaper's online operations. Flask, created by Armin Ronacher, followed in 2010 as a lightweight alternative. Both ran on WSGI (Web Server Gateway Interface, PEP 3333), Python's standard for web server-to-application communication.

Django took the "batteries included" approach: built-in ORM, admin interface, authentication system, form handling, template engine, and security middleware. Flask took the opposite approach: a minimal core with extensions for everything. Together, they made Python — already the dominant language in scientific computing, data analysis, and system administration — a first-class web development language.

The WSGI standard itself was significant. Published as PEP 333 in 2003 (updated as PEP 3333 for Python 3), WSGI defined a clean interface between web servers and Python applications. It replaced the ad-hoc CGI-style integration that early Python web frameworks used and enabled frameworks to work with any WSGI-compatible server (Gunicorn, uWSGI, mod_wsgi). ASGI (Asynchronous Server Gateway Interface) extended this for async frameworks like FastAPI and Django Channels.
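The interface itself is small enough to show whole — and it wears its CGI ancestry openly, since the `environ` dict keeps CGI's variable names (`REQUEST_METHOD`, `QUERY_STRING`, `PATH_INFO`). A minimal PEP 3333 application:

```python
# A minimal WSGI application: a callable taking the CGI-style environ dict
# and a start_response callback, returning an iterable of body bytes.

def app(environ, start_response):
    body = f"Hello from {environ['PATH_INFO']}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Any WSGI server (Gunicorn, uWSGI, mod_wsgi) can host this callable.
# Exercised by hand, with a capturing stand-in for the server's callback:
captured = {}
def start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

result = b"".join(app({"PATH_INFO": "/demo", "REQUEST_METHOD": "GET"}, start_response))
print(captured["status"], result)
```

The decoupling is the point: the same `app` runs unchanged under any compliant server, which is what freed Python frameworks from server-specific integration code.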

What remained unsolved: Python's Global Interpreter Lock (GIL) limited true multi-threaded parallelism, making worker-per-request models (Gunicorn with multiple workers) necessary for CPU-bound workloads. This was partially addressed by ASGI and async frameworks, and Python 3.13 (2024) introduced an experimental free-threaded mode.

Node.js (2009) — Ryan Dahl

What it solved: Ryan Dahl presented Node.js at JSConf EU in November 2009. Node.js did something no previous CGI replacement had done: it eliminated the web server as a separate piece of software. A Node.js application was its own HTTP server. Where CGI required Apache or NCSA HTTPd to receive requests and forward them to scripts, and where PHP required Apache or Nginx as a front end, a Node.js application listened on a port and handled HTTP connections directly.

Node.js used Google's V8 JavaScript engine and an event-driven, non-blocking I/O model. Instead of creating a thread or process per request, Node.js processed all requests in a single thread using an event loop. When a request needed to wait for a database query, file read, or network call, Node.js registered a callback and moved on to the next request. This architecture could handle thousands of concurrent connections on a single process with minimal memory overhead.
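The single-threaded, non-blocking idea can be sketched with Python's asyncio, which adopted the same event-loop model Node.js popularized. Three "requests" each wait 0.1 seconds on simulated I/O; the loop overlaps the waits instead of serializing them:

```python
import asyncio
import time

async def handle_request(name):
    await asyncio.sleep(0.1)   # stands in for a database or network call
    return f"response for {name}"

async def main():
    start = time.perf_counter()
    # gather() runs all three handlers concurrently on one thread:
    # while one awaits its "I/O", the loop advances the others.
    results = await asyncio.gather(*(handle_request(f"/r{i}") for i in range(3)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # ~0.1 s total, not 0.3 s
```

No threads were created and no process forked — the same property that let early Node.js servers hold thousands of idle connections in a few megabytes of memory.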

The second revolution was JavaScript everywhere. Front-end developers who already knew JavaScript could now write server-side code in the same language. This unified the web development stack and gave rise to "full-stack JavaScript" with frameworks like Express.js (2010), Meteor (2012), and later Next.js (2016). The npm package registry grew to become the largest software registry in the world, surpassing even CPAN and PyPI.

What remained unsolved: Early Node.js suffered from "callback hell" — deeply nested callbacks for sequential async operations. This was largely resolved by Promises (ES2015) and async/await (ES2017). CPU-intensive operations could block the event loop, requiring worker threads or external services. Error handling in async code was less intuitive than in synchronous languages. But for I/O-bound web applications — which most web applications are — Node.js proved remarkably efficient.

Serverless Functions (2014+) — AWS Lambda, Google Cloud Functions, Cloudflare Workers

What it solved: AWS Lambda launched in November 2014 and introduced a fundamentally new model: functions as a service (FaaS). Developers wrote individual functions — not applications, not servers — and uploaded them to a cloud platform. The platform handled everything else: provisioning compute resources, scaling to handle traffic, and billing per invocation (typically per 100ms of execution time).

Google Cloud Functions (2016), Azure Functions (2016), and Cloudflare Workers (2017) followed. Each offered the same core promise: write code, deploy it, and never think about servers. Auto-scaling was automatic — from zero requests to millions, with no configuration. Pay-per-execution meant zero cost when there was no traffic, unlike traditional servers that ran (and cost money) 24/7 whether they had visitors or not.
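The unit of deployment is just a function. The sketch below is modeled on AWS Lambda's Python handler signature (event in, response dict out); the event shape shown is the API Gateway proxy format, and the field names beyond that are illustrative:

```python
import json

# Shape of a function-as-a-service handler. There is no server code anywhere:
# the platform receives the HTTP request, builds `event`, and calls this.

def handler(event, context=None):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally it is just a function call -- the same fire-and-forget shape as CGI.
print(handler({"queryStringParameters": {"name": "CGI"}}))
```

Squint and this is a CGI script with a dict instead of stdout: one request in, one response out, no state carried between invocations.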

Cloudflare Workers took this further by running functions at the network edge — in over 300 data centers worldwide — reducing latency by executing code closer to the end user. Where a CGI script on a single server in one data center might take 200ms for a user on the other side of the world, a Cloudflare Worker executing at the nearest edge location could respond in under 10ms.

Full circle: Serverless is sometimes called “CGI’s grandchild” — the same stateless, request-response pattern from 1993, rebuilt for cloud-scale computing with three decades of infrastructure innovation underneath.

What remained unsolved: "Cold starts" — the delay when a function's container needs to be initialized — echoed CGI's original process-startup problem, though at milliseconds rather than hundreds of milliseconds. Vendor lock-in became a concern as applications were built on platform-specific APIs. Debugging distributed serverless functions was harder than debugging a monolithic application. And the stateless, ephemeral nature of serverless functions required external services (databases, caches, queues) for any persistent state — a constraint familiar to anyone who wrote CGI scripts in 1996.

Year | Technology         | Creator                  | Execution Model                          | Still Used (2026)
1993 | CGI                | NCSA / Rob McCool        | Process per request (fork+exec)          | Legacy only
1996 | FastCGI            | Open Market / Mark Brown | Persistent daemon + socket               | Yes (PHP-FPM)
1996 | mod_perl           | Doug MacEachern          | Embedded in Apache process               | Rare
1996 | ASP                | Microsoft                | Embedded in IIS process                  | Legacy (ASP.NET Core active)
1997 | PHP (mod_php)      | Rasmus Lerdorf           | Embedded in Apache / FPM daemon          | Yes (77% of web)
1997 | Java Servlets      | Sun Microsystems         | Persistent JVM, thread per request       | Yes (Spring Boot)
2004 | Ruby on Rails      | David Heinemeier Hansson | Application server (Puma/Unicorn)        | Yes
2005 | Django             | Adrian Holovaty          | WSGI/ASGI application server             | Yes
2009 | Node.js            | Ryan Dahl                | Event loop, app is the server            | Yes
2014 | AWS Lambda         | Amazon Web Services      | Function per request, managed containers | Yes
2017 | Cloudflare Workers | Cloudflare               | Edge functions, V8 isolates              | Yes

The Irony: Serverless Is CGI's Grandchild

There is a deep irony in the arc from CGI to serverless. Consider the two models side by side:

CGI (1993)
  1. HTTP request arrives at web server
  2. Server forks a new process
  3. Process runs script
  4. Script writes output to stdout
  5. Server returns output as HTTP response
  6. Process is destroyed
Serverless (2014+)
  1. HTTP request arrives at edge network
  2. Platform provisions a container/isolate
  3. Container runs function
  4. Function returns a response object
  5. Platform returns response to client
  6. Container is recycled or destroyed

The conceptual model is identical: receive request, execute code, return response, release resources. The execution is fire-and-forget. There is no persistent application process between requests (conceptually, even if platforms optimize with container reuse). The unit of deployment is a function, not an application server.

What changed is everything beneath the abstraction. CGI forked a process on a single physical server. Serverless provisions a container across a global network of data centers. CGI's cold start was a fork() + exec() that took 200–500ms. A Cloudflare Worker's cold start is a V8 isolate initialization that takes under 5ms. CGI scaled by buying a bigger server. Serverless scales automatically to handle any traffic volume.

The web development industry spent twenty years building persistent application servers to escape CGI's request-per-process model — and then, with serverless, it circled back to the same fundamental pattern. The difference is that in 2026, the "process" is a container, the "server" is a global network, and the "cgi-bin directory" is a cloud deployment pipeline. The abstraction won. The implementation was rewritten from scratch.

What Happened to Perl

Any discussion of what replaced CGI inevitably raises the question: what happened to Perl? The language that was synonymous with web development from 1995 to 2002 seemed to disappear from the conversation. The reality is more nuanced.

Perl is still actively developed. Perl 5.40 was released in June 2024. Perl 5.38 (2023) introduced a new object-oriented system with the class keyword. The Perl Toolchain Summit — an annual gathering of core developers and CPAN maintainers — continues to be held (Vienna in 2025, with future events planned). CPAN still hosts over 200,000 modules. The Perl Foundation funds ongoing development.

Perl still runs in production at scale. Booking.com, one of the world's largest travel platforms, runs a massive Perl codebase. DuckDuckGo's backend was originally built in Perl. cPanel, the most widely used web hosting control panel, is written in Perl. Movable Type, an influential early blogging platform, was Perl. Many bioinformatics pipelines in genomics research are written in Perl, leveraging its text-processing strength for DNA sequence analysis.

CGI.pm was removed from the Perl core. In Perl 5.22 (June 2015), the CGI.pm module was removed from the core distribution. It remains available on CPAN for anyone who needs it, but its removal was a deliberate signal: CGI is no longer the recommended way to build web applications in Perl. Modern Perl web development uses frameworks like Mojolicious (a real-time web framework with built-in HTTP server), Dancer2 (a lightweight Sinatra-inspired framework), and Catalyst (a full-featured MVC framework). These frameworks use PSGI/Plack (Perl's equivalent of Python's WSGI), not CGI.

What changed was the default. In 1998, if you wanted to build a dynamic website, Perl was the obvious first choice. In 2006, the default shifted to PHP, Ruby, or Python depending on the community. In 2016, JavaScript (Node.js) became the new default for many developers. Perl did not become a bad language — it became a non-default language. In an industry where "what's the quickest way to get started" drives adoption, falling off the default list has compounding effects: fewer tutorials, fewer Stack Overflow answers, fewer junior developers, fewer new libraries.

Perl's story is not one of failure but of succession. It solved the right problems at the right time, shaped the web frameworks that followed it, and continues to run in the environments where its strengths — text processing, system administration, glue code — remain unmatched. It is not dead. It is just not the default anymore.

Frequently Asked Questions

What replaced CGI scripts?

CGI scripts were replaced by a succession of technologies, each solving specific limitations. FastCGI (1996) kept CGI processes running as persistent daemons. mod_perl (1996) embedded Perl inside Apache. PHP (1997–1998) eliminated the need for cgi-bin entirely by mixing code with HTML. Java Servlets (1997) ran inside a persistent JVM. ASP (1996) brought server scripting to Windows/IIS. Ruby on Rails (2004) and Django (2005) introduced convention-based MVC frameworks. Node.js (2009) made the application its own web server. Serverless platforms like AWS Lambda (2014) and Cloudflare Workers (2017) eliminated the server concept entirely. The common thread: every replacement found a way to avoid spawning a new OS process for each request.

Is CGI still used today?

CGI is still technically supported by Apache (mod_cgi, mod_cgid) and can run behind Nginx via fcgiwrap, but it is not used for new projects. CGI survives in legacy systems: government websites, university departments, internal corporate tools, and scientific computing environments where Perl CGI scripts have been running reliably for 15–25 years. Rewriting these working scripts in a modern framework would cost time and money with no functional benefit. However, CGI.pm was removed from the Perl core distribution in version 5.22 (2015), signaling that even the Perl community considers CGI a legacy approach.

What is the difference between CGI and PHP?

CGI is a protocol that spawns a new operating system process for every HTTP request, runs a script (typically Perl), and destroys the process when done. PHP is a language and runtime that, when run as an Apache module (mod_php) or via PHP-FPM, keeps the interpreter loaded in memory between requests. CGI scripts live in a dedicated cgi-bin directory and require execute permissions (chmod 755). PHP files are placed anywhere in the web document root and are parsed automatically by the server. In benchmarks, PHP running through mod_php or PHP-FPM is roughly 10x faster than an equivalent Perl CGI script because it eliminates the process creation overhead that dominated CGI's response time.

Why were CGI scripts so slow?

CGI scripts were slow because the web server forked a new operating system process for every HTTP request. Each request required the OS to call fork() and exec(), load the interpreter (e.g., Perl) into memory, read and parse the script file from disk, load required modules (CGI.pm, DBI, etc.), execute the code, write output to stdout, and destroy the process. A typical Perl CGI script consumed 5–15 MB of memory and 200–500 milliseconds of startup time per request on 1990s hardware. With 100 concurrent users, that meant 100 separate Perl processes competing for RAM and CPU — quickly exhausting a server with 128–512 MB of total memory.

What is FastCGI and is it still used?

FastCGI is a protocol developed by Mark Brown at Open Market, Inc. in 1996 as a high-performance replacement for CGI. Instead of spawning a new process for each HTTP request, FastCGI keeps the application running as a persistent daemon process. The web server communicates with this daemon over a Unix socket or TCP connection, sending requests and receiving responses without the overhead of process creation. FastCGI is still widely used in 2026 — PHP-FPM (FastCGI Process Manager) is the standard method for running PHP with Nginx, powering millions of WordPress sites and other PHP applications worldwide.

How is serverless different from CGI?

Serverless and CGI share the same conceptual model — receive a request, execute code, return a response, release resources — but the implementation is fundamentally different. CGI forked a new OS process on a single physical server, consuming local RAM and CPU, with startup times of 200–500ms. Serverless platforms like AWS Lambda, Google Cloud Functions, and Cloudflare Workers run functions in managed containers or V8 isolates across globally distributed infrastructure, with auto-scaling from zero to millions of requests, cold starts measured in single-digit milliseconds, and per-execution billing. Serverless is sometimes called "CGI's grandchild" — the same stateless, request-response pattern, rebuilt for cloud-scale computing with three decades of infrastructure innovation underneath.

Related Reading

What is CGI? The Technology That Powered the Early Web

The complete guide to how CGI worked, from environment variables to cgi-bin conventions.

FormMail

The most widely used CGI script ever written — a form-to-email gateway in Perl.

FormMail Security History

How FormMail's vulnerabilities shaped CGI security practices for the entire web.

History of Matt's Script Archive

The complete timeline of worldwidemart.com and the scripts that shaped the web.

Glossary: CGI

Quick reference definition for Common Gateway Interface and related terms.

Glossary: FastCGI

Quick reference for FastCGI protocol and its role in modern web infrastructure.