
How Patent Eligibility Became a Flowchart and What That Tells Us
Congress has spoken on patent eligibility exactly once, and briefly. Section 101 of the Patent Act is a single sentence long. This paragraph, intentionally kept short, still contains a few more words than the statute itself.
Today’s eligibility analysis features flowcharts, sub-steps, and prongs, culminating in phrases like “Step 2A, Prong 2,” which read more like IKEA furniture assembly instructions than formal statutory interpretation. What Section 101 has become is not the product of a single decision, agency, or moment of doctrinal clarity. It has evolved over decades, revised by contributors with different institutional roles and incentives, each confronting a different cutting-edge technology. Courts refined the doctrine, destabilizing established drafting and examination practices in the process; agencies then operationalized the refined doctrine; and now USPTO guidance attempts to stabilize the resulting precedent.
The resulting flowchart is no longer law in its original form. Rather, it is a quilt of judicial anxieties, patched over time to function at scale. In other words: Step 2A, Prong 2 is not a doctrine; it is simply the latest update.
I. The Statute: Where It Started (1952)
Modern Section 101 begins with the Patent Act of 1952, which framed patent eligibility in broad terms: any new and useful process, machine, manufacture, or composition of matter. Congress enumerated no exceptions and specified no steps or prongs. There is no flowchart, regardless of which decoder ring you apply to § 101.
At this stage, the system is linear. Congress enacts the statute. Courts and the USPTO interpret and apply it. If your invention can be characterized as a process, machine, manufacture, or composition of matter, you have generally met the criteria for patent eligibility.
II. 1970s–1980s: Judicial Carve-Outs Appear (Without a Framework)
However, the simple, binary eligibility inquiry of “did you claim a process, machine, manufacture, or composition of matter?” did not last.
Beginning in the 1970s, the Supreme Court began recognizing implicit exceptions to § 101: laws of nature, natural phenomena, and abstract ideas. These exclusions, however, appear nowhere in the statute. They are judicial creations, driven more by concern about preemption than by the statutory text.
In Gottschalk v. Benson,[i] the Court held that claims to a mathematical algorithm were ineligible because such claims would effectively preempt the use of a fundamental principle. The Court acknowledged the difficulty of the task, noting that the “technological problems tendered” would “[need] action by Congress.” Yet, like Seymour waiting on Philip J. Fry in Futurama,[*] this action ultimately never came.
Six years later, Parker v. Flook[ii] revisited these preemption concerns but did not resolve how they should be applied. The case underscored how uncertain the doctrine had become: multiple judges and justices believed the claims were patent-eligible, even as the Court held otherwise. Eligibility was becoming increasingly subjective… and ambiguous.
Then, the Court shifted in the opposite direction.
In Diamond v. Chakrabarty[iii] and Diamond v. Diehr,[iv] the Supreme Court read Section 101 expansively and warned against reading limitations into the statute that Congress had not expressed. Both opinions invoked Congress’s intent that patentable subject matter “include anything under the sun that is made by man.” While these decisions reassured innovators, they failed to reconcile the growing tension between broad statutory language and judicially created exclusions.
In this era, courts were attempting to draw boundaries without a map, and the eligibility doctrine became highly fact-dependent. With no coherent framework to codify, the USPTO offered little formal guidance, leaving the field to resemble a Calvinball rulebook. Examiners, therefore, relied primarily on precedent and judgment. The carve-outs existed, but the framework did not.
What emerged from this era was not a test but a set of instincts.
III. 1990s–Early 2000s: The Federal Circuit Simplifies
Unfortunately, such instincts did not scale.
By the 1990s, the patent eligibility doctrine was buckling, and the cracks were beginning to show. The Supreme Court had created carve-outs without a framework, lower courts were policing boundaries on a case-by-case basis, and software-related inventions were becoming impossible to ignore. The gap between the statutory text and the eligibility doctrine was no longer academic, but operational.
Faced with a doctrine that was buckling under its own weight, the Federal Circuit intervened to provide the structural support the Supreme Court had omitted. In State Street Bank & Trust Co. v. Signature Financial Group[v], the Federal Circuit articulated what became the defining eligibility standard of the era: whether the claimed invention produced a “useful, concrete, and tangible result.”
The appeal was obvious. The phrase sounded practical—limiting without being hostile—and, most importantly, manageable. You could explain it to examiners, apply it to software, and move cases through the system. For a time, this worked.
Under the “useful, concrete, and tangible result” test, eligibility became permissive again. Business methods were no longer categorically suspect, and software claims could clear Section 101 without contortions. The flowchart still did not exist, because it did not yet seem necessary.
At the same time, the Federal Circuit gestured toward a more traditional constraint through the machine-or-transformation test, tethering eligibility to physical implementation or transformation. Together, these heuristics expanded patentable subject matter while reassuring courts that eligibility still had edges.
But slogans have a shelf life. The problem with “useful, concrete, and tangible” was not that it was wrong, but that it was untethered: the phrase appears nowhere in the statute and did nothing to resolve the judicial exceptions. It substituted ease of administration for doctrinal coherence. As software and business-method patents proliferated, criticism mounted that the test had drifted too far from Supreme Court precedent. The stage was set for another correction, one that would dismantle the slogan without replacing it with a workable alternative. The pendulum was about to swing back.
IV. Collapse of Simplification (2010–2014)
In rapid succession, the Supreme Court decided Bilski,[vi] Mayo,[vii] Myriad,[viii] and Alice.[ix] Each decision tightened eligibility, yet none supplied an administrable test. Instead, the Court emphasized abstraction, “directed to” inquiries, and the need for “significantly more,” while repeatedly disclaiming bright-line rules.
From the Court’s perspective, this was manageable. Judges can invalidate claims one case at a time, explaining outcomes through characterization and analogy. They were not required to make the doctrine scalable.
For examination, however, the effect was destabilizing: doctrinal disruption without operational guidance. “Directed to an abstract idea” was a conclusion, not a method. “Significantly more” was an aspiration, not a rule. Examiners were left to apply flexible, fact-intensive reasoning to hundreds of thousands of applications per year. This is where the linear system reached its breaking point.
V. Improvised Boundaries
After the collapse of the Soviet Union, Chinese border forces did not wait for a comprehensive geopolitical settlement before securing their frontier. They erected provisional barriers quickly and pragmatically, often without regard to settled borders, to impose control in the absence of a stable order. The goal was not coherence; it was administrability.
Section 101 followed the same pattern. As the eligibility doctrine lost its center, courts and the USPTO did not abandon boundaries; they drew them ad hoc. Preemption concerns here, tangibility requirements there, characterizations layered on top of characterizations. Each fix solved the immediate problem in front of it, while leaving the underlying instability untouched.
The map that resulted had no defined borders, only shifting perimeters.
VI. Guidance Steps In (2014–2019)
Faced with an unexaminable doctrine, the USPTO did what complex systems do under pressure: it decomposed the problem.
- Eligibility becomes steps.
- Steps become sub-steps.
- Sub-steps become prongs.
The Office issued a series of § 101 guidance documents, culminating in the 2019 Revised Patent Subject Matter Eligibility Guidance. Abstract ideas were sorted into enumerated groupings; the “directed to” inquiry became Step 2A; and integration into a practical application became its second prong, Step 2A, Prong 2.
None of this structure appears in the statute or in Supreme Court opinions; it was an administrative response to doctrinal instability. And it worked, at least operationally. Examiners had a framework they could cite. In writing an Office Action, you could live inside the flowchart, referring to steps, sub-steps, and prongs. Federal Circuit cases still mattered, but only as filtered through the MPEP.
This was not a mistake; it was survival.
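Reduced to its bare logic, the examination flowchart really is just a decision procedure. The sketch below is a deliberately minimal rendering of the MPEP § 2106 analysis, with each box collapsed into a boolean input; the function name and parameters are illustrative, and of course the entire difficulty of § 101 lives in how those booleans get decided, not in the branching itself.

```python
def eligible_under_101(
    is_statutory_category: bool,            # Step 1: process, machine, manufacture, or composition of matter?
    recites_judicial_exception: bool,       # Step 2A, Prong 1: abstract idea, law of nature, or natural phenomenon?
    integrates_practical_application: bool, # Step 2A, Prong 2: exception integrated into a practical application?
    adds_significantly_more: bool,          # Step 2B: inventive concept beyond the exception?
) -> bool:
    """A toy rendering of the USPTO eligibility flowchart as a decision procedure."""
    if not is_statutory_category:
        return False  # Step 1: not a statutory category -> ineligible
    if not recites_judicial_exception:
        return True   # Step 2A, Prong 1: no exception recited -> eligible
    if integrates_practical_application:
        return True   # Step 2A, Prong 2: practical application -> eligible
    return adds_significantly_more  # Step 2B decides what remains

# A claim reciting an abstract idea but integrating it into a
# practical application clears Prong 2 without reaching Step 2B:
print(eligible_under_101(True, True, True, False))  # True
```

That a working sketch fits in a dozen lines is rather the point: the structure is trivial; the judgment calls hidden inside each parameter are where the doctrine actually lives.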
VII. Courts Refine the Law, Without the Flowchart (2019–Present)
While the USPTO stabilizes examination through guidance, courts are refining § 101 independently. The Federal Circuit repeatedly emphasizes that USPTO guidance is not binding, and the result is a familiar but unsettling phenomenon: when you read a set of claims in the abstract, you often cannot predict the outcome.
Yet, when you read a judge’s characterization of those same claims, the ending writes itself. Once claims are described as “directed to organizing human activity,” they fail immediately. The same claims, described as “an improvement to computer functionality,” often survive. The language is identical; the outcome turns on the story the court tells about what the invention is really doing.
This is precisely how Section 101 bifurcated into two citation cultures:
- Prosecutors cite the MPEP to satisfy the administrative flowchart.
- Litigators cite the Federal Circuit to control the narrative framing.
Absent a common, binding framework, Section 101 will continue to oscillate between administrative consistency and judicial recharacterization.
VIII. Step 2A, Prong 2 as Legal Archaeology
Ultimately, this is how we ended up with Step 2A, Prong 2 today.
No legislature designed this; no court announced it. It emerged the way all bureaucratic complexity emerges: through iterative repair. Each prong corresponds to a judicial anxiety (preemption, overgeneralization, or intangibility) that the courts themselves never managed to operationalize.
The flowchart is scaffolding. It is necessary to build, but it is not part of the finished structure. The problem for § 101 is that the scaffolding never comes down.
None of its layers is arbitrary in isolation, but stacked together they produce structure without stability. That is what has led Section 101 to feel less like law and more like an instruction manual.
IX. What the Timeline Reveals
Seen as a whole, Section 101 is not a single doctrine. It is a recursive loop:
- Congress enacts a broad statute.
- Courts create exceptions.
- The USPTO operationalizes those exceptions through guidance.
- Courts refine doctrine without regard to guidance.
- The USPTO updates guidance to keep up.
- Go back to Step 2.
Each cycle adds more complexity without resolving the underlying ambiguity. Step 2A, Prong 2 is simply the fossil record of this process.
Conclusion: You Can’t Fix Section 101 by Adding Another Prong
Flowcharts appear when a law is asked to do too much with too little agreement. They are a sign of strain, not clarity.
If the Section 101 problem is structural and recursive, certain fixes should be confidently ruled out. Adjusting USPTO guidance by adding steps, prongs, or examples cannot ensure system-wide coherence; at best, it provides temporary stability in examination. Likewise, Supreme Court “clarifications” that preserve open-textured standards while disclaiming enforceable rules simply restart the cycle. Each intervention addresses local strain without changing the system.
Changing these dynamics would require interventions that are qualitative, not merely matters of degree. That might involve:
- Congressional codification that abrogates or redefines the judicial exceptions themselves.
- Implementing technology-specific safe harbors, particularly for software or artificial intelligence.
- Shifting some eligibility determinations to specialized or post-grant proceedings with distinct operational standards.
The point is not that any solution is a panacea, but that only structural interventions can interrupt a recursive process. For Section 101, this will mean moving beyond the fossil record of iterative repair and finally rebuilding a stable foundation for the law itself.
[i] Gottschalk v. Benson, 409 U.S. 63 (1972).
[ii] Parker v. Flook, 437 U.S. 584 (1978).
[iii] Diamond v. Chakrabarty, 447 U.S. 303 (1980).
[iv] Diamond v. Diehr, 450 U.S. 175 (1981).
[v] State St. Bank & Tr. Co. v. Signature Fin. Grp., Inc., 149 F.3d 1368, 1369 (Fed. Cir. 1998), abrogated by In re Bilski, 545 F.3d 943 (Fed. Cir. 2008).
[vi] Bilski v. Kappos, 561 U.S. 593 (2010).
[vii] Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66 (2012).
[viii] Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576 (2013).
[ix] Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208 (2014).
[*] Reference to Futurama (Season 4, Episode 7, “Jurassic Bark”), in which Fry’s dog, Seymour, waits faithfully for his return. https://www.youtube.com/watch?v=W6GDil0rGls