When the watcher wakes: version control, AI, and the dissolution of computing’s last priesthood

Version control is arguably computing’s most consequential yet least recognized infrastructure — a coordination technology that enables 180 million developers to collaborate across 630 million repositories, underpinning virtually every piece of software running civilization. Now, at this specific moment in 2026, a tool called Egregore proposes to abstract git away entirely through an AI interface layer, and the significance of this move runs far deeper than convenience. It represents the latest — and categorically most radical — instance of a recurring pattern in which powerful capabilities locked behind specialized knowledge get dissolved into general accessibility. But unlike every previous democratization wave, the complexity isn’t being removed or simplified through a visual interface. It’s being absorbed by a cognitive agent, and this distinction changes everything about what follows.

The implications span civilizational infrastructure, the sociology of technical priesthoods, the philosophy of tools and cognition, and a genuine tension between empowerment and dependency. What follows is a research compendium organized across these dimensions, drawing on the thinking of Linus Torvalds, Ivan Illich, Edwin Hutchins, Fred Brooks, Andrej Karpathy, Geoffrey Litt, Maggie Appleton, and others — intended as source material for a serious essay on what it means that git is being absorbed by AI at this particular inflection point.

Git as civilization’s invisible operating system

The history of version control is a history of expanding the boundaries of who can coordinate intellectual labor, and how. Dick Grune’s CVS (1986) replaced pessimistic locking with optimistic merging — multiple people could edit the same file and reconcile later. Subversion (2000) added atomic commits and directory versioning but remained centralized. Then in April 2005, after the BitKeeper licensing controversy, Linus Torvalds wrote git in roughly two weeks, and the topology of collaboration changed permanently.

Torvalds’ design philosophy was explicitly social, not merely technical. In his famous 2007 Google Tech Talk, he articulated the WWCVSND principle — “What Would CVS Never Do” — and framed distributed version control as fundamentally about trust architecture: “The way merging is done is the way real security is done. By a network of trust.” His cynicism was productive: “I don’t trust everybody. In fact, I am a very cynical and untrusting person. The whole point of being distributed is I don’t have to trust you.” Git’s distributed model meant no central authority, no permission required to fork, no single point of failure. Every clone was a complete repository. Branching was essentially free.

The consequences were civilizational in scale. The Linux kernel — 28+ million lines of code, contributed by 15,600 developers from 1,400+ companies — became, in the Linux Foundation’s words, “the largest collaborative project in the history of computing.” GitHub, built on git, now hosts 395 million public repositories and facilitated 1.12 billion contributions to open source in 2025 alone — the most active period in its history. In March 2025, 255,000 first-time contributors joined open source in a single month. Linux itself runs 90% of public cloud workloads, 99% of supercomputers, and 82% of the world’s smartphones. Torvalds noted wryly in his 2025 twentieth-anniversary interview that his daughter told him he’s better known in her CS lab for creating git than Linux, “because they actually use git for everything there.”

Karl Fogel’s Producing Open Source Software provides perhaps the deepest analysis of version control as governance infrastructure. His key insight: “Replicability implies forkability, and forkability implies consensus.” Because anyone can copy the entire repository and start a competing project, power in open-source governance is always contestable. Fogel writes: “Imagine a ruler whose subjects could copy her entire territory at any time and move to the copy to rule as they see fit. Would not such a ruler govern very differently from one whose subjects were bound to stay under her rule?” Version control doesn’t just track code — it enables a specific form of distributed governance where authority must persuade rather than merely pronounce.

Nadia Eghbal (now Asparouhova) strengthened this framing in her Ford Foundation report Roads and Bridges (2016) and her Stripe Press book Working in Public (2020), arguing that open-source software is public infrastructure analogous to roads and bridges — critical, undermaintained, and invisible until it breaks. The U.S. Bureau of Economic Analysis published a working paper treating open-source as an intangible capital asset, using GitHub data to estimate its resource cost. The analogy to double-entry bookkeeping — Luca Pacioli’s 1494 innovation that made modern commerce possible through accountability, error detection, and institutional memory of financial transactions — is striking. Version control provides the same capabilities for intellectual labor: attribution, accountability, perfect recall, and reversibility. Every commit is cryptographically hashed, timestamped, attributed, and contextualized. No other human coordination system has this combination of properties.
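The mechanics behind those properties are simple enough to sketch. Git names every object by the SHA-1 of a typed header plus the content itself, so identity and integrity are fused into a single identifier. A minimal sketch of how git hashes a blob (commits and trees use the same scheme with different headers):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git hashes "blob <size>\0<content>", not the raw file bytes,
    # so the object's type and length are baked into its identity.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo 'test content' | git hash-object --stdin`
print(git_blob_hash(b"test content\n"))
# d670460b4b4aece5915caf5c68d12f560a9fe3e4
```

Because a commit object embeds the hashes of its tree and its parent commits, tampering with any byte of history changes every downstream identifier — which is what makes the record's recall "perfect" rather than merely thorough.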

Five centuries of dissolving priesthoods

The abstraction of version control follows a pattern so consistent across centuries that it approaches a historical law. In every case, a specialized skill generates a professional class whose genuine expertise becomes entangled with incidental difficulty — bad tools, scarce materials, arcane notation — that inflates gatekeeping power beyond what intrinsic complexity warrants. Then someone builds a tool that separates the essential from the incidental, and the cycle plays out with remarkable regularity.

The printing press is the canonical example. Before Gutenberg (~1440), medieval scribes — primarily monks — controlled the reproduction of knowledge. A printed copy of Cicero cost a month’s salary by the 1490s; by 1500, 20 million books circulated across Europe. The Dominican friar Filippo de Strata characterized the press as a “whore” compared to the “virgin” pen — the exact quality-degradation argument that recurs at every democratization boundary. But within decades, Martin Luther’s pamphlets had produced six million copies of his writings, constituting nearly a third of all German-language publications. The press, as one historian notes, “did not eliminate hierarchy, but it destabilized monopolies over meaning.” Authority now had to persuade rather than merely pronounce.

The spreadsheet revolution shows the pattern in miniature. Dan Bricklin watched his Harvard accounting professor “laboriously create a financial model on a blackboard” and built VisiCalc (1979), transforming twenty hours of manual calculation into fifteen minutes. The tool offered functionality comparable to $20,000 financial modeling languages but required no programming. The consequences were immediate and unpredictable: Wall Street legend has it that “junk bond king Michael Milken once blamed the 1980s boom in hostile corporate takeovers on the inventors of VisiCalc” — because spreadsheet modeling enabled financiers to model previously unthinkable leveraged buyouts. Accountants weren’t eliminated; they were elevated from mechanical calculation to strategic analysis. The new scarcity was judgment, not arithmetic.

Desktop publishing (PageMaker, 1985), photography (Kodak’s “You press the button, we do the rest,” 1888, through smartphone cameras), and the WordPress-to-no-code progression all follow the identical five-stage structure: first, priesthood consolidation; second, a tool that strips incidental complexity; third, priestly resistance (typesetters mocked “ransom note” layouts; film photographers said digital lacked “soul”); fourth, an explosion of creation, including forms the priesthood never imagined; and finally, the surviving specialists elevating to higher-order work — art direction, strategic analysis, architectural thinking. The quality concern at each stage is always partially legitimate in the short term and always overwhelmed by the sheer volume and novelty of what the democratized capability unleashes.

Ivan Illich provided the deepest theoretical framework for this pattern in Tools for Conviviality (1973). His distinction between convivial tools (which enhance autonomous, creative human action) and manipulative tools (which structure dependency) maps precisely onto the priesthood dynamic. Illich identified the mechanism by which priesthoods form: institutions develop “self-sustaining, but otherwise counter-productive, expert knowledge” until they cross what he called the “second watershed” — where “the means overtake the ends” and humans become commodities for the institution to manipulate. His concept of “radical monopoly” — when a technology so dominates that non-users are excluded from participation — describes precisely the situation of non-developers in organizations built on version-controlled code. A convivial tool, by contrast, guarantees “the most ample and free access to the tools of the community.”

When intelligence replaces mechanism as the interface

Every previous democratization wave used what Alan Kay might call “Plan B.” In a remarkable 2013 TIME interview, Kay revealed that the original Dynabook vision at Xerox PARC included AI agents as the primary interface: “We invented the overlapping window, icons, etc., graphical-user interface at PARC and just concentrated on it when it became clear that the ‘helpful agent’ wasn’t going to show up in the decade of the ’70s.” The entire GUI paradigm — icons, windows, menus, direct manipulation — was a substitute for the intelligence-mediated interface they actually wanted. Every subsequent democratization wave (WYSIWYG, drag-and-drop, no-code) refined this substitute. They all share a fundamental characteristic: the abstraction is a fixed mapping. Click this button, get that result. Drag this element, see that change. The complexity is removed or hidden behind visual metaphors.

AI as an interface layer is categorically different. The complexity isn’t removed — it’s absorbed by a cognitive agent capable of reasoning about intent. The mapping between user instruction and system behavior is no longer fixed but dynamic, contextual, and probabilistic. This distinction was anticipated with extraordinary precision by the Shneiderman-Maes debate at CHI 1997. Ben Shneiderman championed direct manipulation — users desire control and predictability. Pattie Maes argued that as computing environments grew complex, “direct manipulation alone would prove insufficient,” proposing software agents as assistants. Shneiderman warned: “I think the intelligent agent notion limits the imagination of the designer.” The debate maps exactly onto the present tension: direct manipulation (complexity removed/hidden) versus agent-based interaction (complexity absorbed by intelligence). The difference was that in 1997, the agents weren’t smart enough. Now they are.

Geoffrey Litt, whose MIT PhD work focuses on malleable software, has articulated the key question most precisely: “How should we think about empowerment and agency vs. delegation and automation in the age of LLMs?” His 2023 essay argues that “soon all computer users will have the ability to develop small software tools from scratch” through LLMs, but warns that chat is an “essentially limited interaction mode” — even when the bot is infinitely capable. Telling ChatGPT to “trim the first 5 seconds of a video” works, but it’s inferior to dragging a timeline handle. Litt’s crucial insight: user interfaces still matter even when AI can do the work, because direct manipulation provides control, visibility, and feedback that delegation cannot. His 2025 concept of the “nightmare bicycle” — drawn from Andrea diSessa’s Changing Minds — warns against interfaces that paper over systematic structure with task-specific labels. A bicycle with “gravel mode” and “downhill mode” buttons instead of numbered gears is unusable in novel situations. AI risks creating nightmare bicycles when it hides the structure users need to understand.

Maggie Appleton, working at GitHub Next, contributes a complementary framework. Her central metaphor frames language models as “squishy” (probabilistic, fuzzy) versus traditional software’s “structured” (deterministic, precise) nature. She argues the chatbot-as-interface is a design failure: “This thing has no affordances! There are no knobs or door handles on this thing. This interface offloads a ton of cognitive labour to the user.” Her January 2026 essay on agent patterns identifies perhaps the deepest implication: when AI absorbs implementation complexity, design becomes the bottleneck: “When you have a fat stack of agents churning through code tasks, development time is no longer the bottleneck… Design becomes the limiting factor: imagining what you want to create and then figuring out all the gnarly little details.”

The metaphor landscape captures the stakes well. Steve Jobs called the computer a “bicycle for the mind” — amplification through human effort. Professor Jason Lodge at the University of Queensland reframed AI as an “e-bike for the mind”: it “can efficiently transport students to their destination — but potentially eliminate the valuable mental effort inherent in the learning process.” Litt warns of the “nightmare bicycle.” The essay’s implied metaphor — the chauffeur — captures what none of these fully do: the interface isn’t a mechanism but an agent with whom you negotiate. You can give a chauffeur vague directions. A chauffeur can interpret context. But you can’t see the road, can’t learn the route, and must trust.

The specific inflection: programming as a dissolving boundary

The abstraction of version control occurs within a larger moment in which programming itself is dissolving as a boundary. Andrej Karpathy’s Software 2.0 essay (2017) identified the first phase: neural networks replacing hand-coded rules, with the “programmer” writing data pipelines while gradient descent writes the actual code. His 2023 extension declared “the hottest new programming language is English.” Then on February 2, 2025, Karpathy’s viral post introduced “vibe coding”: “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works” — garnering 4.5 million views and becoming Collins Dictionary’s Word of the Year. A year later, on February 4, 2026, Karpathy reframed: the practice had matured into “agentic engineering,” in which “the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight.”

The historical parallel to high-level languages is precise and illuminating. When John Backus created FORTRAN (1957), it reduced required code by a factor of 20 — a thousand assembly instructions compressed to fifty lines. Resistance was fierce: “Many programmers doubted a compiler could ever match hand-coded assembly’s efficiency.” Ed Post’s satirical 1983 essay “Real Programmers Don’t Use Pascal” captures the priestly reflex perfectly: “If you can’t do it in Fortran, do it in assembly language. If you can’t do it in assembly language, it isn’t worth doing.” The companion piece, “The Story of Mel,” celebrates a programmer who refused to use an optimizing assembler because “You never know where it’s going to put things.” Most remarkably, Dijkstra wrote in 1975 that “projects promoting programming in ‘natural language’ are intrinsically doomed to fail” — a dismissal of the very capability that LLMs are now demonstrating. The priesthood has always proclaimed the impossibility of the next abstraction layer.

The current data tells a complex story. 65% of developers now use AI coding tools weekly. Y Combinator reported that 25% of its Winter 2025 startups had codebases that were 95% AI-generated. Microsoft and Google both report roughly 25% of their code is AI-generated. Cursor reached $100 million ARR in twelve months. Yet the METR study (July 2025) found experienced open-source developers were actually 19% slower with AI tools, while believing themselves 20% faster — a remarkable perceptual gap. GitClear’s longitudinal analysis found code refactoring dropped from 25% to under 10% while code duplication quadrupled. The productivity revolution is real but messier than the narrative suggests.

What is genuinely new is the concept Kevin Roose called “software for one” — bespoke tools tailored to individual needs, never worth building before because the cost exceeded the value for a single user. Jensen Huang proclaimed at Computex: “Everyone is a programmer now — you just have to say something to the computer.” Sam Altman argued “the world wants a gigantic amount more software, 100 times maybe a thousand times more software.” GitHub CEO Thomas Dohmke predicts a billion programmers within the decade, up from roughly 30 million professionals today. Whether these projections prove accurate or not, the direction is clear: the boundary between “people who code” and “people who don’t” is dissolving, and version control is one of the last remaining gatekeeping layers.

The egregore: when organizational memory becomes accessible

The name “Egregore” for an AI-mediated version control tool is philosophically precise in ways that reward unpacking. In the esoteric tradition, an egregore — from the Greek egrēgoros, meaning “wakeful” or “watcher” — is an autonomous psychic entity created by a collective group mind. First appearing in the Book of Enoch as angelic “Watchers” and developed in its modern esoteric meaning by Éliphas Lévi, the concept describes a thoughtform that arises from collective intention, gains quasi-autonomous existence, and then influences its creators in return — a feedback loop between group and emergent entity. Mark Stavish’s 2018 book Egregores: The Occult Entities That Watch Over Human Destiny provides the most comprehensive modern treatment.

The concept maps onto software repositories with uncanny precision. A codebase is literally a collective thoughtform — it emerges from the combined intentions, decisions, and knowledge of every contributor, yet takes on structural properties that no single person fully controls or comprehends. It constrains future decisions, establishes path dependencies, and shapes the thinking of everyone who works within it. The Freemasonic journal The Square Magazine makes the connection explicit: “The incorporated entity is an egregore… As the entity grows in power, it is fashioned by everyone within the company and soon takes on a life of its own.”

The theoretical grounding for this comes from Edwin Hutchins’ distributed cognition framework, developed in Cognition in the Wild (1995). Hutchins’ central thesis: “Cultural activity systems have cognitive properties of their own that differ from the cognitive properties of the individuals who participate in them.” His ethnographic study of Navy ship navigation demonstrated that the navigation team functions as a cognitive system with properties irreducible to any individual member. Cognition, he argued, is not confined to brains but distributed across people, artifacts, and environments. His companion paper, “How a Cockpit Remembers Its Speeds,” showed that the cockpit system “remembers” through the coordinated interaction of pilots, speed cards, instruments, and verbal callouts — “a continual interaction with a world of meaningful structure.”

Applied to software teams, this framework reveals version control as the primary infrastructure of distributed cognition. Flor and Hutchins’ 1991 study of software teams found that “complex cognitive systems engaged in software development tasks possess cognitive properties distinct from the cognitive properties of the individual programmers.” Walsh and Ungson’s foundational organizational memory framework (1991) identified five “retention bins” — individuals, culture, transformations, structures, ecology — and a git repository spans all of them simultaneously. Commit history encodes transformational memory. Code review discussions encode cultural memory. Branch structures encode structural memory. Daniel Wegner’s transactive memory theory (1987) adds another dimension: effective teams develop “who knows what” knowledge that allows them to specialize rather than duplicate expertise. Git’s blame command is literally a transactive memory query — identifying who has expertise in specific code.
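The transactive-memory query can be made concrete. A rough sketch (assuming the `--line-porcelain` output format of `git blame`, which emits an `author <name>` header line for every source line; the sample string below is abridged and hypothetical) that tallies per-author line counts as a crude “who knows what” index:

```python
from collections import Counter

def expertise_index(porcelain: str) -> Counter:
    """Tally lines per author in `git blame --line-porcelain` output.

    Each source line is preceded by an "author <name>" header, so
    counting those headers approximates who holds knowledge of a file.
    """
    return Counter(
        line.split(" ", 1)[1]
        for line in porcelain.splitlines()
        if line.startswith("author ")
    )

# Abridged, hypothetical porcelain output for a three-line file:
sample = "author Ada\n\tline 1\nauthor Ada\n\tline 2\nauthor Linus\n\tline 3\n"
print(expertise_index(sample).most_common())  # [('Ada', 2), ('Linus', 1)]
```

Real tooling would run `git blame --line-porcelain <file>` and feed the output through a function like this; the point is that the repository already stores the team's "who knows what" map in queryable form.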

Here is the critical implication: if the organization’s collective cognition is mediated through version control, and version control is accessible only to developers, then non-technical participants are locked out of the group mind. Designers, product managers, domain experts, business stakeholders — they hold knowledge that never enters the externalized memory system. The distributed cognitive system has missing nodes. An AI layer that makes version control accessible to all participants doesn’t just improve convenience — it completes the organization’s cognitive architecture. It invites everyone into participation with the collective thoughtform. The egregore wakes up.

The honest case against: what leaks when intelligence mediates

The counter-arguments deserve steelmanning, because the best critiques of abstraction contain genuine wisdom. Joel Spolsky’s 2002 “Law of Leaky Abstractions” states the foundational principle: “All non-trivial abstractions, to some degree, are leaky.” TCP provides reliable communication over unreliable IP — until the network cable is chewed through. SQL hides procedural complexity — until logically equivalent queries differ by a thousandfold in performance. Spolsky’s devastating conclusion for abstraction advocates: “The abstractions save us time working, but they don’t save us time learning.” If even git’s command-line interface is a leaky abstraction over its DAG data model (as MIT teaching staff argued in 2020), then an AI abstraction over git adds another leaky layer on top of an already leaky one.
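The DAG beneath the CLI is itself small enough to sketch, which underlines the MIT argument: the data model is simpler than the commands that hide it. A toy model (hypothetical structure; real git walks commit objects and uses generation numbers for efficiency) of what `git merge-base` computes — the nearest common ancestor of two branch tips:

```python
def ancestors(parents, commit):
    """All commits reachable from `commit`, given a commit -> parent-list map."""
    seen, stack = set(), [commit]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(parents.get(node, ()))
    return seen

def merge_base(parents, a, b):
    """Nearest common ancestor of two tips -- roughly what `git merge-base a b` reports."""
    common = ancestors(parents, a) & ancestors(parents, b)
    # "Nearest" here = deepest common ancestor; adequate for a toy acyclic map.
    def depth(c):
        ps = parents.get(c, ())
        return 0 if not ps else 1 + max(depth(p) for p in ps)
    return max(common, key=depth)

# A fork: C and D both branch from B, which descends from root A.
history = {"B": ["A"], "C": ["B"], "D": ["B"]}
print(merge_base(history, "C", "D"))  # B
```

A user who can picture this graph can reason about what any git command must do to it; a user who only knows command incantations — or natural-language requests to an AI — cannot.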

Fred Brooks’ “No Silver Bullet” (1986) provides the deeper philosophical grounding. His distinction between essential complexity (inherent in the problem domain) and accidental complexity (artifacts of tools and representations) leads to a devastating line: “Descriptions of a software entity that abstract away its complexity often abstract away its essence.” Merge conflicts arise from the essential complexity of parallel human work on shared artifacts. An AI can remove accidental complexity — memorizing git rebase --onto syntax — but cannot remove the essential complexity of understanding what a three-way merge means. Martin Thompson’s concept of “mechanical sympathy” (borrowed from Formula One driver Jackie Stewart) reinforces this: “Understanding how a car works makes you a better driver.” Developers who understand git’s DAG can predict and recover from failures; those who only know “rebase makes history clean” are helpless when conflicts arise.
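The kernel of a three-way merge fits in a few lines — which is precisely why the hard part is the meaning, not the mechanism. A sketch of the per-region decision rule (real merges apply it region by region after diffing both sides against the common ancestor):

```python
def merge3(base, ours, theirs):
    """Three-way merge decision for one region, given its common ancestor."""
    if ours == theirs:
        return ours     # both sides agree (same change, or no change)
    if ours == base:
        return theirs   # only their side changed: take it
    if theirs == base:
        return ours     # only our side changed: take it
    return None         # both sides changed differently: a genuine conflict

print(merge3("v1", "v1", "v2"))  # v2   -- clean merge
print(merge3("v1", "v2", "v3"))  # None -- no mechanical resolution exists
```

The `None` branch is Brooks' essential complexity made visible: when both sides changed the same region differently, no tool — deterministic or intelligent — can know which intention should win. Someone must understand the work itself.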

Lisanne Bainbridge’s classic 1983 paper “Ironies of Automation” identifies perhaps the most important risk. Her core irony: “By automating the process the human operator is given a task which is only possible for someone who is in on-line control.” Automation deskills operators, then requires those deskilled operators to intervene precisely when the automation fails — during the hardest, most ambiguous situations. The Air France 447 disaster (2009) is the canonical case: when pitot tubes iced over and autopilot disengaged, pilots who had lost manual flying skills found themselves unable to handle the situation. 228 people died. The parallel to AI-mediated version control is direct: if developers use AI for all git operations, they lose the ability to diagnose merge conflicts manually, recover from corrupted histories, or intervene when the AI makes errors in complex merges. A 2023 study found that even experienced radiologists’ accuracy dropped from 82% to 45.5% when influenced by incorrect AI-generated scores — automation bias in action.

But AI abstraction introduces risks that previous abstractions never posed. Traditional abstractions are deterministic — the same SQL query is always slow in the same way; a GUI button always executes the same function. AI abstraction is probabilistic and non-reproducible: the same natural language instruction might produce different git operations on different occasions. The abstraction leaks not in inspectable, learnable patterns but in unpredictable, confidence-masked ways. AI hallucination in version control could mean fabricated branch names, inappropriate rebase-vs-merge decisions, or silently dropped commits. A USENIX Security 2025 paper frames hallucinations as “not simply a bug to be patched but a structural consequence of how these systems operate.”

Harvard Kennedy School research on human-AI collaboration models offers a path through this tension. Their study identified three collaboration modes: centaurs (14% of participants, who used AI selectively while maintaining control, achieved highest accuracy), cyborgs (who intertwined human-AI work throughout), and self-automators (who ceded control and performed worst while experiencing the most deskilling). The ideal for AI-mediated version control is the centaur model — what Akita Software calls a “complexity-revealing tool” rather than a pure abstraction: one that helps users understand their git DAG and make better decisions rather than hiding the DAG entirely.

And yet — every abstraction layer in computing history faced the same critique, and abstraction won every time. Assembly programmers said FORTRAN was a crutch. FORTRAN programmers said Pascal was for “quiche eaters.” Dijkstra declared natural-language programming “intrinsically doomed to fail.” The priesthood has always been partially right about quality degradation in the short term and always wrong about the long-term trajectory. The question isn’t whether AI-mediated version control will prevail — the pattern suggests it will — but whether we can design it as a convivial tool (in Illich’s sense) rather than a manipulative one; as a complexity-revealing centaur rather than a deskilling autopilot.

Conclusion: the watcher at the threshold

The threads of this research converge on a single recognition: what Egregore represents is not merely a convenience layer over git but a claim about where human cognition should meet computational infrastructure. Version control is civilization’s coordination substrate — the technology that makes it possible for 180 million people to collaboratively build the digital systems everything else runs on. The pattern of priesthood dissolution tells us this substrate will inevitably become accessible to non-specialists; every powerful capability locked behind arcane knowledge eventually gets liberated. The question this particular moment poses is novel: when the liberating mechanism is itself an intelligence — not a simplified UI but a cognitive agent that absorbs rather than removes complexity — the nature of the relationship between user and tool changes fundamentally. We move from Illich’s convivial tool to something that has no precedent in his framework: a tool that reasons.

Three insights emerge that the essay might foreground. First, Bret Victor’s silence on AI is itself significant — the strongest advocate for making systems visible and directly manipulable has not endorsed the paradigm of delegating to an opaque intelligence, and the tension between his vision (reveal complexity) and AI’s tendency (absorb complexity) is the core design challenge for tools like Egregore. Second, the egregore metaphor is not decorative but structurally precise: a git repository is a collective thoughtform that gains autonomy and shapes its creators, and an AI layer over it is literally a watcher — an always-awake entity that embodies the collective intelligence of all contributors. Making this entity accessible to non-technical participants doesn’t just improve workflow; it completes the organization’s distributed cognitive system. Third, the honest path forward is the centaur model — not hiding the DAG but making it legible through intelligence, preserving the user’s ability to develop mechanical sympathy while leveraging AI’s capacity to absorb incidental complexity. The tool that gets this balance right — convivial enough for universal access, transparent enough for genuine understanding — will deserve the name Egregore. The watcher will not merely abstract; it will illuminate.