Generated by Master Biographer | Source for LinkedIn Content
The thing about working at Google is that you get used to impossible things.
You get used to querying petabytes in milliseconds. You get used to infrastructure that was, by any outside standard, science fiction. You get used to the fact that the problems your team is solving don't exist anywhere else — because no one else has scaled large enough to have them yet.
Spencer Kimball had been at Google for eight years. He had helped build Colossus — the successor to the Google File System, the distributed storage backbone beneath everything Google ran. He understood, at a molecular level, how data moved at scale. He was not easily impressed.
And then he saw Spanner.
Google Spanner was the internal distributed database being built for AdWords — Google's most critical revenue engine. AdWords was processing billions of advertising transactions per day, across data centers on multiple continents. The existing databases — the ones everyone outside Google used — couldn't handle it. Not even close.
Spanner was Google's answer: a globally distributed relational database that did something engineers in the outside world genuinely believed was impossible. It provided ACID transactions — the gold standard of data integrity — at planetary scale. It replicated data across continents and kept every node perfectly synchronized using atomic clocks and a time API called TrueTime, which measured clock uncertainty in single-digit milliseconds. If an entire data center was wiped out by a flood, a fire, a server failure, a software bug — Spanner kept running. It rerouted. It healed. It didn't notice.
It was, in the language of distributed systems, externally consistent at global scale. No one had ever built that before. The academic community had theorized it. Google had built it.
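The TrueTime trick described above can be sketched in miniature: the clock API returns an uncertainty interval rather than a single instant, and a commit "waits out" its own uncertainty so that its timestamp is unambiguously in the past on every machine before it becomes visible. This is a toy illustration only, not Google's API; the `TrueTimeSketch` class and its 7 ms bound are assumptions for the sketch:

```python
import time

class TrueTimeSketch:
    """Toy model of an uncertainty-aware clock (hypothetical, not Google's API)."""

    def __init__(self, uncertainty_ms=7.0):
        self.uncertainty = uncertainty_ms / 1000.0  # seconds of clock uncertainty

    def now(self):
        """Return an (earliest, latest) interval the true time is guaranteed to lie in."""
        t = time.monotonic()
        return (t - self.uncertainty, t + self.uncertainty)

    def commit_wait(self, commit_latest):
        """Block until the commit timestamp is definitely in the past everywhere."""
        while self.now()[0] < commit_latest:
            time.sleep(0.001)

tt = TrueTimeSketch()
earliest, latest = tt.now()
tt.commit_wait(latest)        # "commit wait": roughly twice the uncertainty bound
assert tt.now()[0] >= latest  # the commit timestamp is now unambiguously past
```

The smaller the clock uncertainty, the shorter the wait, which is why Spanner's atomic clocks mattered: single-digit-millisecond uncertainty made the wait cheap enough to run on every transaction.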
Kimball stared at Spanner and felt two things simultaneously.
The first was awe. This was the most sophisticated piece of database engineering he had ever seen. It was genuinely ten years ahead of what the outside world would have.
The second was a question — quiet at first, then louder, then impossible to ignore.
Why does only Google get to have this?
Spanner would never be released. It was a proprietary weapon inside the most powerful technology company on earth, and it would stay there. Every startup, every scale-up, every bank, every retailer, every hospital building software on databases that could fail, could corrupt, could not survive a single data center outage — they would never see it. They would never even know it existed.
Unless someone built it for them.
Kimball filed the thought away. He wasn't ready yet. The moment wasn't right. But the question sat in the back of his mind, patient and persistent, the way the most important questions always do.
To understand what Spencer Kimball built, you have to understand what Spencer Kimball had already built.
In the summer of 1995, Peter Mattis posted a message to a Linux development newsgroup. It wasn't a declaration or a manifesto. It was a question. The kind of question that sounds casual until you trace its consequences forward thirty years:
"Suppose someone decided to write a graphical image manipulation program — akin to Photoshop. What kind of features should it have? What file formats should it support?"
Mattis had been tinkering with plug-in architectures for two weeks. He was a student at UC Berkeley, and he was frustrated. Photoshop was expensive. Graphic designers who couldn't afford it had nothing. The free software world had editors, but nothing serious. Nothing that could do real work.
Two weeks after that newsgroup post, Mattis and his roommate Spencer Kimball — both students at Berkeley, both members of a student club called the eXperimental Computing Facility — announced the General Image Manipulation Program. GIMP.
By November 21, 1995, they had a public release.
By February 1996, a version refined enough to show their ambitions: selection tools, transformation tools, painting tools, effects filters, layer support — a complete system. Not a toy. A real tool, given away for free, for anyone who needed it.
GIMP spread across the early internet the way only free, genuinely useful software can. Designers who couldn't afford Photoshop used GIMP. Students used it. Hobbyists used it. It landed on Linux distributions worldwide. It was translated into dozens of languages. The Free Software Foundation adopted it as an official GNU project in 1997, and the G in the acronym officially became GNU.
Mattis and Kimball had started this as a semester project. A class assignment. And they had, accidentally, created one of the most widely used graphics programs in the world.
They were students. They were twenty years old.
The lesson that Kimball took from GIMP — even if he didn't articulate it at the time — was a specific one: the most powerful technology in the world should not be exclusive to the people who can afford it. If you can give everyone Photoshop's power, you should. If you can give everyone Spanner's power, you should.
He would apply that lesson exactly once more in his life.
When Kimball and Mattis joined Google in 2002, they were employee number three hundred and something — early enough to be significant, late enough to join a company that was already becoming mythological.
On their first day, Larry Page and Sergey Brin came to find them.
Not to welcome the new cohort. Not to give a speech. They came specifically to find Kimball and Mattis, because they wanted to tell them something personally.
"We love GIMP," they said. "The first Google logo was made in GIMP."
The two Berkeley students who built a free Photoshop alternative in their dorm room had created the tool that Google's founders had used to draw the letters that now appeared above billions of search boxes. There was something almost fairy-tale about the moment — the craft made for everyone, used by the people who would build the most powerful company on earth.
At Google, Kimball and Mattis went to work on infrastructure. They joined the teams building the plumbing that made Google run: the file systems, the storage layers, the distributed computing primitives that no user ever saw but that every Google product depended on. Kimball worked on Colossus, Google's second-generation distributed file system. Mattis worked on infrastructure projects that demanded the same philosophy: design for failure, replicate aggressively, assume any single component can die at any moment and plan around it.
Across the building, a third engineer named Ben Darnell was building something different. Darnell worked on Google Reader — the RSS aggregator that let users subscribe to websites and read them in a single feed. Reader was, by any measure, a beloved product. It would eventually be killed, to the sustained grief of the internet. But inside Google, what Darnell was absorbing was the same lesson as everyone else: how to build things that scaled, that survived, that didn't need to be babysat.
Kimball, Mattis, and Darnell were not yet a team. They were just three engineers inside the same company, running the same reps, developing the same instincts. The crucible that would eventually forge them together was still years away.
Darnell left Google in 2009. The company was large enough by then that leaving felt less like defection and more like graduation. He went to FriendFeed, the social aggregation startup that Facebook would buy. Then he joined Dropbox, which was in the middle of its own rocketship moment.
Kimball and Mattis held on until 2012. By then, Google was thousands of employees. The startup energy had calcified into something more institutional. Peter Mattis told the story plainly: he felt too comfortable. He wasn't being challenged. He was stagnating. That feeling — the specific sensation of competence without growth — was its own kind of signal.
They left. They co-founded a company called Viewfinder: a mobile photo-sharing app. This was 2012, the peak of the photo-sharing moment — Instagram had just been acquired by Facebook for a billion dollars, and every investor in Silicon Valley was trying to find the next one.
Viewfinder was good. It was not the next Instagram.
But inside Viewfinder, something important happened. Kimball and Mattis ran into the exact problem they'd seen at Google — just from the other side. At Google, they'd watched Spanner get built because existing databases couldn't handle the scale. At Viewfinder, they were building a consumer product and immediately hitting the walls of the database ecosystem.
MySQL. PostgreSQL. Amazon DynamoDB. They evaluated them all. Every option involved a compromise: either you got consistency and sacrificed scale, or you got scale and sacrificed consistency. The distributed NoSQL databases that were supposed to be the future — HBase, Cassandra, Riak — required so much operational overhead that small teams drowned in the maintenance before they could build actual product.
Kimball's mind kept going back to Spanner.
In January 2012 — still at Viewfinder, before writing a single line of the eventual database code — Kimball wrote an email to a friend. The email described, in brief strokes, what he wanted to build. It ended with a phrase that would eventually become the entire company's thesis:
"Very fast. Very scalable. Very hard to kill."
The name came out of that same early thinking. Kimball had been turning over the architecture: a database made of symmetric nodes, no single point of failure, no external dependencies, able to spread across availability zones the way a colony spreads across a kitchen — autonomously, relentlessly, without needing to be managed. Each node would replicate data and repair itself when adjacent nodes failed. The system would be, in a word, survivable.
"These were the capabilities that led me to the name 'cockroach,'" Kimball later explained, "because they'll colonize the available resources and are nearly impossible to kill."
No other names were considered. The name was immediate. It fit so perfectly that alternatives felt redundant.
In December 2013, Square acquired Viewfinder.
Kimball, Mattis, and the team joined Square. They integrated what they'd built. They did the work. And they kept thinking about the database.
Darnell, by this point, was also at Square — the paths converging almost gravitationally. Three former Googlers, all carrying the same scar tissue from the database problem, all ending up in the same building.
In January 2014, Kimball sat down and wrote the design document.
Not a pitch deck. Not a business plan. An engineering document — the kind that starts with first principles and builds a system architecture from the ground up. The document described a distributed SQL database that would combine everything the outside world was missing: horizontal scalability, ACID transactions, multi-region replication, automatic fault recovery, and a PostgreSQL-compatible SQL interface.
It was, essentially, Spanner for everyone.
The design had a technical elegance that reflected years of accumulated infrastructure thinking. Every node in the network would be a peer — no primary, no secondary, no hierarchy that could become a bottleneck or a single point of failure. Consensus would be managed through the Raft protocol: a distributed agreement algorithm that requires a majority of replicas to acknowledge every change before any write is considered committed. If replicas disagreed, the system resolved the disagreement before proceeding. Consistency was not a nice-to-have. It was non-negotiable.
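The peer-and-quorum idea can be illustrated with a toy majority-replication loop. This is a sketch of Raft's commit rule only, not CockroachDB's implementation; the function and node names are hypothetical:

```python
def replicate(entry, nodes, reachable):
    """Append entry to every reachable node's log; report commit only on majority ack.

    Toy sketch of the Raft commit rule: an entry counts as committed once a
    strict majority of replicas hold it, so the cluster tolerates the failure
    of any minority of nodes without losing availability or consistency.
    """
    acks = 0
    for node in nodes:
        if node in reachable:
            node_logs[node].append(entry)
            acks += 1
    return acks > len(nodes) // 2  # strict majority required

nodes = ["a", "b", "c"]
node_logs = {n: [] for n in nodes}

assert replicate("x=1", nodes, reachable={"a", "b", "c"})  # all up: commits
assert replicate("x=2", nodes, reachable={"a", "b"})       # one node down: still commits
assert not replicate("x=3", nodes, reachable={"a"})        # minority reachable: cannot commit
```

The third call is the whole point of the design: with only one of three replicas reachable, the write is refused rather than accepted inconsistently, which is what "consistency is non-negotiable" costs and buys.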
Geography was first-class. Data could be placed in specific regions, close to the users accessing it — not as an afterthought, but as a core primitive. The difference between a 17-millisecond query routed correctly and a 245-millisecond query routed to the wrong continent was the difference between a product that felt instant and one that felt broken.
In February 2014, Spencer Kimball pushed the first CockroachDB code to GitHub.
It was not a finished product. It was a proof of concept — a sketch of the architecture, executable enough to demonstrate the ideas, rough enough that only engineers who understood what they were looking at could appreciate it.
But it was public. Deliberately, emphatically public.
This was the second time Kimball had made this choice. GIMP had been open-source from the start, built on the philosophy that the free software community deserved professional-grade tools. CockroachDB was open-source from the start for the same reason: the world deserved Spanner-grade infrastructure, and locking it up behind a proprietary wall would have betrayed the entire premise.
The first commit sat on GitHub without fanfare. No press release. No announcement. Just code, pushed to a public repository, available to anyone who wanted to look.
Building CockroachDB was not, primarily, a problem of inspiration. The inspiration was clear. The problem was that what they were trying to build had never been built outside Google, and the reasons for that were not trivial.
Distributed consensus at scale is genuinely hard. The Raft protocol gives you a framework for nodes to agree on the state of data — but making that consensus fast enough to be useful, while also handling node failures, network partitions, and geographic latency, requires solving problems that compound in non-linear ways. A system that works correctly with three nodes doesn't automatically work correctly with three hundred. Every failure mode you anticipate reveals two failure modes you didn't.
ACID transactions across distributed nodes are even harder. Traditional databases provide ACID guarantees because they're running on a single machine — consistency is easy when everything lives in one place and one process manages all the state. CockroachDB needed to provide those same guarantees across nodes that could be in different cities, connected by networks that could fail, managed by software that could crash. Every transaction needed to either fully commit or fully roll back, across all nodes, consistently, with no exceptions.
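The all-or-nothing contract described above can be sketched with a classic two-phase commit: every participant first stages the write and votes, and the write becomes durable everywhere only if every vote is yes. CockroachDB's actual transaction protocol is more sophisticated (it layers transactions over Raft-replicated data), so this is a hypothetical toy, with all class and variable names invented for illustration:

```python
class Participant:
    """Toy participant node in a two-phase commit (not CockroachDB's real protocol)."""

    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy
        self.staged, self.committed = None, []

    def prepare(self, txn):
        """Phase 1: stage the write and vote yes/no."""
        if not self.healthy:
            return False
        self.staged = txn
        return True

    def commit(self):
        """Phase 2a: make the staged write durable."""
        self.committed.append(self.staged)
        self.staged = None

    def rollback(self):
        """Phase 2b: discard the staged write."""
        self.staged = None

def two_phase_commit(txn, participants):
    """All-or-nothing: commit everywhere, or roll back everywhere."""
    if all(p.prepare(txn) for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

nyc, ldn = Participant("nyc"), Participant("ldn")
assert two_phase_commit("debit $5", [nyc, ldn])      # both vote yes: durable on both
ldn.healthy = False
assert not two_phase_commit("debit $9", [nyc, ldn])  # one node fails: neither commits
assert nyc.committed == ["debit $5"] and nyc.staged is None
```

The failed second transaction leaves no trace on the healthy node — the "fully commit or fully roll back, across all nodes" guarantee the paragraph describes, compressed into a few lines.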
Kimball, Mattis, and Darnell spent months on architecture debates that had no easy answers. They made choices — some of which they would revisit, some of which would prove to be exactly right. The database took shape slowly, the way serious engineering always does: not in moments of inspiration but in the grinding accumulation of solved problems.
On May 13, 2015, Cockroach Labs was officially incorporated.
The seed round was $6.3 million, led by Benchmark Capital — one of the most respected venture firms in Silicon Valley, the firm behind Uber, Dropbox, Instagram, Snap. Alongside Benchmark: GV (Google Ventures), Sequoia Capital, and Index Ventures.
The fact that Google Ventures invested was not lost on anyone. The company that had built Spanner and kept it proprietary was now writing a check to the team trying to democratize it. This was either ironic or fitting, depending on how you looked at it.
The pitch was not complicated: every company was moving to the cloud. Every cloud deployment eventually ran into the database problem. NoSQL had promised to solve scale but had sacrificed consistency and made operations nightmarish. Traditional SQL was consistent but couldn't scale. The market needed a database that did both, ran anywhere, and didn't require a team of infrastructure engineers to keep alive.
CockroachDB was that database.
The name caused friction in some enterprise sales conversations. CTOs would hear it and pause. "Cockroach?" But Kimball had an answer ready — he'd been giving it since the first blog post, published June 14, 2015:
"If you can get past their grotesque outer aspect, you've got to give them credit for sheer resilience. You've heard the theory that cockroaches will be the only survivors post-apocalypse? Turns out modern database systems have a lot to gain by emulating one of nature's oldest and most successful designs."
"Survive. Replicate. Proliferate. That's been the cockroach model for geological ages, and it's ours too."
The name stayed. Names that capture a truth perfectly tend to survive the discomfort they cause.
The world that CockroachDB was released into had spent years arguing about whether the future of databases was SQL or NoSQL. The answer, it turned out, was both — and the market had been waiting, without knowing it, for someone to say so.
In March 2016, Cockroach Labs raised a $20.3 million Series A extension, led by Index Ventures. CockroachDB v1.0 launched in 2017. By then, the architecture had been tested, refined, beaten against real production workloads, and proven. The Raft consensus implementation was solid. The geo-partitioning worked. The PostgreSQL compatibility was genuine — you could take an application written for PostgreSQL and point it at CockroachDB with minimal changes.
Companies that had been terrified of cloud lock-in found in CockroachDB something they'd been looking for: a database that ran identically on AWS, on Google Cloud, on Azure, on bare metal in an on-premise data center. True portability. No vendor tax.
Financial services companies came early. Banks and fintech startups for whom a single corrupted transaction could mean regulatory consequences and customer losses — they needed ACID guarantees that could survive infrastructure failures. CockroachDB gave them that.
Gaming companies came next. Millions of concurrent players, unpredictable traffic spikes, global user bases that needed low latency in Tokyo and São Paulo and London simultaneously. CockroachDB gave them that too.
The Series B came in 2018: $55 million. The Series C in 2020: $86.6 million. Each round arrived faster than the previous one.
By 2021, Cockroach Labs was a unicorn and then some.
The Series E: $160 million at a $2 billion valuation, led by Altimeter Capital.
The Series F, December 2021: $278 million at a $5 billion valuation, led by Greenoaks. Total capital raised: $633 million.
The customer list told the story more precisely than the valuations did. Netflix. Nubank. Comcast. Bose. Companies for whom downtime was not an acceptable outcome. Companies that could not afford data loss, data corruption, or consistency failures. Companies that had looked at the database market and found one option that met their requirements without compromise.
Cockroach Labs grew its annual recurring revenue by triple digits in 2021. Its cloud business grew 500% in a single quarter. The EMEA market grew 600% year-over-year in early 2022.
Kimball appeared on CNBC's Disruptor 50 list. Cockroach Labs was named one of the most innovative companies in enterprise software.
And somewhere in a data center — or several data centers, simultaneously, by design — CockroachDB was processing transactions. Replicating data. Healing itself when nodes failed. Spreading across availability zones. Doing exactly what its name said it would do.
There is a line that runs cleanly through the entire story.
1995: Two Berkeley students create GIMP because Photoshop is too expensive and the free software world deserves professional tools. They give it away.
2002: Those same students join Google. Larry and Sergey greet them personally. "We made the first Google logo in GIMP." The open-source software that was supposed to be for everyone had also been used by the people who would shape the next era of the internet.
2010: One of those students watches Google build Spanner — infrastructure so sophisticated it can survive nuclear-strike-level data center failures — and thinks: Why does only Google get to have this?
2014: That student writes the first lines of a database designed to give everyone Spanner's survivability.
2015: The company launches with backing from, among others, Google Ventures — Google itself funding the democratization of Google's most powerful internal tool.
The thesis, at every stage, is identical: the most powerful technology in the world should not belong only to the people who built it.
GIMP was about giving everyone Photoshop's power.
CockroachDB is about giving everyone Spanner's power.
Spencer Kimball and Peter Mattis have now done this twice in their lives: started as students with a class project, built something that millions of people use, watched it become foundational infrastructure, moved on, and built something even larger.
The scale changed. The mission didn't.
And the database — like its namesake — just keeps running.
Angle 1 — The open-source origin loop:
"Two students built GIMP at Berkeley. Google's founders used it to make the first Google logo. Those same students joined Google, watched Google build Spanner, and left to build an open-source version for everyone. The software made the journey from dorm room → Google logo → Google's own competitors."
Angle 2 — The Spanner democratization thesis:
"For years, Google had a database that could survive a data center going offline. Nobody else could use it. Three ex-Googlers decided that was a problem. One of them wrote an email in 2012: 'Very fast. Very scalable. Very hard to kill.' That email was the founding document of CockroachDB."
Angle 3 — The naming story:
"Spencer Kimball named his database after a cockroach before writing a single line of code. The metaphor: 'They'll colonize the available resources and are nearly impossible to kill.' No other names were considered. When the name captures the truth that precisely, you don't need alternatives."
Angle 4 — The mission consistency:
"Spencer Kimball has had exactly one mission across two companies: take the most powerful technology in the world and give it to everyone. At Berkeley, that was Photoshop's power → GIMP. At Cockroach Labs, that was Spanner's power → CockroachDB. Same thesis. Different decade."