A Data Investigation • LFX Insights • January 2026
The Hidden Pulse of Open Source
Beneath the leaderboards lies a different story—one of burnout, illusion, and unexpected
heroes. This is what the numbers don't want you to see.
Executive Summary
0.03 • The Speed Illusion
Near-zero correlation between response time and resolution rate. Fast replies don't mean problems get solved—they often mean bots, not humans.
→ Action: Prioritize resolution metrics over response time KPIs

-97.8% • Burnout Red Alert
High-performing projects like Islet and CheriBSD have collapsed—nearly all activity vanished. These teams ran hot and burned out.
→ Action: Audit dependencies on declining high-performers

43 orgs • The "Silent Pillars"
Projects like ko, Numcodecs, and Infection have massive corporate backing (40+ orgs) with small maintainer teams. Stable, trusted, under-celebrated.
→ Action: Evaluate these for enterprise adoption—low risk, high support
In the spring of 2023, a small project called CBT Tape caught my attention. Three
contributors. That's it. Yet they had pushed 3,414 commits in twelve months—a rate of 1,138 commits per person. To put that in perspective: a "normal" project
sees perhaps 20-30 commits per contributor annually.
Something was clearly different here. Were these three developers superhuman? Was there automation involved?
Or was this just noise in the data?
I decided to dig deeper. What started as a curiosity about outliers became a six-month investigation into
the Linux Foundation's ecosystem—5,000+ projects, thousands of contributors, millions of commits. And what I
found challenges everything we think we know about measuring open source health.
"The leaderboards tell you who's winning. They don't tell you who's about to collapse."
Chapter 1
Efficiency: David vs. Goliath
Do you need a massive army to move fast?
The conventional wisdom in open source is seductive: more contributors equals more progress. The "Bazaar"
model, as Eric Raymond famously called it. Get enough eyeballs, and all bugs become shallow.
But the data tells a different story.
When I plotted the relationship between contributor count and commit volume across all 4,901 active
projects, a fascinating pattern emerged. The most efficient projects weren't the massive communities—they
were the small, focused teams.
[Chart: Efficiency Analysis • Small Teams, Disproportionate Output. Each circle represents a project; size indicates commits per contributor.]
Look at the upper-left quadrant—projects with relatively few contributors but enormous commit volumes. These
are the "Special Forces" of open source: lean, focused, and devastatingly effective.
The Efficiency Champions
Project           | Contributors | Commits | Commits/Person
CBT Tape          | 3            | 3,414   | 1,138
Mushroom Observer | 10           | 5,221   | 522
SmokeDetector     | 41           | 16,909  | 412
Go Vocal          | 38           | 13,798  | 363
DeepCausality     | 5            | 1,729   | 346
These numbers demand explanation. Are these projects highly automated? Do they have unusually dedicated
maintainers? Or is there something about their structure that makes them inherently more productive?
The answer, I discovered, is often a combination: clear scope, minimal governance overhead, and in some
cases, heavy automation. CBT Tape, for instance, is an archive of mainframe utilities—a well-defined problem
space with a dedicated curator. No sprawling roadmap debates. No endless design discussions. Just code.
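If you want to reproduce this kind of ranking, the core calculation is a one-liner. Here is a minimal sketch in Python, assuming a hypothetical projects.csv export with per-project contributor and commit counts for the trailing twelve months (the file and column names are placeholders, not the actual LFX schema):

```python
import pandas as pd

# Hypothetical export: one row per project, twelve months of activity.
projects = pd.read_csv("projects.csv")  # assumed columns: name, contributors, commits

# Efficiency metric used throughout this chapter: commits per contributor.
projects["commits_per_person"] = projects["commits"] / projects["contributors"]

# Rank the leanest, highest-output teams.
champions = projects.sort_values("commits_per_person", ascending=False)
print(champions[["name", "contributors", "commits", "commits_per_person"]].head(5))
```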
Chapter 2
The Triage Trap: When Speed Deceives
If a project responds instantly to issues, they probably fix them faster too, right?
Here's something that surprised me—and might surprise you too.
We obsess over response times. "Fastest responders" is a badge of honor. Companies tout their average
first-response time like it's a competitive advantage. And intuitively, it makes sense: a project that
responds quickly must be healthy, engaged, well-maintained.
Wait, really?
I correlated response time with resolution rate—how often issues actually get fixed, not just
acknowledged. The correlation coefficient?
0.03
That's essentially zero. No relationship whatsoever.
[Chart: The Speed Illusion • Response Time vs. Resolution Rate: A Broken Assumption. Each point is a project; the red line shows the nearly flat regression. Fast response ≠ actual resolution.]
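The check itself is short once the per-project metrics exist. A minimal sketch, assuming a hypothetical issue_metrics.csv holding each project's median first-response time and its issue resolution rate (the column names are mine, not LFX's):

```python
import pandas as pd
from scipy import stats

# Hypothetical per-project issue metrics.
issues = pd.read_csv("issue_metrics.csv")  # assumed columns: median_response_hours, resolution_rate

# Pearson correlation between response speed and how often issues actually close.
r, p = stats.pearsonr(issues["median_response_hours"], issues["resolution_rate"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# An r near 0.03 means response speed explains essentially none of the
# variance in resolution rate: fast acknowledgment is not fast fixing.
```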
What's happening? The culprit is often automation. Bots that respond instantly—"Thanks for your issue! A
maintainer will review it soon"—inflate response metrics without contributing to actual problem-solving.
It's the appearance of engagement without the substance.
"A project that says 'hello' in two minutes but fixes your bug in two years has solved nothing."
This finding has real implications. If you're evaluating open source projects for your organization,
stop looking at response time. Look at resolution rate. Look at how many issues actually
close. That's where the truth lives.
Chapter 3
Growth vs. Maintenance: Skyscrapers or Fresh Paint?
Is the project building a skyscraper or just painting the walls?
Not all commits are created equal. Some add features, expand capabilities, build toward a vision. Others fix bugs, update dependencies, refactor code that already works. Both are necessary—but the ratio tells a story.
I compared Commit Activity against Codebase Size across hundreds of projects. The patterns that emerged were revealing:
High Commits + Low Size: Heavy maintenance, refactoring, or rapid early iteration. Either the team is working hard to keep things running, or a young project is still finding its shape.
High Commits + High Size: Massive expansion. Active development pushing boundaries.
Low Commits + High Size: Mature and stable. Or possibly abandoned.
[Chart: Growth Analysis • Commit Activity vs. Codebase Size. Projects in the upper-left have high activity relative to their size, indicating maintenance-heavy work.]
Projects like Model Context Protocol (MCP) appeared in the upper-left quadrant with huge activity but a small codebase—classic signs of a new, rapidly iterating standard. When a project is still finding its footing, expect churn. When it's been around for years and still churning? That's a different story.
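Assigning those quadrant labels is easy to automate. A rough sketch, assuming per-project commit counts and codebase size in lines of code, with the medians as cut points (the thresholds and column names are illustrative assumptions, not the study's exact method):

```python
import pandas as pd

df = pd.read_csv("activity_vs_size.csv")  # assumed columns: name, commits, loc

# Split on the medians to form the four quadrants described above.
high_commits = df["commits"] > df["commits"].median()
high_size = df["loc"] > df["loc"].median()

df["quadrant"] = "low activity, small codebase"
df.loc[high_commits & ~high_size, "quadrant"] = "maintenance-heavy or rapid iteration"
df.loc[high_commits & high_size, "quadrant"] = "active expansion"
df.loc[~high_commits & high_size, "quadrant"] = "mature and stable (or abandoned)"

print(df[["name", "quadrant"]].head(10))
```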
Chapter 4
Motion vs. Progress: The Churn Trap
Are they building new features or just rewriting the same code forever?
There's a particular kind of project that shows up in the data like a red warning light.
High commit volume. Minimal code growth. Commits churning, but the codebase size barely moving.
I call this the "Churn Trap"—projects that are moving without progressing. They might be
refactoring endlessly. They might be chasing their own tail on technical debt. Or they might be in a
volatile stabilization phase, trying to nail down an API that keeps shifting.
The poster child for this pattern? Model Context Protocol (MCP).
[Chart: Churn Analysis • Activity vs. Growth: Finding the Churners. X-axis: net line changes (growth); Y-axis: commits (activity); upper-left = high churn.]
The High-Churn Watchlist
Project                | Commits | Net LOC Change | Churn Ratio | Status
Model Context Protocol | 14,199  | 5              | 2,840       | Stabilizing
EVerest                | 5,220   | 2              | 2,610       | Stabilizing
cert-manager           | 1,636   | 2              | 818         | Refactoring
PipeCD                 | 814     | 1              | 814         | Refactoring
MCP is a fascinating case. With a churn ratio of 2,840—meaning nearly
3,000 commits for every net line of code added—it's clear something unusual is happening. This isn't
necessarily bad. MCP is a relatively new protocol standard; high churn likely reflects rapid iteration as
the team refines the specification. But if you're considering adopting MCP today, know that the API may
still be shifting beneath your feet.
Compare this to a healthy growing project, which might show a churn ratio closer to 1-10: most commits add
or improve functionality rather than rewriting what exists.
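The churn ratio behind that table is simply commits divided by net lines of code added. A sketch with assumed column names and an arbitrary cutoff to surface heavy churners:

```python
import pandas as pd

df = pd.read_csv("churn.csv")  # assumed columns: name, commits, net_loc_change

# Skip projects whose codebase did not change at all to avoid division by zero.
df = df[df["net_loc_change"] != 0].copy()
df["churn_ratio"] = df["commits"] / df["net_loc_change"].abs()

# A healthy growing project tends to sit near 1-10; triple digits is a red flag.
watchlist = df[df["churn_ratio"] > 100].sort_values("churn_ratio", ascending=False)
print(watchlist[["name", "commits", "net_loc_change", "churn_ratio"]].head(10))
```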
Chapter 5
The Hidden Gems: Corporate Darlings You've Never Heard Of
Which projects have huge corporate buy-in but relatively small contributor circles?
Here's a question that matters if you're betting your company on open source: Which projects have the
broadest corporate support?
It's not always the famous ones. When I calculated the ratio of active organizations to active
contributors—a measure of corporate diversity—the leaders weren't Kubernetes or Linux. They were projects
you might never have encountered: ko, Infection,
Numcodecs.
These are what I call "Hidden Gems"—projects with massive corporate investment relative to their community
size. Take ko, a Go container image builder: 43 different organizations contribute to it, yet it has
only 68 active contributors. That's a ratio of 0.63 organizations per person.
Why does this matter? Because corporate diversity is a proxy for sustainability. A project backed
by 43 organizations won't die if one company pivots away. It has distributed dependency—distributed trust.
These are the safest bets for enterprise adoption.
Top Hidden Gems by Corporate Diversity
Project   | Active Orgs | Contributors | Diversity Ratio
ko        | 43          | 68           | 0.63
Infection | 37          | 59           | 0.63
Numcodecs | 41          | 66           | 0.62
Insights  | 62          | 106          | 0.58
jiti      | 31          | 53           | 0.58
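The diversity ratio in this table is just active organizations divided by active contributors. A sketch, assuming a hypothetical org_diversity.csv and an illustrative floor of 30 organizations:

```python
import pandas as pd

df = pd.read_csv("org_diversity.csv")  # assumed columns: name, active_orgs, contributors

# Corporate diversity: how many distinct organizations stand behind each contributor.
df["diversity_ratio"] = df["active_orgs"] / df["contributors"]

# Surface projects with broad corporate backing relative to community size.
gems = df[df["active_orgs"] >= 30].sort_values("diversity_ratio", ascending=False)
print(gems[["name", "active_orgs", "contributors", "diversity_ratio"]].head(5))
```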
Chapter 6
The Burnout Signal: Who's Running Out of Steam?
Which high-performing projects are suddenly going quiet?
This is the finding that keeps me up at night.
I went looking for projects that had been running hot—high productivity scores, intense activity—but were
now showing signs of collapse. The pattern is unmistakable when you see it: a project that was once a
productivity machine, now barely producing a pulse.
The numbers are stark. Islet, once among the most productive projects in the ecosystem, saw
a 97.8% drop in activity. CheriBSD, same story.
PySyft, Common Voice, MeterSphere—all cratering.
[Chart: Burnout Risk Analysis • Productivity vs. Momentum: The Red Zone. Red dots indicate projects with negative momentum (declining activity); the further below zero, the steeper the decline.]
The Burnout Watchlist
Project      | Previous Productivity | Activity Change | Status
Islet        | 97.7                  | -97.8%          | Critical
CheriBSD     | 342.6                 | -97.8%          | Critical
PySyft       | 65.0                  | -96.7%          | Critical
Common Voice | 30.2                  | -96.2%          | Critical
MeterSphere  | 286.8                 | -94.0%          | Critical
"The projects that run hardest often fall fastest. Burnout isn't gradual—it's a cliff."
What causes this? Sometimes it's funding that dried up. Sometimes it's a key maintainer who got a new job,
had a baby, or just got tired. Sometimes it's a strategic pivot by a sponsoring company. Whatever the cause,
the effect is the same: a project that was once humming is now silent.
If your organization depends on any of these projects, this is your signal. Don't wait for
the GitHub archive notice. Start evaluating alternatives now.
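A screen like this can be run against your own dependency list. A sketch, assuming per-project commit counts for two consecutive periods; the column names and the 90% threshold are illustrative assumptions:

```python
import pandas as pd

df = pd.read_csv("activity_by_period.csv")  # assumed columns: name, prev_commits, curr_commits

# Only projects that actually had a previous baseline can "collapse".
df = df[df["prev_commits"] > 0].copy()
df["activity_change"] = (df["curr_commits"] - df["prev_commits"]) / df["prev_commits"]

# Flag anything that lost more than 90% of its activity period over period.
watchlist = df[df["activity_change"] < -0.90].sort_values("activity_change")
print(watchlist[["name", "prev_commits", "curr_commits", "activity_change"]])
```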
Chapter 7
The Bus Factor: Fragile Giants
Which projects are punching way above their weight (and thus have the highest "Bus Factor" risk)?
There's a dark joke in open source circles: "What's your bus factor?" It refers to how many people would
need to be hit by a bus before a project dies. For most healthy projects, the answer is "several." For the
projects on this list, the answer is often "one" or "two."
The "Small Teams, Massive Output" leaderboard celebrates efficiency. But there's a shadow side: these same
teams are the most fragile. Mushroom Observer has produced 24,938 commits with 50 or fewer
contributors. SOAJS has 19,787 commits. These are remarkable achievements—and remarkable
risks.
[Chart: Bus Factor Risk • Small Teams, Massive Output: The Fragility of Excellence. All projects have ≤50 contributors; higher bars mean higher risk if key maintainers leave.]
If you rely on one of these projects, consider this: What happens when the core maintainer gets hired by
Google? When they burn out? When they simply move on to the next thing?
These aren't hypotheticals. They happen constantly. The question is whether you'll be prepared when they
happen to your dependencies.
Chapter 8
Libraries vs. Apps: The "Free Rider" Problem
Is a low contributor count always bad?
Here's where the analysis gets nuanced. We've been talking about "Hidden Gems"—projects with high corporate investment but few contributors. But not all Hidden Gems are the same.
I segmented these projects into Libraries and Applications, and the pattern that emerged changed everything:
Libraries (e.g., Resolve, MarkupSafe): High corporate use + low contributors = Healthy. Stable APIs don't need a thousand cooks in the kitchen. A focused team is often better.
Apps (e.g., E4S): High corporate use + low contributors = Warning. Companies are using the app but not contributing back. Classic free-rider problem.
This distinction saves us from flagging a perfectly healthy library as "stagnant."
[Chart: Project Segmentation • Hidden Gems: Libraries vs. Applications. Color indicates project type; libraries with high corporate backing and small teams are often the healthiest.]
When you're evaluating a dependency, ask yourself: Is this a library or an application? If it's a library with a stable API, a small team might be a feature, not a bug. If it's an application that companies are using for free without contributing, that's a sustainability risk worth noting.
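That judgment call can be encoded as a simple rule. A sketch, assuming projects are already hand-labeled as library or application; the labels, thresholds, and column names are all assumptions for illustration:

```python
import pandas as pd

df = pd.read_csv("hidden_gems.csv")  # assumed columns: name, project_type, active_orgs, contributors

def assess(row):
    lean_team = row["contributors"] < 100
    broad_backing = row["active_orgs"] >= 30
    if not (lean_team and broad_backing):
        return "not a hidden gem"
    # A lean team behind a stable library is usually fine; the same shape
    # behind an application points to a free-rider problem.
    return "healthy" if row["project_type"] == "library" else "free-rider risk"

df["assessment"] = df.apply(assess, axis=1)
print(df[["name", "project_type", "assessment"]])
```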
What the Data Whispers
I started this investigation with a simple question about efficiency outliers. I ended with a much more
complicated picture of open source health.
The leaderboards tell you who's shipping the most code. They don't tell you who's about to stop. They don't
tell you whose "fast response" is just a bot. They don't tell you which projects have the corporate backing
to survive the next decade.
The data doesn't lie, but it does whisper. You just have to listen closely.
The Four Rules
1. Don't trust the "Fastest Responders." A 0.03 correlation with resolution rate means speed is theater.
2. Watch for burnout signals. When high-performers suddenly go quiet, it's often already too late.
3. Bet on corporate diversity. Projects like ko and Numcodecs have distributed trust—they're safer than famous alternatives.
4. Question the activity. High commit volume with little growth means churn, not progress.
Open source powers the modern world. But the systems we use to measure its health are broken. We celebrate
the wrong things, ignore the warning signs, and trust metrics that tell us nothing.