
Project Glasswing and the ASF: Open Source's Chance to Win the AI Era
On Project Glasswing, Anthropic's $1.5M donation to the Apache Software Foundation, and the open source movement's defining moment in the AI era.
Last week, the Apache Software Foundation announced that Anthropic has donated $1.5 million to support ASF infrastructure, security, and community operations — part of a broader initiative called Project Glasswing. The news landed hard in open source circles, and not just because of the dollar amount.
Glasswing is Anthropic's attempt to use its most capable AI model — Claude Mythos Preview — to scan critical software infrastructure for security vulnerabilities before malicious actors find them. Alongside the ASF donation, the company committed $2.5 million to cybersecurity organizations through the Linux Foundation, and is providing up to $100 million in model usage credits to partners across the industry, including AWS, Apple, Cisco, Google, and Microsoft.
The stated goal is genuinely valuable: put frontier AI capabilities to defensive use before offensive actors do. Claude Mythos Preview has reportedly already discovered thousands of high-severity flaws — vulnerabilities in major operating systems and browsers that had evaded detection for years. Anthropic says it will report publicly on its findings within 90 days.
Good news, right?
Maybe. It depends on who you ask.
A Community Divided
The reaction across the open source world — inside the ASF member community and beyond — has been complicated. And that's putting it charitably.
On one side: real gratitude. The ASF has quietly underwritten the world's most important software infrastructure for decades on a shoestring. Kafka, Spark, Iceberg, Arrow — the backbone of the modern data stack runs on Apache projects. Sustained funding matters. Account credits for AI tooling that maintainers couldn't otherwise afford? Genuinely useful.
On the other side: genuine unease. A few themes keep surfacing:
The branding question. Anthropic is a for-profit company building proprietary AI systems. Using the Apache brand — one of the most trusted in software — as a credibility signal for a commercial initiative raises fair questions about independence and co-optation. The ASF has always prided itself on vendor neutrality. What does it mean when a single donor with its own agenda contributes at this scale?
The workload problem. If Glasswing's AI scanning finds thousands of high-severity vulnerabilities across critical open source projects, someone has to fix them. Maintainers, many of whom already work nights and weekends without pay, will inherit the triage, remediation, and disclosure process for a flood of AI-generated findings they didn't ask for. Good intentions can still create bad outcomes.
The "what are these models actually for?" question. A tool capable enough to find zero-days in production software is a tool capable enough to be used for the opposite purpose. Security researchers are not wrong to ask who else has access, under what terms, and with what oversight. This is not tinfoil-hat territory — it is exactly the kind of concern responsible security professionals raise about dual-use technology.
These concerns are real, and they deserve honest engagement rather than dismissal.
There's also something worth pausing on before we get to the pushback: we should all be grateful these vulnerabilities are being found in open source code, not proprietary code. When Glasswing finds a zero-day in a closed-source product, you won't know about it. There's no CVE filed publicly, no community discussion, no patch you can contribute. With open source, the issues are visible. The progress is trackable. Anyone can submit the fix. That transparency is precisely what makes open source more trustworthy over time — not less — and it's a feature that proprietary software simply cannot replicate.
But I do want to push back on the underlying conclusion that some in the community seem to be drawing — that the right response is to slow this down, or to keep open source at arm's length from AI advancement.
Pandora Already Left the Building
Let's be honest about where we are.
The capability to do what Glasswing describes — AI-assisted vulnerability discovery at scale — exists now. Anthropic built it. So did Google. So did OpenAI, DeepMind, Meta, and a dozen well-funded startups you've never heard of. The research is published. The models exist. The infrastructure is being built whether any of us participates or not.
Debating whether open source communities should engage with this is a little like debating whether to acknowledge that email exists. The question isn't whether to engage. The question is how, and on whose terms.
Some voices — including well-intentioned political leaders — have argued for slowing the construction of AI data centers as a way to pump the brakes on AI development while governments and foundations sort out regulation. Senator Bernie Sanders has made exactly this argument, and he makes it with characteristic clarity and passion. His heart is in exactly the right place.
But I think this particular prescription would backfire, badly.
The Democratization Risk Nobody Is Talking About
Slowing data center construction doesn't slow the advancement of AI models and their capabilities. It slows access to them.
The research happens regardless — in private labs, behind closed doors, increasingly in labs outside the US and Europe where local oversight may be weaker, not stronger. What tightening supply does is make capacity scarce, raise prices, and tilt the playing field further toward whoever can afford to play. Big companies. Wealthy governments. Institutions with the capital to lock in long-term contracts at scale.
The students, the small businesses, the nonprofits, the open source projects, the researchers at universities in countries without sovereign AI programs — they get squeezed out. That's not a win for ethics. That's the worst possible outcome: powerful AI, controlled by the fewest possible hands.
The goal of AI regulation — and I'm firmly in the camp that says urgent, thoughtful regulation is essential — should not be to slow the technology. It should be to democratize access to it, and to hold those with access accountable for what they do with it.
This is the same argument the open source movement has been making about software for forty years. And it won.
What Open Source Can Win Here
Here's the thing that gets lost in the anxiety about Glasswing: open source is uniquely positioned to come out of this moment ahead, if it moves deliberately.
The donation from Anthropic isn't just money. It's an acknowledgment that modern AI systems are built on open source infrastructure, and that the companies extracting value from that infrastructure have an obligation to invest in it. That's a precedent worth establishing and expanding.
The vulnerability-scanning conversation is an opening, not a threat. Open source projects that engage proactively — that use AI tooling to get ahead of their own security debt, that establish community norms around AI-assisted disclosure, that advocate for maintainer support as part of the remediation process — will be better positioned than those that ignore the wave until it breaks over them.
The regulatory conversation is an opportunity. Open source foundations have a voice in how AI gets governed that they haven't fully used yet. The ASF, the Linux Foundation, the Eclipse Foundation — these are neutral, trusted institutions with decades of credibility. That credibility is valuable precisely in a moment when everyone is trying to figure out who to trust.
The Role You Can Play Right Now
None of this is abstract. It plays out in the day-to-day choices of individual engineers and maintainers.
The highest-leverage thing you can do right now is become an AI Enablement Engineer — someone who actively learns how AI tooling can accelerate your work, helps your team adopt it responsibly, and serves as the bridge between fast-moving AI capabilities and the humans who need to use them safely and effectively.
This isn't just a career move (though it is that). It's a form of stewardship. Every open source contributor who gets good at using AI tools to find bugs, write better documentation, review code faster, and onboard new contributors is expanding access to those capabilities. Every maintainer who establishes clear community norms around AI-generated contributions is doing the governance work that foundations and governments are only beginning to figure out.
The answer to "AI is powerful and scary" is not "therefore, keep it away from us." It's "therefore, let's make sure we understand it, we shape how it's used, and we ensure that access to it isn't hoarded."
Open Source Has Been Here Before
The open source movement was built on a bet that felt radical at the time: that software could be more powerful, more secure, and more trustworthy if it were built in public, by many hands, with shared ownership and shared accountability.
That bet was right. It won. Linux runs almost everything. Apache projects power global financial infrastructure. Open source is the default for most new software development on the planet.
AI is the next terrain for that argument. Proprietary models built behind closed doors, available only to those who can pay premium prices, governed by no community and accountable to no one but shareholders — that is the failure mode we should be organizing against.
Project Glasswing, for all its complexity, is a signal that the AI industry knows it needs open source to legitimize what it's building. That's leverage. Use it.
We can't turn back the clock. We can't slow the models down. But we can blow open the doors to access, hold the powerful accountable for how they use these tools, and make sure that the open source community — which built the infrastructure AI runs on — is driving the car, not riding in the trunk.
Open source is a movement. It has always required playing the game — the political game, the licensing game, the standards game. We've won before by playing it better than proprietary incumbents who had every structural advantage. This is the same game on new terrain, with higher stakes and a bigger stage.
We have to play it to win. And we will win — if we play it well.