anthropic.com
Project Glasswing: Securing critical software for the AI era
by timdaub.eth11884 πŸ₯ β€’ 2h
@LorenzoARK

New monthly developer count in crypto just fell to levels not seen since 2017. One of the metrics we have always preached in crypto, and one I think will become completely irrelevant very quickly, is developer activity or count. Back in the day this was really important for understanding the health of an L1/L2.

Historically, developer count and activity mattered because writing code was expensive. If a chain had a lot of real developers shipping wallets, protocols, tooling, SDKs, infra, and apps, that usually meant there was genuine mindshare and experimentation. It was an imperfect metric, but it was still a decent proxy for how much human capital was committed to the ecosystem.

AI changes that. First, code generation is essentially free. One developer can now produce the output that previously required several people. So a lower developer count may not mean a weaker ecosystem at all; you could have fewer developers producing better products.

With AI, developer activity becomes less useful as a health metric because it stops being a scarce input. When something becomes cheap and abundant, it usually loses value as a signal. Crypto is open source; we don't need millions of developers all rewriting the same thing to build new products. Smart contracts were always meant to be capital- and human-efficient.

Curious what the folks at @electriccapital @avichal think

x.com
by rvolz.eth1248 πŸ₯ β€’ 18h β€’ x.com
@RonanFarrow: 18-month investigation into Sam Altman's ouster from OpenAI
by mishaderidder.eth12001 πŸ₯ β€’ 6h β€’ firefly.social
Create ENS Subdomains via API | NameStone
by timdaub.eth11884 πŸ₯ β€’ 6h β€’ namestone.com
x-voice.app
X Voice – Listen to X Articles
by timdaub.eth11884 πŸ₯ β€’ 2h
@tenobrus

Maybe this is not yet clear, so let me state it plainly: as of right now, Anthropic, and really a small number of individuals at Anthropic, has the capacity to directly attack and cause major damage to the United States government, China, and global superpowers generally. Government agencies like the NSA do not have internal models or defense capabilities that outclass frontier models.

If they chose to do so, they could likely exfiltrate top-secret information from government systems, gain control over critical infrastructure including military infrastructure, sabotage or modify communications between members of government at the highest level, and potentially carry on these activities for some time without detection. The thing about having access to a huge number of zero-days your adversaries don't know about is that it gives you a massive asymmetric advantage.

They did not exploit this to gain power or destabilize the world order. They publicly released the information that they had these capabilities and worked to mitigate these flaws. You should be grateful American frontier labs have proven themselves remarkably trustworthy and concerned with the public good.

But it's critical you understand that we are in a new regime. Private entities now have power that directly rivals and impacts the government's monopoly on influence and violence. And Anthropic is certainly not the only one; there's little chance OpenAI's internal models are far behind. This trend will accelerate on virtually every dimension, not slow down.

My prediction for how it plays out is the relatively imminent seizure and nationalization of labs by the US government, sometime over the next two years. It's very tough for me to see how they accept the existence of this kind of threat. But this adds a whole new class of governance issues, as we will then have handed these extremely wide-reaching capabilities from private entities to public ones.

x.com
by timdaub.eth11884 πŸ₯ β€’ 36m β€’ x.com
x.com
Introducing the Tempo Accounts SDK
by timdaub.eth11884 πŸ₯ β€’ 5h