1. The Economics of Crypto Security Are Breaking Down
Hacks and exploits drained approximately $1.4 billion from cryptocurrency platforms over the past year, a figure that does not capture the full scale of social engineering and phishing losses that fall outside the traditional "exploit" category. According to Charles Guillemet, chief technology officer of Ledger — the world's most widely used hardware wallet manufacturer — that number is likely to get worse before it gets better. The reason is not that defenders are becoming less competent. It is that artificial intelligence is fundamentally altering the cost structure of attacking crypto systems, making advanced exploits accessible to a much broader range of adversaries and compressing the time required to identify and weaponize vulnerabilities.
Guillemet's assessment, delivered in a detailed interview, draws on both Ledger's front-line position as a hardware security manufacturer and his close observation of the threat landscape that has produced incidents like the $270 million Drift Protocol breach, which he explicitly cited as a precedent when Bybit was later compromised in the same manner. The warnings carry weight not as academic threat modeling but as observations from someone whose organization is commercially and reputationally dependent on the integrity of the security infrastructure he is describing.
2. AI as an Attack Amplifier
The specific mechanism through which AI is degrading crypto security economics, in Guillemet's framing, is its ability to reduce both the cost and the knowledge threshold required to execute sophisticated attacks. Traditionally, writing effective malware, identifying novel vulnerabilities in complex smart contract code, and crafting convincing social engineering campaigns required either deep technical expertise or significant resources. Nation-state actors and well-funded criminal organizations could clear these bars. Most potential adversaries could not.
AI changes that calculus. Large language models can generate functional malware code from high-level descriptions. Automated vulnerability scanners augmented by AI can identify patterns in code that human auditors miss or would take longer to find. AI-generated phishing content and deepfake audio and video make social engineering campaigns more persuasive and harder to detect. The combination of these capabilities means that attacks that previously required a team of expert engineers can now be initiated by individuals or small groups with modest technical backgrounds, and that the time from vulnerability discovery to exploitation is shrinking.
For an industry that already struggles to produce secure code fast enough to keep up with the pace of protocol development, this compression of the attack lifecycle creates a meaningful asymmetry: defenders must secure every version of their code across every deployment; attackers only need to find and exploit one version, once.
3. AI-Generated Code Introduces Systemic Risk
A second AI-related threat vector Guillemet identified is one that originates on the defense side rather than the attack side: the rapid adoption of AI coding assistants by crypto developers. Tools that generate functional code from natural language descriptions, autocomplete complex functions, and accelerate the development process have become mainstream across the software industry, including blockchain protocol development.
The problem is that AI-generated code is not inherently secure code. AI coding assistants are trained to produce code that works — that passes tests, compiles, and executes the intended function — not code that is hardened against adversarial inputs, edge cases, or sophisticated manipulation. When developers trust AI-generated code without independently verifying its security properties, and when that code is deployed at scale across protocols managing hundreds of millions or billions of dollars, the result is a potential proliferation of vulnerabilities that are systematically introduced rather than incidentally present.
Guillemet's summary of this dynamic was blunt: there is no "make it secure" button in AI coding tools, and the industry will inevitably produce a significant volume of code that is insecure by design — not because anyone intended insecurity but because the tools optimizing for speed and functionality are not optimizing for security. That code will be deployed, it will be exploited, and the losses will compound unless the industry develops systematic ways to verify what AI tools produce rather than simply trust it.
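The gap between "code that works" and "code that is secure" can be made concrete with a small sketch. The `transfer` function below is a hypothetical example of the kind of happy-path code an AI assistant might produce, and `check_invariants` is a minimal, randomized verification harness of the sort the industry would need to apply systematically; all names are illustrative.

```python
import random

# Hypothetical example: a transfer function of the kind an AI assistant
# might generate. It "works" on the happy path but is not hardened
# against adversarial inputs.
def transfer(balances, sender, receiver, amount):
    """Move `amount` from sender to receiver; returns updated balances."""
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    balances = dict(balances)
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return balances

def check_invariants(trials=10_000, seed=0):
    """Randomized check of two security properties an audit might only
    spot-check: supply conservation and no negative balances."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        balances = {"a": rng.randint(0, 100), "b": rng.randint(0, 100)}
        total_before = sum(balances.values())
        sender, receiver = rng.choice([("a", "b"), ("b", "a")])
        amount = rng.randint(-50, 150)  # adversarial inputs, not just valid ones
        try:
            after = transfer(balances, sender, receiver, amount)
        except ValueError:
            continue  # rejecting a bad transfer is acceptable behavior
        if sum(after.values()) != total_before or min(after.values()) < 0:
            failures.append((sender, receiver, amount, balances))
    return failures

# Negative amounts slip past the balance check and can drive the
# receiver's balance below zero -- a bug no passing unit test on valid
# inputs would ever surface.
print(len(check_invariants()) > 0)
```

The point of the sketch is that the vulnerability is only visible when the verification deliberately feeds the code inputs an attacker would choose, which is exactly what test suites optimized for functionality tend to omit.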
4. The Demand for Perfection That Cannot Be Met
One of the starkest dimensions of Guillemet's assessment is the mismatch between what the crypto security environment demands and what is realistically achievable. He summarized the challenge with a direct formulation: defenders need to be perfect; attackers only need to be right once. That asymmetry is not new — it applies to cybersecurity broadly — but it is more consequential in crypto than in most other domains because the assets at risk are directly and irreversibly transferable. A security failure in a bank results in a loss that may be partially recoverable through the traditional financial system's dispute and reversal mechanisms. A security failure in a DeFi protocol results in assets that are typically gone permanently.
The implication of AI's expansion of the attacker population and compression of the attack lifecycle is that the "be perfect" requirement is being applied against a larger and faster-moving attack surface. The number of potential adversaries who can mount a sophisticated attack is growing. The time between vulnerability introduction and exploitation is shortening. The resources required to mount complex, multi-vector operations — like the six-month social engineering campaign that preceded the Drift exploit — are declining. Each of these trends moves the equilibrium further away from the defender and toward the attacker.
5. Formal Verification as a Stronger Foundation
Against this threat environment, Guillemet argued for a fundamental shift in how crypto protocols approach security — away from the current dominant model of code audits and toward formal verification. Traditional security audits involve human experts reviewing code to identify potential vulnerabilities. They are valuable and catch real problems, but they are inherently incomplete: auditors can only identify the vulnerabilities they know to look for, and the depth of coverage is constrained by the time and expertise available. As code complexity increases and AI-generated code proliferates, the gap between what audits can catch and the full space of potential vulnerabilities widens.
Formal verification takes a different approach: instead of searching for problems by reading code, it uses mathematical proofs to demonstrate that code satisfies specific properties under all possible inputs and conditions. A formally verified property is not "probably correct based on our review" — it is provably correct given the specification. For high-value DeFi protocols, the additional development time and cost required for formal verification represent an investment that is modest compared to the potential losses from a single exploit.
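The distinction between sampling and proving can be illustrated with a toy sketch. Real formal verification uses proof assistants or SMT solvers over unbounded domains; the hypothetical `verify_all` below makes the same conceptual point by checking a no-overdraft property for every input in a deliberately small finite domain, where exhaustiveness is the analogue of a proof.

```python
# Hypothetical vault rule: withdrawals are capped at `limit` per call
# and may never exceed the balance. An invalid request leaves state
# unchanged rather than partially applying.
def safe_withdraw(balance, amount, limit):
    if amount < 0 or amount > limit or amount > balance:
        return balance  # reject: state unchanged
    return balance - amount

DOMAIN = range(0, 64)  # every value is checked, not a random sample

def verify_all():
    """Check the no-overdraft property for every (balance, amount, limit)
    triple in the domain -- the exhaustive analogue of a proof, as opposed
    to an audit's spot checks."""
    for balance in DOMAIN:
        for amount in range(-8, 72):  # include out-of-domain inputs
            for limit in DOMAIN:
                new_balance = safe_withdraw(balance, amount, limit)
                # The verified property: balances never go negative and
                # never increase on a withdrawal.
                assert 0 <= new_balance <= balance, (balance, amount, limit)
    return True

print(verify_all())  # True: the property holds over the whole domain
```

The caveat the section raises applies directly to this sketch: the assertion is the specification, and a flaw in that line (say, forgetting the lower bound) would be "verified" just as confidently.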
The challenge is that formal verification requires precise specification of what a system should do before the proof can be constructed, and that specification process itself requires significant expertise. It also cannot protect against vulnerabilities in the specification itself — a formally verified system that does exactly what its specification says can still be exploited if the specification omitted an important security property. Nevertheless, Guillemet's view is that formal verification is the direction the most security-critical parts of crypto infrastructure need to move in, regardless of the friction involved.
6. Hardware Isolation as a Defense Layer
Parallel to the argument for formal verification, Guillemet pointed to hardware-based security isolation as a fundamental layer of defense that software-only approaches cannot replicate. The argument from first principles is straightforward: a device that maintains private keys in secure hardware that is not connected to the internet and does not execute arbitrary software cannot have those keys extracted by malware running on an internet-connected machine, regardless of how sophisticated that malware is.
The Drift Protocol compromise illustrates the attack vector that hardware isolation addresses. Security Council members had their devices compromised through a malicious TestFlight application and a VSCode/Cursor vulnerability. Because the signing keys used for governance approvals were stored on and accessible from those compromised devices, the attackers could obtain valid transaction signatures without the signers knowing what they were authorizing. A hardware wallet that required physical confirmation — a button press on a device the signer physically controlled — combined with a screen display of the actual transaction content would have made the durable nonce approval process observable and interruptible.
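The signing flow that hardware isolation enforces can be modeled in a short sketch. Everything here is a stand-in: the class, the HMAC "signature," and the lambda confirm button are illustrative substitutes for a real secure element, signature scheme, and physical button; the point is the structure, in which the key never leaves the device and the device displays what it will actually sign.

```python
import hmac, hashlib

class HardwareWallet:
    """Hypothetical model of a hardware signer: the secret stays inside
    the device boundary, and signing requires physical confirmation of
    the rendered transaction content."""

    def __init__(self, secret: bytes):
        self._secret = secret  # never leaves the device

    def sign(self, tx: dict, confirm_button) -> bytes:
        # The device renders what it will really sign -- not what the
        # (possibly compromised) host claims the transaction contains.
        rendered = f"send {tx['amount']} to {tx['to']} nonce={tx['nonce']}"
        if not confirm_button(rendered):
            raise PermissionError("user rejected on device")
        return hmac.new(self._secret, rendered.encode(), hashlib.sha256).digest()

# Host-side malware can alter the request it submits, but it cannot
# forge the button press or change what the on-device screen displays.
wallet = HardwareWallet(b"device-only-secret")
honest_tx = {"amount": 10, "to": "treasury", "nonce": 7}
sig = wallet.sign(honest_tx, confirm_button=lambda shown: "treasury" in shown)

malicious_tx = {"amount": 10**6, "to": "attacker", "nonce": 7}
try:
    wallet.sign(malicious_tx, confirm_button=lambda shown: "treasury" in shown)
except PermissionError:
    print("blocked")  # the on-device display exposed the real destination
```

In the Drift-style attack, the compromised host would submit `malicious_tx` while telling the signer it was `honest_tx`; the device's independent rendering is what makes the discrepancy observable and the signing interruptible.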
The practical limitation of hardware isolation is user experience friction. Hardware wallets require physical interaction, add steps to signing workflows, and are incompatible with the speed and automation that DeFi governance operations often attempt to achieve. Guillemet's position is that this friction is a feature, not a bug, in high-value security contexts: the additional friction is what makes the attack more difficult, and removing it in the name of convenience directly reduces the security properties the hardware provides.
7. The Offline Storage Imperative
A related principle Guillemet emphasized is the value of keeping assets and critical infrastructure in a state that is, by default, not reachable from the internet. The attack surface for a system that cannot be accessed remotely is fundamentally different from the attack surface for one that can. For individual users, cold storage — hardware wallets kept offline when not actively in use — eliminates the remote malware vector entirely. For protocol governance systems, the requirement that signing keys reside on air-gapped or minimally networked devices significantly raises the cost and complexity of compromising those keys.
The tension between offline security and operational efficiency is real and not easily resolved. DeFi protocols designed for continuous operation, fast governance responses, and high-frequency administrative updates are architecturally biased toward always-online infrastructure. The security premium from moving governance operations to more isolated hardware and workflows is paid in speed and convenience, and in a competitive DeFi ecosystem where protocol teams optimize heavily for development velocity, that premium often goes unpaid until a major breach makes it visible.
8. The AI Adversary in Social Engineering
Beyond the technical attack vectors, Guillemet's concern about AI-powered social engineering reflects a category of threat that formal verification and hardware wallets do not directly address. The Drift Protocol incident demonstrated that a sophisticated attacker willing to invest six months and $1 million in building a false identity can compromise governance systems by manipulating the humans who control them, regardless of the technical security of the protocols those humans operate.
AI makes this class of attack more accessible and more effective. Deepfake audio and video technology can make impersonation attacks — previously limited to sophisticated state-level operations — available to a much wider range of actors. AI-generated text can produce phishing communications that are indistinguishable from legitimate correspondence in terms of language quality, context, and specificity. AI-assisted research into targets can rapidly build the kind of detailed personal and professional profiles that make targeted social engineering more convincing.
The defensive response to AI-enhanced social engineering cannot be purely technical. It requires operational security protocols, mandatory verification procedures for high-value transactions and governance actions, and a security culture that treats unexpected requests with heightened skepticism regardless of how credible they appear. The lesson from Drift is not that the Security Council members were naive — the operation that compromised them was sophisticated enough to deceive people who had met the attackers in person over months. The lesson is that human-layer security requires systematic process controls, not just good judgment in individual interactions.
9. What the Industry Should Assume
One of Guillemet's more consequential points is a recommendation about the operating assumption that crypto protocols and users should maintain. His framing is that the current threat environment justifies assuming that many systems will eventually fail, rather than assuming they will hold unless a specific vulnerability is identified. That shift in assumption — from "secure until proven otherwise" to "will eventually fail, so minimize impact" — has significant implications for how security is designed and resources are allocated.
It argues for defense-in-depth architectures where the failure of any single component does not immediately compromise the whole system. It argues for timelocks and governance delays that create detection windows between when a malicious action is approved and when it can be executed. It argues for circuit breakers and emergency pause mechanisms that can limit the damage of an exploit in progress. And it argues for the kind of asset isolation and cold storage practices that ensure that a device compromise, however sophisticated, cannot immediately reach the full scope of assets a protocol manages.
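Two of the mechanisms above, the timelock delay and the circuit breaker, can be sketched together in a minimal model. The class, method names, and the 48-hour delay are illustrative assumptions, not drawn from any particular protocol; the structural point is that approval and execution become separate events with a detection window between them.

```python
import time

class TimelockedGovernance:
    """Sketch of a governance queue with a timelock (detection window
    between approval and execution) and an emergency pause."""

    DELAY = 48 * 3600  # seconds between approval and earliest execution

    def __init__(self):
        self.queue = {}      # action id -> (action, eligible timestamp)
        self.paused = False  # circuit breaker

    def propose(self, action_id, action, now=None):
        now = time.time() if now is None else now
        self.queue[action_id] = (action, now + self.DELAY)

    def cancel(self, action_id):
        # The timelock only helps if watchers can act inside the window.
        self.queue.pop(action_id, None)

    def execute(self, action_id, now=None):
        now = time.time() if now is None else now
        if self.paused:
            raise RuntimeError("circuit breaker engaged")
        action, eligible = self.queue[action_id]
        if now < eligible:
            raise RuntimeError("timelock: detection window still open")
        del self.queue[action_id]
        return action()

gov = TimelockedGovernance()
gov.propose("upgrade", lambda: "executed", now=0)
try:
    gov.execute("upgrade", now=3600)  # one hour in: still locked
except RuntimeError as err:
    print(err)
print(gov.execute("upgrade", now=TimelockedGovernance.DELAY + 1))
```

A malicious approval obtained through compromised signers, as in the Drift incident, would sit in this queue for the full delay, giving monitors time to `cancel` it or trip the pause before any funds move.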
The assumption of eventual failure also implies a different relationship with recovery and compensation planning. Protocols that operate on the assumption that their security will hold do not invest heavily in compensation funds, insurance mechanisms, or recovery protocols. Protocols that operate on the assumption of eventual failure treat those mechanisms as essential infrastructure rather than optional features.
10. The Security Infrastructure Race
The warning from Ledger's CTO is not the only evidence that the crypto industry is confronting a moment of security reckoning. Ripple has launched an AI-driven security strategy for the XRP Ledger that embeds machine learning across the full development lifecycle, including adversarial testing designed to find vulnerabilities before attackers do. Ethereum has established a post-quantum security hub. MoonPay has integrated Ledger hardware wallet signing into its AI agent platform specifically to prevent private keys from being accessible to the agent layer. Multiple analytics firms including Chainalysis and TRM Labs are deploying AI to the tracing and attribution side of the threat equation.
The response is real, but Guillemet's warning is that the attack side of the AI ledger is advancing at least as fast as the defense side, and in some dimensions faster. The economics favor offense: a successful crypto exploit generates immediate, irreversible returns; a successful defense generates only the absence of a loss. Until the defense side invests at the scale and with the systematic rigor that the threat environment demands — formal verification, hardware isolation, human-layer security protocols, and the assumption of eventual failure — the $1.4 billion annual loss rate to crypto hacks is not a ceiling. It is a baseline.

