
Why did MetaMask show $0 on Ethereum when AWS went offline?


An Amazon Web Services disruption on Oct. 20 knocked out MetaMask and other ETH wallet displays and slowed Base network operations, exposing how cloud infrastructure dependencies ripple through decentralized systems when a single provider fails.

AWS reported a fault in its US-EAST-1 region beginning at 03:11 ET, with DNS and EC2 load-balancer health monitoring failures cascading into DynamoDB and other services.

Amazon declared full mitigation by 06:35 ET and full recovery by evening, though backlog clearing extended into Oct. 21.

Coinbase posted an active incident, noting an “AWS outage impacting multiple apps and services,” while users reported that MetaMask balances were displaying zero and that Base network transactions were experiencing delays.

The mechanical link runs through Infura, MetaMask’s default RPC provider. MetaMask documentation directs users to Infura’s status page during outages because the wallet routes most read and write operations through Infura endpoints by default.

When Infura’s cloud infrastructure wobbles, balance displays and transaction calls can misreport even though funds remain secure on-chain.
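At the RPC layer, a wallet-style balance lookup is just a JSON-RPC call to whatever endpoint the wallet is configured to use. The sketch below (the endpoint URL and address are placeholders, not values from this incident) illustrates why a failed read at the provider layer can surface as “$0” even though the on-chain balance is untouched and readable from any healthy node.

```typescript
// Minimal sketch: read an ETH balance directly from a JSON-RPC endpoint.
// RPC_URL and ADDRESS are hypothetical placeholders.
const RPC_URL = "https://ethereum-rpc.example.com";
const ADDRESS = "0x0000000000000000000000000000000000000000";

async function getBalanceWei(rpcUrl: string, address: string): Promise<bigint> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",
      params: [address, "latest"],
    }),
  });
  if (!res.ok) throw new Error(`RPC endpoint unavailable: HTTP ${res.status}`);
  const { result, error } = await res.json();
  if (error) throw new Error(`RPC error: ${error.message}`);
  return BigInt(result); // hex string such as "0xde0b6b3a7640000", in wei
}

// If the default provider is down, this call throws; a wallet may then render "0"
// even though the balance is intact on-chain.
getBalanceWei(RPC_URL, ADDRESS)
  .then((wei) => console.log(`Balance: ${wei} wei`))
  .catch((err) => console.error("Read failed, balance unknown (not zero):", err.message));
```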

The disruption affected Ethereum and layer-2 networks that rely on Infura’s RPC infrastructure, creating UI failures that mimicked on-chain problems despite consensus mechanisms continuing to operate.

Base chain metrics from Oct. 21 show $17.19 billion in total value locked, roughly 11 million transactions per 24 hours, 842,000 daily active addresses, and $1.37 billion in DEX volume over the prior day.

Short outages of six hours or less typically reduce DEX volume by 5% to 12% and transaction counts by 3% to 8%, with TVL remaining stable because the issues are cosmetic rather than systemic.

Extended disruptions lasting six to 24 hours can lead to a 10% to 25% decrease in DEX volume, an 8% to 20% decrease in transactions, and a 0.5% to 1.5% decrease in bridged TVL, as delayed bridging operations and risk-off rotations to Layer 1 take hold.

However, transaction count and DEX volumes remained steady between Oct. 20 and 21. DEX volumes were $1.36 billion and $1.48 billion, respectively, while transactions amounted to 10.9 million and 10.74 million.
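A quick back-of-the-envelope check on those figures (using only the numbers quoted above) shows how small the day-over-day moves were relative to the typical short-outage ranges:

```typescript
// Percent change between the Oct. 20 and Oct. 21 figures quoted above.
const pct = (before: number, after: number) => ((after - before) / before) * 100;

console.log(pct(1.36, 1.48).toFixed(1));  // DEX volume ($B):   ~ +8.8%
console.log(pct(10.9, 10.74).toFixed(1)); // transactions (M):  ~ -1.5%
// Both moves sit inside or below the 3%-12% hit a short outage typically causes,
// which is why activity reads as "steady."
```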

[Chart: Base transactions over a 7-day period]
Base daily transactions dropped 8% from 11.2 million to 10.3 million during the Oct. 20-21 AWS outage before recovering to 11 million by Oct. 23.

Base experienced a separate incident on Oct. 10 involving safe head delays from high transaction volume, which the team resolved quickly.

That episode demonstrated how layer-2 networks can hit finality and latency constraints during demand spikes independent of cloud infrastructure issues.

Stacking these demand-side pressures with external infrastructure failures compounds the risk profile for networks running on centralized cloud providers.

Date & time Service Replace Symptom Resolved?
Oct 20, 07:11 AWS (us-east-1) Outage recognized; inner DNS and EC2 load-balancer health-monitor fault World API/connectivity errors throughout main apps “All 142 companies restored” by 22:53; some backlogs lingered into Oct 21.
Oct 20, 07:28 → Oct 21, 00:57 Coinbase standing Incident opened → resolved Customers unable to login, commerce, switch; “funds are secure” messaging Recovered; monitoring by means of night Oct 20 (PDT).
Oct 20, 19:46 Decrypt tracker MetaMask balances exhibiting zero; Base/OpenSea struggling as AWS points persist; Infura implicated Pockets UI misreads and RPC errors throughout ETH & L2s Ongoing throughout afternoon; restoration staggered by supplier queues.
Oct 10, 21:40 (context) Base standing “Secure head delay from excessive tx quantity” (unrelated to AWS) Finality/latency lag (“secure head” behind) Resolved identical day; exhibits L2 latency edges unbiased of cloud occasions.

Cloud concentration surfaces as a systemic weak spot

The AWS event refreshes longstanding concerns about cloud provider concentration in crypto infrastructure.

Prior AWS incidents in 2020, 2021, and 2023 revealed complex interdependencies across DNS, Kinesis, Lambda, and DynamoDB services that propagate to wallet RPC endpoints and layer-2 sequencers hosted in the cloud.

MetaMask’s default routing through Infura means a cloud hiccup can appear chain-wide to end users, despite on-chain consensus operating normally.

Solana’s five-hour network halt in 2024, caused by a software bug, demonstrated user tolerance for brief downtime when recovery is executed cleanly and communication stays clear.

Optimism and Base have previously logged unsafe and safe head stalls on their OP-stack architecture, issues that teams can resolve through protocol improvements.

The AWS disruption differs in that it exposes infrastructure dependencies outside the control of blockchain protocols themselves.

Infrastructure teams will likely accelerate multi-cloud failover plans and broaden RPC endpoint diversity following this incident.

Wallets may prompt users to configure custom RPCs rather than relying on a single default provider.
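What that diversity can look like on the client side is sketched below; the endpoint URLs are hypothetical placeholders, and this is not MetaMask’s actual failover logic, only an illustration of falling back across independently hosted providers.

```typescript
// Sketch of client-side RPC failover across multiple, independently hosted endpoints.
// The URLs below are hypothetical placeholders.
const ENDPOINTS = [
  "https://rpc-primary.example.com",  // default provider (e.g., cloud-hosted)
  "https://rpc-backup-1.example.com", // different provider / cloud region
  "https://rpc-backup-2.example.com", // self-hosted or bare-metal node
];

async function rpcCall(method: string, params: unknown[]): Promise<unknown> {
  let lastError: unknown;
  for (const url of ENDPOINTS) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const { result, error } = (await res.json()) as {
        result?: unknown;
        error?: { message: string };
      };
      if (error) throw new Error(error.message);
      return result; // first healthy endpoint wins
    } catch (err) {
      lastError = err; // fall through to the next endpoint
    }
  }
  throw new Error(`All RPC endpoints failed: ${String(lastError)}`);
}

// Example: rpcCall("eth_blockNumber", []) keeps working as long as any one endpoint is up.
```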

Layer-2 teams typically publish post-mortems and service-level objective revisions within one to four weeks of major incidents, potentially elevating client diversity and multi-region deployment priorities in upcoming roadmaps.

What to watch

AWS will release a post-event summary detailing root causes and remediation steps for the US-EAST-1 disruption.

Base and Optimism teams should publish incident post-mortems addressing any sequencer or RPC impact specific to OP-stack chains.

RPC providers, including Infura, face pressure to commit publicly to multi-cloud architectures and geographic redundancy that can withstand single-provider failures.

Centralized exchanges that posted incidents during the AWS outage, including Coinbase, may experience spread widening and volume shifts to decentralized exchanges on less-affected chains during future cloud disruptions.

Monitoring exchange status pages and Downdetector curves during infrastructure events provides real-time signals for how centralized and decentralized trading venues diverge under stress.

The event confirms that blockchain’s decentralized consensus cannot fully insulate user experience from centralized infrastructure chokepoints.

RPC layer concentration remains a practical weak point, where cloud provider failures translate into wallet display errors and transaction delays that undermine confidence in the reliability of Ethereum and layer-2 ecosystems.
