Linum Blog

Your "Decentralised" Network Might Be Centralised (And We Can Detect It)

Ken Owiro
April 22, 2026
The uncomfortable truth is that blockchains can claim to be decentralised and yet very few actually are.

The scary part is that you can’t tell by looking at the code, the whitepaper, or even the validator list.

As someone who works in QA and network testing, I’ve watched countless projects launch with decentralisation claims that collapse under scrutiny. Validator distributions that look good on a chart but represent a handful of entities. Node participation that seems diverse until you trace the infrastructure back to three cloud providers.

The inherent problem is that decentralisation is easy to claim and hard to verify. Unlike smart contract security (which you can audit), network decentralisation requires continuous monitoring and systematic analysis.

So What Actually Counts as “Decentralised”?

Let’s start with the definition problem.

When someone says a blockchain is “decentralised,” they could mean any of these things:

  1. Consensus decentralisation: Many independent validators/miners participate in consensus
  2. Infrastructure decentralisation: Nodes are geographically distributed and run on different infrastructure providers
  3. Client decentralisation: Multiple independent client implementations exist and are widely used
  4. Economic decentralisation: Token rewards are distributed across many stakeholders, not concentrated in a few hands
  5. Governance decentralisation: Decision-making power is distributed, not concentrated in a foundation or core team
  6. Development decentralisation: The codebase is maintained by multiple independent teams

Most discussions assume definition #1 (validator count), but that’s the easiest to fake. You can have 1,000 validators and still have extreme centralisation if:

  • They’re all renting infrastructure from AWS (infrastructure centralisation)
  • 51% of them are run by the same entity (true validator centralisation)
  • 99% of tokens are controlled by 10 entities (economic centralisation)
  • A foundation controls all major decisions (governance centralisation)
  • Only one client implementation exists that everyone uses (client centralisation)

The Measurement Problem

Here’s where QA becomes critical.

If decentralisation is important (and for a true blockchain, it is), then you need to measure it continuously and systematically.

Here are some frequently used metrics:

  1. Nakamoto Coefficient

    The minimum number of entities that would need to be compromised to halt the network. Bitcoin’s, measured over mining pools, is around 5-6. Most newer chains score even worse.

  2. Gini Coefficient

    A measure of inequality in token distribution. Higher = more centralised. Bitcoin’s is around 0.88 (very unequal), which actually makes it highly centralised by this measure.

  3. Client diversity

    What percentage of nodes run each client implementation? If 80% of nodes run the same client, a bug in that client can take down the network.

  4. Geographic distribution

    Where are validator/node operators physically located? If 70% are in one country, you have geographic centralisation risk.

  5. Infrastructure provider concentration

    How many validators are running on AWS vs. running on their own hardware? AWS being down shouldn’t break the network.
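
The first two metrics above can be computed directly from a list of stake (or hashpower) balances. Here is a minimal Python sketch; the stake values are made-up illustration data, and the 1/3 halting threshold reflects BFT-style consensus (use 0.51 for a 51%-attack analysis):

```python
def nakamoto_coefficient(stakes, threshold=0.334):
    """Smallest number of entities whose combined stake exceeds `threshold`
    of total stake. More than 1/3 halts BFT-style consensus."""
    total = sum(stakes)
    running = 0.0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running / total > threshold:
            return count
    return len(stakes)

def gini_coefficient(stakes):
    """Inequality of the distribution: 0 = perfectly equal, -> 1 = one holder."""
    xs = sorted(stakes)
    n, total = len(xs), sum(xs)
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

stakes = [400, 300, 150, 80, 40, 20, 10]  # hypothetical validator stakes
print(nakamoto_coefficient(stakes))       # -> 1: one entity can halt consensus
print(round(gini_coefficient(stakes), 2)) # -> 0.53
```

Note how a network with seven validators can still have a Nakamoto coefficient of 1: a single entity holds over a third of stake, so the validator count alone tells you nothing.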

The problem here is that these metrics are hard to measure and easy to game. A validator can claim to run on independent infrastructure while actually renting cloud capacity. Geographic distribution can be faked with proxy nodes. Token concentration data is public, but the beneficial ownership behind addresses is opaque.

Why Network Decentralisation Matters for QA

You might be thinking: “This is a blockchain design problem, not a QA problem.”

Decentralisation directly impacts how a blockchain behaves under stress, which is exactly what QA teams need to test.

Scenario 1: Network Congestion

  • In a truly decentralised network, congestion should be handled by many independent validators making independent decisions about transaction ordering
  • In a semi-centralised network, a few large validators can effectively prioritise their own transactions, or MEV extractors can game the ordering
  • Testing this requires understanding the validator distribution and simulating behaviour under congestion

Scenario 2: Network Partitions

  • If your network splits (some regions can’t reach others), does the consensus mechanism still work?
  • If it does, which partition is the “real” one? In Bitcoin, it’s the chain with the most accumulated work. In proof-of-stake chains, it depends on staker distribution
  • A network where all major validators are in one region will partition differently than one with true geographic distribution

Scenario 3: Validator Malice

  • What if 1/3 of validators decide to attack the network by including invalid transactions?
  • Can the remaining 2/3 continue? Most modern BFT-style blockchains assume more than 2/3 of validators are honest
  • But if a single entity controls more than a third of the stake, let alone 51%, that assumption is meaningless
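
The 2/3 assumption can be checked mechanically, provided you have (or can estimate) the mapping from validator keys to real-world operators. A sketch with entirely hypothetical ownership data:

```python
from collections import defaultdict

def max_entity_share(validator_stakes, validator_owner):
    """validator_stakes: {validator_id: stake}.
    validator_owner: {validator_id: controlling entity}.
    Returns the largest share of total stake held by any single entity,
    regardless of how many distinct validator keys it runs."""
    by_entity = defaultdict(float)
    total = sum(validator_stakes.values())
    for validator, stake in validator_stakes.items():
        # Validators with no known owner are treated as independent.
        by_entity[validator_owner.get(validator, validator)] += stake
    return max(by_entity.values()) / total

# 1,000 equal-stake "validators" that are really three entities.
stakes = {f"val{i}": 1.0 for i in range(1000)}
owners = {f"val{i}": f"entity{i % 3}" for i in range(1000)}

share = max_entity_share(stakes, owners)
print(share > 1 / 3)  # -> True: one entity can stall consensus
```

A thousand validator keys, three real operators: the safety assumption is already broken, and no validator-count chart will show it.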

Scenario 4: Upgrade Adoption

  • When the network needs to upgrade (fork), do all validators cooperate, or do some dissent?
  • If validators are truly independent, you might see contentious forks (Bitcoin/Bitcoin Cash)
  • If they’re coordinated, upgrades happen smoothly (which is efficient, but signals centralisation)

The Testing Gap

Here’s the honest part: most QA frameworks don’t test for centralisation risk.

We test smart contract logic. We test consensus rules. We test transaction throughput.

But we rarely test: “What happens when 51% of validators are compromised?” or “How does the network behave when infrastructure providers have outages?” or “Can we detect when validators are colluding?”

Why? Because it requires:

  1. Understanding the actual validator distribution (data collection)
  2. Defining what “acceptable” centralisation looks like (governance decision)
  3. Building monitoring systems to detect centralisation drift (engineering effort)
  4. Running simulations under different centralisation scenarios (test complexity)
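
Item 3, drift monitoring, doesn’t need heavy machinery to start: compare the latest snapshot of a metric against its trailing average and alert when it moves in the bad direction. A sketch with hypothetical snapshots and thresholds:

```python
def detect_drift(history, key, worse, tolerance):
    """history: list of {metric: value} snapshots, oldest first.
    `worse` is "up" or "down" (which direction is bad for this metric).
    Returns True if the latest value is more than `tolerance` worse
    than the trailing average of all earlier snapshots."""
    values = [snap[key] for snap in history if key in snap]
    if len(values) < 2:
        return False  # not enough data to establish a baseline
    baseline = sum(values[:-1]) / len(values[:-1])
    delta = values[-1] - baseline
    return delta > tolerance if worse == "up" else delta < -tolerance

# Daily Nakamoto coefficient snapshots (illustrative numbers only):
history = [{"nakamoto": 14}, {"nakamoto": 13}, {"nakamoto": 12}, {"nakamoto": 8}]
print(detect_drift(history, "nakamoto", worse="down", tolerance=3))  # -> True
```

The same function works for stake concentration or client share with `worse="up"`; the point is that drift detection is an afternoon of engineering, not a research programme.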

It’s hard. It’s not glamorous. It doesn’t ship features. But it’s critical.

Red Flags for Centralisation (The QA Checklist)

Based on observations from dozens of blockchain projects, here are the signs that a “decentralised” network might not be:

  • 🚩 Validator concentration - Top 10 validators control >51% of stake. Ask: who are they? Are there common owners?
  • 🚩 Geographic concentration - >70% of validators are in one region. Check locations of validator operators (IP geolocation data).
  • 🚩 Infrastructure concentration - >60% of nodes use AWS/Azure/Google Cloud. This is surprisingly common and represents a single point of failure.
  • 🚩 Client concentration - >80% of nodes run the same client implementation. A bug in that client breaks the network.
  • 🚩 Token concentration - Gini coefficient >0.85 or Nakamoto coefficient <10. These numbers quantify how concentrated economic power actually is.
  • 🚩 No contentious forks ever - If the network has never had a genuine disagreement that resulted in a fork, validator operators might be too coordinated. Real decentralisation includes occasional disagreements.
  • 🚩 Foundation control - The founding team/foundation still controls or heavily influences most upgrade decisions. This isn’t necessarily bad, but it’s centralised.
  • 🚩 Upgrade speed - Every upgrade passes with >95% validator support. Real independence means some validators will disagree about direction.
  • 🚩 No validator churn - The same validators have been in the top 100 for 2+ years. New validators can’t break in (high barriers to entry = centralisation).
  • 🚩 Hard to run a node - Hardware requirements are extreme, or it requires specialised knowledge. Lower barriers to entry = more decentralisation.
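
This checklist is mechanical enough to run as an automated gate in CI or a monitoring job. A sketch in Python; the metric names and thresholds mirror the flags above, and the snapshot values are hypothetical:

```python
# Each entry: (metric key, predicate that flags a problem, human-readable flag).
RED_FLAGS = [
    ("top10_stake_share", lambda m: m > 0.51, "top 10 validators control >51% of stake"),
    ("top_region_share",  lambda m: m > 0.70, ">70% of validators in one region"),
    ("top_cloud_share",   lambda m: m > 0.60, ">60% of nodes on one cloud provider"),
    ("top_client_share",  lambda m: m > 0.80, ">80% of nodes run one client"),
    ("gini",              lambda m: m > 0.85, "token Gini coefficient >0.85"),
    ("nakamoto",          lambda m: m < 10,   "Nakamoto coefficient <10"),
]

def centralisation_report(metrics):
    """Return the red flags triggered by a metrics snapshot.
    Metrics missing from the snapshot are skipped, not assumed healthy."""
    return [msg for key, is_bad, msg in RED_FLAGS
            if key in metrics and is_bad(metrics[key])]

snapshot = {"top10_stake_share": 0.62, "top_cloud_share": 0.45,
            "top_client_share": 0.90, "gini": 0.80, "nakamoto": 4}
for flag in centralisation_report(snapshot):
    print("FLAG:", flag)
```

Running this on every published snapshot turns the checklist from a one-off blog exercise into a regression test for decentralisation.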

What Good Decentralisation Looks Like

For contrast, here are the signs of a network taking decentralisation seriously:

  • Nakamoto coefficient >20 - You need to compromise many independent entities to attack the network
  • Top 10 stake <40% - No single entity dominates
  • Geographic distribution - Validators spread across at least five countries
  • Multiple client implementations - At least 2 client implementations with >10% market share each
  • Infrastructure diversity - <50% of nodes on any single cloud provider
  • Regular contentious debates - The community disagreed about something and there’s documentation of it
  • Validator churn - New validators regularly enter and old ones exit
  • Low barriers to running nodes - Documentation exists, hardware requirements are reasonable, non-technical barriers are minimal
  • Transparent validator data - Information about validator distribution is readily available and regularly published
  • Incentive alignment - Token distribution and economic incentives reward decentralisation (not concentration)

The Uncomfortable Question

Here’s what keeps me up at night:

If decentralisation is so important to blockchain (and most projects claim it is), why don’t most teams systematically measure it?

The answer: measuring decentralisation is hard, and the results might be embarrassing.

It’s easier to claim decentralisation than to prove it. And once you’ve claimed it, admitting you don’t actually have it is a credibility killer. But from a QA perspective, you have to measure what matters. You can’t trust what you don’t monitor.

What Your Team Should Be Doing

If you’re building or maintaining a blockchain:

  1. Define what decentralisation means for your network. Don’t use vague language. Specific metrics: Nakamoto coefficient, Gini coefficient, client diversity percentage, geographic distribution targets.
  2. Measure it continuously. Build dashboards. Track validator distribution, client usage, stake concentration over time. Make this data public.
  3. Test failure scenarios. What happens if the top validator goes offline? What if all validators in one region lose connectivity? What if 1/3 of validators become malicious? Run these simulations.
  4. Monitor for centralisation drift. Decentralisation is not a one-time achievement. It requires ongoing maintenance. New validators should be easier to bootstrap than old ones. Geographic distribution should be actively encouraged.
  5. Be honest about your tradeoffs. Some projects optimise for speed over decentralisation (Solana). Some optimise for decentralisation over throughput (Bitcoin). Both are valid choices. What’s not valid is claiming both.
  6. Plan for the worst case. “Our network won’t be attacked by a coalition of validators” is not a credible assumption. Plan for it anyway.
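
Point 3’s simulations can start very small: given stake and region data, check whether enough stake stays online for consensus (more than 2/3 here, matching the BFT-style assumption) when the top validator or a whole region drops out. All data below is illustrative:

```python
def liveness_after_outage(validators, down):
    """validators: list of (name, stake, region); down: set of names offline.
    Returns True if the remaining online stake still exceeds 2/3 of total,
    i.e. the network keeps finalising through the outage."""
    total = sum(stake for _, stake, _ in validators)
    online = sum(stake for name, stake, _ in validators if name not in down)
    return online > (2 / 3) * total

validators = [  # hypothetical stake and regions
    ("v1", 300, "eu"), ("v2", 250, "us"), ("v3", 200, "us"),
    ("v4", 150, "asia"), ("v5", 100, "eu"),
]

top = max(validators, key=lambda v: v[1])[0]
us_nodes = {name for name, _, region in validators if region == "us"}

print(liveness_after_outage(validators, {top}))     # top validator offline -> True
print(liveness_after_outage(validators, us_nodes))  # US region offline -> False
```

In this toy network, losing the single biggest validator is survivable, but losing one region is not: exactly the kind of result these simulations exist to surface before an outage does.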

The Real Problem

The real problem with network decentralisation isn’t that it’s hard to achieve but that it’s easy to claim and hard to verify.

The role of QA isn’t just to test smart contracts but also to systematically verify that the claims made about the network (including decentralisation) match reality. The moment those diverge, you don’t have a blockchain. You have a distributed database with an expensive consensus mechanism.

Key Takeaways

  • Decentralisation is easy to claim and hard to verify
  • Most blockchains claim decentralisation but measure only validator count (the easiest part to fake)
  • True decentralisation requires measuring: geography, infrastructure, clients, token distribution, governance, and development
  • QA teams should test network behaviour under centralisation assumptions as part of security testing
  • Nakamoto coefficient, Gini coefficient, and client diversity are better decentralisation metrics than validator count
  • Decentralisation requires ongoing monitoring, not just initial measurement
  • Red flags include validator/token concentration, geographic clustering, and infrastructure concentration
  • Most networks succeed despite mediocre decentralisation because decentralisation is about tail risks, not day-to-day operation