AI and crypto are colliding at pace. Autonomous agents are executing trades on-chain, AI tools are being used to audit smart contracts, and DAOs are experimenting with letting models guide decisions. For founders, this convergence isn’t just hype – it’s shaping how the next generation of products will be built. But the real conversation isn’t about what we can build. It’s about how we govern it.

To make sense of this landscape, it helps to frame two distinct approaches:

  • CAI (Centralised AI): Models run by a single entity. They’re powerful, optimised, and easy to scale. But they’re opaque, and whoever controls the model controls the outcome. If a DAO leans on CAI, decentralisation is more marketing than reality.
  • DAI (Decentralised AI): Models and agents distributed across a network. They’re slower, harder to coordinate, and more expensive to run. But they’re transparent, resilient, and closer to the Web3 ethos of shared ownership.

Neither approach is inherently better – they represent a trade-off. CAI gives you speed and scale, while DAI gives you trust and resilience. In practice, most products will adopt hybrids: centralised models for high-performance tasks, decentralised ones where transparency and trust are non-negotiable.
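
To make the trade-off concrete, here’s a minimal TypeScript sketch of hybrid routing. Everything in it is illustrative – `centralInfer` and `decentralInfer` are hypothetical placeholders, not real APIs – but it captures the pattern: trust-critical tasks take the slower, auditable decentralised path, while everything else is routed for speed.

```typescript
// Illustrative only: `centralInfer` and `decentralInfer` stand in for a
// hosted model API and a decentralised inference network respectively.

type TaskKind = "summarise" | "governance-vote" | "treasury-action";

interface InferenceResult {
  output: string;
  auditable: boolean; // can a third party verify how this was produced?
}

// Placeholder for a call to a centralised, hosted model (fast, opaque).
async function centralInfer(prompt: string): Promise<InferenceResult> {
  return { output: `central:${prompt}`, auditable: false };
}

// Placeholder for inference on a decentralised network (slower, verifiable).
async function decentralInfer(prompt: string): Promise<InferenceResult> {
  return { output: `decentral:${prompt}`, auditable: true };
}

// Route trust-critical tasks to the auditable path, the rest for speed.
// In a real product the routing policy itself should be public and governed.
async function route(kind: TaskKind, prompt: string): Promise<InferenceResult> {
  const trustCritical = kind === "governance-vote" || kind === "treasury-action";
  return trustCritical ? decentralInfer(prompt) : centralInfer(prompt);
}
```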

To give you a window into what is happening in this space right now, and why we need to pay attention:

  • A research piece from Precedence Research valued the blockchain + AI market at US$550.70 million in 2024, projected to reach ~US$4.34 billion by 2034 (CAGR ≈ 22.9%).
  • A Cryptopolitan report on AI-driven crypto projects noted that startups focused on decentralised AI and blockchain integrations raised approximately US$3.1 billion in Q1 2025 alone. Many AI crypto projects continue to struggle for success, but there is genuine interest in finding ways to make it work.
  • Firms using AI/ML for blockchain compliance report a 39% improvement in reporting accuracy in 2025, up from ~27% in 2024, and audit preparation time has dropped ~33% using real-time data validation – measurable gains in compliance and governance.

Several projects are already experimenting with DAI, giving us a glimpse of what works (and what doesn’t):

  • Spore.fun is an experiment in “open-ended evolution” of on-chain agents: autonomous agents hosted in TEEs (Trusted Execution Environments) that can evolve, control wallets, interact with social media, and more. It’s a testbed for agents that are sovereign (not constantly overseen), raising big questions about emergent behaviour and safety.
  • SocialGenPod – generative AI apps built with decoupled personal data stores (Solid Pods). Users keep their data; apps access it only with permission. It’s a prototype, but it shows a model for preserving privacy and data ownership in a DAI context.
  • Web3Recommend – a decentralised recommendation engine that balances trust and relevance, using reputation (MeritRank), graph algorithms, and resistance to Sybil attacks. Useful when you need content ranking or recommendation in a social or content-DAO setting without handing control to one service; a simplified sketch of the core idea follows this list.
  • Projects like Ocean Protocol and Alethea AI are in the mix: Ocean (founded in 2017 by Bruce Pon and Trent McConaghy) for decentralised data marketplaces, and Alethea for synthetic media and identity that can help prevent deepfake fraud. They aren’t fully “DAI” in every component, but they illustrate hybrid builds where parts of the pipeline are decentralised.
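
To illustrate the idea behind reputation-weighted ranking, here’s a deliberately simplified TypeScript sketch. It is not the published MeritRank algorithm (real systems derive reputation from trust-graph walks over on-chain history, and the table below is invented for the example), but it shows why Sybil accounts with near-zero reputation barely move a score.

```typescript
// Simplified illustration of reputation-weighted voting. Each vote is
// weighted by the voter's earned reputation, so freshly minted Sybil
// accounts (reputation near zero) have almost no influence.

interface Vote {
  voter: string;
  value: number; // e.g. +1 upvote, -1 downvote
}

// In a real system reputation is computed from a trust graph; here it is
// just a hypothetical lookup table.
const reputation: Record<string, number> = {
  alice: 0.9,
  bob: 0.6,
  sybil1: 0.01,
  sybil2: 0.01,
};

function score(votes: Vote[]): number {
  return votes.reduce(
    (sum, v) => sum + v.value * (reputation[v.voter] ?? 0),
    0
  );
}

// Two Sybil downvotes barely dent two established upvotes:
console.log(
  score([
    { voter: "alice", value: 1 },
    { voter: "bob", value: 1 },
    { voter: "sybil1", value: -1 },
    { voter: "sybil2", value: -1 },
  ])
); // ≈ 1.48
```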

The industry is evolving at a rapid pace, but these systems carry real risks. The dangers below belong in your risk register, technical architecture documents, and governance designs, so that you not only move fast, but move safely.

  1. Emergent / Unintended Agent Behaviour: Projects like Spore.fun show both the potential and the danger: once agents can evolve, self-replicate, or take actions with financial consequences, prediction becomes hard.
  2. Data Poisoning & Bias: Even open datasets can be corrupted. If your DAI is training or making decisions on skewed inputs, governance outcomes can skew too – whether through whales, bad actors, or mis-design. (A basic input-sanity check is sketched after this list.)
  3. Governance Capture / Centralisation Under the Hood: A token-governed system where only a few hold tokens, where model training is done by one party, or where inference happens in secret defeats the point. You may think you are running DAI but be effectively CAI in disguise.
  4. Scalability & Resource Constraints: Running truly decentralised inference or training is expensive – distributed compute, consensus, synchronisation, storage, and more. The cost and latency trade-offs are real.
  5. Accountability & Legal Risk: Who is responsible when things go wrong – bad trades, loss of treasury, agent behaviour contradicting DAO mandates? If the logic behind decisions is opaque, you risk legal liability, reputational damage, and loss of user trust.
  6. Security: New attack surfaces appear – oracles, smart contracts, off-chain compute shards, and more. AI-aided fraud and scams are also rising fast: generative AI is being used to produce highly credible phishing and pig-butchering scams.
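
On risk 2, the cheapest defence is to never consume inputs unchecked. The TypeScript sketch below applies a basic median-absolute-deviation filter to a batch of incoming values (say, oracle price reports) before an agent acts on them. Real poisoning defences go much further; this only shows the principle.

```typescript
// Drop values that sit far from the median of a batch before an agent
// trains on them or trades against them.

function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Keep points within `k` median-absolute-deviations of the median.
function filterOutliers(xs: number[], k = 3): number[] {
  const m = median(xs);
  const mad = median(xs.map((x) => Math.abs(x - m))) || 1e-9; // avoid /0
  return xs.filter((x) => Math.abs(x - m) / mad <= k);
}

// A single poisoned feed value is dropped before it can skew a decision:
console.log(filterOutliers([101, 99, 100, 102, 5000])); // [101, 99, 100, 102]
```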

What Founders Should Do Now

To build defensible, trusted systems:

  • Be deliberate in choosing where CAI, DAI, or a hybrid makes sense. Map data ingestion, training, inference, governance, and transparency.
  • Bake in explainability and auditability. Even CAI modules should expose logs and decision logic.
  • Use token- or reputation-based governance for decentralised parts – ensure stakeholders genuinely participate.
  • Start with hybrids: e.g. decentralised data + central inference + citizen-audit. Shift deeper into decentralisation as costs and tech allow.
  • Stress-test agents against adversarial inputs, poisoned data, and emergent behaviours. Always design fallback or override mechanisms (a minimal guard sketch follows this list).
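
As a minimal illustration of the auditability and override points above, the TypeScript sketch below wraps every agent action in a guard that writes an auditable decision log and can be halted by a human or DAO kill switch. All names and limits are hypothetical.

```typescript
// Every action passes through a guard that logs it (allowed or not) and
// respects a kill switch that governance can flip at any time.

interface AgentAction {
  kind: string;
  amountUsd: number;
  rationale: string; // the agent's stated reason, kept for audits
}

class GuardedExecutor {
  private halted = false;
  private readonly log: Array<{ ts: number; action: AgentAction; allowed: boolean }> = [];

  constructor(private maxPerActionUsd: number) {}

  // Governance (multisig, DAO vote, on-call human) can halt the agent.
  halt(): void {
    this.halted = true;
  }

  execute(action: AgentAction): boolean {
    const allowed = !this.halted && action.amountUsd <= this.maxPerActionUsd;
    this.log.push({ ts: Date.now(), action, allowed });
    if (allowed) {
      // ...perform the trade / transaction here...
    }
    return allowed;
  }

  auditTrail() {
    return [...this.log];
  }
}

const guard = new GuardedExecutor(10_000);
guard.execute({ kind: "swap", amountUsd: 2_500, rationale: "rebalance" }); // allowed
guard.halt();
guard.execute({ kind: "swap", amountUsd: 2_500, rationale: "rebalance" }); // blocked, still logged
```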

You may ask why this all matters. Well, because this isn’t just about tech.

It’s about power, trust, and risk.

If you build with CAI without guardrails, you may gain performance, but risk replicating the very centralisation Web3 set out to avoid. If you go full DAI without planning, you end up with cost overruns, slow feedback loops, and frustrated users.

But the sweet spot? Founders who build hybrid, who bake in transparency, governance, and safety. Those are the teams that will earn trust, attract users, survive audits, and avoid costly missteps.

At Linum Labs, we’re observing and building at this AI + Crypto convergence.  
