DeepSeek Shatters the Final Bubble of the Agent Track, DeFAI May Give Birth to New Life, and Industry Financing Methods Are Set to Transform. This article is authored by Kevin, a researcher at BlockBooster, and republished by Foresight News.
TLDR:
The emergence of DeepSeek has shattered the computational power moat, with open-source models leading the way in computational optimization as a new direction;
DeepSeek benefits the model and application layers in the industry’s upstream and downstream, negatively impacting computational power protocols in infrastructure;
DeepSeek inadvertently punctured the last bubble of the Agent track; DeFAI is the most likely place for new life to emerge;
The zero-sum game of project financing is expected to come to an end, with community launches and a small number of VC investments potentially becoming the norm.
The shock triggered by DeepSeek will reverberate across the upstream and downstream of the AI industry this year. DeepSeek has shown that consumer-grade graphics cards can accomplish model training tasks that previously required high-end GPUs. The first moat surrounding AI development, computational power, has begun to collapse. With algorithmic efficiency compounding at a staggering 68% per year while hardware performance improves at the comparatively slower cadence of Moore's Law, the entrenched valuation models of the past three years no longer apply. The next chapter of AI will be opened by open-source models.
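To see why the old valuation models break down, a quick back-of-the-envelope comparison helps. The 68% annual algorithmic improvement figure is from this article; the hardware cadence of doubling every two years is an illustrative assumption, not a claim from the source:

```python
# Back-of-the-envelope: algorithmic efficiency compounding vs. hardware gains.
# Assumptions: 68%/year algorithmic improvement (from the article);
# hardware doubling every 2 years (illustrative Moore's-Law cadence).

def algo_gain(years: float, rate: float = 0.68) -> float:
    """Cumulative algorithmic efficiency multiplier after `years` years."""
    return (1 + rate) ** years

def hw_gain(years: float, doubling_period: float = 2.0) -> float:
    """Cumulative hardware performance multiplier after `years` years."""
    return 2 ** (years / doubling_period)

for y in (1, 3, 5):
    print(f"{y}y: algorithms x{algo_gain(y):.1f}, hardware x{hw_gain(y):.1f}")
```

Under these assumptions the gap widens every year (roughly 4.7x vs. 2.8x after three years), which is why a moat built purely on hardware access erodes.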
Although AI protocols in Web3 are fundamentally different from those in Web2, they inevitably bear the influence of DeepSeek, which will foster new use cases across the upstream and downstream of Web3 AI: infrastructure layer, middleware layer, model layer, and application layer.
**Mapping the Collaborative Relationships of Upstream and Downstream Protocols**
Through analyzing technical architectures, functional positioning, and practical use cases, I have divided the entire ecosystem into four layers: infrastructure layer, middleware layer, model layer, and application layer, and clarified their dependencies:
**Infrastructure Layer**
The infrastructure layer provides decentralized underlying resources (computational power, storage, L1), where computational power protocols include Render, Akash, io.net, etc.; storage protocols include Arweave, Filecoin, Storj, etc.; and L1 includes NEAR, Olas, Fetch.ai, etc.
Computational power layer protocols support model training, inference, and the execution of frameworks; storage protocols store training data, model parameters, and on-chain interaction records; L1 optimizes data transmission efficiency through dedicated nodes, reducing latency.
**Middleware Layer**
The middleware layer acts as a bridge connecting infrastructure to upper-layer applications, providing framework development tools, data services, and privacy protection. Data labeling protocols include Grass, Masa, Vana, etc.; development framework protocols include Eliza, ARC, Swarms, etc.; and privacy computing protocols include Phala, etc.
The data service layer provides fuel for model training, while development frameworks rely on the computational power and storage of the infrastructure layer, and the privacy computing layer protects data during training/inference.
**Model Layer**
The model layer is used for model development, training, and distribution, with open-source model training platforms like Bittensor.
The model layer relies on the computational power of the infrastructure layer and the data from the middleware layer; models are deployed on-chain through development frameworks; the model marketplace delivers training results to the application layer.
**Application Layer**
The application layer consists of AI products aimed at end-users, where Agents include GOAT, AIXBT, etc.; and DeFAI protocols include Griffain, Buzz, etc.
The application layer calls the pre-trained models from the model layer; it relies on privacy computing from the middleware layer; complex applications require real-time computational power from the infrastructure layer.
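The four-layer dependency map above can be sketched as a small data structure. The protocol names are examples cited in this article, and the dependency edges mirror the relationships just described; this is an illustration of the taxonomy, not any project's actual API:

```python
# Minimal sketch of the four-layer Web3 AI stack described above.
# Protocol names are examples from the article; edges mirror the stated
# dependencies (application -> model -> middleware -> infrastructure).

LAYERS = {
    "infrastructure": ["Render", "Akash", "io.net", "Arweave", "Filecoin", "NEAR"],
    "middleware":     ["Grass", "Masa", "Vana", "Eliza", "ARC", "Phala"],
    "model":          ["Bittensor"],
    "application":    ["GOAT", "AIXBT", "Griffain", "Buzz"],
}

# Direct dependencies of each layer.
DEPENDS_ON = {
    "application":    ["model", "middleware", "infrastructure"],
    "model":          ["middleware", "infrastructure"],
    "middleware":     ["infrastructure"],
    "infrastructure": [],
}

def transitive_deps(layer: str) -> set[str]:
    """All layers a given layer ultimately relies on."""
    seen: set[str] = set()
    stack = list(DEPENDS_ON[layer])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDS_ON[dep])
    return seen

print(transitive_deps("application"))
```

The point of the sketch: every upper layer ultimately rests on the infrastructure layer, which is why shocks to computational power protocols propagate upward through the whole stack.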
**DeepSeek May Have a Negative Impact on Decentralized Computational Power**
According to a survey, approximately 70% of Web3 AI projects actually call upon OpenAI or centralized cloud platforms, with only 15% using decentralized GPUs (such as Bittensor subnet models), and the remaining 15% employing a hybrid architecture (sensitive data processed locally, general tasks on the cloud).
The actual usage rate of decentralized computational power protocols falls far short of expectations and does not match their market value. The reasons for the low usage include: Web2 developers carry over their existing toolchains when migrating to Web3; decentralized GPU platforms have yet to achieve a price advantage; and some projects evade data compliance audits under the banner of "decentralization" while still relying on centralized clouds for actual computation.
AWS/GCP dominate over 90% of the AI computational power market, while Akash's equivalent computational power amounts to only 0.2% of AWS's. The moat of centralized cloud platforms includes cluster management, RDMA high-speed networking, and elastic scaling. Decentralized cloud platforms have Web3-adapted versions of these technologies, but shortcomings remain that cannot be fully addressed: latency, since communication delay between decentralized nodes is six times that of centralized clouds; and toolchain fragmentation, since PyTorch/TensorFlow do not natively support decentralized scheduling.
DeepSeek reduces computational power consumption by 50% through sparse training, and its dynamic model pruning enables consumer-grade GPUs to train models with billions of parameters. Market expectations for high-end GPUs have been significantly lowered in the short term, and the market potential of edge computing is being re-evaluated. As shown in the above figure, prior to the emergence of DeepSeek, the vast majority of protocols and applications in the industry ran on platforms like AWS, with only a few use cases deployed on decentralized GPU networks; those use cases leaned on the latter's price advantage in consumer-grade computational power and did not account for the impact of latency.
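DeepSeek's exact training recipe is not reproduced here. As a hedged illustration of where pruning-style sparsity savings come from, here is a minimal magnitude-pruning sketch; the 50% sparsity target mirrors the figure above, but the pruning criterion is a generic one, not necessarily DeepSeek's:

```python
import random

def magnitude_prune(weights: list[float], sparsity: float = 0.5) -> list[float]:
    """Zero out the smallest-magnitude weights so that a `sparsity`
    fraction of entries become zero.

    A generic sparsification sketch; dynamic pruning during training
    (as attributed to DeepSeek in the text) is more involved, but the
    compute saving comes from the same place: zeroed weights can be
    skipped by sparse kernels.
    """
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

random.seed(0)
W = [random.gauss(0, 1) for _ in range(10_000)]
W_sparse = magnitude_prune(W, sparsity=0.5)
print("sparsity:", sum(w == 0 for w in W_sparse) / len(W_sparse))
```

Half the weights are zeroed, so a sparse matrix kernel only has to touch half the entries, which is the intuition behind the "50% less compute" claim.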
This situation may worsen with the advent of DeepSeek. DeepSeek has lifted the constraints on long-tail developers, and low-cost, efficient inference models will proliferate at an unprecedented rate. Many centralized cloud platforms, and several countries, have in fact already begun deploying DeepSeek. The sharp reduction in inference costs will spawn numerous front-end applications, which in turn will create substantial demand for consumer-grade GPUs. Facing this coming market, centralized cloud platforms will wage a new round of user competition, against the top platforms as well as countless small centralized clouds, and the most direct weapon will be price cuts. It is foreseeable that 4090 prices on centralized platforms will be adjusted downward, which could spell disaster for Web3's computational power platforms. When price is no longer the only moat of decentralized platforms and they too are forced to cut prices, the outcome will be unsustainable for io.net, Render, Akash, and the rest. A price war would destroy their last remaining valuation ceiling, and the resulting death spiral of declining revenues and user attrition may force decentralized computational power protocols to pivot in new directions.
**Specific Implications of DeepSeek for Upstream and Downstream Protocols**
As illustrated, I believe DeepSeek will bring different impacts to the infrastructure layer, model layer, and application layer. In terms of positive impacts:
– The application layer will benefit from the significant reduction in inference costs: more Agent applications can stay online long-term at low cost and complete tasks promptly;
– At the same time, DeepSeek-level model costs allow DeFAI protocols to form more complex Swarms, with thousands of Agents deployed on a single use case; each Agent's division of labor can be fine-grained and clear, greatly improving user experience and avoiding misinterpretation of user input;
– Developers at the application layer can fine-tune models, feeding prices, on-chain data and analysis, and governance data into DeFi-related AI applications without incurring high licensing fees;
– The value of the open-source model layer is reaffirmed following the emergence of DeepSeek, with high-end models made available to long-tail developers, stimulating a broad development surge;
– The computational power fortress built around high-end GPUs over the past three years has been thoroughly dismantled, giving developers more choices and establishing direction for open-source models. In the future, the competition among AI models will no longer be about computational power but rather algorithms. This shift in belief will become a cornerstone of confidence for open-source model developers;
– Specific subnets surrounding DeepSeek will proliferate, and model parameters under equivalent computational power will rise, attracting more developers to join the open-source community.
In terms of negative impacts:
– The latency inherent in infrastructure-layer computational power protocols cannot be optimized away;
– Furthermore, a hybrid network composed of A100s and 4090s places higher demands on coordination algorithms, which is not a strength of decentralized platforms.
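The Swarm pattern described among the positive impacts above, many narrowly scoped Agents splitting one user request, can be sketched roughly as follows. The agent names and the keyword-based router are hypothetical illustrations, not any real protocol's API; an actual planner would use a model rather than string matching:

```python
# Hypothetical sketch of a DeFAI-style Swarm: one user request is decomposed
# into subtasks, each routed to a narrowly specialized agent. Agent names and
# the "skill: task" routing convention are illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    skill: str                 # the one subtask this agent handles
    run: Callable[[str], str]  # executes the subtask

def build_swarm() -> list[Agent]:
    return [
        Agent("price-agent", "price", lambda t: f"fetched price for: {t}"),
        Agent("chain-agent", "onchain", lambda t: f"pulled on-chain data for: {t}"),
        Agent("exec-agent", "execute", lambda t: f"submitted transaction: {t}"),
    ]

def route(request: str, swarm: list[Agent]) -> list[str]:
    """Split a request into 'skill: task' parts and dispatch each part
    to the agent whose skill matches. A stand-in for a model planner."""
    results = []
    for part in request.split(";"):
        skill, _, task = part.strip().partition(":")
        agent = next(a for a in swarm if a.skill == skill.strip())
        results.append(agent.run(task.strip()))
    return results

out = route("price: ETH/USDC; onchain: pool depth; execute: swap 1 ETH", build_swarm())
print(out)
```

The design point the article makes is the fine-grained division of labor: because each Agent handles exactly one skill, a complex command degrades into small, cheap, independently verifiable steps, which is only economical once per-inference costs fall.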
**DeepSeek Shatters the Last Bubble of the Agent Track, DeFAI May Give Birth to New Life, and Industry Financing Methods Are Set to Transform**
Agents represent the industry's last hope for AI. The emergence of DeepSeek has lifted computational power constraints and painted an expectation of an application explosion. This should have been a major boon for the Agent track; however, because the track is strongly correlated with the broader industry, the U.S. stock market, and Federal Reserve policy, the last bubble was punctured and the track's market value plunged to the bottom.
In the wave of AI integration with the industry, technological breakthroughs and market games have always coexisted. The chain reaction triggered by NVIDIA’s market cap fluctuations serves as a mirror, reflecting the deep-seated dilemmas within the AI narrative in the industry: from On-chain Agents to the DeFAI engine, the seemingly complete ecological map conceals the harsh reality of weak technological infrastructure, hollowed-out value logic, and capital dominance. The superficially prosperous on-chain ecosystem hides chronic issues: numerous high FDV tokens compete for limited liquidity, obsolete assets cling to life through FOMO sentiment, and developers are trapped in a PVP spiral that drains innovative momentum. As incremental funding and user growth hit a ceiling, the entire industry falls into the “innovator’s dilemma”—eager for breakthrough narratives while struggling to escape the shackles of path dependence. This state of tension provides a historic opportunity for AI Agents: it is not only an upgrade of the technological toolbox but also a reconstruction of value creation norms.
In the past year, more and more teams in the industry have found that traditional financing models are failing: VCs are finding it increasingly difficult to sustain small stakes with tight control over funding rounds. Under the triple pressure of VCs tightening their pockets, retail investors refusing to buy in, and high barriers to top-tier exchange listings, a new set of strategies better suited to bear markets is emerging: collaborations with top KOLs plus a small number of VCs, large-scale community launches, and low market cap cold starts.
Innovators like Soon and Pump Fun are paving new paths through "community launches"—partnering with top KOLs for endorsements, distributing 40%-60% of tokens directly to the community, launching projects at valuations as low as $10 million FDV, and achieving financing in the millions of dollars. This model builds consensus FOMO through KOL influence, allowing teams to lock in profits early while exchanging high liquidity for market depth. Although they forgo short-term control advantages, they can repurchase tokens at lower prices in bear markets through compliant market-making mechanisms. In essence, this is a shift in power structures: from a VC-led pass-the-parcel game (institutional takeover – major listings – retail purchases) to a transparent game of community consensus pricing, forming a new symbiotic relationship between project initiators and communities through liquidity premiums. As the industry enters a period of transparency revolution, projects that cling to traditional control logic may become relics of a bygone era amid the wave of power shifts.
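To make the community-launch numbers concrete: the $10 million FDV and the 40%-60% community allocation are figures from this article, while the fraction of supply actually sold to raise funds (20% here) is an illustrative assumption:

```python
# Back-of-the-envelope on the community-launch model described above.
# FDV and community allocation come from the article; the share of supply
# sold to raise funds (20%) is an illustrative assumption.

def implied_raise(fdv_usd: float, sold_fraction: float) -> float:
    """Funds raised if `sold_fraction` of total supply sells at FDV pricing."""
    return fdv_usd * sold_fraction

FDV = 10_000_000   # $10M fully diluted valuation (from the article)
COMMUNITY = 0.5    # midpoint of the 40%-60% community allocation
SOLD = 0.2         # assumed share of supply sold for funding

print(f"community allocation value: ${FDV * COMMUNITY:,.0f}")
print(f"implied raise:             ${implied_raise(FDV, SOLD):,.0f}")
```

Even under these modest assumptions the implied raise lands in the low millions, consistent with the "financing in the millions of dollars" the article describes.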
The short-term market pain precisely confirms the irreversibility of the long-term technological tide. When AI Agents reduce on-chain interaction costs by two orders of magnitude, and when adaptive models continuously optimize the capital efficiency of DeFi protocols, the industry will be poised for the long-awaited Massive Adoption. This transformation relies not on conceptual hype or capital incubation, but on the technological penetration of genuine demand—just as the electricity revolution did not stall because light-bulb companies went bankrupt, Agents will become the true golden track after the bubble bursts. DeFAI may indeed be the fertile ground for new life: as low-cost inference becomes commonplace, we may soon see use cases where hundreds of Agents are combined into a single Swarm. With equivalent computational power, the significant increase in model parameters ensures that Agents in the open-source era can be fine-tuned more thoroughly; even complex user commands can be broken down into task pipelines that individual Agents execute fully. Each Agent optimizing its own on-chain operations may raise the overall activity of DeFi protocols and deepen liquidity. More complex DeFi products led by DeFAI will emerge, and this is precisely where new opportunities arise after the previous bubble's collapse.