DePIN (decentralized physical infrastructure networks) aims to use well-designed token incentives to promote the construction and paid circulation of valuable physical hardware resources in the real world, narrowing the gap between the physical and digital worlds. Its core concept runs in both directions: physical hardware earns tokens on the blockchain, and tokens in turn buy access to hardware resources. Currently, the entire track is valued at around 9 billion US dollars and is expected to grow to 3.5 trillion US dollars by 2028.
This article therefore takes a deep look at the AI computing resource leasing segment of the DePIN track, focusing on a potential unicorn project: UtilityNet (hereinafter "Utility").
Utility is a public blockchain project focused on decentralized distribution of AI computing power. It has quietly run a beta test for nearly a year, with an anonymous team and no institutional funding. Utility's token distribution is intriguing: almost all tokens (97%) are produced by computing power contributors (miners). The anonymous team, together with the DAO (comprising the code-contribution organizations), retains only 2% of the tokens, and the remaining 1% was burned at the start of the beta test. According to CoinGecko data, the market capitalization of Utility's test-phase token UNC (UtilityNet Coin) is around 20 million US dollars, having surged up to 7x over the past 120 days.

Utility aims to bring substantial idle computing resources into the blockchain network through decentralized token incentives, rewarding tokens in proportion to contributed computing power. Users who need AI computing power can purchase tokens to rent it for AI training or inference. Notably, the Utility network covers not only AI computing resources but also general-purpose and heterogeneous computing. Currently, the main chips that can join the network under Utility's BDC (Blockchain Defined Chipset) definition are a few TPU chips from Sophon (e.g., the BM1684X), which are well suited to large-scale AI workloads.
Utility’s proposed PoCI consensus mechanism aims to solve the challenge of distributing UNC by proving computing resources. If it relied on PoW-like algorithms (such as SHA-256 or RandomX), proving computing resources would itself consume them, contradicting the philosophy of DePIN; yet a permissioned admission rule would contradict decentralization. PoCI sidesteps both problems by overturning existing algorithm mechanisms. It establishes a chipset design protocol (the BDC protocol stack) and admits only qualified chips. Each BDC chip generates private knowledge from a TRNG (True Random Number Generator) and burns it in physically at the transistor level (5-12 nanometer process). A hybrid encryption mechanism combining RSA/ECC and AES units lets the chip derive a blockchain address uniquely bound to itself. Only the chip’s owner can complete the challenge proof (by decrypting the hybrid-encrypted digital signature) through the chip’s driver. Notably, the chip never exposes or caches the signing private key at the pin or driver level; only by decapping the chip and scanning it with tunneling microscopy could an attacker attempt to extract the key, giving the BDC chip strong hardware security. This protocol does limit the variety of chips currently supported (a few TPU models). At this stage, Utility’s TPU chips provide powerful INT8 computing power with built-in cache, capable of running inference for large language models such as LLaMA and GLM, and compatible with Stable Diffusion XL and other CV models. Sophon’s multi-core cards also run Mistral’s mixture-of-experts (MoE) models well.
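The exact PoCI signature scheme is not public. As a minimal sketch of the challenge-response idea, the toy example below uses textbook RSA with tiny primes (utterly insecure, purely illustrative; all function names are hypothetical) to show how a private exponent sealed inside a chip can answer a random challenge that anyone can verify against the chip-bound blockchain address:

```python
import hashlib
import secrets

# Toy RSA parameters. A real BDC chip would burn a TRNG-derived key
# into the silicon; here we just hard-code tiny primes for illustration.
P, Q = 61, 53
N = P * Q
PHI = (P - 1) * (Q - 1)
E = 17                  # public exponent
D = pow(E, -1, PHI)     # private exponent, "sealed inside the chip"

def chip_address() -> str:
    """Blockchain address derived from the chip's public key (E, N)."""
    return hashlib.sha256(f"{E}:{N}".encode()).hexdigest()[:40]

def chip_sign(challenge: bytes) -> int:
    """Inside the chip: sign a hash of the challenge with the sealed key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(digest, D, N)

def verify(challenge: bytes, signature: int, address: str) -> bool:
    """On the network side: check the signature against the chip-bound address."""
    if hashlib.sha256(f"{E}:{N}".encode()).hexdigest()[:40] != address:
        return False
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == digest

challenge = secrets.token_bytes(16)            # random challenge from the network
sig = chip_sign(challenge)
print(verify(challenge, sig, chip_address()))  # prints True for the genuine chip
```

The point of the pattern is that signing happens only inside the chip, while verification needs nothing but the public address, so proving ownership consumes negligible compute, unlike a PoW puzzle.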
During the beta test phase, Utility’s open-source development team released an official container cloud environment that combines Kubernetes with proprietary chip plugins, further customized on top of the Volcano scheduling engine for various scheduling granularities and resource strategies. It ships both a management side and a user side designed to work together. According to official news, this massive project will be open-sourced during the later test network stages and the mainnet phase; for now, test machine time can be requested by emailing admin@utnet.org. Notably, the user side already streamlines distributed training (based on heterogeneous chips, compatible with Nvidia GPUs), development environments (integrating JupyterLab for large-model developers and bridging the container root console, remote file directory, development environment, and IDE), and model inference (based on the current BDC chips). In the future, the user side will be packaged as a Chrome browser extension for managing development and deployment environments, while the management side will be integrated into the miner client with one-click setup, enabling each computing power provider to deploy an advanced AI container cloud service with simple operations and improving the availability of computing resources.
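Utility's customized scheduler is not yet open-sourced, but since it is built on Volcano, a typical job handed to it would look roughly like the config sketch below. The Volcano fields (`schedulerName`, `minAvailable`, `tasks`) are standard; the image name and the `sophon.com/tpu` device-plugin resource name are hypothetical placeholders:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: llm-inference            # hypothetical job name
spec:
  schedulerName: volcano         # hand scheduling to the Volcano engine
  minAvailable: 1                # gang scheduling: start only when 1 pod fits
  queue: default
  tasks:
    - name: worker
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: inference
              image: example/utility-inference:latest   # hypothetical image
              resources:
                limits:
                  sophon.com/tpu: "1"   # hypothetical TPU device-plugin resource
```

A proprietary chip plugin in this setup would play the role of a Kubernetes device plugin, advertising TPUs as an extended resource that the scheduler can count and bind.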
Moreover, miners can compete on differentiated offerings across the Utility network, enriching its computing power market and maintaining a healthy competitive order: bundling GPUs alongside TPUs, offering stronger CPUs, lower bandwidth latency, favorable geographic locations, IPv4 address mapping, or leasing for fewer UNC tokens. Competition also extends to optimizing and customizing the open-source code to tailor more suitable scheduling strategies.
Additionally, the CoreGalaxy platform, incubated by the foundation, will serve as a dapp for auditing computing resources, connecting to nodes to evaluate container resource usage based on the order wallet and the computing power provider’s wallet. It will award titles and route traffic to computing power providers according to an open ranking and a high-availability-first principle, promoting a healthy cycle in the computing resource ecosystem. Partially centralized ranking and traffic-routing facilities ensure the reliability of traditionally hard-to-measure core indicators such as bandwidth and memory speed. In this competitive environment, only miners who continually provide the highest-quality, most cost-effective, and most differentiated services will win the most orders and the UNC tokens those orders generate.
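CoreGalaxy's actual scoring formula is not published. As a hypothetical sketch of how providers might be ranked on the factors the article lists (bandwidth, latency, availability, UNC price), one could combine them into a weighted score with availability as the primary sort key; every weight and field name below is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    bandwidth_gbps: float       # measured by audit nodes
    latency_ms: float           # lower is better
    availability: float         # fraction of probes answered, 0..1
    price_unc_per_hour: float   # lower is better

# Hypothetical weights; CoreGalaxy's real formula is not public.
WEIGHTS = {"bandwidth": 0.30, "latency": 0.20, "availability": 0.35, "price": 0.15}

def score(p: Provider) -> float:
    """Weighted score where every factor is normalized so higher is better."""
    return (WEIGHTS["bandwidth"] * p.bandwidth_gbps / 10.0
            + WEIGHTS["latency"] * 1.0 / (1.0 + p.latency_ms / 50.0)
            + WEIGHTS["availability"] * p.availability
            + WEIGHTS["price"] * 1.0 / (1.0 + p.price_unc_per_hour))

def rank(providers: list[Provider]) -> list[Provider]:
    """High-availability-first: sort by availability band, then by score."""
    return sorted(providers,
                  key=lambda p: (round(p.availability, 1), score(p)),
                  reverse=True)

miners = [
    Provider("miner-a", 5.0, 20.0, 0.999, 2.0),
    Provider("miner-b", 8.0, 80.0, 0.90, 1.0),
]
print([p.name for p in rank(miners)])  # → ['miner-a', 'miner-b']
```

Bucketing availability before comparing scores implements the "high-availability-first" idea: a cheaper but flakier miner cannot outrank a reliable one on price alone.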
It is foreseeable that after the open beta test in March 2024, UNC can be used for large-scale deployment of models trained across various industries, with the P2P network providing a strong privacy-preserving deployment experience. This will also confirm Utility’s potential to become the largest edge computing network in the future.
To learn more about the project, please visit its website.