When computers began to spread 70 years ago, people could not imagine living a second life in a digital world. Today, virtual avatars keep pushing the boundaries of identity in an online world where individuals leave their mark as data.
According to Statista, connected devices worldwide are projected to number 30.9 billion units by 2025. Connected devices and services generate a substantial portion of data; the International Data Corporation predicts that by 2025 global data will measure 163 zettabytes, with one zettabyte equaling 1 trillion gigabytes. This is ten times the 16.1 zettabytes of data generated in 2016.
How can data be efficiently accessed and utilized? The answer is artificial intelligence (AI).
Sixty years of artificial intelligence development
During a summer workshop at Dartmouth College in 1956, a group of young scientists, including John McCarthy and Marvin Minsky, coined the term “artificial intelligence.”
Today, most people are familiar with AI; it has become embedded in daily routine. The convenience and progress AI brings are everywhere, from online shopping to industrial production.
Deloitte Touche Tohmatsu’s 2019 white paper on global AI development predicted that the global AI market will exceed $6 trillion by 2025, growing at a compound annual rate of 30% from 2017 to 2025. A research report released by PricewaterhouseCoopers suggests that AI adoption will lift global gross domestic product by 14% by 2030, contributing an additional $15.7 trillion to the world economy, more than the current output of China and India combined.
The field of AI has flourished for the past 60 years. With the advent of the fourth industrial revolution, however, the ceiling of conventional AI has become increasingly apparent.
The looming roadblocks
For AI to become both the game-changer and the core technology of the coming technological transformation, three key elements are indispensable: data, algorithms and computing power.
The first challenge is the pressure of data governance and privacy. In 2018, the European Union introduced the General Data Protection Regulation, and in 2021, China’s Data Security Law and Personal Information Protection Law took effect. The latter focuses on Chinese citizens’ rights and interests in privacy, dignity and property, defining personal information as any information, recorded electronically or otherwise, that identifies or can identify a natural person. These tightened rules on personal privacy and data are meant to prevent data misuse.
Data privacy is not the only obstacle: the cost of computing power remains high, and data and algorithms are concentrated in the hands of a few centralized players. How should AI clear these roadblocks and keep progressing?
Artificial intelligence for all
Blockchain and privacy-preserving computation have given AI new inspiration.
Blockchain’s consensus algorithms help coordinate multi-party collaboration in AI systems, and its technical features make data assetization possible. It can incentivize contributing a wider range of data, algorithms and computing power to create better AI models.
Recently, one product has shown users and the market a new approach to making AI universally accessible.
The PlatON Privacy-Preserving Computation Network (a tentative name) is a decentralized data-sharing and privacy-preserving computation infrastructure network. It was designed to integrate the three elements of AI (computing power, algorithms and data) into a single, usable product. Anyone can access the platform as a data owner, data user, algorithm developer or computing power provider and complete various tasks. Such a decentralized way of gathering the data, algorithms and computing power necessary for computation creates a new, safe AI model.
As a commercially available product, PlatON serves both enterprises and individuals. For example, as data owners, individuals and institutions can join as data nodes and participate in the computing tasks published on the platform. This approach enables ownership identification, pricing, data protection and assetization, all with privacy preserved.
As computing power providers, individuals and organizations can contribute computing power to complete others’ computing tasks, turning idle capacity into publicly available resources that earn rewards.
As algorithm providers, AI developers can publish algorithms, and every computing task completed with their algorithms generates earnings. Together, these roles shape a free, open and sustainable AI market, sketched conceptually below.
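To make the incentive structure concrete, here is a minimal, purely illustrative sketch of how a published computing task could tie the three roles together. All of the names (ComputeTask, settle_rewards) and the even reward split are hypothetical assumptions made for illustration; they are not PlatON’s actual interfaces or pricing model.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeTask:
    """A computing task published on the marketplace (hypothetical model)."""
    publisher: str
    reward: float                                    # total reward offered by the publisher
    data_nodes: list[str] = field(default_factory=list)
    compute_nodes: list[str] = field(default_factory=list)
    algorithm_provider: str | None = None

    def settle_rewards(self) -> dict[str, float]:
        """Split the reward among contributors. An even split is an
        illustrative assumption, not PlatON's actual pricing model."""
        parties = self.data_nodes + self.compute_nodes
        if self.algorithm_provider:
            parties.append(self.algorithm_provider)
        per_party = self.reward / len(parties)
        return {p: per_party for p in parties}

# A publisher posts a task; each role contributes and shares in the reward.
task = ComputeTask(publisher="task_publisher", reward=900.0)
task.data_nodes += ["data_owner_a", "data_owner_b"]   # contribute private data
task.compute_nodes += ["idle_gpu_node"]               # contribute computing power
task.algorithm_provider = "model_developer"           # contribute the algorithm

print(task.settle_rewards())  # each contributor earns a share of the task reward
```

In practice, how to price each party’s contribution fairly is one of the open questions the project itself raises below.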
PlatON employs multiple data-privacy and preservation measures: collaborative computing built on secure multi-party computation, zero-knowledge proofs, homomorphic encryption, verifiable computation, federated learning and other cryptographic technologies protects local data. Computing results, such as trained AI models, are also shielded from leakage, and the product can efficiently execute smart contracts and run popular deep-learning frameworks for versatility, compatibility and high availability.
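To give a sense of how secure multi-party computation lets nodes compute on data without ever seeing it, here is a toy sketch of additive secret sharing, one of the basic primitives behind MPC. It is a simplified classroom example, not PlatON’s actual protocol, and the function names are our own.

```python
import secrets

PRIME = 2**61 - 1  # a large prime; all arithmetic is done modulo this field

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME.
    Any subset of fewer than n shares reveals nothing about `value`."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

# Two data owners secret-share their private inputs across three compute nodes.
a_shares = share(42, 3)
b_shares = share(58, 3)

# Each node adds its two local shares; no single node ever sees 42 or 58.
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]

print(reconstruct(sum_shares))  # 100: the sum, computed without exposing inputs
```

Homomorphic encryption and zero-knowledge proofs extend the same idea: results can be computed, or claims verified, without the raw inputs ever leaving their owners.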
The product is undergoing a closed beta test, and a platform this large and complex will inevitably face challenges:
- How can multiple parties price data?
- How can data be accurately captured and applied as it circulates among various parties?
- How can AI developers be incentivized to provide core algorithms?
However, a data network of this complexity has never been built before. Integrating and applying new technologies takes time, and PlatON has taken a step forward in data commercialization.
To learn more about this project, please visit our website.