
AI and data decentralization


In the current wave of AI development, Meta has come out competing on a different principle: open-sourcing AI.

Some worry that open-sourcing AI would enable illicit actors (both state and private) to do harm.

I have not formed an opinion about Meta’s open-sourcing of AI, but pushing back with a comparison between bioweapons and AI is dubious at best.

When it comes to physical weapons, decentralization is proliferation.  

You get bombed not because you agreed with the bomb but simply because someone made the bomb and was willing to use it. Your disagreement with it is irrelevant.

More important, when a bomb explodes, the fact that many others also know how to make bombs but have chosen to use that knowledge for peaceful purposes is equally irrelevant. When a bomb explodes in your face, you don’t get to choose an alternative reality.

Information is different. 

When it comes to information, there is a kind of decentralization that positively shapes its final effect. The question is what kind of decentralization. It is not an easy question (see below), but it is at least a subject of study and choice.

As long as the truth exists and is not suppressed, you always get to choose, despite the existence of wrong information. One may still be influenced or infected by misinformation, but at least one has an option.

Having options means freedom. The existence of options, therefore, is key.

But not all options are equal when measured by how much freedom they allow. Decentralization is helpful and even necessary. However, the concept of decentralization is widely misunderstood.

True decentralization is determined not by the number of participants but by the positioning of the participants’ interests and the competitiveness among them. A few competitors whose interest lies in providing a reliable utility service rather than in manipulating information and transactions produce a far more decentralized system than thousands of participants whose business models are built on manipulating information.

The worst future of AI

The worst future of AI is one in which people are completely surrounded by one or just a few sources of information that deliberately permit, or even actively fabricate, false information. This is true whether the sources are governments or corporations. In that scenario, people no longer have access to the source of truth.

For this reason, the following is critical: 

Truth must remain active and competitive and must not be blotted out or cut off. The truth competes. Lies will always be there; their presence is part of the normal condition that tests the truth. It is also a test of people’s intelligence and wisdom.

Take the area of blockchain: the truth is still alive and fighting, even though lies have thrived and grown to cover 99% of the information space. It could be worse if we lived in a more censored environment. Even at only 1%, the truth still exists and is still accessible to many people. If it is the real truth, it will prevail.

How can truth be competitive?

But the question is, how does the world ensure that truth is competitive?  

For truth to be competitive, it requires the following conditions:

(1) Decentralization of data ownership. Data must be owned by parties who have both the incentive and the capability to keep the truth. The current data aggregators, whether governments or corporations, are not this type of owner.

(2) Unification of data infrastructure as a utility. Without such unification, decentralized data will not be competitive in terms of efficiency and aggregate power.  

People who believe in decentralization may be surprised to learn that the unification of data infrastructure as a utility is necessary, as unification appears to be a form of centralization. But they shouldn’t be.

It is important to understand the multilayered reality. The rich varieties in the upper layers are always based on unification in the base layer. Even the universe is designed according to this principle, with a unified law of physics at the base and unlimited varieties in actual presentations. See, for example, A Cybernetic View of Bitcoin’s System Design Choices and One Blockchain as The Base Layer of IoV.

The key here in the context of AI is “utility.” When services become infrastructure, and infrastructure becomes a utility, the operators’ businesses no longer rely on manipulating information but on providing a reliable service. The base-layer utility providers of electricity and the Internet are good examples.

When it comes to AI, none of the current major players, including Meta, which bravely open-sources its AI, is a utility provider. All of them are aggregators whose business models are based on data manipulation rather than truth discovery. The one that comes closer than the others is Palantir. But even in the case of Palantir, it is fundamentally corruptible without the support of the universal blockchain. (See One Blockchain as the Base Layer of the New Internet.)

For more reading, I recommend my two-volume book:

BIT & COIN: Merging Digitality and Physicality
