Decentralising AI Solves Our Existential Fear of Machine Intelligence
The age of intelligent machines isn’t looming on the horizon; it’s already here. As we evaluate how best to ethically deploy AI, it’s important to understand how the technology works – and one term still unfamiliar to many is the “AI agent.”
An agent, in AI terms, is a module – a piece of a system designed to perform tasks autonomously. And these agents are evolving. On some measures, current agents are estimated to match or surpass an IQ of 100 – by definition, the score half the human population falls at or below. Project this trajectory a few years into the future, and we’re looking at machines that match or exceed human cognitive abilities.
One of the many benefits of these agents is that the ‘less-skilled’ can now harness them to level up their proficiencies. This could mean using generative AI to help write a paper, or using AI-enhanced platforms to discover new patterns that lead to breakthroughs in sectors like drug discovery.
This is the precursor to an interconnected system, where humans and multiple AI agents work together to achieve a task. It’s possible that in these future workflows, 80% of the work is managed by agents, and humans step in for the remaining high-level 20%.
As these agents interact more deeply, however, complexities arise. Consider this scenario: the personal AI agent you’ve tasked with arranging your meals decides you want pizza. It communicates with a food delivery company’s agent, and a pizza you never asked for arrives at your door. Both agents did their jobs based on the information and instructions they had. So who’s to blame for the unwanted pizza?
This scenario raises a question: how do we manage contracts and agreements in a world where decisions are routinely made by AI agents? A challenge that might have seemed hypothetical a few months ago is now pressing.
Enter web3 – specifically, the capacity for smart contracts. Blending web3 and AI is essential here because machine decisions can have tangible and costly consequences. We need mechanisms to define, oversee, and ensure the validity of these interactions. Smart contracts in a decentralised AI system could be the solution.
We can set transparent, immutable rules for agent interactions through these contracts. When an agent decides on a course of action, the rules inscribed in the smart contract determine whether that action is valid. It’s about ensuring that the agents we rely on work for us, not against us, and operate within terms everyone involved can verify.
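To make the idea concrete, here is a minimal sketch in Python of the validation logic such a contract might encode. This is an illustration of the principle, not actual on-chain code, and all names (`AgentContract`, `max_spend`, and so on) are hypothetical:

```python
# Hypothetical sketch: a smart-contract-style rule set that checks an
# agent's proposed action before it takes effect. Illustrative only;
# a real deployment would live on-chain, not in application Python.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen mirrors a deployed contract's immutable rules
class AgentContract:
    max_spend: float                   # spending cap the user agreed to
    allowed_actions: frozenset         # actions the agent may take at all
    requires_confirmation: frozenset   # actions needing explicit user sign-off

    def validate(self, action: str, cost: float, user_confirmed: bool) -> bool:
        """Return True only if the proposed action satisfies every rule."""
        if action not in self.allowed_actions:
            return False
        if cost > self.max_spend:
            return False
        if action in self.requires_confirmation and not user_confirmed:
            return False
        return True

contract = AgentContract(
    max_spend=30.0,
    allowed_actions=frozenset({"order_food", "book_taxi"}),
    requires_confirmation=frozenset({"order_food"}),
)

# The unwanted pizza: the agent acted without the user's sign-off,
# so the contract rejects the order before money changes hands.
print(contract.validate("order_food", cost=15.0, user_confirmed=False))  # False
print(contract.validate("order_food", cost=15.0, user_confirmed=True))   # True
```

Because the rules sit between the agents rather than inside either one, neither agent can unilaterally push an action through – which is precisely the guarantee a smart contract is meant to provide.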
The rise of cognitive machinery brings about existential fears and the threat of a dystopian future – yet the union of decentralised AI and smart contracts offers a reassuring path. The challenges posed by AI’s evolution are real and imminent, as are the solutions presented by web3’s machinery. The allure of AI is strong, but it is vital at this stage to couple our adoption with the governance structures of decentralised systems. Only then can we harness the power of agents, ensuring humans and machines collaborate seamlessly, effectively, and harmoniously.
At the DLT Science Foundation, we’re working on decentralised intelligence projects, and a recent highlight is our grant to Peking University for its new technology lab. Located within the UK campus of Peking University’s HSBC Business School (PHBS), this lab will focus on the intersection of web3, fintech, and AI. The lab, led by Dr. Alastair Moore and Professor Guy Liu, brings together a team of experts, entrepreneurs, and partners to promote commercially viable web3 and AI research.
If you’re an expert in web3/AI or have projects in the domain of decentralised intelligence, we’re eager to connect.