Opinion

The Power of Many: Why AI Should Be Open-Source

If you were to ask one of the major AI CEOs (and we can’t think of anyone other than Sam Altman these days) whether they would be willing to release their tech as open-source, I venture that the answer would be no. No because of what it would do to their valuation, no because of what it would do to their competitive advantage, and no because the costs would be too high, among other reasons.

Yet, for exactly these reasons, we should make artificial intelligence open-source for everyone.

While some argue that keeping AI proprietary offers a competitive advantage, I firmly believe that an open-source approach is not only essential for fostering innovation but also vital for shaping the future of AI.

Open-source AI refers to the practice of making artificial intelligence algorithms, models, and software freely available to the public, allowing anyone to view, use, modify, and distribute the code. 

Scary to the bottom line? Perhaps. But improving the bottom line is the lowest and least impressive benefit AI could ever deliver. Instead, we should give more people access to technology that can significantly improve their lives. Commoditising AI is where we'll see the most powerful economic growth and value creation.

Many concerns stand against this position. One primary reason is the fear of losing commercial advantages and intellectual property rights (amusing, given that AI models feed their algorithms on the intellectual property of, for example, artists). Companies investing substantial resources in AI research and development may be hesitant to release their algorithms and models under an open-source licence. They believe that keeping AI technology closed and proprietary allows them to maintain a competitive edge, protect their investments, and monetise their innovations.

Another concern revolves around security and the potential misuse of AI technologies. Detractors argue that open-sourcing AI exposes vulnerabilities, making it easier for malicious actors to exploit or manipulate the technology. The fear of unauthorised access, data breaches, and the creation of AI-powered weapons or surveillance systems is real. Critics argue that stringent control mechanisms are necessary to ensure that AI remains in the hands of responsible entities capable of safeguarding its ethical use.

Yet there is no sure way of knowing whether the right people have the right access to the right kinds of AI right now. Data breaches, the use of AI to build weapons, and waves of AI bots going online to push geopolitical causes are happening as we speak. Instead of citing these incidents as reasons to withhold AI from everyone, we should see them as reasons to make access to AI algorithms widely available. Making AI open-source means more eyes on the problems we don't yet see. It's not just about creating another tool but about putting the right incentives in place. It's about spreading knowledge so that more people (think: engineers, academics, and researchers who have less venture capital but strong skills) can spot risks and experiment safely.

Beyond countering the malicious use of AI, history has shown that openness fosters innovation rather than hindering it: Linux, the Apache web server, and Python all began as open projects and now underpin much of modern computing. When AI projects are open-source, a global community of developers, researchers, and enthusiasts can contribute their expertise, ideas, and improvements. By making AI tools freely available, developers and organisations with limited resources can leverage state-of-the-art algorithms without substantial financial investment. This inclusivity facilitates the widespread adoption of AI, benefiting a broader range of industries and applications.

Making AI open-source allows for greater scrutiny and understanding of the code, though it doesn't entirely demystify how AI makes its decisions: the intricate workings of complex models, particularly deep learning systems, often remain a “black box” even when their code and weights are public. Still, transparency is essential for building trust in AI systems and ensuring accountability. Users can scrutinise the underlying mechanisms, which helps mitigate the risk of unintended consequences and promotes responsible AI development. By enabling transparency and peer review, we establish a system of checks and balances that guards against bias, discriminatory outcomes, and unethical use cases. Open-sourcing is a necessary step towards understanding these systems, even if it is not sufficient on its own.

An AI sector available to all brings many risks, but I believe it offers twice as many opportunities for growth and pushes the boundaries of development even further. In embracing open-source AI, I affirm that innovation should not be stifled or limited to a select few. Instead, I embrace the philosophy that progress is best achieved through shared knowledge and collaboration. By leading the charge in open-source AI development, we give ourselves a bigger and better opportunity to shape the future and to ensure that advancements align with the best interests of everyone.

About the Author
Nikhil Vadgama
Co-founder of the DLT Science Foundation and deputy director of the UCL Centre for Blockchain Technologies