Can blockchain make AI accountable?

We are in the midst of an artificial intelligence (AI) revolution, and the pace of change keeps accelerating. It’s exciting but also frightening, given the uncharted territory ahead as the tech grows even more powerful and starts to make its own decisions. When we reach that point, how can we trust the decisions AI is making? Who is accountable for those decisions?

It is already difficult to trust what we see online these days, with deepfakes rippling through social media, the rise of fake news, and all sorts of AI-driven scams. Yet while the latest tech is responsible for this decline in trust, perhaps it is also the latest tech that will restore our trust in a rapidly evolving digital world.


“Now that we are flooded with social media and AI, that trust is eroding. We have to find new forms of regaining trust in what we see so we understand what’s correct and what’s on the side of consensus,” Sebastian Thrun, Founder of Udacity, shared after his opening keynote at the London Blockchain Conference 2025.

“Society is run by trust, and we’ve always, when we invented new technologies, invented new sources of trust,” he pointed out.

AI pioneers like Thrun view accountability as a cornerstone of AI’s evolution and of its interactions, present and future, with humanity. Consider Thrun’s work in agentic AI and self-driving cars as an example: for an AI-powered future to work, we must be able to trust AI’s decision-making with certainty.


“I think with all the excitement around AI, we already see all the risks that arise with it. We can think about monopolization that comes from just a few companies that control all the data. And we have very little clue how this data will be used,” shared Polina Vertex, Senior Researcher at Cambridge Centre for Alternative Finance (CCAF).

“If something goes wrong, how do we investigate what happened? Is it the data that went into the model? Is it a model? Is it the human who created the prompt, interacted with the model in their own way?” she asked.

Thrun’s response? It’s the people who should be held accountable, not the AI.

“AI is just the tool that’s being used by people. The people are the perpetrators. If the decision is made by the machine, it’s a technical error, and the company responsible for this should be accountable as an entity,” said Thrun.


Now that we know who is accountable for AI’s actions, we need transparency regarding the data used to train AI models and the decisions those models make. This is where blockchain technology comes in, offering the tamper-evident record-keeping the challenge demands.

“Blockchain can play an incredibly important role by just journaling the truth and putting up a journal that’s unalterable in the world that really lays out what happened,” explained Thrun.

Blockchain technology “journals the truth” by providing a system that immutably logs records. This means that every single AI data source and decision can be logged on a tamper-proof ledger, providing a transparent history for users, regulators, and other stakeholders.
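The core mechanism here is an append-only log in which each entry commits to the hash of the one before it, so altering any past record breaks every subsequent link. As a minimal sketch (the class and field names are illustrative, not any particular ledger’s API), this is roughly what that tamper-evidence looks like:

```python
import hashlib
import json
import time


def _hash(payload: dict) -> str:
    """Deterministically hash a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class AuditLog:
    """A minimal append-only log: each entry commits to the previous
    entry's hash, so changing any past record invalidates the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        ts = time.time()
        entry = {"event": event, "prev": prev, "ts": ts}
        entry["hash"] = _hash({"event": event, "prev": prev, "ts": ts})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            expected = _hash({"event": e["event"], "prev": e["prev"], "ts": e["ts"]})
            if e["prev"] != prev or e["hash"] != expected:
                return False  # a record was altered after the fact
            prev = e["hash"]
        return True
```

An AI pipeline could append an entry for each data source ingested and each decision made; a real blockchain adds distributed consensus on top, but the hash-chaining above is what makes the history tamper-evident.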

“AI essentially is a product of the data that’s been fed to it, and if I don’t understand the data on which the model’s been trained, I have a difficult time trusting it,” shared Scott Zoldi, Chief Analytics Officer of FICO, who recently announced 101 AI and software patents granted.

“We need to audit and understand the data on which the model was built, the assumptions on which it was built, and then you establish trust. Blockchain is going to be very important because it provides an opportunity to have an immutable record of what data went into the model, what the model is supposed to do, how it was developed, and how it’s tested,” he revealed.
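In practice, such a record can be a single digest that commits to the training data, the model artifact, and the development metadata, with only that digest anchored on-chain and the bulky files kept off-chain. A hedged sketch (the function and field names are hypothetical, not FICO’s method):

```python
import hashlib
import json


def model_provenance_record(datasets: dict, model_bytes: bytes, metadata: dict) -> str:
    """Return one digest committing to training data, model weights, and
    development metadata. Anchoring this digest on a blockchain lets an
    auditor later confirm that none of the inputs were swapped out."""
    h = hashlib.sha256()
    for name in sorted(datasets):  # sort so the digest is order-independent
        h.update(name.encode())
        h.update(hashlib.sha256(datasets[name]).digest())
    h.update(hashlib.sha256(model_bytes).digest())
    h.update(json.dumps(metadata, sort_keys=True).encode())
    return h.hexdigest()
```

Because hashing is deterministic, the same data, weights, and metadata always reproduce the same digest, while any change to a single byte produces a different one, which is exactly the auditability Zoldi describes.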


While we can’t move forward safely without blockchain, there is one more piece to guaranteeing accountability in AI at scale. We also need verifiable identities across all systems, and this is where IPv6 (Internet Protocol Version 6), the networking layer, plays a crucial role.

“It is the underlying infrastructure. It allows all these elements to communicate with each other and to understand where they’re coming from. And with appropriate encryption technology, PKI, and other things, then you can really have a valid source that you can trace,” explained IPv6 expert John Lee, CTO of Internet Associates.

Unlike IPv4, whose limited address space forces most devices behind network address translation, IPv6 provides “end-to-end” addressing for every AI agent, blockchain node, person, and device. By combining blockchain and IPv6, we can establish trust at scale.

“IPv6 gives you peer-to-peer communications. So with encryption and with digital identity and various, I would call it tokens, you can use that to categorize the data and have it better monitored and better tracked,” Lee added.
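The scale Lee is relying on is easy to check: IPv6 addresses are 128 bits, so even a single standard /64 subnet holds 2^64 addresses, enough to give every agent, node, and device its own globally unique endpoint. A quick illustration with Python’s standard `ipaddress` module (the addresses use the reserved documentation prefix, not real hosts):

```python
import ipaddress

# A single /64 IPv6 subnet holds 2**64 addresses -- room for every AI
# agent, blockchain node, person, and device to have its own endpoint.
net = ipaddress.ip_network("2001:db8::/64")  # documentation prefix (RFC 3849)

# A hypothetical AI agent's address within that subnet:
agent_addr = ipaddress.ip_address("2001:db8::42")
assert agent_addr in net
assert net.num_addresses == 2 ** 64
```

The identity layer itself (PKI, digital certificates) sits on top of this addressing; IPv6 supplies the unique, traceable endpoint that the certificate can be bound to.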

So, back to our question: can blockchain make AI accountable? The answer is yes, but we also need IPv6 to complete the picture. Blockchain provides transparency, IPv6 provides identity, and the result is AI-driven intelligence we can genuinely trust.

For artificial intelligence (AI) to operate within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.


Watch: Adapting to AI—what entrepreneurs need to know
