We live in the era of the agentic internet, and it is no longer future tech. Autonomous AI carries out tasks assigned to it once, then keeps doing them every day without waiting for approval. That raises the crucial, nail-biting question: when AI messes up, who is responsible for its mistakes, and who gets held accountable?
Artificial intelligence (AI) is no longer a matter of simple tools, and agentic AI is very different from generative AI. The industry no longer consists of chatbots that merely answer your queries; there are now autonomous systems that choose, act, and learn on their own. When one of them goes wrong, the blame starts circling. The developer points at the company. The company points at the user. The user, in turn, points back at the developer. Meanwhile, the person actually harmed is left without any support.
Real Damage, Real Confusion
In February 2024, a Canadian tribunal ruled on a case in which Air Canada's chatbot had told a grieving passenger he could claim a bereavement discount that did not match the airline's actual policy. The passenger bought tickets based on the wrong information. So who should cover the cost? The airline blamed the AI; the customer blamed the airline. The tribunal ruled that the airline must honour the non-existent policy and pay damages.
Another story: an AI agent mistakenly wired $50,000 to the wrong bank account. The customer was furious and demanded answers. The bank took its time to investigate. The AI vendor, the IT team, the compliance officer, and the executives all split the blame among themselves, and no one took responsibility. The money was missing, but so was accountability, and the latter disappeared much faster.
These are not rare, one-off glitches. They are symptoms of a system that isn't working.
Who Should Be Responsible for AI Mistakes?
This question does not have a straightforward answer, but here’s how the responsibility is distributed at the moment:
Developers and AI Companies:
- Must not put their systems on the market without proper testing
- Must clearly document what the AI can and cannot do
- Should continuously monitor their systems after deployment
- Must fix issues quickly once they surface
Companies Using AI:
- Must study the technology extensively before putting it into operation
- Need to establish adequate control and supervision systems
- Are responsible for training staff on AI capabilities and limitations
- Should have clear procedures ready for when incidents occur
End Users:
- Must not ignore the safety instructions that come with AI tools
- Should report any irregularities they notice in how the AI behaves
- Should not turn a blind eye to obvious warning signs
- Must keep in mind that they are interacting with AI, not humans
Regulators and Governments:
- Must draft well-calibrated laws that match the level of AI sophistication
- Must enforce compliance with those laws across different sectors
- Must protect those who are harmed by AI errors
- Should push for greater transparency in AI systems
The Ethics Problem: AI Is Indifferent
AI systems have no conscience, no guilt, no feelings, and no sense of what is morally right or wrong. They know only what they have been fed: training data, much of it scraped from the web. They pursue their objectives with no notion of moral or immoral, and that raises serious ethical problems.
Train an AI on a biased hiring dataset and it will discriminate; we have seen plenty of instances. Ask an image generator for a picture of a boss sitting in a chair and it will almost always produce a man unless you explicitly ask for a woman (a basic example, but a telling one). The system does not realize it is being unfair; it is merely following patterns. ProPublica found that COMPAS, a tool used in criminal sentencing, made racially biased predictions. And in the UK in 2020, an automated grading system downgraded about 40% of student results, hitting students from less advantaged backgrounds hardest.
The Black Box Mystery
Most AI models are essentially black boxes. Decisions come out, but the reasoning behind them is opaque, often even to their creators, which makes pinpointing responsibility very difficult. If you don't know why the AI made a decision, how do you correct it? How do you even argue that it was wrong?
What Different Countries Are Doing
European Union: The EU is leading with the AI Act, one of the strictest AI regulations in the world. Systems used in high-risk situations face stringent requirements, and enterprises have to demonstrate that they have taken the necessary safety measures. In effect, the burden of proving safety shifts onto the companies behind the systems, which is a big deal in legal terms.
United States: The US still has no single federal AI law. Courts fall back on older rules written for traditional products, which creates a lot of confusion. New precedents emerge with every case, but clear standards are still a long way off, which is frankly a shame.
United Kingdom: The UK requires insurance to cover autonomous vehicles. Victims are compensated promptly, and insurers then recover the money from whichever party is found responsible. The system works, but it still depends on clear lines of responsibility being established.
How Much AI Errors Actually Cost
When AI fails, the damage spreads fast:
- Monetary: Settlements, fines, legal fees, and system repairs can add up to millions of dollars.
- Legal: Court cases drag on for years, and the rulings they produce are often unclear.
- Reputational: Public trust evaporates after incidents that attract media attention.
- Operational: Systems break down, users get locked out, or data gets processed incorrectly.
Building Better AI Accountability
Smart businesses are not waiting for perfect regulations to see the light of day. They are setting up their own safeguards, because prevention is always better than cure.
- Assembling oversight boards that evaluate AI systems both before and after release.
- Performing careful impact assessments to catch potential issues early.
- Maintaining thorough records of every AI decision for later auditing.
- Putting kill switches in human hands so that people retain the power to override AI decisions (see the sketch after this list for how logging and override gates might fit together).
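To make the last two items concrete, here is a minimal sketch of what decision logging plus a human override gate could look like. Everything in it is illustrative: the `AgentAction` class, the $1,000 review threshold, and the JSONL audit file are assumptions for this example, not any particular vendor's API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical action an AI agent wants to take, e.g. a payment or a refund.
@dataclass
class AgentAction:
    kind: str          # e.g. "wire_transfer"
    amount_usd: float  # monetary value at stake
    detail: dict       # free-form context for auditors

AUDIT_LOG = "agent_decisions.jsonl"          # assumed audit file
HUMAN_REVIEW_THRESHOLD_USD = 1_000           # assumption: larger actions need sign-off

def log_decision(action: AgentAction, status: str, reviewer: str | None) -> str:
    """Append an audit record for every decision the agent makes."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "action": asdict(action),
        "status": status,        # "executed" or "blocked"
        "reviewer": reviewer,    # None means no human was involved
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def execute_with_oversight(action: AgentAction, approve) -> bool:
    """Run an agent action, but route high-stakes ones through a human gate."""
    if action.amount_usd < HUMAN_REVIEW_THRESHOLD_USD:
        log_decision(action, "executed", reviewer=None)
        return True
    # Kill switch: a human-backed callback must return True before anything happens.
    reviewer = "ops-on-call"  # placeholder for whoever holds the pager
    if approve(action):
        log_decision(action, "executed", reviewer=reviewer)
        return True
    log_decision(action, "blocked", reviewer=reviewer)
    return False

if __name__ == "__main__":
    risky = AgentAction("wire_transfer", 50_000, {"to_account": "ACME-123"})
    # In production `approve` would page a human; here we auto-reject for the demo.
    executed = execute_with_oversight(risky, approve=lambda a: False)
    print("executed" if executed else f"blocked, see {AUDIT_LOG}")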
Wrapping It Up
Until a clear framework is laid down, the answer to the question of who is responsible for AI mistakes remains uncertain, and that ambiguity hurts people, or at least has the potential to. For now, the only workable answer is shared responsibility among developers, companies, regulators, and users. AI isn't going anywhere, and it is going to make plenty more mistakes. The real question is not whether it will create more problems, but whether we will be ready to deal with them when it does. This is exactly why designing stronger, more agentic learning systems matters, as explored in our guide on building agentic learning frameworks.