By Martha Onate Inaingo
Published on: October 11, 2023 at 01:12 IST
Artificial intelligence (AI) is a transformative technology that has immensely impacted many aspects of modern life and technology in economies around the world, from entertainment, education, agriculture, forecasting and modelling, transportation, manufacturing, cybersecurity and robotics to many other applications. Household appliances, vehicles, medical equipment, drones and other products increasingly use AI and AI-related technologies to enhance decision making.
AI has positively impacted human activities in this modern age in tremendous ways. For instance, AI-based systems have been used to track criminals through facial recognition and analysis applications, thereby improving safety and security; they are also used to make decisions that are more objective, consistent and reliable. AI can also efficiently analyse large datasets and identify and exploit correlations within a very short period of time.[1]
The increasing degree of autonomy facilitated by AI has many advantages; however, like any scientific innovation, it also gives rise to unknown risks. It is vulnerable to biases, errors and security breaches which could give rise to liability. In particular, what happens when AI is deployed and the product causes injury or loss?[2]
A driverless car might fail to avoid a preventable accident; a medical software programme might recommend a harmful treatment; a mortgage application system might wrongly reject an applicant. Considering such scenarios, the questions that follow are: who should bear responsibility in such cases? Whose fault is it if an AI algorithm makes a decision that causes harm? The user of the car? The programmer of the medical software programme? The manufacturer or programmer of the mortgage application? The developer of the AI system? The AI system itself? The developer, the manufacturer and the user together? How should fault be identified and apportioned? What sort of remedy should be imposed?
This article explores the legal aspects of AI liability, examines the aforementioned questions of who should bear liability, and highlights the difficulties in establishing civil and criminal liability for AI under various legal regimes.
Who is Responsible When The Algorithm Goes Wrong?
AI is embodied as a sequence of instructions or protocols telling a computer how to transform input data into a desired output. AI differs from traditional computer software, whose behaviour is limited to the criteria and biases previously coded by the programmer: AI software is able to rewrite its own code independently, based on experience, free from the programmer’s original criteria and biases.[3]
The special characteristics of these technologies and their applications, which include complexity, modification through updates or self-learning during operation, and limited predictability, make it more difficult to determine what went wrong with an AI-based system and who should bear liability when something does.[4]
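To illustrate this distinction, consider a minimal sketch (not drawn from the article) using a hypothetical loan-approval example and the scikit-learn library: a traditional program applies a rule fixed by its programmer, whereas a learning system derives its decision rule from data, so retraining can change its behaviour without anyone editing the code.

```python
# Minimal illustrative sketch (hypothetical example): fixed-rule software vs. a learned model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def approve_fixed(income, debt):
    """Traditional software: the decision rule is written once by the programmer."""
    return income > 3 * debt

# "AI-style" decision: the rule is induced from example data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # hypothetical applicant features
y = (X[:, 0] > 3 * X[:, 1]).astype(int)   # hypothetical past outcomes

model = LogisticRegression().fit(X, y)
print(model.predict([[2.0, 0.5]]))        # decision depends on learned weights

# Retraining on different experience silently shifts the decision boundary,
# which is why tracing fault is harder than with approve_fixed().
y_new = (X[:, 0] > 1.5 * X[:, 1]).astype(int)
model.fit(X, y_new)
print(model.predict([[2.0, 0.5]]))        # same input, potentially a different output
```

The point of the sketch is only that the operative "rule" in the second case is a product of the training data rather than of any line a programmer wrote, which is what complicates attribution of fault.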
Artificial intelligence applications result in legal difficulties primarily associated with privacy, discrimination, product liability and negligence. Addressing these difficulties usually begins with a determination of which party may sue or be sued.[5]
Determining who should be liable can be problematic as there are often many parties involved in an AI system (data provider, designer, manufacturer, programmer, developer, user and AI system itself).[6] Further complications may arise if the fault or defect arises from decisions the AI system has made itself based on machine learning principles with limited or no human intervention. Since an AI is not a legal person and internet transactions may span more than one jurisdiction, this initial assessment is crucial.
Schools of thought on the issue of liability are divided, and the division of opinion stems from the inability to agree on the nature of AI: whether it is a “thing” or “product” that has no legal personality and cannot be personally responsible for its actions, or a “person” or “e-person” that has legal personality and can be personally liable for its actions.[7]
Can the AI System Itself be Held Liable?
The general rule is that only juristic persons have the inherent right to sue and be sued in their names; non-legal persons or entities may neither sue nor be sued except where such a right is created and/or vested by statute. According to the law, the juristic persons who may sue or be sued are:
- Natural persons, that is to say, human beings;
- Companies incorporated under the Companies Act;
- Corporations aggregate and corporations sole with perpetual succession; and
- Certain unincorporated associations granted the status of legal personae by law, such as registered trade unions, partnerships and friendly societies.[8]
There are divergent views on the legal personality of AI systems. One postulation is that an AI system can be held liable for criminal and civil wrongs because of its ability to rewrite its code and modify its system through self-learning from the experience and data of its users or programmers.
This position is based on the definition of a legal person, which attributes legal liability to artificial persons such as corporations or companies that are independent of their shareholders and employees and may incur liability from owning property and entering into contracts.[9]
Another school of thought posits that a separate legal personality for an AI system or program is not necessary for the purposes of liability. The rationale for that position is that it is not necessary to give devices or autonomous systems legal personality, as the harm they may cause can and should be attributed to existing persons or bodies.[10]
The assumption is that AI systems are programmed by legal persons who should bear the liability for any harm caused, on the following bases:[11]
- Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it was placed on the market;
- Strict liability should lie with whoever has more control over the risks of the operation of an AI system. Where there are two or more operators, these are in particular the frontend operator, the person primarily deciding on and benefitting from the use of the relevant technology, and the backend operator, the person continuously defining the features of the relevant technology and providing essential and ongoing backend support. Producers should be strictly liable for defects in emerging technologies even if such defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology.
- Joint and several liability: where two or more persons cooperate on a contractual or similar basis in the provision of different elements of a commercial and technological unit, and where the victim can demonstrate that at least one element has caused damage in a way triggering liability but not which element, all potentially liable parties should be jointly and severally liable vis-à-vis the victim.
- Insurance: for situations exposing third parties to an increased risk of harm, compulsory liability insurance, such as that provided for under the Automated and Electric Vehicles Act 2018, could give victims better access to compensation and protect potentially liable parties against the risk of liability.
Whether an AI system can be held criminally or civilly liable is considered below.
Criminal Liability
The major proponent of imposing criminal liability on AI programs is Gabriel Hallevy, who posits that AI entities might be held criminally liable.[12] He analysed actus reus and mens rea to reflect how AI could be criminally liable:
- Actus reus consists of an action or a failure to act;
- Mens rea requires knowledge, or a failure to know what a reasonable person would have known.[13]
Hallevy proposed three legal models by which offences committed by AI systems could be considered:
- Perpetrator-via-another: According to this model, an AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another.[14]
Where an offence is committed by a mentally deficient person, child or animal, the perpetrator is considered an innocent agent because they lack the mental capacity to form mens rea; however, where the innocent agent committed the offence acting on the instructions of another person, that instructor can be held criminally liable.[15]
- Natural-probable-consequence: This model contemplates that an AI program intended for good purposes could be activated inappropriately and triggered to perform a criminal action.
For example, a Japanese employee of a motorcycle factory was killed by an artificially intelligent robot working near him. The robot erroneously identified the employee as a threat to its mission and calculated that the most efficient way to eliminate this threat was to push him into an adjacent operating machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the machine, killing him instantly, and then resumed its duties.[16]
The natural-probable-consequence model is ordinarily used to establish the criminal liability of accomplices to a crime. If conspiracy cannot be proven, an accomplice may be held liable for the criminal acts of the perpetrator if those acts turn out to be a natural or probable consequence of the scheme, as long as the accomplice was aware that some criminal scheme was under way.[17]
Thus, users or programmers might be held legally liable if they knew that a criminal offence was a natural and probable consequence of their programs or their use of an application. The application of this principle must, however, distinguish between AI programs that ‘know’ that a criminal scheme is under way, i.e. those that have been programmed to perform a criminal scheme, and those that were programmed for another purpose.
It may well be that crimes where the mens rea requires knowledge cannot be prosecuted for the latter group of programs, but those with a ‘reasonable person’ mens rea, or strict liability offences, can.
- Direct liability: This model attributes both actus reus and mens rea to an AI system. It is relatively simple to attribute an actus reus to an AI system. If a system takes an action that results in a criminal act, or fails to take an action when there is a duty to act, then the actus reus of an offence has occurred.
Assigning a mens rea is much harder, and so it is here that the three levels of mens rea become important. For strict liability offences, where no intent to commit an offence is required, it may indeed be possible to hold AI programs criminally liable. Considering the example of self-driving cars, speeding is a strict liability offence; so according to Hallevy, if a self-driving car was found to be breaking the speed limit for the road it is on, the law may well assign criminal liability to the AI program that was driving the car at that time.
Where an AI system is fully autonomous or is far removed from human decision making, it will become more difficult to establish proximity and foreseeability. Such cases are likely to involve complicated and competing expert evidence regarding whether the AI system functioned as it should have done. Issues of liability for autonomous systems and software driven incidents are not new.[18]
As far back as the 1980s, Therac-25, a radiation therapy machine developed by Atomic Energy of Canada Limited (“AECL”), delivered damaging doses of radiation to cancer patients due to errors in its computer code, with fatal results. Liability in this case is still debated, as some hospitals had implemented their own upgrades to the systems that arguably caused the overdoses.[19]
Also, in 2017, a class action was instituted against Tesla over an automated vehicle’s autopilot system, claiming that it contained inoperative safety features and faulty enhancements.[20]
Civil Liability
There are many parties involved in the production of an AI system that could be held responsible when a civil wrong is committed, including the data provider, designer, programmer, developer, insurer, product manufacturer, user, or even the AI itself. It is also possible that different types of legal liability may be appropriate for different forms of AI.[21] For example, different liability models may arise for a computer software system that makes decisions about job applicants compared to autonomous systems in cars.[22]
Specific rules are being formulated in certain sectors to deal with the risks posed by AI systems. For example, the UK has introduced rules under which the insurer will generally bear primary liability in the case of accidents caused by autonomous vehicles.
The cause of an AI system’s failure to perform is the key element for establishing: a breach of a duty of care in negligence claims; a breach of an express or implied term in contractual claims; or a link between the defect and damage suffered in consumer protection liability claims.
In the absence of legislation specifically addressing AI, victims who have suffered damage as a result of a failure of AI would most likely seek compensation under the tort of negligence, product liability, vicarious liability or contractual liability.
Negligence
It could be argued that the general principles of negligence can apply to the widespread use of AI. Liability in negligence arises where there is a duty of care, and it seems logical that a person who has suffered loss because of a decision made by AI may be owed such a duty. However, it may be unclear who owes it. The AI is not responsible for its own actions because it is not a legal person; liability could therefore rest with the owner, the manufacturer, the user or the service provider. Whilst there is potential to recover loss caused by a malfunctioning AI through the negligence route, the exact method is not yet clear.
When software is defective, or when a party is injured as a result of using software, the resulting legal proceedings normally allege the tort of negligence rather than criminal liability. The three elements that must be proved for a negligence claim to prevail are:[23]
- The defendant had a duty of care;
- The defendant breached that duty;
- That breach caused an injury to the plaintiff.
The common law principle enunciated in Donoghue v Stevenson[24] postulates that where a party has suffered injury as a result of a breach of a duty of care owed by a manufacturer, the manufacturer may be liable to compensate the injured party if the injury is a reasonably foreseeable consequence of the manufacturer’s act.
The claimant would need to establish that the defendant (whoever that may be) owed a duty of care, breached that duty and that the breach caused injury to the claimant. Ultimately, liability for negligence would lie with the person, persons or entities who caused the damage or defect or who might have foreseen the product being used in the way that it was used. In the event that the damage results from behaviours by the AI system that were wholly unforeseeable, this could be problematic for negligence claims as a lack of foreseeability could result in nobody at all being liable.
It is difficult to argue that AI systems intentionally cause damages or possess motives to commit tortious acts. The complexity of AI systems affects the transparency of decision-making processes, making it increasingly difficult to ascertain the causal connection between an AI’s action and the resulting damages.
It is opined that operators of emerging digital technologies should comply with an adapted range of duties of care, including with regard to choosing the right system, monitoring and maintaining the system. There should be a duty on producers to equip technology with means of recording information about the operation of the technology if such information is typically essential for establishing whether a risk of the technology materialized.[25]
Product Liability
Products liability is the area of law that addresses remedies for injuries or property damage arising from product defects, as well as harms arising from misrepresentations about products.[26] Product liability claims can be initiated under the general laws of contract or tort or under extant consumer protection statute.
Product liability has been advocated as a regime for situating AI liability. The first legal debate is the determination of whether AI is a product or a service; this piece treats AI as a product that can be covered under the product liability regime.
The second threshold to consider in product liability claims is to determine if the claimant qualifies as a consumer of the product. It would appear from the authorities that the courts are minded to take a liberal view in defining a consumer to include a user of the product or purchaser of the service, barring any contrary statutory definition of the term.[27]
That being said, defectively made AI, or AI that is modified by a licensee and causes damage as a result, can create liability for the licensor and/or the licensee. Whether AI is defectively made will depend, as in other product liability cases, on prevailing industry standards.[28]
Product liability under the EU Product Liability Directive 1985 was reproduced in the Consumer Protection Act 1987. Under this regime, a product is defective, and thus potentially liable to inculpate its producer, ‘if the safety of the product is not such as persons generally are entitled to expect’.[29] This definition is inadequate for AI, as it does not specify which performances, failures or functions of the AI, or which resulting responses, would amount to a defect.
The limitation of a product liability claim of this nature is that it provides compensation in damages only against the manufacturer; it does not attach liability to the owner, keeper, user, network provider, software provider, etc. of the AI.
Vicarious Liability
Although vicarious liability typically arises in employment, partnership and limited liability partnership scenarios, vicarious liability may also be implied by law, outside the context of an employment relationship, particularly where the agent carries on activities as an integral part of the activities of the principal and for the principal’s benefit, and where the commission of the wrongful act is a risk created by the principal by assigning those activities to the agent.[30]
There have been arguments that AIs should be treated as agents of their owners or manufacturers, depending on the scenario, and that the human principal should be held vicariously liable for damage caused by the AI. The reasoning is that AI is designed to accomplish goals specified by, and to receive tasks and directions from, a human being. It has therefore been suggested that vicarious liability may be applied to hold the human principal liable for the damage caused by the AI agent.
The practical difficulty of imposing this liability is that of identifying the principal, particularly where many parties are involved. For instance, where an AI application provides wrong information to a customer that causes the customer to suffer financial losses or personal injury, the question that arises is: who is liable? Is it the application owner, the designer or the programmer?
Certain factors need to be considered in each circumstance, since there is no clear answer to the aforementioned questions: for example, the person with the greatest level of involvement in, monitoring of and supervision over the AI, or the person with the highest capacity and capability to control or influence the actions of the AI. Where there are multiple principals, it may be possible to hold all of them jointly and severally liable for the damage.
Contractual Liability
AI developers and operators can be held liable for damages caused by an AI system under contractual terms. This can be achieved through explicit provisions in contracts that govern the use of AI systems.
Strict Liability
Some legal frameworks propose a strict liability regime for damage caused by AI systems. This would mean that AI developers and operators are held strictly liable for damage caused by the system, irrespective of their negligence.
Liability Regime in the US & UK
Several countries have established regulatory frameworks for AI systems, mandating developers and operators to adhere to specific safety and security standards. The relevant framework in the USA is the National AI Initiative Act; in China, the New Generation Artificial Intelligence Development Plan; and in the UK, the work of the Centre for Data Ethics and Innovation. These frameworks can be used to establish legal liability for damage caused by AI systems.[31]
On September 28, 2022, the European Commission unveiled the proposed AI Liability Directive (the “Directive”), a legal framework to establish liability for damage caused by AI systems. The Directive introduces a risk-based approach to AI liability, under which the level of liability corresponds to the risk associated with the AI system. It also contemplates a strict liability regime for high-risk AI systems, holding developers and operators responsible for damage caused regardless of negligence. The Directive encourages developers and operators to adopt the measures necessary to ensure system safety and reliability; its rules seek to maximise the benefits of AI while minimising the associated risks. Ultimately, the Directive is set to shape the future of AI liability within the EU and beyond.[32]
US Regime
In the United States, the implementation of laws to regulate AI has been relatively slow, although there is some case law concerning computerized robotics. For instance, in Jones v W + M Automation, Inc., the plaintiff sued the manufacturer and programmer of a robotic loading system for product defect; the claim was dismissed by New York’s Appellate Division. The court held that the defendants were not liable for the plaintiff’s injuries at the GM plant where he worked because they showed they “manufactured only non-defective component parts.” It further stated that as long as the robot and associated software were “reasonably safe when designed and installed,” the defendants were not liable for the plaintiff’s damages. GM, the end user, however, could still be liable for improperly modifying the hardware or software. The implication of this judgment is that creators of AI software or hardware are likely not liable for injuries as long as their products were non-defective when made.[33]
UK Regime
The UK has passed the Automated and Electric Vehicles Act 2018 pursuant to which liability for damage caused by an insured automated vehicle when driving itself lies with the insurer. Otherwise, redress for victims who suffer damage as a result of a failure of AI would most likely be sought under existing laws on damages in contract, consumer protection legislation and the tort of negligence.[34]
Data protection law already offers individuals some protection against automated decision-making that uses their personal data. UK data protection law protects individuals’ personal data by prescribing that individuals may not be subjected to a significant decision based solely on automated processing unless it is required by law.[35]
The US Federal Trade Commission proposed guidelines concerning the regulation of AI on 8 April 2020. The Commission essentially recommends that those who use or license AI in a way that affects consumer well-being could bear liability for the resulting damage.[36]
Way Forward
In light of all the aforementioned questions, scenarios and liabilities, what is the best way forward?
There are certain key developments that could be implemented to improve the regulatory framework for AI liability. Here are some of the key areas:[37]
- Increased regulation: Advancements in AI technology and its growing autonomy may demand new laws and regulations governing AI development, testing and deployment to ensure the safety and reliability of these systems.
- Expanded liability for AI users: As AI systems become more autonomous, user training, education and user interfaces that communicate the capabilities and limitations of AI systems will become crucial, because users may be expected to share a portion of the liability for the actions and decisions these systems make.
- International cooperation on AI regulation: Since AI transcends national boundaries, liability frameworks across jurisdictions need to be consistent to ensure the technology’s safe and ethical use. This can be attained through cooperation among states to establish common standards and regulations for AI systems.
- Liability insurance for AI systems: Mitigating AI-related risks can be achieved through AI liability insurance policies. Some experts propose developing mandatory liability insurance programs for AI systems, similar to car insurance. Such programs would provide a mechanism to compensate victims of AI-related accidents or errors while encouraging developers and manufacturers to prioritize safety and reliability of their systems.
Conclusion
AI has immensely impacted several industries: healthcare; automotive; financial services; retail and consumer; technology, communications and entertainment; manufacturing; energy; and transport and logistics. Aside from these benefits, AI also carries potential risks which the law must regulate and control to balance liability and causal effect.
As AI technology is rapidly evolving, the law has to evolve alongside it to curb its excesses and regulate its use and production. The debate is still ongoing as to whether AI should take on some form of legal personality, and any conclusion on this may change how the world implements both future and currently integrated AI systems. In the coming years, the different forms of AI are likely to attract different legal liability models.
References
- Products Liability Law as a Way to Address AI Harms
- Artificial Intelligence (‘AI’): Legal Liability Implications
- Assessing Liability in Artificial Intelligence Litigation
- Assessing Liability in Artificial Intelligence Litigation
- Assessing Liability in Artificial Intelligence Litigation
- Ibid
- Liability for Damage Caused by Artificial Intelligence
- Liability for Damage Caused by Artificial Intelligence
- Assessing Liability in Artificial Intelligence Litigation
- Artificial Intelligence: Legal Liability Implications
- Artificial Intelligence: Legal Liability Implications
- Artificial Intelligence and Legal Liability
- Ibid
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Artificial Intelligence: Who is Liable When AI Fails to Perform
- Who’s Responsible? Addressing Liability in the Age of Artificial Intelligence
- Artificial Intelligence and Legal Liability
- (1932) AC 562
- Artificial Intelligence: Legal Liability Implications
- Products Liability Law as a Way to Address AI Harms
- Templars
- Artificial Intelligence Liability: The Rules Are Changing
- Artificial Intelligence and Civil Liability: Do We Need a New Regime?
- Liability for Damage Caused by Artificial Intelligence
- Artificial Intelligence and Civil Liability: Do We Need a New Regime?
- Artificial Intelligence Liability: The Rules Are Changing
- Artificial Intelligence Liability: The Rules Are Changing
- Artificial Intelligence: Legal Liability Implications
- Artificial Intelligence Liability: The Rules Are Changing
- Ibid
- Who’s Responsible? Addressing Liability in the Age of Artificial Intelligence