Artificial intelligence and automation systems are rapidly transforming industries across the globe, from transportation to healthcare to finance. As AI becomes more prevalent, it also creates significant legal challenges that demand careful consideration and planning. In-house counsel at companies utilizing artificial intelligence must be prepared to navigate uncertain legal frameworks and rapidly changing technologies. The legal landscape is shifting, and practitioners must quickly adjust to new technologies such as facial recognition, autonomous vehicles, computer vision, augmented reality, and robotics. In this article, we explore the current state of the law concerning AI and discuss how lawyers can embrace innovation while protecting their clients against unprecedented risks.
Negligence and Liability
As AI is increasingly used to make decisions that might result in property damage and bodily harm should those algorithms fail, questions about liability inevitably arise. The consequences of a poor decision by an autonomous vehicle, assembly line robot, or air-traffic control system could prove catastrophic. Any organization using AI in such situations must proactively ensure the safe operation of these systems, mitigate potential risks, and clarify where liability may lie in case the worst should occur.
Who would be responsible, for instance, if a self-driving vehicle strikes a pedestrian or mistakes a shadow for a road hazard and brakes suddenly, injuring its passengers? In most cases, the fault would lie with the system’s manufacturer rather than the user. But whether “manufacturer” means the firm that developed and programmed the AI, the car company that integrated it with the vehicle’s guidance and steering components, or some other party will be for the courts to decide. Lawyers play a critical role in navigating the potential liabilities resulting from automated decisions made by AI systems. The lack of human oversight or control over such systems means that businesses must exercise a high level of caution and choose carefully when implementing these technologies. A lawyer experienced in AI can help clients determine whether they have a duty of care in building, selling, installing, and relying on AI-based decision-making technologies and how best to demonstrate that they adhere to that duty in their business operations.
Intellectual Property
AI is also creating novel and complex questions surrounding the ownership and infringement of AI-generated intellectual property. These questions call for new approaches to drafting and negotiating IP contracts, accounting for and protecting trade secrets, and handling technology transactions.
The courts have determined that AI is not legally a person. Just as it cannot be granted copyright for text or pictures it creates, neither can it be liable for IP infringement. That liability lies with the person or company that trains or uses the AI. But which should it be? The question is especially complex when an AI application creates new IP without significant human instruction or exhibits “creativity” that exceeds its intended programming. Congress is considering legislation that would create a separate framework for AI applications, addressing situations and areas that traditional IP frameworks fail to cover.
While the source code underlying AI systems may be protected by copyright, other confidential and valuable information requires similar affirmative protection measures. Neural networks are trained on massive amounts of data, often scraped from publicly available sources, and this data can contain embedded trade secrets that must be recognized and safeguarded.
The rise of AI presents significant challenges and opportunities for the field of intellectual property law. Companies must take proactive and strategic steps to minimize the risks and maximize the benefits of AI applications. These include innovative drafting of IP contracts, recognizing and safeguarding trade secrets, and taking unprecedented steps to prosecute and defend against allegations of infringement by AI applications.
Data Privacy and Security
One of AI’s greatest strengths is its ability to very quickly process and analyze vast amounts of data. However, much of this data may be sensitive, such as users’ habits, interests, locations, and in the case of haptic wearables, even their movements and responses to various stimuli. Access to these personal details places a heavy responsibility of care on companies that possess or use such data.
As with any activity involving personal data, businesses using AI must ensure the processes they use for collecting, storing, and using that data are fair, transparent, secure, and legal. This includes safeguards over how the algorithm will evolve over time, a particularly challenging problem given the complexity of AI technology. Additionally, it may be difficult to identify a single lawful basis for processing the data, particularly in Europe, where regulators cast a skeptical eye on expansive data practices. Companies will have to anticipate the potential repurposing of personal data and be transparent about what data is collected and how it is used.
Many jurisdictions and industry observers question whether capturing such large amounts of data is strictly necessary, as the breadth of data collected can be perceived as excessive. Companies need a sound legal basis for scraping comprehensive data and retaining it for extended periods. Government regulators are increasingly reinforcing their constituents’ right to control their online profiles.
AI’s use of and dependence on training data may pose a challenge to consumers’ right to be forgotten, and AI developers and users alike must give due consideration to this and related issues. Privacy by design and by default is essential to ensuring that personal data is protected. Given the sensitive information in the vast pools of data that support AI applications, companies will have to assess the information security risks to their data and implement reasonable and effective controls. They must also protect the integrity of their training and algorithmic development processes to prevent “AI poisoning,” which can severely undermine customer confidence in a company’s products and services.
Regulatory Compliance
Companies using AI must be extra careful to comply with regulations aimed at protecting consumers, employees, and creators. Gamma Law has noticed a surge in AI litigation in a number of industries and applications. We help businesses mitigate a range of increasingly common, industry-spanning risks and concerns:
- Human Resources – AI is increasingly used in workplaces to streamline operations, increase efficiency, and reduce costs. However, these practices may introduce discrimination, infringe on privacy, compromise fairness, and violate labor laws, exposing organizations to liability. Employers have begun to use AI applications for recruitment and hiring, with many vendors claiming that AI can eliminate latent human bias and select candidates based solely on their merits and qualifications. In some instances, however, AI has demonstrated its own inherent biases, which can affect its output to the point of constituting discrimination. Employers must retain critical human oversight of the recruiting function and scrutinize the efficacy and potential bias of AI applications. Companies are also utilizing AI for performance management analytics, which poses risks related to data privacy and workplace legal frameworks. AI-based solutions meant to curtail labor costs have created unpredictable scheduling practices and understaffing that expose employees to unmanageable workloads and compromised personal safety.
- Contracts – AI can streamline the contracting process, making it faster and more efficient by leveraging natural language processing and machine learning algorithms to analyze and extract key information from contracts. This can help parties to identify potential issues and negotiate better terms.
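As a loose illustration of the kind of extraction such contract-review tools perform, the sketch below pulls a few key terms out of sample contract language. This is a simplified, hypothetical example: production systems rely on trained language models rather than hand-written patterns, and the pattern names, sample text, and `extract_key_terms` function here are all invented for demonstration.

```python
import re

# Hypothetical patterns standing in for a trained NLP extraction model.
PATTERNS = {
    "effective_date": r"effective as of\s+([A-Z][a-z]+ \d{1,2}, \d{4})",
    "governing_law": r"governed by the laws of\s+([A-Za-z ]+?)[.,]",
    "term_years": r"for a term of\s+(\d+)\s+years?",
}

def extract_key_terms(text: str) -> dict:
    """Return any key contract terms matched in the text."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            found[name] = match.group(1).strip()
    return found

sample = (
    "This Agreement is effective as of January 1, 2024 and shall "
    "continue for a term of 3 years. It shall be governed by the "
    "laws of the State of California."
)
print(extract_key_terms(sample))
```

Even a toy extractor like this shows why lawyer review remains essential: pattern-based or model-based extraction can silently miss clauses phrased in unexpected ways.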
- International Transactions – In addition to the privacy, IP, and liability concerns involved in any business venture using AI, international agreements are susceptible to government controls and other forms of regulation. Export controls are a significant concern for companies involved in international transactions that utilize AI. Many countries maintain strict export controls on technologies with military or strategic applications, including AI, and companies must ensure they comply with these laws and regulations when moving AI technology across borders. This can be particularly challenging in complex international joint ventures involving multiple parties and jurisdictions. Governments and courts differ in their interpretations of the legal standing of AI-generated products, causing confusion over which laws should take precedence. The use of AI in international transactions may also raise ethical concerns around fairness, transparency, and accountability, especially when AI systems are deployed in countries with different cultural norms and values.
The deployment of artificial intelligence is generating new legal challenges across industries around the world. Companies must engage with an AI-experienced law firm to adequately mitigate civil and criminal liability, withstand regulatory scrutiny, and guard against reputational harm from unintended outcomes.
Gamma Law is a San Francisco-based Web3 firm supporting select clients in complex and cutting-edge business sectors. We provide our clients with the legal counsel and representation they need to succeed in dynamic business environments, push the boundaries of innovation, and achieve their business objectives, both in the U.S. and internationally. Contact us today to discuss your business needs.