AI Voice Agents: Legal, Compliance & Regulatory Overview

Navigating U.S. regulations for AI voice assistants? While there's no single federal law, businesses must handle a complex mix of FTC rules, state privacy laws, and industry requirements. Here's what leaders need to know about compliance and risk management.


AI is revolutionizing how we interact with technology, especially through voice. AI voice technology uses artificial intelligence-generated voices in virtual personal assistants, audiobooks, and customer service. As the technology grows, so do questions surrounding the legal issues, compliance requirements, and regulation of AI voice assistants. These assistants combine Natural Language Processing (NLP) and machine learning algorithms to make interactions sound human.

In 2023, North America was the largest region in the AI voice assistant market, holding 39.1% of the market share, worth roughly USD 1 billion. AI voice assistants are commonly used in home automation systems to control smart appliances, smartphones, and music speakers. Many businesses also use them to schedule meetings and provide basic customer service.


AI Standards from ISO/IEC

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published guidelines for developing and using AI technology. ISO/IEC standards that are particularly relevant to AI include:

  1. ISO/IEC 27001 addresses the organizational aspects of the risk management process, such as the confidentiality, integrity, and availability of information in AI systems:
    1. Confidentiality means that only authorized persons can view information an organization holds.
    2. Integrity means ensuring that information moved and stored within an organization, or on its behalf, remains secure, accurate, and aligned with business objectives. 
    3. Availability of data ensures that the organization and relevant clients have easy and timely access to the data and information that they require. 
  2. ISO/IEC 31700 sets out privacy-by-design requirements for developing and using consumer products, including AI devices.
  3. ISO/IEC 5338 defines life cycle processes for AI systems, guiding organizations in coordinating, controlling, improving, and monitoring their operation.
  4. ISO/IEC 42001 specifies requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) within organizations worldwide. 

Compliance with ISO/IEC standards is not mandatory. The standards aim to help organizations and society get the best from AI while reassuring stakeholders that systems are being developed and applied appropriately. 

ISO/IEC Certification 

An independent, external third party assesses whether an AI system works as intended and whether appropriate AI management principles are adequately implemented in the organization. When the audit is completed successfully, the auditor issues a certificate stating that all ISO/IEC requirements have been met. 

Certification is not compulsory, yet it offers many advantages:

  • It creates confidence, both inside and outside the organization, that the management system performs as intended and that proper management principles are adhered to. 
  • It enables a systematic approach to process improvement and makes clear where improvements should be targeted. 
  • It helps improve customer confidence and satisfaction, which may boost business. 
  • It provides a competitive edge by meeting the certification requirements that customers, suppliers, and subcontractors may impose as a condition of doing business. 

Artificial Intelligence Risk Management Framework (AI RMF 1.0)

In January 2023, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0) to guide the responsible development and use of AI systems.

The framework consists of two parts. Part 1 covers foundational information about AI risks, how they affect people and organizations, and the challenges of quantifying them.

Part 2 addresses how to manage AI risks. Businesses must determine their level of risk acceptance and implement practices to manage unacceptable risks. Residual risks should be documented to help AI system providers and users make informed decisions.

The framework also describes characteristics of trustworthy AI systems that help manage risk, including accountability, reliability, security, transparency, privacy protection, and the elimination of bias and discrimination.

Operationalizing the Framework: 

  • Introduce the organization to the AI RMF and all the elements that it comprises.
  • Identify all AI systems in the organization.
  • Identify and list all the risks for each AI system.
  • Classify the AI systems based on risk level and focus on the high-risk ones.
  • Describe how you will mitigate the risks.
  • Conduct regular reviews of AI systems to catch new and emerging risks.
  • Train employees to be aware of risks inherent in AI systems.
  • Create management supervisory roles to minimize risk.
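The inventory and classification steps above can be captured in a lightweight risk register. The Python sketch below is a minimal illustration only; the system names, risk categories, and mitigations are hypothetical and would need to reflect your own AI RMF profile and risk tolerance.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    """One entry in the organization's AI system inventory."""
    name: str
    owner: str
    risks: dict = field(default_factory=dict)        # risk description -> RiskLevel
    mitigations: dict = field(default_factory=dict)  # risk description -> mitigation plan

    @property
    def risk_level(self) -> RiskLevel:
        """The system's overall level is its highest individual risk."""
        if not self.risks:
            return RiskLevel.LOW
        return max(self.risks.values(), key=lambda r: r.value)

# Hypothetical inventory of the organization's AI systems.
register = [
    AISystemRecord(
        name="customer-support-voice-agent",
        owner="Support Engineering",
        risks={"stores call recordings with PII": RiskLevel.HIGH,
               "possible transcription bias": RiskLevel.MEDIUM},
        mitigations={"stores call recordings with PII": "encrypt at rest, 90-day retention"},
    ),
    AISystemRecord(name="meeting-scheduler-bot", owner="IT",
                   risks={"calendar data exposure": RiskLevel.LOW}),
]

# Focus review effort on the highest-risk systems and their unmitigated risks.
for record in sorted(register, key=lambda r: r.risk_level.value, reverse=True):
    unmitigated = [r for r in record.risks if r not in record.mitigations]
    print(f"{record.name}: {record.risk_level.name}, unmitigated: {unmitigated}")
```

Regular reviews then amount to re-scoring this register and escalating any system whose unmitigated list is not empty.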

The General Data Protection Regulation (GDPR) in Europe and the UK GDPR in the United Kingdom are considered the strictest data protection rules in the world. They protect the rights of European and United Kingdom residents regarding access to and use of their personal information. GDPR rules affect AI Voice Agent companies that do business across those borders but not those that conduct business only in the U.S.

The U.S. has no single law or requirement governing artificial intelligence or AI Voice Assistants. Several federal laws address aspects of AI and focus on: 

  1. Privacy  
  2. Responsible AI development and use 
  3. Safety and security of data
  4. Bias and discrimination

The following are some of the existing Federal laws that may apply to AI Voice Assistants.

The Federal Trade Commission Act

The Federal Trade Commission Act empowers the Federal Trade Commission (FTC) to ensure that consumers are not misled or deceived by the trade practices of commercial entities. The FTC issues regulations, enforces privacy laws, and takes enforcement actions to protect consumers. To comply with the Act, companies should adhere to the following: 

  1. Clearly state the purpose of collecting personal data, how data will be used, and how it will be shared with other parties. 
  2. Do not make false claims about the features of products or services. 
  3. Ensure appropriate steps are taken to prevent unauthorized or unlawful processing of users’ information, including data interference, loss, or destruction.
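In practice, the first and third points often translate into an up-front disclosure at the start of each call and an auditable record of what the caller agreed to before any data is collected. The sketch below is only an illustration in Python; the disclosure wording, the `say` and `listen_yes_no` callbacks, and the log file are hypothetical placeholders, not legal guidance.

```python
import json
import time

# Hypothetical disclosure script; real wording should come from counsel.
DISCLOSURE = (
    "This call is handled by an automated assistant and may be recorded. "
    "We collect your name and order details to resolve your request and "
    "share them only with our shipping partner. Do you wish to continue?"
)

def record_consent(caller_id: str, accepted: bool, path: str = "consent_log.jsonl") -> None:
    """Append an auditable record of the caller's consent decision."""
    entry = {"caller_id": caller_id, "disclosure": DISCLOSURE,
             "accepted": accepted, "timestamp": time.time()}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def start_call(caller_id: str, say, listen_yes_no) -> bool:
    """Play the disclosure, capture the answer, and log it before any data use."""
    say(DISCLOSURE)
    accepted = listen_yes_no()      # True/False from the speech pipeline
    record_consent(caller_id, accepted)
    return accepted                 # Collect personal data only if this is True
```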

The FTC has been vigilant in enforcing these rules; violations can result in significant fines and negative publicity.

The Children’s Online Privacy Protection Act (COPPA)

If an AI voice assistant communicates with children under age 13 (minor children), COPPA imposes the following requirements:

  1. The company must obtain verifiable parental consent before collecting personal information from minor children. 
  2. Business privacy policies should be easily understandable by consumers. 
  3. Make sure information obtained from children is secured and used appropriately.  
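If a voice assistant may be used by children, these requirements usually mean gating any data collection behind a verified parental consent flag. The snippet below is a simplified sketch; the `UserProfile` fields and the consent flag are hypothetical stand-ins for whatever age-screening and consent mechanism the product actually uses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    user_id: str
    age: Optional[int]              # May be unknown until an age screen completes
    parental_consent: bool = False  # Set only after verifiable parental consent

def may_collect_personal_data(profile: UserProfile) -> bool:
    """COPPA-style gate: block collection for under-13 users without parental consent."""
    if profile.age is None:
        return False                # Treat unknown age conservatively
    if profile.age < 13:
        return profile.parental_consent
    return True

# Example: a child profile without recorded consent is blocked.
child = UserProfile(user_id="u-42", age=9)
assert may_collect_personal_data(child) is False
```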

Violations of COPPA are punished severely and can result in substantial fines, enforcement actions, and negative publicity.

The Telephone Consumer Protection Act (TCPA)

The Telephone Consumer Protection Act (TCPA) of 1991 restricts the use of automatic telephone dialing systems, artificial or prerecorded voice messages, SMS text messages, and fax machines to protect consumers from unwanted telemarketing.

The Americans with Disabilities Act (ADA)

The Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act state that technology should be developed so people with disabilities can use it. AI voice assistants must: 

  1. Offer alternative input methods for users who cannot rely on speech.
  2. Provide clear, adjustable audio output for people with speech impediments or hearing problems.

Applicable U.S. State Laws

More than twenty U.S. states have enacted data privacy laws governing the acquisition, storage, and use of information about individuals who reside in those states.

The California Privacy Rights Act (CPRA)

The California Privacy Rights Act (CPRA), which took effect in January 2023, builds on the earlier California Consumer Privacy Act and is the most stringent state data privacy law in the United States. Most other states' data privacy laws are modeled on it. The gist of the requirements is as follows: 

  1. Businesses gathering personal information must inform the consumer of what the data will be used for, how long it will be stored, and any plans to share or market the data. 
  2. Consumers can request that a business change personal information that is incorrect or delete their personal information.
  3. Consumers can decline to allow the sharing or marketing of their personal information. 
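Operationally, these rights map onto a small set of request types a business must be able to service within the statutory deadlines. The sketch below shows one way to route them in Python; `DataStore` and its methods are hypothetical stand-ins for whatever persistence layer actually holds consumer records.

```python
from enum import Enum
from typing import Optional

class RequestType(Enum):
    CORRECT = "correct"   # Fix inaccurate personal information
    DELETE = "delete"     # Erase personal information
    OPT_OUT = "opt_out"   # Stop sharing or selling personal information

class DataStore:
    """Hypothetical persistence layer for consumer records."""
    def __init__(self) -> None:
        self.records: dict = {}
        self.sharing_opt_out: set = set()

    def handle_request(self, consumer_id: str, req: RequestType,
                       updates: Optional[dict] = None) -> str:
        if req is RequestType.CORRECT and updates:
            self.records.setdefault(consumer_id, {}).update(updates)
            return "record corrected"
        if req is RequestType.DELETE:
            self.records.pop(consumer_id, None)
            return "record deleted"
        if req is RequestType.OPT_OUT:
            self.sharing_opt_out.add(consumer_id)
            return "excluded from sharing and sale"
        return "no action taken"

store = DataStore()
store.records["c-1"] = {"name": "Jane Doe", "phone": "555-0100"}
print(store.handle_request("c-1", RequestType.OPT_OUT))  # excluded from sharing and sale
print(store.handle_request("c-1", RequestType.DELETE))   # record deleted
```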

In addition to California, nine other states had privacy laws in effect as of 2024, including New York, Virginia, Colorado, Connecticut, Utah, Montana, Rhode Island, Oregon, and Texas.

Ten more state privacy laws have been passed and will take effect in 2025 or 2026: Delaware, Indiana, Iowa, Nebraska, New Jersey, New Hampshire, Minnesota, Tennessee, Maryland, and Kentucky.


Industry-Specific Regulations

AI voice assistants such as Alexa, Google Assistant, and Siri have made communication between humans and artificially intelligent machines a reality over the last decade. This rapid growth has driven the development of U.S. industry regulations to ensure that businesses operating in the AI industry meet requirements aimed at protecting consumer privacy through transparency and fair business conduct.

Industry-specific regulations that could affect AI voice assistants include:

 HIPAA

The Health Insurance Portability and Accountability Act (HIPAA) regulations govern the use, copying, and distribution of health information. AI phone agents that store and process sensitive health information are attractive targets for hacking, hence the need to protect patient data. 

Three key HIPAA rules may apply to businesses offering services such as AI phone agents to healthcare providers, plans, or organizations:

  1. Privacy Rule: Sets forth standards for protecting an individual's health information.
  2. Security Rule: Sets standards for safeguarding health information against disclosure to unauthorized persons. 
  3. Breach Notification Rule: Requires notifying affected individuals and the Secretary of Health and Human Services (HHS) of a breach of an individual's health information.
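For an AI phone agent whose transcripts may contain protected health information (PHI), the Security Rule in practice comes down to controls such as encryption at rest and restricted access. As one hedged illustration, the snippet below encrypts a transcript with the widely used `cryptography` package; key management, access control, and breach logging are out of scope here and require real infrastructure.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In production the key would live in a key management service, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

transcript = "Patient reports knee pain; follow-up appointment booked for 3 PM."
encrypted = cipher.encrypt(transcript.encode("utf-8"))

# Store only the ciphertext; decrypt just-in-time for authorized access.
with open("transcript_001.enc", "wb") as f:
    f.write(encrypted)

restored = cipher.decrypt(encrypted).decode("utf-8")
assert restored == transcript
```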

Is HIPAA certification required? 

There is no specific requirement in HIPAA that a healthcare client's AI Voice Assistant (Agent) must be certified as compliant. 

GLBA

The Gramm-Leach-Bliley Act (GLBA) requires banks and financial institutions to notify customers about their information-sharing practices and to protect sensitive information. If AI voice systems are used by financial institutions, they must protect consumers' financial information by implementing strong security guidelines and measures. 

PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) defines security requirements for storing, processing, and transmitting credit card data, including encryption. It applies only if the AI Voice Agent handles sensitive billing data, which is unlikely for most deployments.

BIPA

The Biometric Information Privacy Act (BIPA), enacted in Illinois, provides guidelines on using biometric data, such as voiceprints, in AI assistants for identification or customization.

BIPA requires the following from businesses:

  1. Make sure the individual has given informed consent and that data storage practices are clearly stated. 
  2. Do not market or sell biometric data.  
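For voiceprint features, both requirements can be enforced at the point of enrollment: no consent record, no biometric template, and no downstream sale. The sketch below is purely illustrative; the consent registry and enrollment function are hypothetical, and the voiceprint itself is a placeholder for a real speaker-embedding model.

```python
from datetime import datetime, timezone

# Hypothetical registry of informed-consent records, keyed by user ID.
consent_registry: dict = {}

def record_biometric_consent(user_id: str, retention_days: int) -> None:
    """Store the disclosure the user agreed to before any voiceprint is created."""
    consent_registry[user_id] = {
        "granted_at": datetime.now(timezone.utc).isoformat(),
        "retention_days": retention_days,   # Stated retention/destruction schedule
        "may_sell": False,                  # Biometric data is never sold or marketed
    }

def enroll_voiceprint(user_id: str, audio_sample: bytes) -> bytes:
    """Create a voiceprint only if informed consent is on record."""
    if user_id not in consent_registry:
        raise PermissionError("No biometric consent on record; enrollment refused")
    # Placeholder for a real speaker-embedding model.
    return b"voiceprint-template-for-" + user_id.encode("utf-8")

record_biometric_consent("u-7", retention_days=365)
template = enroll_voiceprint("u-7", audio_sample=b"\x00\x01")
```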

Potential Future Legislation

Most existing laws focus on technology development and privacy of personal information. Additional AI laws in development include:

  1. The proposed Algorithmic Accountability Act could require companies to identify and address fairness risks in the algorithms behind AI systems. 
  2. Some states are drafting laws to protect consumers from bias and discrimination in AI systems.

Consequences of Non-Compliance

If you do not adhere to industry regulations, you could face the following consequences:

  1. Civil Monetary Penalties. Under HIPAA, for example, fines range from $100 to $50,000 per violation, with a maximum of $1.5 million for repeated violations of the same provision within a calendar year. 
  2. Criminal Penalties. Fines or jail time could be imposed.
  3. Legal Action. Non-compliance could lead to lawsuits and loss of reputation.

Recommendations

  1. Implement Security: Protect consumers' information with encryption, access controls, and authentication. 
  2. Train staff and contract agents on data protection and patient privacy requirements. 
  3. Monitor and Audit: Regularly check your artificial intelligence system to ensure you comply with U.S. regulations and laws.
  4. Regularly review and audit data gathering and use policies and practices.
  5. Adopt strong security tools and practices to protect an individual’s data.
  6. Develop business privacy policies that are transparent.
  7. Inform users in easily understandable language and obtain their consent to data use.
  8. Monitor new and changing legislation and adapt compliance measures quickly.
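Several of these recommendations, notably security, monitoring, and auditing, can be wired directly into the voice agent's data layer. The decorator below is a minimal Python sketch of that idea; the role names, permitted actions, and log destination are assumptions, not a complete security design.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical mapping of sensitive actions to the roles allowed to perform them.
ALLOWED_ROLES = {
    "read_customer_record": {"agent", "supervisor"},
    "delete_customer_record": {"supervisor"},
}

def audited(action: str):
    """Check the caller's role and write an audit entry for every attempt."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(role: str, *args, **kwargs):
            allowed = role in ALLOWED_ROLES.get(action, set())
            audit_log.info(json.dumps({"action": action, "role": role,
                                       "allowed": allowed, "timestamp": time.time()}))
            if not allowed:
                raise PermissionError(f"Role '{role}' may not perform '{action}'")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@audited("read_customer_record")
def read_customer_record(customer_id: str) -> dict:
    return {"customer_id": customer_id, "status": "active"}

print(read_customer_record("agent", "c-77"))  # Allowed, and the attempt is logged
```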

 U.S. legislation, compliance, and regulatory measures for AI voice assistants are still in development. Continuous monitoring of new developments and changes to existing laws and regulations is the best way to ensure compliance. Those AI voice businesses that are well-informed, proactive in compliance, transparent in their policies, and engaged in secure and ethical practices will set themselves up as trustworthy partners that will benefit as the AI voice technology market grows.