EPIC AI Rulemaking Petition
Calling on the FTC to regulate the use of artificial intelligence in commerce
In February 2020, EPIC filed a petition with the Federal Trade Commission calling on the FTC to conduct a rulemaking concerning the use of artificial intelligence in commerce. "Given the scale of commercial AI use, the rapid pace of AI development, and the very real consequences of AI-enabled decision-making for consumers, the Commission should immediately initiate a rulemaking to define and prevent consumer harms resulting from AI," EPIC urged. EPIC called on the FTC to enforce the AI standards established in the OECD AI Principles, the OMB AI Guidance, and the Universal Guidelines for AI. Several FTC Commissioners have already acknowledged the FTC's role in regulating the use of AI. EPIC's petition followed two prior EPIC complaints to the FTC about the use of AI in employment screening and the secret scoring of young athletes. EPIC's petition is the first formal effort to establish federal regulations for commercial AI use.
- EPIC v. AI Commission: Internal Report Alludes to 'Mass Surveillance,' 'Streets Carpeted with Cameras': In a FOIA lawsuit, EPIC has obtained more documents from the Commission on Artificial Intelligence. The records include internal correspondence and an unattributed report about China's social scoring, facial recognition tools, and AI-based surveillance. The internal report highlights the "draconian" consequences of China's AI use but states that "Mass surveillance is a killer application" for AI and that "having streets carpeted with cameras is good infrastructure for smart cities[.]" The Commission's disclosure to EPIC follows a ruling in EPIC v. AI Commission that the Commission is subject to the FOIA. The AI Commission held over 200 secret meetings with tech firms, defense contractors, and others. EPIC is also litigating to enforce the Commission's obligation to hold open meetings. The case is EPIC v. National Security Commission on AI, No. 19-2906 (D.D.C.). (Apr. 7, 2020)
The Growth of Commercial AI Use
The absence of effective AI regulations in the United States has accelerated the spread of unaccountable and untrustworthy AI tools. And the unregulated use of those AI tools has already caused serious harm to consumers, who are increasingly subject to opaque and unprovable decision-making in employment, credit, healthcare, housing, and criminal justice.
Businesses are currently relying on opaque AI techniques to make life-altering decisions about consumers. The scope of AI use in employment screening is sweeping. HireVue—just one competitor in the employment screening field—has over 700 corporate customers. And the use of opaque AI tools is not limited to the employment context. Students are subject to AI-based analysis, including automated screening of their communications on school-mandated laptops. Individuals are pressed to hand over intimate, real-time health data to insurance giants—data which may be fed into undisclosed risk assessment tools. And DNA testing services GEDmatch and 23andMe rely on proprietary algorithms to develop genetic profiles of consumers, information which law enforcement routinely seeks to obtain and use.
Businesses are employing AI with little or no accountability to consumers. For example, Clearview AI uses a powerful algorithm and billions of facial images collected without consent to power a facial recognition app capable of quickly identifying a person from a single photo. Yet despite the recent public outcry over Clearview's use of AI, individual consumers have little ability to hold the company accountable for developing and operating a facial recognition tool based on their personal data. And Clearview is not alone in the field: companies including Amazon, FaceFirst, and Vigilant Solutions have also developed large-scale—and largely unaccountable—facial recognition tools.
Businesses have failed to ensure that AI is fair to consumers and free from impermissible bias. Although fairness is a cornerstone of responsible AI use, businesses have not demonstrated that their AI tools meet that standard. A report by the National Institute of Standards and Technology analyzed facial recognition algorithms from a "majority of the industry" and found that the software was up to 100 times more likely to return a false positive for a non-white individual than for a white individual.
Businesses are engaged in secret profiling of consumers. Secret scoring can harm the ability of individuals to obtain credit, jobs, housing, and other important opportunities. Predictive consumer scores are used across many industries; examples include Clearview AI's facial recognition database and the Universal Tennis Rating.
Public Policy Frameworks Governing AI Use
Artificial intelligence poses unique risks to human rights, privacy, and autonomy. As Professors Danielle Keats Citron and Frank Pasquale explain, “New algorithmic decisionmakers are sovereign over important aspects of individual lives. If law and due process are absent from this field, we are essentially paving the way to a new feudal order of unaccountable reputational intermediaries.” Accordingly, policymakers and experts have established widely adopted legal standards for the use of AI.
In 2019, the member nations of the OECD, working together with many non-OECD member countries, promulgated the OECD Principles on Artificial Intelligence. The United States has endorsed the OECD AI Principles. The OECD AI Principles establish international standards for AI use:
- Inclusive growth, sustainable development and well-being.
- Human-centered values and fairness.
- Transparency and explainability.
- Robustness, security and safety.
- Accountability.
The Universal Guidelines for Artificial Intelligence, a framework for AI governance based on the protection of human rights, were set out at the 2018 Public Voice meeting in Brussels, Belgium. The Universal Guidelines for AI have been endorsed by more than 250 experts and 60 organizations in 40 countries. The UGAI comprise twelve principles:
- Right to Transparency.
- Right to Human Determination.
- Identification Obligation.
- Fairness Obligation.
- Assessment and Accountability Obligations.
- Accuracy, Reliability, and Validity Obligations.
- Data Quality Obligation.
- Public Safety Obligation.
- Cybersecurity Obligation.
- Prohibition on Secret Profiling.
- Prohibition on Unitary Scoring.
- Termination Obligation.
In January 2020, the Office of Management and Budget, in coordination with the Office of Science and Technology Policy, released its Guidance for Regulation of Artificial Intelligence Applications. The OMB AI Guidance, which applies to “all Federal agencies,” incorporates many of the precepts of the OECD AI Principles and the UGAI. The OMB AI Guidance lays out ten “Principles for the Stewardship of AI Applications”:
- Public Trust in AI.
- Public Participation.
- Scientific Integrity and Information Quality.
- Risk Assessment and Management.
- Benefits and Costs.
- Flexibility.
- Fairness and Non-Discrimination.
- Disclosure and Transparency.
- Safety and Security.
- Interagency Coordination.
The FTC's Rulemaking Authority
Under the FTC Act (15 U.S.C. § 57a), the Commission is empowered to issue trade regulation rules “which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce[.]” These rules “may include requirements prescribed for the purpose of preventing such acts or practices.” A violation of a trade rule “shall constitute an unfair or deceptive act or practice in violation of section 5(a)(1) of [the FTC] Act, unless the Commission otherwise expressly provides in its rule.”
The Commission may initiate a trade regulation rulemaking when “it has reason to believe that the unfair or deceptive acts or practices which are the subject of the proposed rulemaking are prevalent.” Acts or practices are “prevalent” if the Commission “has issued cease and desist orders regarding such acts or practices” or—as in this case—when “information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices.”
The Need for the FTC to Conduct a Rulemaking
The unregulated use of AI to make decisions about consumers is unfair because it “causes or is likely to cause substantial injury to consumers” that is not outweighed by countervailing benefits. Businesses are regularly relying on biometric information, financial records, and other highly sensitive personal data to make individualized AI-based determinations about consumers. Many of these AI applications are completely unknown to consumers, as in the secret collection and processing of billions of facial images by Clearview. And even if consumers are notified that an AI system is in use, they are frequently given no explanation of the decisions made by that system and no meaningful opportunity to opt out. For example, many job applicants have little choice but to submit to HireVue’s AI-based screening tool—or else forgo an ever-growing list of employment opportunities.
Moreover, commercial uses of AI routinely violate established public policy frameworks. The AI tools deployed by HireVue, Clearview, Airbnb, and numerous other corporations flout the U.S.-endorsed OECD AI Principles and the Universal Guidelines for Artificial Intelligence. Instead, these AI tools are often opaque, unaccountable, and unreliable. For these reasons, strong majorities of the public have concluded that automated decisionmaking is “unacceptable” in criminal risk assessment (56%), resume screening (57%), job interviews (67%), and consumer scoring (68%).
Commercial applications of AI have also become sufficiently prevalent that the Commission must address the resulting unfairness to consumers through its trade regulation authority. According to a Gartner study, the commercial use of AI has increased 270% over the past four years, with 37% of businesses now using some form of the technology. By other accounts, the scale of commercial AI is even greater. Nearly half of respondents in one survey reported that “their organizations have embedded at least one [AI capability] into their standard business processes, while another 30 percent report piloting the use of AI.”
FTC Commissioners have repeatedly recognized that the rapid growth of AI implicates the FTC’s regulatory powers. In October 2018, Chairman Simons explained that the rise of AI “challenge[s] all of us to reexamine our regulatory approaches.” The following month, the Commission hosted a public event on “Consumer Protection Implications of Algorithms, Artificial Intelligence, and Predictive Analytics.” Commissioner Wilson recently noted that the FTC “has long been thinking about and grappling with” issues of algorithmic bias and discrimination, and Commissioner Phillips has acknowledged the FTC's power to “regulat[e] the private sector's use of technology and AI[.]” Commissioner Chopra has repeatedly warned about the “opacity and complexity of algorithmic decision-making” and that “we should never assume that algorithms will be free of bias.” And recently, Commissioner Slaughter sounded the alarm about the “mounting and urgent” nature of “algorithmic harms,” noting that a trade regulation rulemaking may be required.
EPIC is a longstanding advocate of algorithmic transparency and legal safeguards for the use of artificial intelligence. As EPIC has explained, “Algorithmic accountability is a complex topic, but the impact cuts broadly across life in America, from jobs and credit to housing and criminal justice.” EPIC has also warned of the urgency to act now: “The United States must work with other democratic countries to establish red lines for certain A.I. applications and ensure fairness, accountability and transparency as A.I. systems are deployed.”
EPIC has specifically recommended that the FTC respond to the emergence of AI techniques that are “unfair or deceptive,” and therefore in violation of the FTC Act. In 2019, EPIC filed a complaint against recruiting company HireVue alleging that the company falsely denied it uses facial recognition and failed to comply with baseline standards for AI decision-making. In 2017, EPIC filed a complaint against the Universal Tennis Rating, a secret, proprietary algorithm used to assign personally identifiable numeric scores to tennis players under 13 years old. EPIC has also filed a complaint against Facebook concerning the company's facial recognition practices.
EPIC publishes the AI Policy Sourcebook, the first reference book on AI policy.
- EPIC's Petition for Rulemaking Concerning Use of Artificial Intelligence in Commerce (Feb. 3, 2020)
- OECD AI Principles
- Universal Guidelines for Artificial Intelligence
- EPIC AI Policy Sourcebook 2020 (2020)
- EPIC, In re HireVue (Nov. 6, 2019)
- EPIC v. AI Commission (seeking public access to the records and meetings of the NSCAI)
- Comments of EPIC to the Council of Europe on Human Rights Impacts of Algorithmic Systems (Aug. 15, 2019)
- Comments of EPIC to OMB on Access to Federal Data for AI Research (Aug. 9, 2019)
- Comments of EPIC to NIST on AI Standards (May 31, 2019)
- Comments of EPIC to DOD on "Insider Threat Management and Analysis Center" (Apr. 22, 2019)
- Statement of EPIC to the Senate Comm. on the Judiciary on U.S. AI Policy (Nov. 30, 2018)
- The Public Voice, Universal Guidelines for Artificial Intelligence (Oct. 23, 2018)
- EPIC et al., Petition to OSTP for Request for Information on Artificial Intelligence Policy (July 4, 2018)
- EPIC v. DOJ, No. 18-5307 (seeking a DOJ report to the President and related records on the use of algorithms in the criminal justice system)
- EPIC, Algorithmic Transparency: End Secret Profiling (2015)
- EPIC et al., Petition for OSTP to Conduct Public Comment Process on Big Data and the Future of Privacy (Feb. 10, 2014)
- EPIC Asks FTC to Regulate Use of AI in Employment Screenings, FindBiometrics, Feb. 5, 2020
- EPIC Petitions FTC for Regulations on AI Use, MeriTalk, Feb. 3, 2020
- EPIC Asks Federal Trade Commission To Regulate Use Of Artificial Intelligence In Pre-Employment Screenings, Forbes, Feb. 3, 2020
- Morning Tech: A call for AI regulation, POLITICO, Feb. 3, 2020
- FTC Should Regulate Artificial Intelligence, EPIC Petition Says, Bloomberg Law, Feb. 3, 2020