Insights Shaping the Future of AI Governance
Equipping leaders with actionable intelligence to design transparent, ethical, and resilient AI governance frameworks.
Why Policy Research Matters
AI systems create complex interdependencies across data pipelines, model architectures, and deployment environments. Policy research is essential for mapping these technical–regulatory interfaces and aligning governance with system behavior. By analyzing reliability, auditability, lifecycle risks, and global regulatory convergence, it supports scalable oversight, consistent assurance mechanisms, and robust, interoperable AI governance frameworks.
Global Regulatory Trends
Gain insight into how nations are shaping AI governance worldwide. We analyze emerging laws, regulatory models, and international standards to help policymakers and organizations anticipate shifts and align with evolving global expectations.
Risk & Impact Assessments
We evaluate the technical, ethical, and societal risks of AI systems to support informed decision-making. Our assessments guide regulators and institutions in designing proportionate safeguards, ensuring responsible deployment and long-term trust.
Lessons from Other Technologies
We draw on governance frameworks from sectors such as data protection, cybersecurity, and biotechnology to inform effective AI regulation. These cross-sector lessons help build resilient, adaptable policies that can scale with advancing technologies.
AI Legislation Bill Status Updates
Stay informed on the progress of AI-related bills across jurisdictions. We track proposals, amendments, and legislative milestones to provide clear visibility into how regulations are evolving. Our updates help policymakers, industry leaders, and stakeholders anticipate compliance needs, evaluate policy trajectories, and align strategic decisions with the latest legislative developments.
Filters
Status
State
Country
Year
Enacted
Establishes requirements for contracts involving digital replicas of individuals, ensuring clear intended-use descriptions and representation to protect against unauthorized or exploitative use
Enacted
Protects residents against unauthorized use of their likeness, voice, or image, including AI-generated deepfakes; strengthens personal rights, allowing individuals to sue for damages and criminally penalizing violations, ensuring artists retain control over their intellectual property
Introduced
Advances U.S. leadership in technical standards by instructing the National Institute of Standards and Technology and the Department of State to take specific steps to facilitate U.S. involvement in creating standards and specifications for AI and other emerging, critical technologies
Introduced
Requires systematic review of AI systems before deployment by the Federal Government
Vetoed
Creates the Frontier Model Division within the California Technology Department to strengthen AI enforcement mechanisms; requires developers of advanced AI models to ensure safety and security through mandatory testing and incident reporting
Enacted
Requires developers and deployers of high-risk AI systems to avoid algorithmic discrimination and follow specified guidelines; mandates that developers disclose risks and provide documentation, while deployers implement risk management policies, conduct impact assessments, and allow consumers to correct data and appeal decisions
Introduced
Directs the Federal Communications Commission to create an AI-powered online tool to help the public identify fraudulent emails, texts, and websites by providing a scam likelihood rating
Introduced
Requires financial, housing, and national security regulators to study and report on the benefits and risks of AI across industries; aims to assess AI’s impact while providing legislative and regulatory recommendations for responsible AI adoption
Introduced
Requires federal financial agencies to study and report on standardized descriptions for vendor-provided AI systems; aims to improve transparency and accountability by assessing data practices, AI model design, and compliance with federal laws
Introduced
Prohibits the publication of non-consensual explicit materials, including AI-generated or “deepfake pornography”; requires websites to establish procedures to remove non-consensual explicit materials upon notification
Introduced
Establishes an administrative subpoena process allowing copyright owners to request records or copies of their works used to train generative AI models; aims to provide transparency and accountability for AI model developers while safeguarding copyright owners’ rights
Draft
Mandates transparency, consumer protection, and risk management for high-risk AI systems; prohibits discriminatory uses of AI systems, enacts enforcement mechanisms, and funds workforce training grants to support ethical AI development
Introduced
Increases penalties for crimes involving fraud that are committed with the assistance of AI; targets AI-enabled crimes, such as wire fraud, mail fraud, bank fraud, and money laundering
Introduced
Extends the Chief Data Officers Council to 2031 and tasks it with addressing data governance and sharing issues hindering AI and emerging technology adoption; requires the council to report best practices to Congress and the OMB
Introduced
Commissions a report on the benefits of using AI to more efficiently identify duplicative grant applications, among other provisions
Enacted
Enacted
Enacted
Enacted
Introduced
Introduced
Introduced
Introduced
Introduced
Introduced
Passed House, To Senate
Introduced
Introduced
Enacted
Sent to Governor
Passed Assembly, To Senate
Passed Assembly, Eligible For Governor
Passed Senate, Sent To Governor
Passed Senate, Eligible For Governor
Passed Senate, Eligible For Governor
Sent to Governor
Sent to Governor
Introduced
Introduced
Passed Senate, To House
Enacted
Passed Senate, To House
Passed House, To Senate
Passed House, To Senate
Passed Assembly, Eligible For Governor
Passed Assembly, Eligible For Governor
Passed Senate, To Assembly
Passed Assembly, Eligible For Governor
Introduced
Introduced
Introduced
Introduced
Introduced
Enacted
Introduced
Introduced
Enacted
Enacted
Enacted
Introduced
Enacted
Amends section 16-1023 of the Arizona Revised Statutes to allow a candidate for public office to bring an action for digital impersonation within two years of knowing, or of when reasonable diligence would have revealed, that a digital impersonation was published. The sole remedy is preliminary and permanent declaratory relief, except as otherwise provided. The candidate must prove that the impersonation was published without the person’s consent and that the publisher did not reasonably disclose that the content was a digital impersonation, or that this was not otherwise obvious. If the impersonation is part of a paid advertisement, a cause of action for declaratory judgment may be brought against the person or entity who ordered, placed, or paid for the advertisement. A person may also petition the superior court of the county in which they reside for a preliminary judgment that a recording or image is a digital impersonation.
Defines digital impersonation as the use of electronic media to depict a person engaging in a sexual act or a criminal act. A person who is not eligible for expedited relief may face significant personal or financial hardship or loss of employment opportunities. The publisher has the right to appear, be heard, and present evidence before a preliminary declaratory judgment is entered; if the publisher does not appear and no other party intervenes as a defendant, the plaintiff is not entitled to taxable costs. If any provision of the statute is deemed invalid, other provisions or applications of the act that can be given effect without that provision are unaffected. A person bringing an action for digital impersonation may recover declaratory relief and damages if all requirements are met. The section is to be construed narrowly in favor of free and open discourse on matters of public concern and artistic expression and does not limit a party’s constitutional right to trial by jury.
Enacted
Introduced
Enacted
Establishes penalties for the dissemination of deepfakes without consent or clear disclosure
Passed Senate To House
Amends definitions of sexual extortion offenses, including those concerning minors,
to include the disclosure of AI-generated explicit materials
Passed House To Senate
Prohibits the distribution of deceptive synthetic media within 90 days of an election;
requires clear disclosures for such media and allows candidates to seek legal relief and
damages if depicted unfairly
Enacted
Provides requirements for school districts to receive grant funds to implement AI in
support of students and teachers, among other education provisions
Passed Senate To House
Prohibits an employer that uses predictive data analytics for employment decisions from
considering an applicant’s race or zip code
Passed Senate To House
Stipulates that certain agreement provisions relating to the use of digital replicas
or generative AI systems for personal or professional services may be deemed
unenforceable under certain circumstances
Passed Senate (Eligible for Governor)
Establishes the Generative AI and Natural Language Processing Task Force to assess
generative AI and natural language processing software and their uses; directs the task
force to make recommendations concerning generative AI, among other provisions
Enacted
Prohibits the unauthorized dissemination of AI-generated explicit images depicting
individuals without consent and establishes penalties; targets AI-generated content to
prevent harassment and privacy violations
Passed to House
Prohibits the dissemination of synthetic media that illustrate an individual’s intimate
parts or an individual conducting sexual acts without the consent of the depicted
individual
Sent to Governor
Requires developers and deployers of high-risk AI systems to avoid algorithmic
discrimination and follow specified guidelines; mandates that developers disclose risks
and provide documentation, while deployers implement risk management policies,
conduct impact assessments, and allow consumers to correct data and appeal decisions
Passed House
Provides a legal remedy to candidates for public office if they are targeted by digital
impersonations; permits victims to seek declaratory relief within two years of
becoming aware of the impersonation
Passed to Senate
Establishes a pilot program to explore ways AI can be used to support the Consumer
Product Safety Commission; directs the Secretary of Commerce and the Federal Trade
Commission to study the use of blockchain technology and tokens
Introduced (HAC)
Amends the Federal Election Campaign Act of 1971 to prohibit the distribution of
materially deceptive AI-generated audio or visual media relating to candidates for
federal office
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Gulf countries have been investing in AI research and development.
Discussion
Responsible for AI governance, policy, and regulation in Saudi Arabia.
Discussion
Different countries in South America have been exploring AI strategies.
Discussion
Efforts to leverage AI for development and address local challenges.
Discussion
Focuses on AI research, skills development, and industry collaboration.
Proposed
Aims to create a thriving AI ecosystem in New Zealand.
Discussion
Provides guidelines for ethical and accountable AI deployment.
Discussion
Focuses on AI research, technology transfer, and societal empowerment.
Discussion
Promotes AI adoption, research, workforce development, and ethics.
Proposed
Aims to promote AI adoption, research, and development in India.
Discussion
Offers guidance on responsible AI development and deployment.
Discussion
Includes state-level bills related to data privacy, consumer protection, algorithmic accountability, and more.
Discussion
Regulates biometric data, which can include voiceprints for AI.
Proposed
Aims to regulate personal data usage, includes provisions for AI.
Discussion
Requires data brokers to register and protect consumer data.
Discussion
Emphasizes principles of data privacy, control, and AI ethics.
Discussion
Discussion
Proposed
Addresses AI risk assessment, transparency, accountability for high-risk AI systems. GDPR influences AI data handling.
Discussion
Encourages collaboration for AI research, development, and ethics.
Discussion
Contains provisions for AI-related issues and transparency.
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Discussion
Failed
The law would also permit consumers to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” Profiling is defined as “any type of automated processing performed on personal data to evaluate, analyze, or predict personal aspects,” such as “economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Finally, the law would mandate that companies conduct a data protection assessment on their profiling activities, since profiling would be considered a processing activity with a heightened risk of harm to the consumer.
Proposed
Introduced on January 18 and 19, 2023, the Massachusetts Data Privacy Protection Act (MDPPA) was filed in both the Senate (SD 745) and the House (HD 2281). The bill is based on the federal American Data Privacy Protection Act, with additional provisions relating to workplace surveillance. The MDPPA would require companies to conduct impact assessments if they use a “covered algorithm” in a way that poses a consequential risk of harm to individuals. “Covered algorithm” is defined as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including determining the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.”
Discussion
Introduced (HHSC)
Establishes a task force in the Cybersecurity and Infrastructure Security Agency (CISA)
to address AI safety and security challenges and coordinate efforts to improve the
deployment of AI systems
Introduced (HAC)
Directs the Election Assistance Commission to issue a report containing voluntary
guidelines for election administration-related uses of AI technologies and
cybersecurity risks
Introduced
Ending FCC Meddling in Our Elections Act. Prohibits the Federal Communications Commission from enforcing rules related to AI-generated content disclosure requirements in political advertisements.
Introduced
Next Generation Military Education Act. Directs the Department of Defense’s Chief Digital and Artificial Intelligence Officer to create an online AI education course, requires military branches to participate in the “Digital On-Demand” initiative, and mandates the inclusion of AI risks and threats in annual cybersecurity training.
Introduced
Directs the Department of Defense to develop a plan for the establishment of a secure computing and data storage environment for the testing of AI trained on biological data, and for other purposes
Introduced
Prohibits the use of AI-generated content for fraudulent misrepresentations of political candidates; broadens existing fraud rules to include anyone falsely acting on the behalf of a candidate or political group, and removes the requirement that such misrepresentations must be “damaging” to a candidate or party
Introduced
Mandates a study to examine the impact of AI technologies on job outlooks across sectors, with a focus on diverse demographic groups to prepare workers for AI-driven industries and ensure equitable access to technology job opportunities; establishes a $250 million grant program to fund K-12 educational programs, teacher development, and workforce upskilling in emerging technologies
Introduced
Mandates a study to examine the impact of AI technologies on job outlooks across sectors, with a focus on diverse demographic groups to prepare workers for AI-driven industries and ensure equitable access to technology job opportunities; establishes a $250 million grant program to fund K-12 educational programs, teacher development, and workforce upskilling in emerging technologies
Introduced
Directs NIST to develop and update best practices for AI systems, focusing on transparency, security, and risk management; mandates that NIST collaborates with stakeholders and reports to Congress within 18 months on their findings and recommendations
Introduced
Creates a new center to strengthen U.S. leadership in AI research, focusing on the reliability and safety of AI systems; mandates collaboration with federal agencies and private sectors to address vulnerabilities and improve AI standards
Introduced
Directs NOAA to use AI for improved forecasting and adaptation to extreme weather; promotes partnerships and open access to data for advancing weather and climate science
Introduced
Requires the Department of Homeland Security to notify Congress of all uses or extensions of transaction authority involving AI technology; amends Section 831 of the Homeland Security Act of 2002, related to DHS and other transaction agreements
Introduced
Establishes the AI Grand Challenges Program to award prizes for advancements in AI across various fields like national security, health, and cybersecurity; authorizes prize amounts up to $50 million and requires regular reporting to Congress
Passed to Senate
Requires the FAA administrator to review AI technologies that could enhance airport
safety and productivity regarding jet bridges, service vehicle traffic, air traffic control,
and aircraft taxi; investigates, with other federal agencies, the impact of Chinese
AI on airport operations
Introduced (SFRC)
Directs the Secretary of Defense to establish a working group to develop and
coordinate an AI initiative among the Five Eyes countries
Passed to House
Establishes civil liability for the defamation of a candidate based on the use of
synthetic media, among other election provisions
Passed to House
Provides that individuals pursuing civil action regarding defamation in AI-generated
media products are entitled to the same damages as victims of conventional
defamation; bans AI-generated media in election-related content without disclosure
Passed Senate, To Assembly
Amends California’s child pornography laws to include AI-generated explicit images of
minors
Passed House To Senate
Criminalizes the use of deepfake technology to influence elections, making it a Class B
misdemeanor, with increased penalties for violence or repeat offenses; provides certain
exceptions for speech and media rights
Enacted
Establishes the Talent Innovation Fund within the Maryland Department of Labor to help residents take advantage of job training opportunities in artificial intelligence, cybersecurity, and other industries important to the state
Enacted
Requires state government agencies to conduct annual inventories and impact assessments for systems that employ AI
Enacted
Requires the Department of Information Technology to analyze the feasibility of creating a 3-1-1 portal that uses artificial intelligence; if the portal is feasible, the Department must prioritize the creation of the portal
Introduced
Authorizes the National Science Foundation to disburse scholarships to further research; creates NSF programs to fund AI research in education, agriculture, and advanced manufacturing
Introduced
Codifies the NSF’s ExpandAI program to enhance AI capacity-building projects in populations historically underrepresented in STEM; supports partnerships within a broad interdisciplinary research community to broaden AI research, education, and workforce development
Introduced (FAC)
Amends the Export Control Reform Act of 2018 to include artificial intelligence
systems; authorizes the President to control exports of AI systems that have national
security implications
Passed Senate (Eligible for Governor)
Updates the state AI task force, expanding its membership to 17 and including new
experts in generative AI, discrimination advocacy, and youth safety; broadens the task
force’s study to include AI and biometric technology
Discussion
Introduced
Introduced
Passed House (Eligible For Governor)
Mandates the use of AI tools for classroom instruction and develops AI training
programs for educators and students
Passed House To Senate
Establishes the Delaware AI Commission to advise the General Assembly and Department
of Technology and Information on AI use and safety; directs the Commission to inventory
generative AI usage in state agencies and identify high-risk areas
Passed House To Senate
Provides civil and criminal remedies for wrongful disclosure of deepfakes depicting
nudity or sexual conduct, aligning with existing Delaware laws; designates felony
charges for adults creating such depictions of minors
Passed to Senate
Requires federal agencies to determine whether computer programs submitted comments on federal rules; requires the Government Accountability Office (GAO) to report on computer-generated comments and their effects on rulemaking procedures
Discussion
Discussion
Discussion
Discussion
Discussion
Enacted
Amends the definition of “material” regarding the sexual exploitation of children, to
include images created by, adapted, or modified by AI tools
Discussion
Enacted
Enacted
Discussion
Discussion
Discussion
Enacted
Enacted
Enacted
Enacted
Amends and adds to existing law to provide for the crime of visual representations of the sexual abuse of children and to revise provisions regarding the Internet Crimes Against Children Unit.
Enacted
Prohibits a developer, as defined, from collecting and using the personal information of consumers under 16 years of age to train an artificial intelligence system.
Enacted
Elections: Deceptive Media in Advertisements. Expands the period in which entities are prohibited from knowingly distributing election material containing deceptive AI-generated or manipulated content; expands the scope of existing laws to prohibit deceptive AI-generated content related to candidates, elected officials, and others.
Enacted
Contracts Against Public Policy: Digital Replicas. Requires contracts to clearly define the use of AI-generated replicas of a performer’s voice or likeness; mandates that performers must have professional representation during contract negotiations to safeguard their digital rights and protect against unauthorized AI replication.
Enacted
Defending Democracy from Deepfake Deception Act of 2024. Requires large online platforms to restrict materially deceptive election content that has been digitally altered during specified timeframes; exempts certain media outlets under specific requirements; allows candidates and certain officials to seek injunctive relief for platform noncompliance.
Enacted
Mandates that election advertisements disclose the use of content generated or altered by AI; authorizes the Fair Political Practices Commission to enforce these requirements through legal action or other remedies under the Political Reform Act
Enacted
Existing Maryland law, HB 1202, prohibits an employer from using a facial recognition service for the purpose of creating a facial template during an applicant’s pre-employment interview, unless the applicant consents by signing a specified waiver. This workplace AI law went into force on October 1, 2020.
Passed Senate, Sent To Governor
Discussion
Enacted
Enacted
Enacted
Enacted
Enacted
Enacted
Passed Assembly, To Senate
Grants consumers various rights pertaining to the personal information collected or
sold by a business, including the right to prohibit its sale
Discussion
AB 2013, Irwin. Generative artificial intelligence: training data transparency.
Existing law requires the Department of Technology, in coordination with other interagency bodies, to conduct, on or before September 1, 2024, a comprehensive inventory of all high-risk automated decision systems, as defined, that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, state agencies, as defined.
This bill would require, on or before January 1, 2026, and before each time thereafter that a generative artificial intelligence system or service, as defined, or a substantial modification to a generative artificial intelligence system or service, released on or after January 1, 2022, is made available to Californians for use, regardless of whether the terms of that use include compensation, a developer of the system or service to post on the developer’s internet website documentation, as specified, regarding the data used to train the generative artificial intelligence system or service. The bill would require that this documentation include, among other requirements, a high-level summary of the datasets used in the development of the system or service, as specified.
Enacted
Discussion
Discussion
Enacted
Enacted
Enacted
Prohibits the commercial use of digital replicas of deceased performers in entertainment media without estate consent; aims to prevent unauthorized use of the performer’s digital likeness
Enacted
Enacted
Enacted
Enacted
Establishes an artificial intelligence task force charged with examining and evaluating the use of artificial intelligence technology within state agencies.
Introduced (SCSTC)
Updates cybersecurity reporting systems to improve information sharing between the
federal government and private companies; establishes a public database to track AI
security incidents
Introduced (SCSTC)
Directs the National Science Foundation (NSF) to establish the AI Grand Challenges
Program; awards $1 million minimum competitive prizes for AI research and
development
Enacted
Passed
Requiring a disclosure of deceptive artificial intelligence usage in political advertising.
Enacted
Draft
Sens. Elizabeth Warren, D-Mass., and Eric Schmitt, R-Mo., introduced new legislation known as the Protecting AI and Cloud Competition in Defense Act.
Proposed
Introduced on December 5, 2022, Bill A4909 would regulate the “use of automated tools in hiring decisions to minimize discrimination in employment.” The bill imposes limitations on the sale of automated employment decision tools (AEDTs), including mandated bias audits, and requires that candidates be notified that an AEDT was used in connection with an application for employment within 30 days of the use of the tool.
Discussion
Discussion
Discussion
Discussion
Introduced
To require generative artificial intelligence systems to disclose that their output has been generated by artificial intelligence, and for other purposes.
Failed
Introduced on May 23, 2023, the Data Privacy and Protection Act, HP 1270, is a comprehensive bill aimed at protecting consumer data. The Act includes retention limits, use restrictions, and reporting requirements. Section 9615 specifically governs the use of algorithms. The section applies to covered entities, defined as “a person, other than an individual acting in a non-commercial context, that alone or jointly with others determines the purposes and means of collecting, processing or transferring covered data”, excluding small businesses. The Act provides that covered entities using covered algorithms (broadly defined, including machine learning, AI, and natural language processing tools) to collect, process, or transfer data “in a manner that poses a consequential risk of harm” complete an impact assessment of the algorithm. The impact assessment must be submitted to the Attorney General’s office within 30 days of finishing it. The assessment must include a publicly available and easily accessible summary. In addition to an impact assessment, the Act requires covered entities to create a design evaluation prior to deploying a covered algorithm. The design evaluation must include the design, structure, and inputs of the covered algorithm. This bill includes a private right of action and allows for the recovery of punitive damages. It is currently pending in the Maine Senate. If enacted, the first assessment will be due two years from the day the bill is enacted.
Discussion
Proposed
Introduced on March 30, 2023, HB62236, the Rhode Island Data Transparency and Privacy Protection Act, would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia. Among its requirements, the bill provides consumers with the right to opt out of the processing of their personal data for purposes of “profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the customer.” Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of unfair or deceptive treatment of, or unlawful disparate impact on, customers, financial, physical or reputational injury to customers, a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of customers, where such intrusion would be offensive to a reasonable person, or other substantial injury to customers[.]”
Discussion
Proposed
Introduced on March 27, 2023, HB708 would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia. Among its requirements, the bill provides consumers with the right to opt out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.” Profiling is defined as a “form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of: (i) discriminatory, unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (ii) financial, physical or reputational injury to consumers; (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (iv) other substantial injury to consumers.”
Discussion
Failed
Introduced on March 10, 2023, SB 5641 would amend labor law to establish criteria for the use of automated employment decision tools (AEDTs). The proposed bill mirrors NYC’s Local Law 144 in many ways. In particular, employers who utilize AEDTs must: (1) obtain from the seller of the AEDT a disparate impact analysis, not less than annually; (2) ensure that the date of the most recent disparate impact analysis and a summary of the results, along with the distribution date of the AEDT, are publicly available on the employer’s or employment agency’s website prior to the implementation or use of such tool; and (3) annually provide the labor department a summary of the most recent disparate impact analysis.
Failed
Introduced on March 10, 2023, HB4695 would prohibit the use of artificial intelligence technology to provide counseling, therapy, or other mental health services unless (1) the artificial intelligence technology application through which the services are provided is an application approved by the commission; and (2) the person providing the services is a licensed mental health professional or a person that makes a licensed mental health professional available at all times to each person who receives services through the artificial intelligence technology. The artificial intelligence technology must undergo testing and approval by the Texas Health and Human Services Commission, the results of which will be made publicly available. If passed, the law would take effect September 1, 2023.
Failed
Introduced on March 7, 2023, A5309, would amend state finance law to require that where state units purchase a product or service that is or contains an algorithmic decision system, that such product or service adheres to responsible artificial intelligence standards. The bill requires the commissioner of taxation and finance to adopt regulations in support of the law.
Proposed
Introduced on March 7, 2023, HB49, would direct the Department of State to establish a registry of businesses operating artificial intelligence systems in the State. The registry would include (1) The name of the business operating artificial intelligence systems; (2) The IP address of the business; (3) The type of code the business is utilizing for artificial intelligence; (4) The intent of the software being utilized; (5) The personal information and first and last name of a contact person at the business; (6) The address, electronic email address and ten-digit telephone number of the contact person; and (7) A signed statement indicating that the business operating an artificial intelligence system has agreed for the Department of State to store the business’ information on the registry.
Failed
Introduced on March 1, 2023, HF2309, would create an omnibus consumer privacy law based on the Colorado Privacy Act and Connecticut Data Privacy Act, to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data privacy and protection assessment for high-risk profiling activities.
Discussion
Failed
Introduced on February 17, 2023, HB 3385, would create the Illinois Data Privacy and Protection Act, to regulate, among other data uses, the collection and processing of personal information and the use of “covered algorithms.” The bill defines “covered algorithm” broadly as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.” “Covered algorithm” is defined but not used further in the bill.
Discussion
Enacted
The California Consumer Privacy Act of 2018 (CCPA) grants a consumer various rights with respect to personal information that is collected or sold by a business, as defined, including the right to direct a business that sells or shares personal information about the consumer to third parties not to sell or share the consumer’s personal information, as specified. The California Privacy Rights Act of 2020, approved by the voters as Proposition 24 at the November 3, 2020, statewide general election, amended, added to, and reenacted the CCPA.
This bill would require a business to which another business transfers the personal information of a consumer as an asset that is part of a merger, acquisition, bankruptcy, or other transaction in which the transferee assumes control of all or part of the transferor to comply with a consumer’s opt-out direction to the transferor.
This bill would declare that its provisions further the purposes and intent of the California Privacy Rights Act of 2020.
Proposed
Introduced on February 16, 2023, HB1974, would regulate the use of artificial intelligence (AI) in providing mental health services. In particular, the bill provides that the use of AI by any licensed mental health professional in the provision of mental health services must satisfy the following conditions: (1) pre-approval from the relevant professional licensing board; (2) any AI system used must be designed to prioritize safety and must be continuously monitored by the mental health professional to ensure its safety and effectiveness; (3) patients must be informed of the use of AI in their treatment and be afforded the option to receive treatment from a licensed mental health professional; and (4) patients must provide their informed consent to receiving mental health services through the use of AI. AI is defined as “any technology that can simulate human intelligence, including but not limited to, natural language processing, training language models, reinforcement learning from human feedback and machine learning systems.”
Proposed
Introduced on February 16, 2023, H1873, An Act Preventing A Dystopian Work Environment, would require that employers provide employees and independent contractors (collectively, “workers”) with a particularized notice prior to the use of an Automated Decision System (ADS) and the right to request information, including, among other things, whether their data is being used as an input for the ADS, and what ADS output is generated based on that data. “Automated Decision System (ADS)” or “algorithm” is defined as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes or assists an employment-related decision.” The bill further requires that employers review and adjust as appropriate any employment-related decisions or ADS outputs that were partially or solely based on inaccurate data, and inform the worker of the adjustment. Employers and vendors acting on behalf of an employer must maintain an updated list of all ADS currently in use, and must submit this list to the department of labor on or before January 31 of each year. The bill also prohibits the use of ADSs in certain circumstances and requires the performance of algorithmic impact assessments.
Proposed
Introduced on February 16, 2023, SB31, an act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT, would require any company operating a large-scale generative artificial intelligence model to adhere to certain operating standards, such as reasonable security measures to protect the data of individuals used to train the model, informed consent from individuals before collecting, using, or disclosing their data, and performance of regular risk assessments. A “large-scale generative artificial intelligence model” is defined to mean “a machine learning model with a capacity of at least one billion parameters that generates text or other forms of output, such as ChatGPT.” The bill further requires any company operating a large-scale generative artificial intelligence model to register with the Attorney General and provide certain enumerated information regarding the model.
Enacted
Introduced on February 16, 2023, SB384, An act establishing the Consumer Data Privacy Act, would create an omnibus consumer privacy law, to regulate, among other data uses, the collection and processing of personal information, and profiling and automated decision-making. Specifically, the bill creates certain transparency requirements around profiling and enables individuals to opt-out of “profiling in furtherance of automated decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.
Discussion
In 2019, Illinois became the first state to enact restrictions with respect to the use of AI in hiring. The Illinois AI Video Interview Act was amended in 2021 and went into effect in 2022, and now requires employers using AI-enabled assessments to: (1) notify applicants of AI use; (2) explain how the AI works and the “general types of characteristics” it uses to evaluate applicants; (3) obtain their consent; (4) share any applicant videos only with service providers engaged in evaluating the applicant; (5) upon an applicant’s request, destroy all copies of the applicant’s videos and instruct service providers to do so as well; and (6) report annually, after use of AI, a demographic breakdown of the applicants they offered an interview, those they did not, and the ones they hired.
Failed
Introduced on February 14, 2023, HB3498, the Consumer Data Protection Act, would create an omnibus consumer privacy law. The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt-out of the processing of their personal data for the purpose of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.” Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.
Discussion
Introduced on February 3, 2023, HB1844, the Texas Data Privacy and Security Act, is based on the Virginia Consumer Data Protection Act. If passed, the bill would create similar requirements enabling individuals to opt-out of “profiling” that produces a legal or similarly significant effect concerning the individual. Controllers must also perform a data protection assessment for high-risk profiling activities.
Enacted
Introduced on February 2, 2023, B114, the Stop Discrimination by Algorithms Act of 2023 (SDAA), would prohibit both for-profit and nonprofit organizations from using algorithms that make decisions based on protected personal traits. This bill makes it unlawful for a DC business to make a decision stemming from an algorithm if it is based on a broad range of personal characteristics, including actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income or disability, in a manner that makes “important life opportunities” unavailable to that individual or class of individuals. Any covered entity or service provider who violates the act would be liable for a civil penalty of up to $10,000 per violation.
Proposed
Introduced on February 1, 2023, SB146, would prohibit certain uses of automated decision systems and algorithmic operations in connection with video-lottery terminals and sports betting applications. The law would take effect upon passage.
Failed
Introduced on January 31, 2023, SB5643 and its companion HB1616, the People’s Privacy Act, would prohibit a covered entity or Washington governmental entity from operating, installing, or commissioning the operation or installation of equipment incorporating “artificial intelligence-enabled profiling” in any place of public resort, accommodation, assemblage, or amusement, or from using artificial intelligence-enabled profiling to make decisions that produce legal effects (e.g., denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water) or similarly significant effects concerning individuals. “Artificial intelligence-enabled profiling” is defined as the “automated or semiautomated process by which the external or internal characteristics of an individual are analyzed to determine, infer, or characterize an individual’s state of mind, character, propensities, protected class status, political affiliation, religious beliefs or religious affiliation, immigration status, or employability.” The bill also bans the use of “face recognition” in any place of public resort, accommodation, assemblage, or amusement. “Face recognition” is defined as “(i) An automated or semiautomated process by which an individual is identified or attempted to be identified based on the characteristics of the individual’s face; or (ii) an automated or semiautomated process by which the characteristics of an individual’s face are analyzed to determine the individual’s sentiment, state of mind, or other propensities including, but not limited to, the person’s level of dangerousness[.]”
Proposed
Introduced on January 30, 2023, AB 331, would, among other things, require an entity that uses an automated decision tool (ADT) to make a consequential decision (deployer), and a developer of an ADT, to, on or before January 1, 2025, and annually thereafter, perform an impact assessment for any ADT used that includes, among other things, a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts. The bill requires a deployer or developer to provide the impact assessment to the Civil Rights Department within 60 days of its completion. Before using an ADT to make a consequential decision, deployers must notify any natural person that is the subject of the consequential decision that the deployer is using an ADT to make, or be a controlling factor in making, the consequential decision. Deployers are also required to accommodate a natural person’s request to not be subject to the ADT and to be subject to an alternative selection process or accommodation if a consequential decision is made solely based on the output of an ADT, assuming that an alternate process is technically feasible. This bill would also prohibit a deployer from using an ADT in a manner that contributes to algorithmic discrimination. Finally, the bill includes a private right of action, which would open the door to significant litigation risk for users of ADTs.
Discussion
An act relating to enhancing consumer privacy and the age-appropriate design code
Proposed
Introduced on January 25, 2023, H114, would restrict the use of electronic monitoring of employees and the use of automated decision systems (ADSs) for employment-related decisions. Electronic monitoring of employees may only be conducted when, for example, the monitoring is used to ensure compliance with applicable employment or labor laws or to protect employee safety, and certain notice is given to employees 15 days prior to commencement of the monitoring. ADSs must also meet a number of requirements, including corroboration of system outputs by human oversight of the employee and creation of a written impact assessment prior to using the ADS.
Proposed
Introduced on January 19, 2023, SB 255, would create an omnibus consumer privacy law based on a composite of the Colorado Privacy Act, Connecticut Data Privacy Act, and Virginia Consumer Data Protection Act. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt-out of “profiling in furtherance of solely automated decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.
Proposed
Introduced on January 20, 2023, SB974, the Hawaii Consumer Data Protection Act, would establish a framework to regulate controllers’ and processors’ access to personal consumer data and introduces penalties, as well as a new consumer privacy special fund. The bill also provides consumers the option to opt-out of the processing of their personal data for the purposes of “profiling in furtherance of decisions made by the controller that results in the provision or denial by the controller of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, health care services, or access to basic necessities, including food and water.” “Profiling” is defined as any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. The bill further requires covered entities to conduct a data protection assessment when they process personal data for purposes of profiling and the profiling presents “a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical intrusion or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers[.]”
Failed
Introduced on January 20, 2023, SB1110, an alternate version of the Hawaii Consumer Data Protection Act, would create materially similar obligations with respect to “profiling” as SB974.
Proposed
Introduced on January 20, 2023, in both the Senate SD1971 (assigned SB227), and in the House HD3263, the Massachusetts Information Privacy and Security Act (MIPSA) creates various rights for individuals regarding the processing of their personal information, including the right to a privacy notice at or before the point of collection of an individual’s personal information, the right to opt out of the processing of an individual’s personal information for the purposes of sale and targeted advertising, rights to access and transport, delete, and correct personal information, and the right to revoke consent. Additionally, large data holders are required to perform risk assessments where the processing is based in whole or in part on an algorithmic computational process. A “large data holder” is a controller that, in a calendar year: (1) has annual global gross revenues in excess of $1,000,000,000; and (2) determines the purposes and means of processing of the personal information of not less than 200,000 individuals, excluding personal information processed solely for the purpose of completing a payment-only credit, check or cash transaction where no personal information is retained about the individual entering into the transaction.
Proposed
Introduced on January 9, 2023, SB619, relating to protections for the personal data of consumers, would create an omnibus consumer privacy law. The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt-out of processing for the purpose of “profiling the consumer to support decisions that produce legal effects or effects of similar significance.” Profiling is defined as “an automated processing of personal data for the purpose of evaluating, analyzing or predicting an identified or identifiable consumer’s economic circumstances, health, personal preferences, interests, reliability, behavior, location or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.
Enacted
Introduced on January 9, 2023, SB5, would create an omnibus consumer privacy law along the lines of the Virginia Consumer Data Protection Act and the Colorado Privacy Act, to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]” Controllers must also perform a data protection impact assessment for high-risk profiling activities.
Proposed
Introduced on January 4, 2023, SB73, and companion bill HB1181, introduced on January 31, 2023, the Tennessee Information Protection Act, would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia. Among its requirements, the bill mandates the performance of data protection assessments in connection with “profiling” where the profiling presents a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers. “Profiling” is defined as “a form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]” The bill gives the Tennessee Attorney General’s Office authority to impose civil penalties on companies who violate the law.
Failed
Introduced on January 4, 2023, A216, would require advertisements to disclose the use of synthetic media. Synthetic media is defined as “a computer-generated voice, photograph, image, or likeness created or modified through the use of artificial intelligence and intended to produce or reproduce a human voice, photograph, image, or likeness, or a video created or modified through an artificial intelligence algorithm that is created to produce or reproduce a human likeness.” Violators would be subject to a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.
Enacted
The Connecticut Data Privacy Act (CTDPA), which goes into force on July 1, 2023, provides consumers the right to opt-out of profiling if such profiling is in furtherance of automated decision-making that produces legal or other similarly significant effects. Controllers must also perform data risk assessments prior to processing consumer data when such processing presents a “heightened risk of harm.” These situations include certain profiling activities that present a reasonably foreseeable risk of unfair or deceptive treatment of or unlawful disparate impact on consumers, financial, physical or reputational injury to consumers, physical or other intrusion into the solitude, seclusion or private affairs or concerns of consumers that would be offensive to a reasonable person, or other substantial injury to consumers.
Discussion
Enacted
Discussion
Proposed
Introduced on February 10, 2022, S1402, provides that it is unlawful discrimination and a violation of the law against discrimination for an automated decision system (ADS) to discriminate against any person or group of persons who is a member of a protected class in: (1) the granting, withholding, extending, modifying, renewing, or purchasing, or in the fixing of the rates, terms, conditions or provisions of any loan, extension of credit or financial assistance; (2) refusing to insure or continuing to insure, limiting the amount, extent or kind of insurance coverage, or charging a different rate for the same insurance coverage provided to persons who are not members of the protected class; or (3) the provision of health care services. Under the bill, ADS means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making. An ADS is discriminatory if the system selects individuals who are members of a protected class for participation or eligibility for services at a rate that is disproportionate to the rate at which the system selects individuals who are not members of the protected class. If passed, the law would take effect on the first day of the third month next following enactment.
Proposed
Introduced on January 1, 2022, A537, would require an automobile insurer using an automated or predictive underwriting system to annually provide documentation and analysis to the Department of Banking and Insurance to demonstrate that there is no discriminatory outcome in the pricing on the basis of race, ethnicity, sexual orientation, or religion, that is determined by the use of the insurer’s automated or predictive underwriting system. Under this bill, “automated or predictive underwriting system” is defined to mean a computer-generated process that is used to evaluate the risk of a policyholder and to determine an insurance rate. An automated or predictive underwriting system may include, but is not limited to, the use of robotic process automation, artificial intelligence, or other specialized technology in its underwriting process.
Discussion
Enacted
The Colorado Privacy Act (CPA), which goes into force on July 1, 2023, provides consumers the right to opt-out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects.” The law defines those decisions as “a decision that results in the provision or denial of financial and lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services, or access to essential goods or services.” The CPA further requires that controllers conduct a data protection impact assessment (DPIA) if the processing of personal data creates a heightened risk of harm to a consumer. Processing that presents a heightened risk of harm to a consumer includes profiling if the profiling presents a reasonably foreseeable risk of: (1) unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (2) financial or physical injury to consumers; (3) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or (4) other substantial injury to consumers. All of which means that deployers of automated decision-making (which may or may not use AI) need to ensure that their design and implementation do not create the heightened risks outlined above, and are included in their DPIA. On March 15, 2023, the Colorado Attorney General’s Office finalized rules implementing the CPA.
Enacted
In 2021, Colorado enacted SB 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, a law intended to protect consumers from unfair discrimination in insurance rate-setting mechanisms. The law applies to insurers’ use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS in “insurance practices,” that “unfairly discriminate” based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. On February 1, 2023, the Colorado Division of Insurance (CDI) released a draft of the first of several regulations to implement the bill. At the time of publication, the regulations were still in the proposal stage.
Enacted
The Virginia Consumer Data Protection Act (VCDPA), which went into force on January 1, 2023, sets out rules for profiling and automated decision-making. Specifically, the VCDPA enables individuals to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer, which is generally defined as “the denial and/or provision of financial and lending services, housing, insurance, education enrollment or opportunities, criminal justice, employment opportunities, healthcare services, or access to basic necessities.” Controllers must also perform a data protection impact assessment for high-risk profiling activities.
Discussion
Impacts AI systems using personal data, gives consumers rights.
Enacted
In December 2021, New York City passed the first law (Local Law 144) in the United States requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. The law imposes notice and reporting obligations. Specifically, employers who utilize automated employment decision tools (AEDTs) must: (1) subject AEDTs to a bias audit, conducted by an independent auditor, within one year of their use; (2) ensure that the date of the most recent bias audit and a “summary of the results,” along with the distribution date of the AEDT, are publicly available on the career or jobs section of the employer’s or employment agency’s website; (3) provide each resident of NYC who has applied for a position (internal or external) with a notice that discloses that their application will be subject to an automated tool, identifies the specific job qualifications and characteristics that the tool will use in making its assessment, and informs candidates of their right to request an alternative selection process or accommodation (the notice shall be issued on an individual basis at least 10 business days before the use of a tool); and (4) allow candidates or employees to request alternative evaluation processes as an accommodation. While enforcement of the law has been delayed multiple times pending finalization of the law’s implementing rules, on April 6, 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law will now go into effect on May 6, and enforcement will begin on July 6.

Introduced on January 4, 2023, SB 365, the New York Privacy Act, would be the state’s first comprehensive privacy law. The law would require companies to disclose their use of automated decision-making that could have a “materially detrimental effect” on consumers, such as a denial of financial services, housing, public accommodation, health care services, insurance, or access to basic necessities; or could produce legal or similarly significant effects.
Companies must provide a mechanism for a consumer to formally contest a negative automated decision and obtain a human review of the decision, and must conduct an annual impact assessment of their automated decision-making practices to avoid bias, discrimination, unfairness or inaccuracies.
Discussion
Enacted
Introduced in 2018 as SB 1001, the Bolstering Online Transparency Act (BOT) went into effect in July 2019. BOT makes it unlawful for a person or entity to use a bot to communicate or interact online with a person in California in order to incentivize a sale or transaction of goods or services or to influence a vote in an election without disclosing that the communication is via a bot. The law defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law applies only to communications with persons in California. In addition, it applies only to public-facing websites, applications, or social networks that have at least 10 million monthly U.S. visitors or users. BOT does not provide a private right of action.
Discussion
Proposed
Introduced on January 28, 2023, SB404, would prohibit any operator of a website, an online service, or an online or mobile application, including any social media platform, from utilizing an automated decision system (ADS) for content placement, including feeds, posts, advertisements, or product offerings, for a user under the age of eighteen. In addition, an operator that utilizes an ADS for content placement for residents of South Carolina who are eighteen years or older shall perform an age verification through an independent, third-party age-verification service, unless the operator employs the bill’s prescribed protections to ensure age verification. The bill includes a private right of action.
Be Part of Responsible AI
Explore our newsletter centered on AI and its regulation. Get the latest updates, news, and information on all things AI legislation and policy!