Description: Prohibits the provision of an artificial intelligence companion to a user unless such artificial intelligence companion contains a protocol for addressing possible suicidal ideation or self-harm expressed by a user, possible physical harm to others expressed by a user, and possible financial harm to others expressed by a user; requires certain notifications to certain users regarding crisis service providers and the non-human nature of such companion models.
Summary: This New York bill mandates that artificial intelligence companions include protocols for addressing possible suicidal ideation or self-harm, physical harm to others, and financial harm to others expressed by a user, and requires notifications to users about crisis service providers and the non-human nature of such companions.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Clyde Vanel (sole sponsor)
Last action: referred to consumer affairs and protection (March 13, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The legislation focuses on regulations surrounding AI companions, specifically their protocols for managing users' mental health concerns, including suicidal ideation, self-harm, and potential harm to others. This directly intersects with societal impacts, particularly mental health, safety, and well-being, making it highly relevant to the Societal Impact category. Data governance is not specifically addressed, as the document does not focus on data management, accuracy, or security. System Integrity is somewhat relevant due to the requirement for notifications about the non-human nature of the AI, reflecting a level of transparency intended to protect users. Robustness is less relevant, since the text does not discuss performance benchmarks or compliance standards for the AI systems. Therefore, the Societal Impact category is assigned a high score, while the others receive lower scores.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The legislation is highly relevant to the Healthcare sector given its focus on AI technologies that are designed to manage mental health concerns and risks. It may also relate to the Government Agencies and Public Services sector as it mandates protocols that can be overseen by regulatory bodies that protect public interest in the context of AI use. However, it does not pertain directly to Politics and Elections, the Judicial System, Private Enterprises, Labor, and Employment, Academic Institutions, International Cooperation, Nonprofits, or the Hybrid sector, making them less relevant. The strongest connection is to Healthcare, underscoring the importance of AI in mental health scenarios. Thus, the scoring reflects this relevance.


Keywords (occurrence): artificial intelligence (6) automated (1)

Description: As enacted, specifies that for the purposes of sexual exploitation of children offenses, the term "material" includes computer-generated images created, adapted, or modified by artificial intelligence; defines "artificial intelligence." - Amends TCA Title 39 and Title 40.
Summary: This bill amends Tennessee law to address sexual exploitation of children by including provisions related to artificial intelligence-generated images, expanding legal definitions for better protection against such exploitation.
Collection: Legislation
Status date: May 13, 2024
Status: Passed
Primary sponsor: Mary Littleton (3 total sponsors)
Last action: Effective date(s) 07/01/2024 (May 13, 2024)

Category:
Societal Impact (see reasoning)

The text discusses legislation that involves AI in the context of preventing sexual exploitation of children. It emphasizes the creation and modification of computer-generated images using AI, linking AI to potentially harmful content. This directly impacts societal values, norms, and safety, reflecting strong relevance to the Societal Impact category. It does not focus on data governance, system integrity, or robustness, as it does not discuss data management, system security, or performance benchmarks in the AI context. Thus, Societal Impact is rated highly, while the other categories score low.


Sector: None (see reasoning)

The text primarily addresses the legal implications of AI in relation to child exploitation, emphasizing the role of AI in generating harmful content. This has direct implications for laws concerning societal norms and values, but it does not specifically pertain to any of the defined sectors such as politics, public services, or healthcare. Therefore, the sector scores remain low, with only slight relevance to the Government Agencies and Public Services and Nonprofits sectors given the bill's protective focus on children.


Keywords (occurrence): artificial intelligence (5) automated (1)

Description: Provides for the use of artificial intelligence by healthcare providers
Summary: This bill regulates the use of artificial intelligence by healthcare providers in Louisiana, allowing it for administrative tasks but prohibiting its use in treatment decisions and direct patient communication, establishing penalties for violations.
Collection: Legislation
Status date: March 25, 2025
Status: Introduced
Primary sponsor: Jessica Domangue (sole sponsor)
Last action: Under the rules, provisionally referred to the Committee on Health and Welfare. (March 25, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text directly involves the use of artificial intelligence (AI) in healthcare settings, indicating relevance to the Societal Impact, Data Governance, System Integrity, and Data Robustness categories. The legislation addresses how AI can enhance healthcare services and outlines regulatory requirements that aim to mitigate potential negative impacts associated with AI use in a sensitive environment like healthcare. Thus, direct impacts on individuals and society (Societal Impact) as well as considerations of data management and AI system performance (Data Governance and Data Robustness) make several categories highly relevant. However, the emphasis on compliance with established guidelines and security measures indicates a particularly strong correlation with System Integrity. Overall, the detailed provisions in the text clearly connect with the themes of these categories.


Sector:
Healthcare (see reasoning)

The text clearly pertains to the healthcare sector as it establishes parameters for how healthcare providers can use artificial intelligence. The provisions highlight the roles and responsibilities of healthcare professionals when utilizing AI in patient care, ensuring safety and compliance. Additionally, it discusses penalties specific to the healthcare domain, further establishing its categorization under the Healthcare sector. As such, this text does not touch upon other sectors like politics or education meaningfully, making its relevance specifically concentrated on healthcare regulation.


Keywords (occurrence): automated (1)

Description: To Create The Arkansas Digital Responsibility, Safety, And Trust Act.
Summary: The Arkansas Digital Responsibility, Safety, and Trust Act aims to establish privacy protections for personal data, addressing risks associated with digital technology and ensuring responsible data handling practices by organizations.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Clint Penzo (2 total sponsors)
Last action: Read first time, rules suspended, read second time, referred to TRANSPORTATION, TECHNOLOGY & LEGISLATIVE AFFAIRS - SENATE (Feb. 19, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text contains numerous references to artificial intelligence (AI) and its implications, such as algorithmic discrimination and the use of AI systems in personal data processing. The mention of AI's role in decision-making and the associated risks highlights its potential societal impact, suggesting that the legislation aims to address ethical considerations and foster trust in technology. Thus, the text is significantly relevant to the Societal Impact category. Furthermore, the inclusion of definitions related to data privacy and AI in the governance framework indicates a strong emphasis on Data Governance as well, with AI instrumental in processing personal information and thereby necessitating regulations to protect individuals and ensure data accuracy and security. Although there are aspects of System Integrity related to transparency and security in the handling of AI, their relevance is less pronounced than that of the previous categories. Robustness is minimally touched upon, with limited implications for performance benchmarks. Overall, the connection to Societal Impact and Data Governance is compelling and reinforces the need for effective oversight of AI systems to mitigate risks to society and individuals.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial System
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)

The text has strong implications for various sectors, particularly Government Agencies and Public Services, as it deals with regulatory frameworks for digital technology and AI oversight. The legislation's focus on consumer protection and data governance relates directly to how government agencies will deploy AI in regulating and delivering services. The implications for the Judicial System stem from the intersection with privacy laws and personal data, indicating that these technologies may influence legal interpretations and practices. However, while there are mentions of employment and health data, the Healthcare and Private Enterprises sectors do not exhibit as strong a connection in the overall text as the Government and Judicial sectors. The text's discussion of personal data handling and discrimination addresses essential frameworks with broader social implications, and its educational aspects of AI technology give it some fit with Academic and Research Institutions. The implications for Politics and Elections are less direct but can be inferred to some extent from discussions of personal data and its use in political campaigning. The other sectors hold varying degrees of relevance but lack explicit connections. Hence, the strongest sector ties are with Government Agencies, the Judicial System, and Academic Institutions.


Keywords (occurrence): artificial intelligence (90) machine learning (1) automated (2)

Description: Prohibiting a person from utilizing certain personal identifying information or engaging in certain conduct in order to cause certain harm; prohibiting a person from using certain artificial intelligence or certain deepfake representations for certain purposes; and providing that a person who is the victim of certain conduct may bring a civil action against a certain person.
Summary: House Bill 1425 prohibits identity fraud using artificial intelligence and deepfakes, allowing victims to sue for harm while imposing penalties for fraudulent actions involving personal information.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: C.T. Wilson (sole sponsor)
Last action: Hearing 3/06 at 1:00 p.m. (Feb. 10, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses the implications of artificial intelligence (AI) and deepfake representations in the context of identity fraud. It outlines prohibitions on the use of AI and deepfakes to defraud or harm others, indicating a significant impact on societal behavior and individual rights. This is particularly relevant to the Societal Impact category, which covers consumer protections and the potential psychological and material harm caused by AI misuse. Regarding Data Governance, the legislation does not delve deeply into data management or privacy, though it addresses the responsibility associated with personal identifying information. The System Integrity category is relevant in that the bill requires intent and knowledge in deploying AI tools, creating a layer of accountability. Lastly, Robustness is less relevant, as the document does not present benchmarks or standards for AI systems and focuses on criminal implications rather than performance metrics. Overall, the Societal Impact and System Integrity categories stand out as crucial due to the direct relevance of AI in harming individuals and the accountability required from users of AI technologies.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The legislation primarily addresses identity fraud in relation to AI and deepfakes, making it relevant to the political landscape regarding election integrity and the challenges of misinformation. However, it is less focused on how AI interfaces with government agencies or public services, particularly since it does not address the governance of AI applications in these areas. While this legislation could indirectly affect healthcare and the judicial system through its identity-fraud implications, it does not specifically delineate AI's role in those sectors. Overall, the most prominent sectors are Politics and Elections, given the implications concerning misinformation, and Government Agencies and Public Services, because of the law enforcement aspects, although the direct application to these sectors is limited.


Keywords (occurrence): artificial intelligence (4) automated (1) deepfake (6)

Description: Prohibiting a person from using fraud to influence or attempt to influence a voter's voting decision; providing that fraud includes the use of synthetic media; and defining "synthetic media" as an image, an audio recording, or a video recording that has been intentionally created or manipulated with the use of generative artificial intelligence or other digital technology to create a realistic but false image, audio recording, or video recording.
Summary: The bill prohibits using deepfakes to fraudulently influence voter decisions in elections, defining deepfakes and establishing penalties for violations, thereby aiming to protect electoral integrity.
Collection: Legislation
Status date: April 3, 2025
Status: Engrossed
Primary sponsor: Jessica Feldmark (17 total sponsors)
Last action: Third Reading Passed (127-10) (April 3, 2025)

Keywords (occurrence): deepfake (10) synthetic media (2)

Description: Social media; creating the Safe Screens for Kids Act. Effective date.
Summary: The Safe Screens for Kids Act prohibits minors from using social media without parental consent, mandates age verification, restricts data collection, and aims to protect minors' online safety.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Ally Seifried (sole sponsor)
Last action: Placed on General Order (March 4, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text heavily emphasizes the implications of AI technologies, algorithms, and data collection as they relate to minors using social media platforms. For Societal Impact, it addresses concerns about potential psychological or emotional harm arising from interactions with AI-driven content and platforms. Data Governance is highly relevant, as the act lays out strict mandates regarding the collection and processing of data from minor users, especially concerning de-identification and the prevention of targeted advertisements. System Integrity is also pertinent, particularly the prohibitions on using algorithms and AI to select or recommend content for minor users, promoting transparency and control. Robustness is not significantly addressed, since the focus is on regulations and protective measures rather than performance benchmarks or auditing mechanisms for AI. Overall, the comprehensive focus on protecting minors and maintaining ethical standards in AI usage leads to high scores in Societal Impact, Data Governance, and System Integrity.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text predominantly focuses on the intersection of social media technologies and the protection of minors, which has significant implications for the fields of Politics and Elections and Government Agencies and Public Services. The legislation places strict regulations on social media practices which would inherently affect how political campaigns utilize these platforms. Moreover, the act emphasizes the responsibilities of social media companies under legal frameworks, which also impacts Government Agencies' roles in enforcement. However, there's little direct mention of AI's role in the Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Cooperation, or NGOs. The focus is mainly on social media rather than the broader implications AI might have in other sectors. As such, the scores reflect a strong relevance to Politics and Elections and Government Agencies, while the other sectors receive lower relevance scores.


Keywords (occurrence): artificial intelligence (1) machine learning (2) algorithm (2)

Description: AN ACT relating to insurance; imposing requirements relating to prior authorization; prescribing certain requirements relating to the use of artificial intelligence by health insurers; requiring the compilation and publication of certain reports relating to prior authorization; providing for the investigation and adjudication of certain violations; providing for the imposition of civil and administrative penalties for such violations; and providing other matters properly relating thereto.
Summary: A.B. 295 revises health insurance provisions, reducing prior authorization response times, regulating AI use in decision-making, and enhancing reporting and penalty mechanisms for insurers.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Introduced
Primary sponsor: Thaddeus Yurek (3 total sponsors)
Last action: From printer. To committee. (Feb. 26, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text prominently addresses the use of artificial intelligence by health insurers, which directly relates to the implications of AI for society, particularly in the context of healthcare processes and the regulations surrounding prior authorization. It specifies requirements for disclosing the use of AI systems and prohibits their use in making adverse decisions without human oversight, emphasizing accountability and consumer protection; thus it has significant relevance to the Societal Impact category. The discussion of automated decision tools and requirements for independent review also fits well with the System Integrity category, as it emphasizes transparency and human oversight. Components touching on data management and reporting practices justify a score for Data Governance. Robustness, however, does not directly apply, since the focus is on oversight rather than performance benchmarks. The high relevance scores therefore reflect the explicit mentions and implications of AI in health insurance, underscoring accountability, transparency, and consumer protection.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text is fundamentally situated within the Healthcare sector, as it explicitly deals with the use of artificial intelligence in health insurance, outlining specific obligations for health carriers that employ AI systems in their authorization processes. Because it involves both the operational processes within healthcare and the regulatory aspects affecting insurers and patient rights, it closely aligns with Healthcare sector legislation. The mention of Medicaid and children's health coverage further confirms this alignment. While aspects of the bill may relate to government operations, the primary focus is clearly on health insurers and the regulatory framework governing their use of AI. Thus, the Healthcare sector receives a very high score, while other sectors such as Politics and Elections or Private Enterprises have little relevance.


Keywords (occurrence): artificial intelligence (12) automated (19)

Description: An act to add Chapter 5.9.5 (commencing with Section 11549.80) to Part 1 of Division 3 of Title 2 of the Government Code, relating to artificial intelligence.
Summary: Assembly Bill No. 1405 mandates the California Government Operations Agency to establish an enrollment system for AI auditors, ensuring accountability and transparency in auditing AI systems, effective January 1, 2027.
Collection: Legislation
Status date: Feb. 21, 2025
Status: Introduced
Primary sponsor: Rebecca Bauer-Kahan (2 total sponsors)
Last action: Read second time and amended. (April 3, 2025)

Keywords (occurrence): artificial intelligence (7) machine learning (1) automated (2)

Description: AN ACT relating to elections; prohibiting the use of artificial intelligence in equipment used for voting, ballot processing or ballot counting; requiring certain published material that is generated through the use of artificial intelligence or that includes a materially deceptive depiction of a candidate to include certain disclosures; prohibiting, with certain exceptions, the distribution of synthetic media that contains a deceptive and fraudulent deepfake of a candidate; providing penalti...
Summary: The bill prohibits the use of artificial intelligence in voting equipment, mandates disclosures for AI-generated materials related to elections, and regulates the distribution of deceptive synthetic media, aiming to enhance election integrity.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Bert Gurr (4 total sponsors)
Last action: Read first time. Referred to Committee on Legislative Operations and Elections. To printer. (Feb. 20, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text primarily addresses the implications of artificial intelligence in the electoral process. It explicitly details the prohibition of AI usage in voting equipment, mandates disclosures for AI-generated material, and establishes penalties for the distribution of deceptive deepfakes. It is therefore highly relevant to the Societal Impact category due to its focus on the fairness and integrity of elections, the protection of candidates' reputations, and the prevention of misinformation. The relevance to Data Governance is moderate: the bill discusses accuracy and transparency in published materials but does not delve deeply into data management issues. System Integrity is also moderately relevant, since the bill emphasizes securing electoral processes against automation but does not focus on inherent system security measures. The Robustness category is less relevant, as there is no discussion of benchmarks or auditing measures for AI performance in this context.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text directly relates to the Politics and Elections sector as it outlines regulations on the use of AI in electoral processes, aiming to protect the integrity and fairness of elections. The Government Agencies and Public Services sector is also relevant as it discusses government regulations regarding voting equipment. However, the other sectors like Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not applicable here as the content is primarily focused on election-related provisions and AI's role in that particular context. Thus, the scores reflect that restriction.


Keywords (occurrence): artificial intelligence (9) machine learning (1) deepfake (6) synthetic media (10)

Description: Relating to the use of artificial intelligence by health care providers.
Summary: This bill establishes regulations for the use of artificial intelligence by health care providers in Texas, ensuring responsible use and patient notification about AI involvement in their care.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Salman Bhojani (sole sponsor)
Last action: Filed (March 11, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text focuses primarily on the use of artificial intelligence by health care providers, emphasizing the need for regulation, the responsibilities of providers when using AI, and mandatory disclosures to patients. This directly relates to the Societal Impact category, as it considers accountability and consumer protections regarding the use of AI in health care and addresses the implications for patient care and trust. For Data Governance, the bill recognizes the need for rules governing the responsible use of AI in managing health care data, though it does not explicitly cover data management protocols, so it is moderately relevant. System Integrity relevance stems from the mention of rules governing procedures for AI use, although the text does not delve into security measures or oversight specifics. Lastly, the Robustness category is not directly addressed, as the text does not mention benchmarks or performance standards for AI systems. The scores reflect these varying degrees of relevance.


Sector:
Healthcare (see reasoning)

This bill is explicitly geared towards the healthcare sector, detailing how AI is integrated into health care practices and involves healthcare providers' responsibilities in using AI. It mentions AI's role in patient communication, which is pertinent to the Healthcare sector, ensuring the responsible use of technology in medical practices. There is minimal relevance to other sectors as the focus remains strictly on healthcare providers. While there are implications for policy considerations that may affect Government Agencies, the specifics outlined are narrowly tailored to healthcare, which limits broader sector relevance. Thus, a high score for Healthcare and a lower score for other sectors.


Keywords (occurrence): artificial intelligence (7) machine learning (1) automated (1)

Description: As enacted, requires the board of trustees of the University of Tennessee, the board of regents, each local governing board of trustees of a state university, each local board of education, and the governing body of each public charter school to adopt a policy regarding the use of artificial intelligence technology by students, faculty, and staff for instructional and assignment purposes. - Amends TCA Title 49.
Summary: The bill requires public universities and charter schools in Tennessee to adopt policies for the use of artificial intelligence in education by set deadlines, enhancing instructional methods and accountability.
Collection: Legislation
Status date: March 19, 2024
Status: Passed
Primary sponsor: Scott Cepicky (4 total sponsors)
Last action: Comp. became Pub. Ch. 550 (March 19, 2024)

Category:
Societal Impact (see reasoning)

This legislation includes specific mandates regarding the use of artificial intelligence in educational settings, with direct implications for societal impact through its influence on teaching and learning. The act focuses on adopting policies for the use of AI technology by students, faculty, and staff. It addresses potential impacts on students' educational outcomes and social interactions, linking to social equity, fairness, and accountability in education. It does not explicitly address aspects of data governance, system integrity, or robustness as defined, such as security measures or compliance standards inherent in AI system operation. Hence, this bill closely aligns with the Societal Impact category while not meeting the criteria for the other categories.


Sector:
Academic and Research Institutions (see reasoning)

The legislation specifically pertains to academic institutions and their use of AI for instructional purposes. By mandating universities and public schools to adopt a policy on AI use among students and faculty, it directly impacts educational systems. This legislation does not cover other sectors like politics, healthcare, or private enterprises, which are not mentioned in the text. Its focus is primarily on the governance of AI in educational settings, making it highly relevant to the Academic and Research Institutions sector and not applicable to the others.


Keywords (occurrence): artificial intelligence (8) automated (3)

Description: Adopt the Ensuring Transparency in Prior Authorization Act
Summary: The Ensuring Transparency in Prior Authorization Act mandates clearer prior authorization processes and timelines for health care services in Nebraska, aiming to improve transparency and efficiency in health insurance practices.
Collection: Legislation
Status date: Jan. 9, 2025
Status: Introduced
Primary sponsor: Eliot Bostar (sole sponsor)
Last action: Notice of hearing for February 10, 2025 (Jan. 27, 2025)

Category: None (see reasoning)

The text mainly focuses on creating a transparent system for prior authorization within the healthcare sector. While it mentions the processes and entities involved in medical care and prior authorization, it does not address the social impacts of AI or data governance for machine learning methodologies explicitly. AI-specific topics such as performance benchmarks, security, and algorithmic transparency are absent; AI and related technologies receive only passing mention, and the text does not suggest any substantive role for AI in these systems, thus falling short of relevance across all categories.


Sector:
Healthcare (see reasoning)

This legislation primarily pertains to the insurance and healthcare sectors, focusing on regulations around prior authorization processes. Beyond passing mentions, there are no AI applications or regulations specific to AI usage in healthcare, nor implications of AI for political or business practices; the bill solely targets administrative processes within health benefit plans. The weak connection to AI leads to low relevance scores across the sectors.


Keywords (occurrence): artificial intelligence (2) automated (1) algorithm (1)

Description: AN ACT relating to autonomous vehicles; revising requirements for a human operator to be present in an autonomous vehicle during operation; and providing other matters properly relating thereto.
Summary: The bill revises regulations on autonomous vehicles in Nevada, requiring a licensed human operator for certain heavy or passenger-carrying vehicles during operation to ensure safety.
Collection: Legislation
Status date: March 17, 2025
Status: Introduced
Primary sponsor: James Ohrenschall (sole sponsor)
Last action: From printer. To committee. (March 18, 2025)

Category:
Societal Impact
System Integrity
Data Robustness (see reasoning)

The text revolves around the legislation concerning autonomous vehicles, focusing particularly on their operational requirements and the role of human operators, which directly ties to the implications of AI in automated driving systems. Given that there's a clear reference to the 'automated driving system' and provisions for operation of 'fully autonomous vehicles', the relevance to AI is significant. Additionally, the legislation's implications on safety, control, and regulation of AI systems showcase both social impact and system integrity. Data governance and robustness might also have relevance, but they are more indirect compared to social impact and system integrity, which are clearly indicated by the need for human oversight and accountability in system failure scenarios.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation primarily pertains to the sector of Government Agencies and Public Services as it deals with the regulation of autonomous vehicles, which falls under the purview of state laws and the Department of Motor Vehicles. The context of operational mandates and the role of human authority directly correlates to the utilization of AI in government functions. While it has implications for other sectors, such as Private Enterprises concerning commercial licensing and the operations of transportation businesses, the primary focus remains on governmental regulation and oversight.


Keywords (occurrence): automated (8) autonomous vehicle (15)

Description: Requiring that certain carriers, pharmacy benefits managers, and private review agents ensure that artificial intelligence, algorithm, or other software tools are used in a certain manner when used for conducting utilization review.
Summary: House Bill 820 mandates that health insurance entities ensure the responsible use of artificial intelligence in utilization reviews, emphasizing that such technology cannot solely determine healthcare decisions or harm enrollees.
Collection: Legislation
Status date: March 6, 2025
Status: Engrossed
Primary sponsor: Terri Hill (26 total sponsors)
Last action: Referred Finance (March 7, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly addresses how artificial intelligence and algorithms must be used within the context of health insurance utilization review. It outlines requirements for carriers, pharmacy benefits managers, and private review agents to utilize these systems in a way that is fair, equitable, and compliant with existing laws. The detailed mandates aim to ensure that AI tools do not cause harm or discrimination and that they abide by regulatory standards, indicating a strong focus on social implications and governance surrounding the use of AI in health insurance. Given these points, this legislation is likely to influence the social dynamics, data governance, and integrity of the healthcare system significantly, hence the high scores across categories.


Sector:
Healthcare (see reasoning)

The legislation pertains primarily to the healthcare sector as it specifically discusses health insurance utilization review and mandates for how AI should be applied within that context. It outlines responsibilities and compliance measures for a variety of healthcare entities, emphasizing the importance of ethical and fair AI use in healthcare decisions. Given its detailed focus on healthcare applications and the potential implications for patients' care and service standards, it has a high relevance score for this sector. It does not address other sectors within the provided description, thus receiving lower relevance scores for the other sectors.


Keywords (occurrence): artificial intelligence (2) algorithm (16)

Description: Relating to establishing a framework to govern the use of artificial intelligence systems in critical decision-making by private companies and ensure consumer protections; authorizing a civil penalty.
Summary: The bill establishes a framework for private companies using artificial intelligence in critical decision-making to protect consumers, including a civil penalty for non-compliance.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Charles Schwertner (sole sponsor)
Last action: Filed (March 14, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the use of artificial intelligence systems in critical decision-making by private companies, emphasizing consumer protections. It introduces a framework for managing AI-related impacts, especially focused on consequential decisions that affect consumers. Therefore, it is highly relevant to the Social Impact category as it deals with accountability, fairness, and the potential harm from automated decision-making systems. The Data Governance category is relevant as well, since the bill discusses governance measures related to data usage in AI systems, particularly in ensuring that consumer protections are considered. The System Integrity category is moderately relevant as it relates to the governance and accountability of AI systems, though it does not delve deeply into security or transparency measures. The Robustness category is less relevant as it does not address performance benchmarks or certification processes for AI systems directly. Overall, the main focus of the bill aligns closely with Social Impact and Data Governance.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text focuses on the role of artificial intelligence in critical decision-making by private companies, which connects primarily to the sector of Private Enterprises, Labor, and Employment. It emphasizes ensuring consumer protections within this context, which is pivotal for business practices involving AI. There is some overlap with Government Agencies and Public Services due to the potential implications for policy frameworks governing AI use; however, the emphasis on private companies is stronger. The other sectors, such as Politics and Elections, Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, do not have significant relevance based on the content of this text.


Keywords (occurrence): artificial intelligence (5) machine learning (1)

Description: An act to add Chapter 41 (commencing with Section 22949.90) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: The California Digital Content Provenance Standards bill mandates generative AI providers to apply and disclose provenance data for synthetic content, enhancing transparency and reducing risks associated with deceptive digital content. It establishes requirements for labeling and reporting by online platforms, aiming to protect consumers and maintain trust in digital media.
Collection: Legislation
Status date: May 22, 2024
Status: Engrossed
Primary sponsor: Buffy Wicks (sole sponsor)
Last action: Read second time. Ordered to third reading. (Aug. 26, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The proposed California Provenance, Authenticity, and Watermarking Standards Act focuses heavily on the implications of generative artificial intelligence (GenAI) technologies in society, especially regarding the authenticity and provenance of synthetic content. This directly relates to the Social Impact category, as it emphasizes the potential harms of GenAI, addressing issues such as misinformation, public trust, and transparency which affect societal norms and individual behaviors. The mandate for disclosure and labeling of synthetic content is a clear attempt to mitigate psychological and material harms related to this technology. In terms of Data Governance, the bill establishes stringent requirements for data management practices, including the creation of provenance data tied to AI-generated content and the obligation to report vulnerabilities. This aligns closely with the category’s focus on secure and accurate data collection. The bill also mentions the necessity for AI red-teaming exercises and public safety notifications, which indicate concerns about systemic integrity, placing it within the System Integrity domain. In regard to Robustness, the text discusses compliance and auditing mandates for generative AI providers, suggesting a framework for maintaining performance standards. Therefore, the act is relevant to all categories but especially so for Social Impact and Data Governance due to the emphasis on transparency, safety, and societal impacts of AI.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
International Cooperation and Standards (see reasoning)

This legislation has significant implications across multiple sectors. In the context of Politics and Elections, the mention of GenAI's potential to skew election results highlights direct relevance, especially regarding transparency and voter trust. For Government Agencies and Public Services, the bill mandates compliance from state departments concerning the watermarking of AI-generated content, showcasing its applicability in governance. It touches upon the Judicial System in terms of potential legal ramifications from misuse of synthetic content, although this is less direct. In the Healthcare sector, while it doesn't explicitly address AI applications, principles of authenticity and provenance can apply to medical data and tools, but it is not primary enough for significant relevance. The Private Enterprises, Labor, and Employment sector is relevant because companies using generative AI will need to comply with the new regulations. However, Academic and Research Institutions may only find slight relevance due to a lack of explicit connection to academic research. Lastly, there is broad relevance in terms of International Cooperation and Standards, particularly regarding how California's regulations may influence or need to align with global standards for technology and AI. Overall, key sectors impacted most prominently are Politics and Elections, Government Agencies and Public Services, and Private Enterprises.


Keywords (occurrence): artificial intelligence (5)

Description: An act to amend, repeal, and add Section 12140 of, and to amend the heading of Chapter 3.7 (commencing with Section 12140) of Part 2 of Division 2 of, the Public Contract Code, relating to public contracts.
Summary: California Senate Bill 1220 mandates that state and local agencies contract call center services for public benefits exclusively with California workers, prohibiting the use of AI to automate core job functions until 2030.
Collection: Legislation
Status date: Aug. 30, 2024
Status: Enrolled
Primary sponsor: Monique Limon (2 total sponsors)
Last action: Enrolled and presented to the Governor at 4 p.m. (Sept. 10, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This legislation is highly relevant to the Social Impact category because it explicitly addresses the implications of AI and automated systems on job functions, especially concerning workers employed in call centers related to public benefits. It emphasizes job security and the potential risks that AI can pose to employment, focusing on eliminating or automating core job functions, which directly relates to societal job impacts. Moreover, it mandates notifications and assessments regarding AI's use in a manner that protects workers' rights and calls for accountability from contractors, highlighting significant social considerations tied to AI. For the Data Governance category, the bill implies safeguards around data used in AI systems through mandated impact assessments and transparency requirements but does not explicitly provide detailed data governance measures, thus receiving a lower relevance score. The System Integrity category receives a moderate relevance score, as the legislation discusses mandates for contractor accountability and compliance but does not focus deeply on the overarching security or transparency of AI systems generally. For Robustness, the bill has moderate relevance due to its assessment and reporting requirements, but it lacks a more comprehensive framework for performance benchmarking or auditing of AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation closely aligns with the Government Agencies and Public Services sector since it addresses the use and regulations surrounding AI in the context of public benefit programs administered by state or local agencies. The explicit mention of call centers and the requirement for services to be performed by California workers under these agencies highlights the direct application of AI in public service delivery. While the legislation has potential implications for Private Enterprises, Labor, and Employment due to the focus on labor rights and job functions potentially being automated, it primarily governs the actions of government agencies. Other sectors, such as Healthcare, Politics and Elections, and the Judicial System, find no direct relevance in the provided text; the main focus of this bill lies within government functions and public services.


Keywords (occurrence): artificial intelligence (4) machine learning (1) automated (4)

Description: AN ACT relating to insurance; revising provisions relating to prior authorization for certain medical and dental care; revising provisions relating to the coverage of autism spectrum disorders for certain persons; prohibiting health insurers from considering the availability of certain public benefits for certain purposes; and providing other matters properly relating thereto.
Summary: Senate Bill 398 revises health insurance provisions, enhancing prior authorization processes, extending autism coverage, and prohibiting insurers from considering certain public benefits in claims processing.
Collection: Legislation
Status date: March 17, 2025
Status: Introduced
Primary sponsor: Lori Rogich (sole sponsor)
Last action: From printer. To committee. (March 19, 2025)

Category:
Societal Impact (see reasoning)

The text primarily pertains to health insurance regulations, specifically regarding prior authorization processes for medical and dental care, which can broadly relate to Social Impact through effects on patient access to care and fairness in treatment decisions. However, it does not explicitly address AI technologies or their implications, leading to limited relevance in other categories such as Data Governance, System Integrity, or Robustness. There is no mention of algorithms, AI, or related terms that would suggest a direct connection to systematic concerns regarding transparency or security in AI systems. As such, scores reflect limited and indirect relevance primarily linked to the social aspects of healthcare rather than direct AI implications.


Sector:
Healthcare (see reasoning)

The text is highly relevant to the Healthcare sector as it specifically discusses provisions related to health insurance and prior authorizations for medical and dental care. The legislative changes outlined directly affect the delivery of health services, impact patient rights, and modify regulations for healthcare providers and insurers alike. Therefore, a high score reflects its direct association with healthcare contexts. Other sectors are not applicable as there is no mention of political processes, judicial systems, education, or private enterprises regarding AI in this specific text.


Keywords (occurrence): artificial intelligence (2)

Description: Require disclosures of AI use by online media manufacturers
Summary: The bill mandates online media manufacturers in Montana to disclose AI usage in content creation, provide opt-out options for users, and include identifiable markers for AI-generated materials.
Collection: Legislation
Status date: Feb. 24, 2025
Status: Introduced
Primary sponsor: Daniel Emrich (sole sponsor)
Last action: (S) Referred to Committee (S) Energy, Technology & Federal Relations (Feb. 24, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This legislation focuses on requirements for manufacturers of online media that utilize artificial intelligence (AI). It addresses the impacts of AI on society, particularly regarding media consumption, making it highly relevant to the 'Social Impact' category. The requirement for disclosures about AI use aims to enhance transparency and consumer trust, which are crucial societal concerns. The legislation touches upon consumer protections, as it allows users to opt out and mandates visibility of AI utilization through identifiable markers. The 'Data Governance' category is relevant as it involves how data, especially in the context of AI-generated media, is managed and presented transparently to consumers. The 'System Integrity' category is relevant since the bill requires that AI's use in online media be clearly indicated, which is a matter of ensuring the integrity and transparency of automated decision-making processes. Finally, while this legislation ensures some level of accountability for AI applications, it does not directly introduce benchmarks or performance standards, so the 'Robustness' category is less relevant. Overall, the text holds significant relevance to the 'Social Impact' and 'Data Governance' categories based on its focus on transparency, consumer choice, and the role of AI in public media consumption.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation focuses on the relationship between AI and online media manufacturers, thereby having strong implications for how AI is utilized and disclosed in media settings. The 'Politics and Elections' sector is not directly addressed, as the legislation does not deal with electoral processes or political campaigns. However, it does have an oversight aspect that could indirectly influence public discourse and media integrity, relevant to the 'Government Agencies and Public Services' sector. The 'Judicial System' sector is less relevant as the legislation does not discuss legal implications or judicial uses of AI. The 'Healthcare', 'Private Enterprises, Labor, and Employment', 'Academic and Research Institutions', 'International Cooperation and Standards', 'Nonprofits and NGOs', and 'Hybrid, Emerging, and Unclassified' sectors are not directly discussed or implicated in this text. Overall, the primary relevance lies in the media sector's implications for transparency and accountability of AI systems in public information dissemination.


Keywords (occurrence): artificial intelligence (9)