5026 results:


Description: Amends the Consumer Fraud and Deceptive Business Practices Act. Provides that it is an unlawful practice for any person to engage in a commercial transaction or trade practice with a consumer in which: (1) the consumer is communicating or otherwise interacting with a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation; (2) the communication may mislead or deceive a reasonable consumer to believe that the consumer is comm...
Summary: The bill prohibits deceptive practices in commercial interactions where consumers may mistakenly believe they are communicating with a human rather than AI, requiring clear disclosure of AI usage.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Abdelnasser Rashid (sole sponsor)
Last action: Referred to Rules Committee (Feb. 6, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text directly addresses the social impact of AI, particularly concerning consumer rights and the ethical implications of AI systems such as chatbots deceiving users. It emphasizes transparency and fairness, highlighting the need for consumers to be aware that they are interacting with AI rather than a human. This places the text squarely within the Social Impact category, since it deals with accountability and the ethical use of AI in commercial practices. Data Governance is moderately relevant, as the bill touches on managing data from consumer interactions, but its primary focus is consumer deception rather than data management. System Integrity is less relevant because the text deals with ethical behavior in communication rather than the security or transparency of AI systems. Robustness is not relevant, as the text does not address system performance or benchmarks.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text relates strongly to the sector of Private Enterprises, Labor, and Employment, as it touches on consumer interactions within commercial contexts. It does not specifically address politics, government agencies, the judicial system, healthcare, academic institutions, international cooperation, nonprofits, or emerging sectors, making the relevance to these sectors lower. While there is a mention of consumers, the legislation does not focus on their role as employees or in broader organizational structures, thus narrowing the focus of applicable sectors.


Keywords (occurrence): artificial intelligence (4), chatbot (2)

Description: Concerning tools to protect minor users of social media.
Summary: The bill mandates social media companies in Colorado to implement protective measures for minor users, including age verification, user control settings, and privacy enhancements, aimed at safeguarding minors' mental health and data.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Jarvis Caldwell (4 total sponsors)
Last action: Introduced In House - Assigned to Health & Human Services (Feb. 26, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily addresses the impact of social media on minors, which ties closely to how AI can influence user experiences through algorithms. The mention of algorithms indicates concerns about the social impact of AI, particularly regarding mental health and safety of minors in social media contexts. While it does touch upon data privacy, its focus is more on the implications for youth rather than on robust governance of AI data practices. This legislation is particularly relevant for how AI-driven features influence minors and sets requirements for social media platforms, demonstrating a clear concern for societal outcomes, making it fit well in the 'Social Impact' category. The document does suggest a governance framework for data utilization with implications for data privacy and security, hence having some relevance to the 'Data Governance' category, but not as prominently as 'Social Impact'. The mention of requiring oversight and the principles behind algorithmic engagement suggests a minor connection to 'System Integrity'. 'Robustness' is not directly addressed since there are no specified benchmarks or performance measures mentioned in the text.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation directly affects the 'Government Agencies and Public Services' sector, particularly as it pertains to regulations around social media use for minors, suggesting a role for governmental bodies in establishing protections. It also has implications for 'Private Enterprises, Labor, and Employment' because it mandates requirements for social media companies, including the operation of algorithmic systems that could affect user experiences and company practices concerning minors. This legislation does not specifically align with the 'Judicial System', 'Healthcare', 'Academic and Research Institutions', or 'Politics and Elections' sectors. Given the focus on AI as utilized by social media companies, the relevance to 'Hybrid, Emerging, and Unclassified' is minimal compared to more defined sectors. Overall, it mainly concerns government regulation directly impacting the operations of social media platforms.


Keywords (occurrence): automated (1), recommendation system (2), algorithm (2)

Description: The purpose of this bill is to prohibit the use of synthetic media and artificial intelligence to influence an election.
Summary: The bill prohibits using synthetic media and artificial intelligence to influence elections in West Virginia, imposing penalties for violations, and mandates disclosures to ensure transparency regarding manipulated content.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Jack Woodrum (3 total sponsors)
Last action: Laid over on 1st reading 2/28/2025 (Feb. 28, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text of the bill explicitly relates to the use of synthetic media and artificial intelligence in the context of influencing elections. It outlines the definition and regulation of AI-generated content, focusing on its implications for misinformation and potential harm to democratic processes. This positions the bill primarily within the realm of Social Impact, as it addresses the societal issues posed by AI misuse. While the regulation of data in synthetic media is touched upon, its focus is more on prohibiting the misuse rather than comprehensive data governance. System Integrity and Robustness do not significantly apply as the bill is more concerned with the ethical implications and penalties related to AI usage. Overall, the primary emphasis is on the social implications of AI in political contexts.


Sector:
Politics and Elections (see reasoning)

The bill specifically addresses the regulation of synthetic media and AI in the context of influencing elections, aligning closely with the political sector. It sets out rules about what is permissible in political advertising and establishes penalties for violations, directly impacting the political landscape. The implications for Government Agencies and Public Services are minimal since it primarily deals with electoral processes rather than public services at large. Other sectors such as Judicial System, Healthcare, etc., are not relevant as they do not pertain to the content of the bill. Therefore, the clear relevance to Politics and Elections is strong, while other sectors receive much lower scores.


Keywords (occurrence): artificial intelligence (6), synthetic media (20)

Description: Relating to the use of an automated employment decision tool by an employer to assess a job applicant's fitness for a position; imposing an administrative penalty.
Summary: The bill regulates the use of automated employment decision tools by employers, requiring applicant notification and consent, and imposes penalties for violations to protect job seekers' rights.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Nathan Johnson (sole sponsor)
Last action: Filed (March 14, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text directly pertains to automated employment decision tools that involve AI, algorithms, and machine learning, specifically addressing accountability for their use. It examines fairness and consent in AI-driven hiring processes, making it highly relevant to the Social Impact category. Data Governance is also significant, as the bill deals with the handling and assessment of applicant data through automated systems, raising questions about data protection, bias, and fairness. System Integrity and Robustness are less applicable, as the text focuses on the use of algorithmic decision-making tools rather than their internal security, transparency, or performance benchmarks.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text focuses on the utilization of AI in employment settings, specifically through automated decision tools used by employers for assessing job applicants. Therefore, it is particularly relevant to the Private Enterprises, Labor, and Employment sector, as it addresses both the regulatory framework around AI's impact on hiring practices and the rights of applicants. The Government Agencies and Public Services sector is also relevant to a lesser extent, given the administrative oversight involved in enforcing these regulations. Other sectors are less directly connected as the focus is firmly on the applicability of AI in employment rather than broader governmental or institutional frameworks.


Keywords (occurrence): artificial intelligence (2), machine learning (2), automated (14), algorithm (1)

Description: Prohibits the provision of an artificial intelligence companion to a user unless such artificial intelligence companion contains a protocol for addressing possible suicidal ideation or self-harm expressed by a user, possible physical harm to others expressed by a user, and possible financial harm to others expressed by a user; requires certain notifications to certain users regarding crisis service providers and the non-human nature of such companion models.
Summary: This New York bill mandates that artificial intelligence companions must include protocols to address and notify users about potential self-harm, harm to others, and financial risks, promoting user safety.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Clyde Vanel (sole sponsor)
Last action: referred to consumer affairs and protection (March 13, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The legislation focuses on the regulations surrounding AI companions, specifically addressing their protocols for managing users' mental health concerns, including suicidal ideation, self-harm, and potential harm to others. This directly intersects with societal impacts, particularly in contexts such as mental health, safety, and well-being, making it highly relevant under the Social Impact category. The data governance aspect is not specifically addressed, as the document does not focus on data management, accuracy, or security. System Integrity is somewhat relevant due to the requirement for notifications about the non-human nature of the AI, reflecting a level of transparency intended to protect users. Robustness is less relevant since the text does not discuss performance benchmarks or compliance standards of the AI systems. Therefore, the Social Impact category is assigned a high score, while others receive lower scores.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The legislation is highly relevant to the Healthcare sector given its focus on AI technologies that are designed to manage mental health concerns and risks. It may also relate to the Government Agencies and Public Services sector as it mandates protocols that can be overseen by regulatory bodies that protect public interest in the context of AI use. However, it does not pertain directly to Politics and Elections, the Judicial System, Private Enterprises, Labor, and Employment, Academic Institutions, International Cooperation, Nonprofits, or the Hybrid sector, making them less relevant. The strongest connection is to Healthcare, underscoring the importance of AI in mental health scenarios. Thus, the scoring reflects this relevance.


Keywords (occurrence): artificial intelligence (6), automated (1)

Description: As enacted, specifies that for the purposes of sexual exploitation of children offenses, the term "material" includes computer-generated images created, adapted, or modified by artificial intelligence; defines "artificial intelligence." - Amends TCA Title 39 and Title 40.
Summary: This bill amends Tennessee law to address sexual exploitation of children by including provisions related to artificial intelligence-generated images, expanding legal definitions for better protection against such exploitation.
Collection: Legislation
Status date: May 13, 2024
Status: Passed
Primary sponsor: Mary Littleton (3 total sponsors)
Last action: Effective date(s) 07/01/2024 (May 13, 2024)

Category:
Societal Impact (see reasoning)

The text discusses legislation that involves AI in the context of preventing sexual exploitation of children. It emphasizes the creation and modification of computer-generated images using AI, linking AI to potential harmful content. This directly impacts societal values, norms, and safety, reflecting strong relevance to the Social Impact category. It does not focus on data governance, system integrity, or robustness, as it does not discuss data management, system security, or performance benchmarks in the AI context. Thus, Social Impact is rated highly, while the other categories score low.


Sector: None (see reasoning)

The text primarily addresses the legal implications of AI in relation to child exploitation, emphasizing AI's role in generating harmful content. This bears directly on laws concerning societal norms and values, but it does not specifically pertain to any of the defined sectors such as politics, public services, or healthcare. The sector scores therefore remain low, with only slight relevance to Government Agencies and Public Services and to Nonprofits given the bill's protective focus on children.


Keywords (occurrence): artificial intelligence (5), automated (1)

Description: Amends the Consumer Fraud and Deceptive Business Practices Act. Provides that the owner, licensee, or operator of a generative artificial intelligence system shall conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate or inappropriate. Provides that a violation of the provision constitutes an unlawful practice within the meaning of the Act.
Summary: The bill mandates that operators of generative artificial intelligence systems display clear warnings on their user interfaces about potential inaccuracies or inappropriate content in the outputs, classifying violations as unlawful practices.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Laura Ellman (sole sponsor)
Last action: Referred to Assignments (Feb. 6, 2025)

Category:
Societal Impact (see reasoning)

This text explicitly discusses the implications of generative artificial intelligence systems in relation to user safety and informed consent, addressing how the outputs of such systems can be inaccurate or inappropriate. This is directly relevant to the Social Impact category as it pertains to consumer protections and accountability in AI outputs, ensuring that users are aware of potential risks. The inclusion of a requirement for warnings aligns with fairness and bias considerations, as these warnings are a method of reducing psychological and informational harm stemming from misleading AI outputs. It does not specifically address data management or the security and integrity of AI systems, which makes it less relevant for Data Governance, System Integrity, and Robustness.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text addresses the regulation of AI systems in a way that directly engages with consumer rights and protections which can impact various sectors. It is particularly relevant for consumers in the context of private enterprises since it affects how businesses utilize AI technologies to ensure safety and transparency. It also has implications in the realm of Government Agencies and Public Services, as these regulations can dictate how AI applications function within these sectors. However, it is not focused specifically on political campaigns, healthcare, academic institutions, or international standards, leading to a moderate relevance overall in sectors.


Keywords (occurrence): artificial intelligence (8), automated (1)

Description: To Create The Arkansas Digital Responsibility, Safety, And Trust Act.
Summary: The Arkansas Digital Responsibility, Safety, and Trust Act aims to establish privacy protections for personal data, addressing risks associated with digital technology and ensuring responsible data handling practices by organizations.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Clint Penzo (2 total sponsors)
Last action: Read first time, rules suspended, read second time, referred to TRANSPORTATION, TECHNOLOGY & LEGISLATIVE AFFAIRS - SENATE (Feb. 19, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text contains numerous references to artificial intelligence (AI) and its implications, such as algorithmic discrimination and the use of AI systems affecting personal data processing. The mention of AI's role in decision-making and the risks associated with it highlights its potential societal impact, suggesting that the legislation is aimed at addressing ethical considerations and fostering trust in technology. Thus, the text is significantly relevant to the Social Impact category. Furthermore, the inclusion of definitions related to data privacy and AI in the governance framework indicates a strong emphasis on Data Governance as well, with AI being instrumental in processing personal information, thereby necessitating regulations to protect individuals and ensure data accuracy and security. Although there are aspects of System Integrity related to transparency and security in the handling of AI, their relevance is not as pronounced compared to the previous categories. Robustness is minimally touched upon, with limited implications on performance benchmarks. Overall, the connection to Social Impact and Data Governance is compelling and reinforces the need for effective oversight of AI systems to mitigate risks to society and individuals.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)

The text has strong implications for several sectors, particularly Government Agencies and Public Services, as it establishes regulatory frameworks for digital technology and AI oversight. Its focus on consumer protection and data governance relates directly to how government agencies will deploy AI in regulating and delivering services. The implications for the Judicial System arise from the intersection with privacy laws and personal data, indicating that these technologies may influence legal interpretations and practices. While employment and health data are mentioned, the Healthcare and Private Enterprises sectors show a weaker connection in the text overall than the Government and Judicial Systems. The discussion of personal data handling and discrimination also fits Academic and Research Institutions through the educational aspects of AI technology. The implications for Politics and Elections are less direct but can be inferred from the discussion of personal data and its use in political campaigning. The remaining sectors show varying but weaker relevance. The strongest sector ties are therefore with Government Agencies, the Judicial System, and Academic Institutions.


Keywords (occurrence): artificial intelligence (90), machine learning (1), automated (2)

Description: Authorizes first responder amputees to continue to serve as first responders; creates Florida Medal of Valor & Florida Blue/Red Heart Medal; prohibits use of motor vehicle kill switches; requires mandatory minimum term of imprisonment for attempted murder in first degree committed against specified justice system personnel; prohibits depriving specified officers of digital recording devices or restraint devices, rendering them useless, or otherwise preventing officer from defending himself or...
Summary: The bill enhances protections for law enforcement officers and first responders, establishes recognition awards, prohibits certain vehicle modifications, and introduces measures for managing public safety and emergency responses, including infrastructure mapping and crime reporting.
Collection: Legislation
Status date: April 29, 2025
Status: Enrolled
Primary sponsor: Judiciary Committee (12 total sponsors)
Last action: Ordered enrolled (April 29, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text primarily deals with legislation related to criminal justice and regulation of AI concerning data from first responder body cameras. The use of the phrase 'artificial intelligence' indicates that the legislation is addressing the implications of AI usage in a specific legal context, particularly regarding the monitoring and review of body camera footage without AI intervention. This suggests an emphasis on regulation and potential impact on justice system processes. However, it lacks broader references to social implications or detailed governance structures tied to AI technologies outside this context. Additionally, the section on AI applies to a specific criminal justice setting rather than a comprehensive view of AI technologies.


Sector:
Government Agencies and Public Services
Judicial system (see reasoning)

The text relates to the government sector through its implications for law enforcement (e.g., restrictions on AI use with body cameras) and speaks to the judicial system regarding how AI is managed in the context of law enforcement practices. It does not touch upon all sectors, but the clearest connections are to government operations and the judicial system due to the nature of the legislation and its focus on law enforcement.


Keywords (occurrence): automated (1)

Description: Prohibiting a person from using fraud to influence or attempt to influence a voter's voting decision; providing that fraud includes the use of synthetic media; and defining "synthetic media" as an image, an audio recording, or a video recording that has been intentionally created or manipulated with the use of generative artificial intelligence or other digital technology to create a realistic but false image, audio recording, or video recording.
Summary: The bill prohibits using deepfakes to fraudulently influence voter decisions in elections, defining deepfakes and establishing penalties for violations, thereby aiming to protect electoral integrity.
Collection: Legislation
Status date: April 3, 2025
Status: Engrossed
Primary sponsor: Jessica Feldmark (17 total sponsors)
Last action: Third Reading Passed (127-10) (April 3, 2025)

Keywords (occurrence): deepfake (10), synthetic media (2)

Description: Relating to criminal and civil liability related to sexually explicit media and artificial intimate visual material; creating a criminal offense; increasing a criminal penalty.
Summary: The bill establishes criminal and civil penalties for producing, distributing, or threatening deep fake sexually explicit material without consent, focusing on protecting individuals from exploitation and harm.
Collection: Legislation
Status date: June 1, 2025
Status: Enrolled
Primary sponsor: Juan Hinojosa (5 total sponsors)
Last action: Reported enrolled (June 1, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily addresses the regulation of deep fake media, particularly its production and distribution, which directly impacts social dynamics such as consent and victim rights, particularly in the context of sexual exploitation and privacy. This aspect makes it highly relevant to Social Impact. Given the legal implications surrounding consent and potential psychological harm caused by artificial intimate visual material, it also strongly relates to System Integrity, as it deals with accountability, compliance, and safeguards against misuse of AI technologies. Data Governance is somewhat relevant as it touches on information management concerning consent and legal liability, while Robustness has minimal relevance since the text does not discuss AI performance benchmarks or audit requirements in detail.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The legislation significantly relates to Private Enterprises, Labor, and Employment as it outlines the responsibilities of AI application providers and online platforms regarding the content they host, indicating a need for industry awareness and compliance. There's also mention of civil liabilities which can have implications on the Judicial System in terms of case law and damages awarded for non-compliance. Additionally, aspects of health and mental well-being of victims affected by deep fake media may touch on Healthcare, though this is a more tangential connection. The regulation is also relevant to Government Agencies and Public Services since enforcement and compliance measures would involve government oversight. Other sectors such as Politics and Elections, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified do not have immediate relevance to the content of this text.


Keywords (occurrence): artificial intelligence (7), machine learning (1), automated (1), deepfake (14)

Description: Social media; creating the Safe Screens for Kids Act. Effective date.
Summary: The Safe Screens for Kids Act prohibits minors from using social media without parental consent, mandates age verification, restricts data collection, and aims to protect minors' online safety.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Ally Seifried (sole sponsor)
Last action: Placed on General Order (March 4, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text heavily emphasizes the implications of AI technologies, algorithms, and data collection specifically as they relate to minors using social media platforms. In the context of Social Impact, it addresses concerns about potential psychological or emotional harm that could arise from interactions with AI-driven content and platforms. Data Governance is highly relevant as the act lays out strict mandates regarding the collection and processing of data from minor users, especially concerning de-identification and preventing targeted advertisements. System Integrity is also pertinent, especially regarding the prohibitions on using algorithms and AI to select or recommend content for minor users, promoting transparency and control. Robustness is not significantly addressed since the focus is on regulations and protective measures rather than establishing performance benchmarks or auditing mechanisms for AI. Overall, the relevance of AI-related portions to these categories leads to high scores in Social Impact, Data Governance, and System Integrity due to their comprehensive focus on protecting minors and maintaining ethical standards in AI usage.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text predominantly focuses on the intersection of social media technologies and the protection of minors, which has significant implications for the fields of Politics and Elections and Government Agencies and Public Services. The legislation places strict regulations on social media practices which would inherently affect how political campaigns utilize these platforms. Moreover, the act emphasizes the responsibilities of social media companies under legal frameworks, which also impacts Government Agencies' roles in enforcement. However, there's little direct mention of AI's role in the Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Cooperation, or NGOs. The focus is mainly on social media rather than the broader implications AI might have in other sectors. As such, the scores reflect a strong relevance to Politics and Elections and Government Agencies, while the other sectors receive lower relevance scores.


Keywords (occurrence): artificial intelligence (1), machine learning (2), algorithm (2)

Description: Prohibiting a person from utilizing certain personal identifying information or engaging in certain conduct in order to cause certain harm; prohibiting a person from using certain artificial intelligence or certain deepfake representations for certain purposes; and providing that a person who is the victim of certain conduct may bring a civil action against a certain person.
Summary: House Bill 1425 prohibits identity fraud using artificial intelligence and deepfakes, allowing victims to sue for harm while imposing penalties for fraudulent actions involving personal information.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: C.T. Wilson (sole sponsor)
Last action: Hearing 3/06 at 1:00 p.m. (Feb. 10, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses the implications of artificial intelligence (AI) and deepfake representations within the context of identity fraud. It outlines prohibitions on the use of AI and deepfakes to defraud or harm others, indicating a significant impact on societal behavior and individual rights. This is particularly relevant to the Social Impact category, where it discusses consumer protections and the potential psychological and material harm caused by AI misuse. Regarding Data Governance, the legislation does not delve deeply into data management or privacy but addresses the responsibility associated with personal identifying information. The System Integrity category is relevant as it references the need for intent and knowledge in deploying AI tools, creating a layer of accountability. Lastly, Robustness is less relevant, as the document does not present benchmarks or standards for AI systems but focuses on criminal implications rather than performance metrics. Overall, the Social Impact and System Integrity categories stand out as crucial due to the direct relevance of AI in harming individuals and the accountability required from users of AI technologies.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The legislation primarily addresses identity fraud in relation to AI and deepfakes, making it relevant both to the political landscape regarding election integrity and to the challenges of misinformation. However, it is less focused on how AI interfaces with government agencies or public services, since it does not address the governance of AI applications in these areas. While the legislation could indirectly affect healthcare and the judicial system through its identity fraud implications, it does not specifically delineate AI's role in those sectors. Overall, the most prominent sectors are Politics and Elections, due to implications concerning misinformation, and Government Agencies and Public Services, because of the law enforcement aspects, although the direct application to these sectors is limited.


Keywords (occurrence): artificial intelligence (4) automated (1) deepfake (6)

Description: An act to amend Sections 3273.65, 3273.66, 3273.67, and 3345.1 of the Civil Code, relating to social media platforms.
Summary: Assembly Bill No. 1137 amends laws regarding social media platforms' responsibilities to report and handle child sexual abuse material, enhancing user reporting mechanisms and imposing penalties for non-compliance to protect minors.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Maggy Krell (3 total sponsors)
Last action: From committee: Do pass and re-refer to Com. on JUD. (Ayes 13. Noes 0.) (April 22). Re-referred to Com. on JUD. (April 23, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses amendments to the civil code concerning the responsibilities of social media platforms with respect to child sexual abuse material. It mentions 'artificial intelligence' in the context of data transparency, indicating AI's role in ensuring regulatory compliance; this aligns with Social Impact concerns because of AI's potential consequences for vulnerable populations, such as minors depicted in harmful content. System Integrity is also strongly implied by the requirement for human intervention in report assessments, which demands reliability from AI systems that manage sensitive content. The text does not, however, advocate performance benchmarks or robust guidelines for AI systems, which minimizes its relevance to Robustness. Data Governance is relevant because the bill establishes liabilities for the handling of data and mechanisms for reporting, emphasizing accountability in data management practices related to AI usage. Overall, the text is significant for the societal implications of AI and the governance of data within AI systems, resulting in moderately high scores for Social Impact and Data Governance, while System Integrity also carries relevance due to the necessity for human oversight.


Sector:
Government Agencies and Public Services (see reasoning)

The text primarily relates to the Government Agencies and Public Services sector because it sets out legislative measures governing how social media platforms must operate under state oversight. It addresses the legal obligations of these platforms to ensure the safety of minors and protect against the facilitation of child sexual abuse, reflecting the government's role in protecting citizens through regulation. There is no direct relevance to Politics and Elections, the Judicial System, or Healthcare, as those sectors are not explicitly mentioned or implicated in the text's provisions. Social media companies fall under Private Enterprises, but the context of the bill aligns more with public service responsibilities than with corporate governance, making Government Agencies and Public Services the most relevant sector for this text.


Keywords (occurrence): artificial intelligence (1) algorithm (1)

Description: Long-range information technology appropriations
Summary: The bill appropriates funds for various state information technology capital projects through 2027, aiming to enhance system security, efficiency, and overall technological infrastructure across state agencies.
Collection: Legislation
Status date: April 28, 2025
Status: Enrolled
Primary sponsor: John Fitzpatrick (sole sponsor)
Last action: (H) Sent to Enrolling (April 28, 2025)

Category:
Data Governance
System Integrity (see reasoning)

The text mentions various information technology projects, including a specific mention of 'Artificial Intelligence and Legacy System Modernization (Technical Debt Relief Fund).' This inclusion places the bill within the context of AI, particularly the integration and modernization of AI alongside existing legacy systems. However, the bill does not deeply explore the implications of AI for society, data governance, system integrity, or robustness within the broader context of these projects. The categories are therefore only moderately to slightly relevant, since the bill does not emphasize social impacts, in-depth data management, or performance benchmarks specifically tied to AI systems.


Sector:
Government Agencies and Public Services (see reasoning)

The text does not clearly align with any specific sector, as it primarily discusses appropriations for IT capital projects without addressing a particular operational sector such as healthcare, the judicial system, or public services in terms of AI applications. However, its mentions of statewide networks and cybersecurity initiatives suggest it relates broadly to government agencies and public services, warranting a moderate relevance rating here. The other sectors remain unrelated or only tangentially relevant in this context of funding and appropriations.


Keywords (occurrence): automated (1)

Description: AN ACT relating to insurance; imposing requirements relating to prior authorization; prescribing certain requirements relating to the use of artificial intelligence by health insurers; requiring the compilation and publication of certain reports relating to prior authorization; providing for the investigation and adjudication of certain violations; providing for the imposition of civil and administrative penalties for such violations; and providing other matters properly relating thereto.
Summary: A.B. 295 revises health insurance provisions, reducing prior authorization response times, regulating AI use in decision-making, and enhancing reporting and penalty mechanisms for insurers.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Introduced
Primary sponsor: Thaddeus Yurek (3 total sponsors)
Last action: From printer. To committee. (Feb. 26, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text prominently addresses the use of artificial intelligence by health insurers, which directly relates to the implications of AI on society particularly in the context of healthcare processes and regulations surrounding prior authorization. It specifies requirements for disclosing the use of AI systems and prohibits their use in making adverse decisions without human oversight, emphasizing accountability and consumer protection. Thus, it has significant relevance to the Social Impact category. The discussion on automated decision tools and requirements for independent reviews also fits well with the System Integrity category as it emphasizes transparency and human oversight. There are also components that touch on data management and practices related to reporting, justifying a score for Data Governance. Robustness, however, does not directly apply since the focus is more on oversight rather than performance benchmarks. Therefore, its relevance scores in the categories are high due to explicit mentions and implications for AI in health insurance, underscoring accountability, transparency, and consumer protection.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text is fundamentally situated within the Healthcare sector, as it explicitly deals with the use of artificial intelligence in health insurance, outlining specific obligations for health carriers that employ AI systems in their authorization processes. Because it involves not only operational processes within healthcare but also regulatory aspects affecting insurers and patient rights, it closely aligns with Healthcare sector legislation. The mention of Medicaid and children's health coverage further confirms this alignment. While aspects of the bill may relate to government operations, the primary focus is clearly on health insurers and the regulatory framework governing their use of AI. The Healthcare sector is therefore scored as extremely relevant, while other sectors such as Politics and Elections or Private Enterprises carry little relevance.


Keywords (occurrence): artificial intelligence (12) automated (19)

Description: AN ACT relating to elections; prohibiting the use of artificial intelligence in equipment used for voting, ballot processing or ballot counting; requiring certain published material that is generated through the use of artificial intelligence or that includes a materially deceptive depiction of a candidate to include certain disclosures; prohibiting, with certain exceptions, the distribution of synthetic media that contains a deceptive and fraudulent deepfake of a candidate; providing penalti...
Summary: The bill prohibits the use of artificial intelligence in voting equipment, mandates disclosures for AI-generated materials related to elections, and regulates the distribution of deceptive synthetic media, aiming to enhance election integrity.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Bert Gurr (4 total sponsors)
Last action: Read first time. Referred to Committee on Legislative Operations and Elections. To printer. (Feb. 20, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text primarily addresses the implications of artificial intelligence in the electoral process. It explicitly details the prohibition of AI usage in voting equipment, mandates disclosures for AI-generated material, and establishes penalties for the distribution of deceptive deepfakes. Therefore, it is highly relevant to the Social Impact category due to its focus on the fairness and integrity of elections, the protection of candidates' reputations, and preventing misinformation. The relevance to Data Governance is moderate, as it discusses accuracy and transparency in published materials, but does not delve deeply into data management issues. System Integrity is also moderately relevant since it emphasizes the need for securing electoral processes against automation, but does not focus on inherent system security measures. The Robustness category is less relevant as there are no discussions about benchmarks or auditing measures in AI performance within this context.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text directly relates to the Politics and Elections sector as it outlines regulations on the use of AI in electoral processes, aiming to protect the integrity and fairness of elections. The Government Agencies and Public Services sector is also relevant as it discusses government regulations regarding voting equipment. However, the other sectors like Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not applicable here as the content is primarily focused on election-related provisions and AI's role in that particular context. Thus, the scores reflect that restriction.


Keywords (occurrence): artificial intelligence (9) machine learning (1) deepfake (6) synthetic media (10)

Description: Relating to the use of artificial intelligence by health care providers.
Summary: This bill establishes regulations for the use of artificial intelligence by health care providers in Texas, ensuring responsible use and patient notification about AI involvement in their care.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Salman Bhojani (sole sponsor)
Last action: Filed (March 11, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text focuses primarily on the use of artificial intelligence by health care providers, emphasizing the need for regulation, responsibilities of health care providers when using AI, and mandatory disclosures to patients. This directly relates to the Social Impact category, as it considers accountability and consumer protections regarding the use of AI in health care contexts and addresses the implications for patient care and trust. For Data Governance, it recognizes the need for rules governing the responsible use of AI in managing health care data, though not explicitly covering data management protocols; hence it is moderately relevant. System Integrity relevance stems from the mention of rules governing the procedures for AI use, although the text does not delve into security measures or oversight specifics. Lastly, the Robustness category is not directly addressed, as the text does not mention benchmarks or performance standards for AI systems. Therefore, scores reflect the varying degrees of relevance to these categories.


Sector:
Healthcare (see reasoning)

This bill is explicitly geared towards the healthcare sector, detailing how AI is integrated into health care practices and involves healthcare providers' responsibilities in using AI. It mentions AI's role in patient communication, which is pertinent to the Healthcare sector, ensuring the responsible use of technology in medical practices. There is minimal relevance to other sectors as the focus remains strictly on healthcare providers. While there are implications for policy considerations that may affect Government Agencies, the specifics outlined are narrowly tailored to healthcare, which limits broader sector relevance. Thus, a high score for Healthcare and a lower score for other sectors.


Keywords (occurrence): artificial intelligence (7) machine learning (1) automated (1)

Description: An act to add Chapter 5.9.5 (commencing with Section 11549.80) to Part 1 of Division 3 of Title 2 of the Government Code, relating to artificial intelligence.
Summary: Assembly Bill No. 1405 mandates the California Government Operations Agency to establish an enrollment system for AI auditors, ensuring accountability and transparency in auditing AI systems, effective January 1, 2027.
Collection: Legislation
Status date: Feb. 21, 2025
Status: Introduced
Primary sponsor: Rebecca Bauer-Kahan (2 total sponsors)
Last action: Read second time and amended. (April 3, 2025)

Keywords (occurrence): artificial intelligence (7) machine learning (1) automated (2)

Description: As enacted, requires the board of trustees of the University of Tennessee, the board of regents, each local governing board of trustees of a state university, each local board of education, and the governing body of each public charter school to adopt a policy regarding the use of artificial intelligence technology by students, faculty, and staff for instructional and assignment purposes. - Amends TCA Title 49.
Summary: The bill requires public universities and charter schools in Tennessee to adopt policies for the use of artificial intelligence in education by set deadlines, enhancing instructional methods and accountability.
Collection: Legislation
Status date: March 19, 2024
Status: Passed
Primary sponsor: Scott Cepicky (4 total sponsors)
Last action: Comp. became Pub. Ch. 550 (March 19, 2024)

Category:
Societal Impact (see reasoning)

This legislation includes specific mandates regarding the use of artificial intelligence in educational settings, which has direct implications for social impact through its influence on teaching and learning. The act focuses on adopting policies pertaining to the use of AI technology by students, faculty, and staff. It addresses potential impacts on students' educational outcomes and social interactions, thereby linking to social equity and fairness, as well as accountability in education. It does not explicitly address aspects relevant to data governance, system integrity, or robustness as defined, such as security measures or compliance standards inherent in AI system operation. Hence, this bill closely aligns with the Social Impact category, while not meeting the criteria for the other categories.


Sector:
Academic and Research Institutions (see reasoning)

The legislation specifically pertains to academic institutions and their use of AI for instructional purposes. By mandating universities and public schools to adopt a policy on AI use among students and faculty, it directly impacts educational systems. This legislation does not cover other sectors like politics, healthcare, or private enterprises, which are not mentioned in the text. Its focus is primarily on the governance of AI in educational settings, making it highly relevant to the Academic and Research Institutions sector and not applicable to the others.


Keywords (occurrence): artificial intelligence (8) automated (3)