4828 results:
Description: To amend sections 1331.01, 1331.04, and 1331.16 and to enact sections 1331.05 and 1331.50 of the Revised Code to regulate the use of pricing algorithms.
Summary: The bill regulates pricing algorithms by prohibiting the use of nonpublic competitor data, aiming to prevent anti-competitive practices and ensure fair market competition in Ohio.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Louis Blessing
(2 total sponsors)
Last action: Introduced (Feb. 4, 2025)
Societal Impact (see reasoning)
The text primarily discusses the regulation of pricing algorithms, which incorporate artificial intelligence and machine learning techniques. This directly relates to how AI affects business practices and pricing strategies, which can have social repercussions regarding fairness and bias in pricing decisions. The legislation does not focus explicitly on data governance, system integrity, or the establishment of performance benchmarks, which would typically characterize robustness. Therefore, the most relevant category for this text is Social Impact, given its implications for fairness and accountability in pricing decisions influenced by AI algorithms, with more limited relevance to the other categories.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
This text primarily pertains to the private sector and its use of AI within business contexts, particularly pricing algorithms that can affect competitive practices. It emphasizes business regulations that will affect how commercial entities set their prices and interact with market data. However, it does not specifically address sectors such as government agencies, healthcare, academia, or international standards. Hence, the relevance to the Private Enterprises, Labor, and Employment sector is moderate. There is no significant emphasis on political, legal, or institutional use of AI, leading to lower relevance scores across those sectors.
Keywords (occurrence): artificial intelligence (1) algorithm (19)
Description: Bans the use of AI on state assets if the AI is owned or developed by a foreign corporate entity. (Flesch Readability Score: 68.0). Prohibits any hardware, software or service that uses artificial intelligence from being installed or downloaded onto or used or accessed by state information technology assets if the artificial intelligence is developed or owned by a corporate entity that is incorporated or registered under the laws of a foreign country. Provides for exceptions.
Summary: House Bill 3936 prohibits the use of AI on state assets if developed or owned by foreign corporate entities, enhancing the security of state information technology. Exceptions apply for certain regulatory uses.
Collection: Legislation
Status date: March 18, 2025
Status: Introduced
Primary sponsor: Darcey Edwards
(sole sponsor)
Last action: First reading. Referred to Speaker's desk. (March 18, 2025)
Societal Impact
System Integrity (see reasoning)
The text explicitly addresses the use of artificial intelligence on state assets, focusing on security concerns related to AI that is owned or developed by foreign corporate entities. This directly reflects societal concerns about the impact of foreign technology on local governance and security, highlighting issues of accountability and transparency in the use of AI. Given this focus on the societal implications of AI, such as security and accountability, the document aligns closely with the Social Impact category. The text does not address data collection or governance standards, making the Data Governance category less relevant. It does imply aspects of System Integrity by enforcing limits on the installation of AI software, but its primary focus on banning specific AI products renders it more pertinent to Social Impact. It does not delve into the technical specifications of AI algorithms or system benchmarks, so the Robustness category is not applicable. Evaluated in these terms, Social Impact scores highest among the categories.
Sector:
Government Agencies and Public Services (see reasoning)
The text primarily revolves around the implications of AI's use within state information technology assets, focusing on the prohibition of foreign AI products for state security. This directly relates to Government Agencies and Public Services, as it deals with how AI is utilized by state agencies and the systems in place to protect public assets. It does not discuss issues pertinent to Politics and Elections or the Judicial System, as it lacks references to regulatory frameworks or fair elections linked to AI. Similarly, the other sectors like Healthcare, Private Enterprises, Academic Institutions, International Standards, and Nonprofits do not pertain to the specific discussion of state-defined AI usage. Thus, the score for Government Agencies and Public Services reflects its importance, while other sectors remain at 1 due to lack of relevance.
Keywords (occurrence): artificial intelligence (3)
Description: To require the Secretary of Homeland Security to produce a report on emerging threats and countermeasures related to vehicular terrorism, and for other purposes.
Summary: The bill mandates the Secretary of Homeland Security to produce a report on vehicular terrorism threats and countermeasures, enhancing public safety and coordinated response strategies against such attacks.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Carlos Gimenez
(2 total sponsors)
Last action: Referred to the Subcommittee on Counterterrorism and Intelligence. (Feb. 26, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text of the Department of Homeland Security Vehicular Terrorism Prevention and Mitigation Act of 2025 contains several references to artificial intelligence and machine learning, particularly in the context of threats posed by emerging automotive technologies, including autonomous vehicles and advanced driver assistance systems (ADAS). These technologies are highlighted as creating vulnerabilities that could be exploited for vehicular terrorism, particularly with the mention of AI-enabled technologies and predictive analytics used to detect suspicious behaviors. This aligns the text significantly with the categories of Social Impact, due to its implications for public safety and civil liberties, and Data Governance, given the attention to privacy and civil rights in the deployment of countermeasures. System Integrity is relevant since it discusses technology protocols to monitor and restrict vehicle access, and Robustness is relevant because it touches on the need for guidelines in research and development of security measures. Overall, the text is particularly focused on the societal impacts and governance aspects of AI technologies used in the context of countering vehicular terrorism.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
International Cooperation and Standards (see reasoning)
The legislation is highly relevant across multiple sectors. It directly addresses Government Agencies and Public Services, since it reinforces the role of the Department of Homeland Security in managing vehicle-based threats and enhances public service through safety measures. The Judicial System is moderately relevant due to the implications this report may have for law enforcement's response strategies, but it is not a central focus. For Healthcare, there is limited relevance: the text touches on healthcare facilities as high-risk locations but does not delve into health systems. It has implications for Private Enterprises, Labor, and Employment, since the mention of industry collaboration highlights the role of private-sector engagement with technology and safety practices. The legislation also touches on International Cooperation and Standards because of its broader implications for the development of security standards across industries. However, it does not fit squarely into Academic and Research Institutions or Nonprofits and NGOs. Overall, the most significant relevance is to Government Agencies and Public Services and Private Enterprises, and the report's focus on technology and AI tools extends its relevance across these sectors.
Keywords (occurrence): artificial intelligence (1) machine learning (2)
Description: An act to amend Section 8586.5 of the Government Code, relating to technology.
Summary: Assembly Bill 979 mandates the California Cybersecurity Integration Center to create an AI Cybersecurity Collaboration Playbook by July 2026. This playbook aims to enhance information sharing within the AI community, strengthening cybersecurity defenses against emerging threats.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Jacqui Irwin
(sole sponsor)
Last action: From committee chair, with author's amendments: Amend, and re-refer to Com. on P. & C.P. Read second time and amended. (March 28, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text acknowledges and addresses multiple aspects of artificial intelligence (AI), particularly around cybersecurity and the development of frameworks for AI-related activities. The references to the California AI Cybersecurity Collaboration Playbook and automated decision systems show an emphasis on safeguarding the public and private sectors from potential AI-driven threats, connecting closely with the Social Impact category. The bill also establishes guidelines for overseeing data sharing related to AI systems, which ties directly into Data Governance. Legislative mandates concerning human oversight and system transparency, necessary for cybersecurity, connect to System Integrity. The standards developed for AI cybersecurity performance also resonate with the Robustness category, as the bill requires frameworks for assessing and auditing these AI systems. Overall, the legislative focus on AI in these various capacities suggests that the bill is highly relevant to each of the categories, especially concerning its implications for society, data handling, system oversight, and performance metrics, warranting a score of 5 in each instance.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The bill's provisions significantly impact several sectors, particularly regarding how AI is integrated and regulated within government entities for cybersecurity efforts. The direct mentions of interaction with various state departments and agencies position the legislation in the Government Agencies and Public Services sector. The components regarding threat assessment and information sharing also tie into the Judicial System, as they pertain to legal standards surrounding AI's use in public governance. Although less emphasized, certain references to operational collaboration could touch on aspects relevant to nonprofits and NGOs for cybersecurity support. The text focuses primarily on government operations and cybersecurity, prompting strong relevance to the Government Agencies and Public Services sector and moderate relevance to the Judicial System sector. Therefore, it scores a 5 for Government Agencies and Public Services and a 3 for the Judicial System.
Keywords (occurrence): artificial intelligence (9) machine learning (1) automated (2)
Description: To direct the Secretary of Agriculture and the Director of the National Science Foundation to carry out cross-cutting and collaborative research and development activities focused on the joint advancement of Department of Agriculture and National Science Foundation mission requirements and priorities, and for other purposes.
Summary: The NSF and USDA Interagency Research Act mandates collaboration between the Department of Agriculture and the National Science Foundation to enhance agricultural and scientific research, addressing various critical issues and technology advancements.
Collection: Legislation
Status date: June 4, 2024
Status: Introduced
Primary sponsor: Frank Lucas
(2 total sponsors)
Last action: Ordered to be Reported (Amended) by Voice Vote. (June 13, 2024)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions key AI-related terms such as 'artificial intelligence,' 'machine learning,' and 'automation' as part of the focus areas for collaborative research and development activities. Because AI is a pivotal element in fostering agricultural advances and technology improvements, the text is significantly relevant to the categories of Social Impact and Data Governance, as it pertains to how these technologies could affect farming practices and data collection efforts. However, the text places less emphasis on system integrity or robustness than on the first two categories. Thus, the categorization should reflect the focus on the societal and data implications of AI while acknowledging the weaker connection to system integrity and robustness.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The legislation specifically ties into the agricultural sector through its focus on research related to the Department of Agriculture and the National Science Foundation’s mission requirements. Given the potential applications of AI in agricultural practices as highlighted in the text (like precision agriculture tools and food safety technologies), it is most relevant to the Healthcare sector due to data handling and food safety concerns, and to some degree, to Academic and Research Institutions through its emphasis on STEM education and workforce development. However, there is less direct relevance to sectors like Politics and Elections, Government Agencies, or the Judicial System. This reflects a potential interdisciplinary effect within the Agriculture and Educational sectors.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: Quantum research tax incentives. Amends the state sales and use tax exemption for data centers to include projects for investments in a quantum computing research, advanced computing, and defense infrastructure network that result in a minimum qualified investment within five years of at least $50,000,000.
Summary: House Bill 1601 amends Indiana tax law to provide sales and use tax exemptions for investments in quantum computing and advanced technologies, promoting economic development in the state.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Engrossed
Primary sponsor: Edmond Soliday
(8 total sponsors)
Last action: Senator Randolph added as cosponsor (April 3, 2025)
Description: An Act regulating autonomous vehicles; and providing for an effective date.
Summary: SB 148 regulates autonomous vehicles in Alaska, establishing requirements for registration, liability in accidents, and defining the scope of autonomous technology. The aim is to ensure safe operation and accountability.
Collection: Legislation
Status date: March 28, 2025
Status: Introduced
Primary sponsor: Robert Myers
(sole sponsor)
Last action: REFERRED TO TRANSPORTATION (March 28, 2025)
Societal Impact
System Integrity (see reasoning)
This text explicitly addresses regulations concerning the operation of autonomous vehicles, which touch on several relevant categories. For Social Impact, the text has implications for public safety and accountability, especially in accidents involving autonomous vehicles, making it very relevant. In terms of Data Governance, while the text does not directly outline data management practices, the operation of autonomous vehicles may implicitly involve data-related regulations, allowing a slightly relevant score. The System Integrity category is also very relevant, as the legislation outlines requirements for human intervention and defines autonomous technology with an emphasis on safety and control mechanisms. Lastly, concerning Robustness, the text does not clearly delve into performance benchmarks or auditing for AI systems related to autonomous vehicles, resulting in a score leaning toward slightly relevant. Overall, the legislation's relevance to the established categories is pronounced because autonomous vehicles are directly tied to artificial intelligence and its societal implications.
Sector:
Government Agencies and Public Services
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The text's focus on autonomous vehicles directly aligns with several sectors. Politics and Elections is only slightly relevant, since the bill does not discuss political implications or regulations regarding electoral processes. The Government Agencies and Public Services sector is moderately relevant, as the regulations concern public safety and the legal use of AI by state agencies responsible for transportation. The Judicial System has a very relevant connection, especially regarding liability and accountability in the event of accidents involving these vehicles. The Private Enterprises, Labor, and Employment sector may be tangentially related due to potential impacts on employment in transportation, but it is not a primary focus of the bill. There are no clear connections with Healthcare, Academic and Research Institutions, International Cooperation and Standards, or Nonprofits and NGOs. The Hybrid, Emerging, and Unclassified sector is also relevant, as autonomous vehicles represent a significant technological development not wholly classified within the existing sectors. Thus, the scores for the sectors vary based on the direct and indirect implications of the text.
Keywords (occurrence): automated (6) autonomous vehicle (4)
Description: An Act to Address the Safety of Nurses and Improve Patient Care by Enacting the Maine Quality Care Act
Summary: The Maine Quality Care Act mandates minimum staffing levels for direct care registered nurses in healthcare facilities, aiming to enhance nurse safety and improve patient care.
Collection: Legislation
Status date: March 25, 2025
Status: Introduced
Primary sponsor: Stacy Brenner
(7 total sponsors)
Last action: Unfinished Business (March 25, 2025)
The text primarily focuses on legislation related to the quality of care and staffing requirements for nurses in healthcare facilities. It does not specifically mention AI, nor does it address the implications of AI systems in the healthcare context. Other related terms like automation or machine learning are absent, indicating that while AI could potentially impact nursing or healthcare delivery, this specific bill does not incorporate relevant AI considerations.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
This bill relates directly to the healthcare sector, as it establishes staffing requirements for nurses in healthcare facilities to ensure adequate patient care. However, it does not mention AI specifically, which would typically limit its relevance to other sectors such as politics or government services. The focus is on patient care and nursing rather than any legislative or operational aspects involving AI implementations within healthcare.
Keywords (occurrence): artificial intelligence (1)
Description: Artificial intelligence; Responsible Deployment of AI Systems Act; AI Council; AI Regulatory Sandbox Program; Artificial Intelligence Workforce Development Program; effective date.
Summary: The Responsible Deployment of AI Systems Act establishes regulations for AI systems in Oklahoma, mandating risk assessments, compliance with laws, and the creation of oversight councils and workforce development initiatives.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Arturo Alonso-Sandoval
(sole sponsor)
Last action: Authored by Representative Alonso-Sandoval (Feb. 3, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly focuses on the development, regulation, and responsible deployment of artificial intelligence systems. It outlines various evaluations, audits, and risk classifications that AI systems must undergo, thereby addressing potential societal impacts, risks to individuals, and calls for transparency. This directly aligns with aspects of Social Impact, especially concerning accountability, bias, and ethical use of AI. The sections on governance and oversight indicate a strong relevance to System Integrity as they require qualified oversight, documentation, and independent audits of AI systems, ensuring security and compliance with existing laws. Additionally, it touches on elements of Data Governance by requiring the identification of potential biases in AI data sets and compliance with data privacy laws. Robustness is also relevant since the bill mandates assessments and audits that function as benchmarks for AI systems' performance. Overall, Social Impact and System Integrity are the two most relevant categories, with Data Governance and Robustness following closely behind.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)
The text explicitly addresses the use and regulation of artificial intelligence across various sectors. It primarily relates to Government Agencies and Public Services, where AI systems are deployed, monitored, and evaluated according to the established guidelines. It also involves the Judicial System, as it touches on potential impacts on civil liberties and rights and emphasizes accountability for AI systems used in decision-making processes. The legislation is less relevant to sectors such as Healthcare and Private Enterprises because it focuses on regulatory frameworks rather than specific applications or implications of AI within them. It could affect Academic and Research Institutions through the introduction of the Artificial Intelligence Workforce Development Program, which promotes AI-related education and training. However, given its broad scope regarding governance and oversight of AI systems and implications across public services, its strongest alignment is with Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (13) deepfake (1)
Description: An Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence
Summary: The bill prohibits the use of AI chatbots in consumer transactions unless consumers are clearly informed they are not interacting with a human, ensuring transparency and preventing deception.
Collection: Legislation
Status date: April 17, 2025
Status: Introduced
Primary sponsor: Amy Kuhn
(6 total sponsors)
Last action: In concurrence. ORDERED SENT FORTHWITH. (April 17, 2025)
Summary: The bill H.R. 8753 addresses persistent challenges faced by communities with shared ZIP Codes by directing the USPS to assign unique ZIP Codes, improving mail delivery and geographic identity.
Collection: Congressional Record
Status date: Dec. 17, 2024
Status: Issued
Source: Congress
Summary: The "Equal Treatment of Public Servants Act of 2024" aims to amend Social Security rules, eliminating pension offsets, revising benefit calculations for public service workers, and enhancing reporting on noncovered earnings.
Collection: Congressional Record
Status date: Dec. 17, 2024
Status: Issued
Source: Congress
Description: An Act to Prohibit the Use of Artificial Intelligence in the Denial of Health Insurance Claims
Summary: The bill prohibits health insurers from using artificial intelligence to deny or adjust health insurance claims unless a qualified clinical peer reviews the decision, ensuring fair treatment.
Collection: Legislation
Status date: March 25, 2025
Status: Introduced
Primary sponsor: Michael Tipping
(10 total sponsors)
Last action: In concurrence. ORDERED SENT FORTHWITH. (March 25, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text directly addresses the impact of AI within the healthcare sector, specifically prohibiting its use to deny health insurance claims, which fundamentally touches on accountability, fairness, and discrimination in AI systems. It outlines clear protections against AI-driven discrimination based on a variety of important factors, indicating a commitment to reducing potential harms associated with AI in this context. This makes it highly relevant to the 'Social Impact' category, while the detailed requirements for AI use and governance associate it with 'Data Governance.' The criteria set forth for AI utilization also indicate some concern for the integrity and reliability of these systems, but these are not the primary focus, so 'System Integrity' and 'Robustness' are relevant to a lesser degree, as the text emphasizes accountability and fairness over technical performance metrics.
Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
This legislation specifically regulates the use of AI in the healthcare context, particularly concerning insurance claims. The AI systems’ decisions must take into account individual clinical circumstances and provider recommendations, emphasizing fairness and transparency. This bill significantly engages the healthcare sector by addressing direct implications of AI on health insurance processes. Additionally, the mention of proper governance of AI usage aligns it with regulations relevant to the healthcare sector. Thus, the relevance to both 'Healthcare' and indirectly to 'Private Enterprises, Labor, and Employment' is noteworthy, but the primary focus remains on healthcare.
Keywords (occurrence): artificial intelligence (7)
Description: Amend The South Carolina Code Of Laws By Adding Article 9 To Chapter 5, Title 39 So As To Provide Definitions; To Provide That A Social Media Company May Not Permit Certain Minors To Be Account Holders; To Provide Requirements For Social Media Companies; To Provide That A Social Media Company Shall Provide Certain Parents Or Guardians With Certain Information; To Provide That A Social Media Company Shall Restrict Social Media Access To Minors During Certain Hours; To Provide For Consumer Comp...
Summary: The South Carolina Social Media Regulation Act aims to restrict minors' access to social media by implementing age verification, parental consent, and limiting features that promote excessive use.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Engrossed
Primary sponsor: Weston Newton
(17 total sponsors)
Last action: Scrivener's error corrected (Feb. 21, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation is highly relevant to Social Impact, as it specifically addresses the use of AI-driven features in social media platforms, particularly concerning minors. It aims to protect minors from potential harms that can arise from engaging with social media, including compulsive usage, exposure to harmful content, and data privacy concerns. The provisions aimed at regulating how social media companies interact with minors inherently connect to concerns about AI-driven personalized recommendations and targeted advertising, making this category extremely relevant. Data Governance is also relevant, as the bill includes strict regulations on how personal data of minors should be collected, used, and shared, emphasizing the need for accuracy and transparency in data practices, particularly in AI systems that process minors' information. System Integrity has moderate relevance; while it doesn't focus on security protocols, it emphasizes protecting minors from exploitative practices, which can connect to broader notions of system integrity in AI design. Robustness has minimal relevance as there is no direct focus on benchmarks or performance metrics for AI systems established in this legislation. Overall, the bill addresses significant concerns regarding AI's impact on minors and data governance, thus categorizing it under Social Impact and Data Governance as the most relevant categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill has clear implications for the Government Agencies and Public Services sector, particularly as it pertains to regulating and overseeing social media companies in their interactions with minors. While it does not explicitly address political campaigns or electoral processes (Politics and Elections), it does touch on regulating online platforms to protect youth, making it relevant to public service delivery. The Judicial System is not explicitly mentioned and thus receives a lower score. Healthcare receives a lower score as well, since there is no direct focus on AI regulation in that sector. The bill indirectly affects Private Enterprises, Labor, and Employment, as it requires social media companies to adjust their operational strategies concerning minors, but this does not make it highly relevant. Academic and Research Institutions are implicated only insofar as the bill concerns minors and are not a primary focus, thus scoring low. International Cooperation and Standards are not addressed either. Nonprofits and NGOs, given their potential interest in child protection, could be slightly relevant, but no explicit collaborations or regulations outlined in the bill warrant high scores. The emphasis on minors' security positions this bill primarily within Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (1) automated (3) recommendation system (1)
Description: Amend KRS 186.450 to allow persons who are at least 15 years of age to apply for a motor vehicle instruction permit; establish that an instruction permit is valid for four years; amend KRS 186.410, 186.452 and 159.051 to conform; EMERGENCY.
Summary: The bill amends Kentucky's motor vehicle instruction permit laws, lowering the age to apply to 15, establishing supervision requirements, and enhancing penalties for driving violations among minors.
Collection: Legislation
Status date: March 13, 2025
Status: Enrolled
Primary sponsor: Steven Rudy
(32 total sponsors)
Last action: delivered to Governor (March 13, 2025)
The text primarily concerns the issuance and regulation of motor vehicle instruction permits and related requirements for minors in Kentucky. While there is a mention of 'automated driving system' in the context of licensing, the text does not delve into the implications of AI on society, nor does it address data governance, system integrity, or robustness related to AI. The involvement of AI is minimal and does not call for legislation addressing broader impacts, data collection, or performance benchmarks. Therefore, the relevance to the categories is low.
Sector: None (see reasoning)
The text is focused on the regulations for instruction permits and does not engage extensively with the sectors defined. Although there is a mention of 'automated driving systems', it does not address political processes, public services, or any application of AI in healthcare, legal systems, employment, or education. Its primary focus is on driver's education regulations and licensing, thus rendering it irrelevant to the specified sectors.
Keywords (occurrence): automated (2) autonomous vehicle (1)
Description: The purpose of this bill is to prohibit the use of synthetic media and artificial intelligence to influence an election.
Summary: The bill prohibits using synthetic media and artificial intelligence to influence elections in West Virginia, imposing penalties for violations, and mandates disclosures to ensure transparency regarding manipulated content.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Jack Woodrum
(3 total sponsors)
Last action: Laid over on 1st reading 2/28/2025 (Feb. 28, 2025)
Societal Impact
Data Governance (see reasoning)
The text of the bill explicitly relates to the use of synthetic media and artificial intelligence in the context of influencing elections. It outlines the definition and regulation of AI-generated content, focusing on its implications for misinformation and potential harm to democratic processes. This positions the bill primarily within the realm of Social Impact, as it addresses the societal issues posed by AI misuse. While the regulation of data in synthetic media is touched upon, its focus is more on prohibiting the misuse rather than comprehensive data governance. System Integrity and Robustness do not significantly apply as the bill is more concerned with the ethical implications and penalties related to AI usage. Overall, the primary emphasis is on the social implications of AI in political contexts.
Sector:
Politics and Elections (see reasoning)
The bill specifically addresses the regulation of synthetic media and AI in the context of influencing elections, aligning closely with the political sector. It sets out rules about what is permissible in political advertising and establishes penalties for violations, directly impacting the political landscape. The implications for Government Agencies and Public Services are minimal since it primarily deals with electoral processes rather than public services at large. Other sectors such as Judicial System, Healthcare, etc., are not relevant as they do not pertain to the content of the bill. Therefore, the clear relevance to Politics and Elections is strong, while other sectors receive much lower scores.
Keywords (occurrence): artificial intelligence (6) synthetic media (20)
Description: Concerning tools to protect minor users of social media.
Summary: The bill mandates social media companies in Colorado to implement protective measures for minor users, including age verification, user control settings, and privacy enhancements, aimed at safeguarding minors' mental health and data.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Jarvis Caldwell
(4 total sponsors)
Last action: Introduced In House - Assigned to Health & Human Services (Feb. 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily addresses the impact of social media on minors, which ties closely to how AI can influence user experiences through algorithms. The mention of algorithms indicates concerns about the social impact of AI, particularly regarding mental health and safety of minors in social media contexts. While it does touch upon data privacy, its focus is more on the implications for youth rather than on robust governance of AI data practices. This legislation is particularly relevant for how AI-driven features influence minors and sets requirements for social media platforms, demonstrating a clear concern for societal outcomes, making it fit well in the 'Social Impact' category. The document does suggest a governance framework for data utilization with implications for data privacy and security, hence having some relevance to the 'Data Governance' category, but not as prominently as 'Social Impact'. The mention of requiring oversight and the principles behind algorithmic engagement suggests a minor connection to 'System Integrity'. 'Robustness' is not directly addressed since there are no specified benchmarks or performance measures mentioned in the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation directly affects the 'Government Agencies and Public Services' sector, particularly as it pertains to regulations around social media use for minors, suggesting a role for governmental bodies in establishing protections. It also has implications for 'Private Enterprises, Labor, and Employment' because it mandates requirements for social media companies, including the operation of algorithmic systems that could affect user experiences and company practices concerning minors. This legislation does not specifically align with the 'Judicial System', 'Healthcare', 'Academic and Research Institutions', or 'Politics and Elections' sectors. Given the focus on AI as utilized by social media companies, the relevance to 'Hybrid, Emerging, and Unclassified' is minimal compared to more defined sectors. Overall, it mainly concerns government regulation directly impacting the operations of social media platforms.
Keywords (occurrence): automated (1) recommendation system (2) algorithm (2)
Description: Relating to the use of an automated employment decision tool by an employer to assess a job applicant's fitness for a position; imposing an administrative penalty.
Summary: The bill regulates the use of automated employment decision tools by employers, requiring applicant notification and consent, and imposes penalties for violations to protect job seekers' rights.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Nathan Johnson
(sole sponsor)
Last action: Filed (March 14, 2025)
Societal Impact
Data Governance (see reasoning)
The text directly pertains to the use of automated employment decision tools that involve AI, algorithms, and machine learning, specifically addressing accountabilities connected to their use. It examines the fairness and consent aspects of AI systems in hiring processes, making it highly relevant to the Social Impact category. The Data Governance category is also significant, as it deals with the handling and assessment of applicant data through automated systems, raising questions about data protection, bias, and fairness. System Integrity and Robustness are less applicable as the text primarily focuses on the use of algorithmic decision-making tools rather than their internal security, transparency, or performance benchmarks.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text focuses on the utilization of AI in employment settings, specifically through automated decision tools used by employers for assessing job applicants. Therefore, it is particularly relevant to the Private Enterprises, Labor, and Employment sector, as it addresses both the regulatory framework around AI's impact on hiring practices and the rights of applicants. The Government Agencies and Public Services sector is also relevant to a lesser extent, given the administrative oversight involved in enforcing these regulations. Other sectors are less directly connected as the focus is firmly on the applicability of AI in employment rather than broader governmental or institutional frameworks.
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (14) algorithm (1)
Description: Amends the Consumer Fraud and Deceptive Business Practices Act. Provides that it is an unlawful practice for any person to engage in a commercial transaction or trade practice with a consumer in which: (1) the consumer is communicating or otherwise interacting with a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation; (2) the communication may mislead or deceive a reasonable consumer to believe that the consumer is comm...
Summary: The bill prohibits deceptive practices in commercial interactions where consumers may mistakenly believe they are communicating with a human rather than AI, requiring clear disclosure of AI usage.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Abdelnasser Rashid
(sole sponsor)
Last action: Referred to Rules Committee (Feb. 6, 2025)
Societal Impact
Data Governance (see reasoning)
The text directly addresses issues related to the social impact of AI, particularly concerning consumer rights and the ethical implications of AI systems such as chatbots deceiving users. It emphasizes transparency and fairness, highlighting the need for consumers to be aware that they are interacting with AI and not a human. This aligns the text significantly with social impact, since it deals with accountability and the ethical use of AI in commercial practices. Data governance is moderately relevant as it touches on managing data related to consumer interactions but is primarily focused on consumer deception rather than data management itself. System Integrity is less relevant, primarily because the text deals with ethical behaviors in communication rather than security or transparency of AI systems. Robustness is not relevant as it does not address system performance or benchmarks.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text relates strongly to the sector of Private Enterprises, Labor, and Employment, as it touches on consumer interactions within commercial contexts. It does not specifically address politics, government agencies, the judicial system, healthcare, academic institutions, international cooperation, nonprofits, or emerging sectors, making the relevance to these sectors lower. While there is a mention of consumers, the legislation does not focus on their role as employees or in broader organizational structures, thus narrowing the focus of applicable sectors.
Keywords (occurrence): artificial intelligence (4) chatbot (2)
Description: Amends the Consumer Fraud and Deceptive Business Practices Act. Provides that the owner, licensee, or operator of a generative artificial intelligence system shall conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate or inappropriate. Provides that a violation of the provision constitutes an unlawful practice within the meaning of the Act.
Summary: The bill mandates that operators of generative artificial intelligence systems display clear warnings on their user interfaces about potential inaccuracies or inappropriate content in the outputs, classifying violations as unlawful practices.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Laura Ellman
(sole sponsor)
Last action: Referred to Assignments (Feb. 6, 2025)
Societal Impact (see reasoning)
This text explicitly discusses the implications of generative artificial intelligence systems in relation to user safety and informed consent, addressing how the outputs of such systems can be inaccurate or inappropriate. This is directly relevant to the Social Impact category as it pertains to consumer protections and accountability in AI outputs, ensuring that users are aware of potential risks. The inclusion of a requirement for warnings aligns with fairness and bias considerations, as these warnings are a method of reducing psychological and informational harm stemming from misleading AI outputs. It does not specifically address data management or the security and integrity of AI systems, which makes it less relevant for Data Governance, System Integrity, and Robustness.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text addresses the regulation of AI systems in a way that directly engages consumer rights and protections, which can affect various sectors. It is particularly relevant to consumers in the context of private enterprises, since it affects how businesses utilize AI technologies and requires them to do so with safety and transparency. It also has implications for Government Agencies and Public Services, as these regulations can dictate how AI applications function within those sectors. However, it is not focused specifically on political campaigns, healthcare, academic institutions, or international standards, so its overall sector relevance is moderate.
Keywords (occurrence): artificial intelligence (8) automated (1)