5017 results:
Description: A bill to amend title 18, United States Code, to prohibit United States persons from advancing artificial intelligence capabilities within the People's Republic of China, and for other purposes.
Summary: The "Decoupling America's Artificial Intelligence Capabilities from China Act of 2025" prohibits U.S. persons from advancing AI technology in China, aiming to limit collaboration and technology transfer to safeguard national security.
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Josh Hawley
(sole sponsor)
Last action: Read twice and referred to the Committee on the Judiciary. (Jan. 29, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text of the Decoupling America's Artificial Intelligence Capabilities from China Act of 2025 revolves around the prohibition of advancing AI technologies in China. It discusses AI in terms of research, development, technology, and regulations associated with intellectual property related to AI. Therefore, this text is highly relevant to all categories dealing with the social impact of AI, data governance, system integrity, and robustness as it pertains to national security, ethical considerations, and the integrity of AI systems. Each category reflects aspects of the legislation's aim to regulate and control AI development in a global context. The direct references to AI technologies and the implications of these actions provide a strong basis for assigning high relevance scores across these categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)
The sectors primarily influenced by this legislation include government operations due to its direct implications on U.S. foreign policy and national security concerning AI technology. The bill does not directly mention healthcare, political campaigns, or judicial systems, limiting its relevance to those sectors. However, the government sector is pivotal, as it establishes regulations that will likely affect various government functions and public services, particularly in connection with AI research and development. The text's primary focus on international relations and technology regulation fits squarely into the domain of government agencies and public services, hence the higher relevance score.
Keywords (occurrence): artificial intelligence (25), automated (2)
Description: High-risk artificial intelligence; development, deployment, and use; civil penalties. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill has a delayed effective date of July 1, 2026.
Summary: The bill establishes regulations regarding the development and deployment of high-risk artificial intelligence systems in Virginia, focusing on preventing algorithmic discrimination and imposing civil penalties for non-compliance.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Engrossed
Primary sponsor: Michelle Maldonado
(24 total sponsors)
Last action: Passed by for the day (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This text establishes requirements and standards for the development, deployment, and use of high-risk artificial intelligence systems, emphasizing accountability for algorithmic discrimination, consumer protection, and operational standards. The references to 'high-risk artificial intelligence systems' and 'algorithmic discrimination' highlight the potential social impact and regulatory measures required to prevent discrimination and protect individuals. Furthermore, it outlines safety and responsibility frameworks for developers and deployers of AI, making it highly relevant to all categories specified. The need for documentation, risk management plans, and standards compliance directly impacts social welfare, data governance, system integrity, and the robustness of AI systems.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The text mentions developers and deployers of high-risk AI, which could include various sectors like healthcare, public services, or private enterprises but does not specifically restrict itself to any single sector. The focus on algorithmic discrimination and consumer rights suggests relevance to various sectors, especially those directly interfacing with consumers (like healthcare and public services) and risk management in business environments. However, since the language is broad and does not focus exclusively on any one sector, the scores reflect general applicability rather than direct regulation within specific sectors.
Keywords (occurrence): artificial intelligence (138), machine learning (2), automated (1), algorithm (1), autonomous vehicle (1)
Description: Requires advertisements to disclose the use of a synthetic performer; imposes a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.
Summary: The bill mandates advertisements to disclose the use of synthetic performers created by artificial intelligence. It imposes penalties for violations, aiming to enhance transparency in advertising.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Michael Gianaris
(sole sponsor)
Last action: REFERRED TO CONSUMER PROTECTION (Jan. 8, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses the use of generative artificial intelligence in relation to advertisements and synthetic performers. This relevance can be assessed across the categories. In terms of Social Impact, the legislation directly concerns the implications of synthetic performers in public perception and trust, thus highlighting psychological and material harm, which aligns well with concerns about AI in society. In Data Governance, the text discusses the requirements for disclosure regarding synthetic performers, which touches on data management policies as they relate to transparency. System Integrity is also relevant due to the transparency demanded in the use of AI in advertisements, ensuring that AI systems are used responsibly. Robustness applies since the scope of the legislation suggests a need for standards around how AI-generated content is used in commercial settings to build trust with consumers. Overall, these elements substantiate high relevance in more than one area.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses the implications of AI within the context of advertising, which fits best within the broader framework of private enterprises. It ensures fair practices in commercial advertising using AI, thereby directly impacting how businesses interact with AI technologies. There is also a relevant implication for consumer protection, ensuring that individuals are aware of AI influence in the advertisements they encounter. This does not strongly align with sectors such as Politics and Elections or Judicial System, but there are connections to Government Agencies and Public Services due to regulatory oversight of advertising standards that the state may enforce. Overall, however, Private Enterprises, Labor, and Employment receives the highest score as the sector most directly influenced by this legislation.
Keywords (occurrence): artificial intelligence (2), machine learning (1), algorithm (1)
Description: Imposes liability for misleading, incorrect, contradictory or harmful information to a user by a chatbot that results in financial loss or other demonstrable harm.
Summary: This bill establishes liability for chatbot proprietors in New York for harmful or misleading information provided by their chatbots, particularly if it leads to financial loss or self-harm. It aims to ensure user safety and accountability in AI interactions.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: print number 222a (Feb. 28, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly addresses the accountability and liability of proprietors who operate chatbots, focusing on the implications of misleading or harmful information generated by these AI systems. This is highly relevant to the Social Impact category, as it pertains to consumer protection, the responsibility for AI outputs, and the potential for harm caused by AI interactions. The text also relates to data governance due to the requirement for chatbots to provide accurate information and adhere to policies, but the primary focus on liability suggests a stronger connection to the social implications of AI use. System Integrity is touched upon in terms of human oversight of chatbot operations, particularly regarding the information provided. However, it lacks explicit mandates for technical methods or transparency standards, ultimately minimizing its relevance in this category. Robustness is not directly addressed as it doesn't delve into performance benchmarks or compliance with standards of AI systems. Overall, the text emphasizes social accountability resulting from AI technology use.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text addresses the liability and operational requirements of chatbots, placing it primarily within the sphere of Private Enterprises, Labor, and Employment, as it deals directly with businesses employing AI in consumer interactions and the implications of those roles. It has moderate relevance to Government Agencies and Public Services, particularly given the mention of liability and the potential impact on government entities utilizing chatbots. However, its focus is on business operations rather than direct regulations related to government services. There's minimal connection to the Judicial System, as while legal accountability is mentioned, it does not specifically pertain to judicial use. The text offers no significant insight into Healthcare, Academic Institutions, or Nonprofits, leading to low relevance scores in those areas. International Cooperation and Standards do not apply here, and the text does not fit within the Hybrid, Emerging, and Unclassified category.
Keywords (occurrence): artificial intelligence (1), chatbot (32)
Description: Artificial Intelligence Amendments
Summary: The bill regulates mental health chatbots in Utah, establishing user protections, restrictions on personal data use, disclosure requirements, and enforcement authority to ensure safety and transparency in AI interactions.
Collection: Legislation
Status date: Feb. 10, 2025
Status: Introduced
Primary sponsor: Jefferson Moss
(sole sponsor)
Last action: House/ received fiscal note from Fiscal Analyst in House Economic Development and Workforce Services Committee (Feb. 20, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text pertains primarily to the regulation of artificial intelligence technologies used in mental health chatbots. It discusses user protections, personal data handling, disclosure requirements, and responsibilities for suppliers of these chatbots. Given this focus, it fits into several categories. For the Social Impact category, the regulations to protect users and the stipulations related to mental health and user data indicate a very relevant connection. For Data Governance, the provisions concerning the handling and sharing of personal information and user input provide clear governance implications. System Integrity is relevant due to the regulations imposed on chatbot transparency and the oversight by the Division of Consumer Protection. Robustness is included as the text outlines the need for policies covering safety, risk evaluation, and user protection measures for AI systems. Overall, each category has strong connections to the AI components mentioned throughout the text.
Sector:
Government Agencies and Public Services
Judicial System
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily deals with the application of AI in the healthcare context, specifically regarding mental health services provided through chatbots. As a result, it strongly fits into the healthcare sector. It also touches on Government Agencies and Public Services due to the involvement of the Division of Consumer Protection in enforcing regulations. The Judicial System category may apply due to the legal ramifications outlined, particularly concerning enforcement and compliance aspects, though not as strongly as the first two. The Private Enterprises, Labor, and Employment sector may apply in terms of how companies develop and supply these chatbots, but it is less direct. Therefore, the main relevance lies within Healthcare and Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (13), chatbot (36)
Description: Prohibiting a person from using fraud to influence or attempt to influence a voter's voting decision; providing that fraud includes the use of synthetic media; and defining "synthetic media" as an image, an audio recording, or a video recording that has been intentionally created or manipulated with the use of generative artificial intelligence or other digital technology to create a realistic but false image, audio recording, or video recording.
Summary: House Bill 525 prohibits using fraud, including synthetic media, to influence a voter’s decision. It defines fraud and establishes penalties for violations, aiming to safeguard election integrity.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Jessica Feldmark
(8 total sponsors)
Last action: Hearing 2/04 at 1:00 p.m. (Jan. 22, 2025)
Societal Impact
Data Governance (see reasoning)
This text focuses primarily on the use of synthetic media, a concept tied directly to artificial intelligence technologies such as generative AI. The bill seeks to regulate the influence of this media on voter decisions, emphasizing societal impacts and potential fraud. Given its explicit connection to synthetic media and its implications for societal integrity, the Social Impact category is highly relevant. Data Governance, System Integrity, and Robustness could hold some relevance but are secondary to the bill's main focus, which is on the ethical use and potential harms of AI in the electoral context.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text explicitly addresses the influence of AI-generated synthetic media on electoral processes, making it particularly relevant to the Politics and Elections sector. The significant implications of using generative AI in shaping public perception and voter decisions fall squarely within this sector. Although there might be an indirect connection to Government Agencies and Public Services due to the implementation of electoral regulations, the primary focus remains on the political context.
Keywords (occurrence): synthetic media (2)
Description: Department of Law; Division of Emerging Technologies, Cybersecurity, and Data Privacy established. Establishes within the Department of Law a Division of Emerging Technologies, Cybersecurity, and Data Privacy to oversee and enforce laws governing cybersecurity, data privacy, and the use of artificial intelligence (AI) and other emerging technologies. The bill requires the Division to submit an annual report to the Joint Commission on Technology and Science (JCOTS) by November 1 of each year d...
Summary: The bill establishes a Division of Emerging Technologies, Cybersecurity, and Data Privacy within the Virginia Department of Law to enforce compliance with cybersecurity and data privacy laws and oversee AI usage.
Collection: Legislation
Status date: Jan. 7, 2025
Status: Introduced
Primary sponsor: Bonita Anthony
(sole sponsor)
Last action: Left in Appropriations (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text predominantly discusses the establishment of a Division of Emerging Technologies, Cybersecurity, and Data Privacy that specifically addresses the use of artificial intelligence (AI). This clearly indicates relevance to both the Social Impact and Data Governance categories, as it pertains to laws governing cybersecurity, data privacy, and the implications of AI use. Additionally, the mention of compliance audits and investigations of AI-related laws aligns with concerns of System Integrity. However, the focus seems more on governance and compliance rather than the performance benchmarks or adherence to international standards, which is what the Robustness category primarily covers. Thus, it's moderately relevant there.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text details the establishment of a Division to oversee various technological implementations including AI within government frameworks. This indicates relevance to Government Agencies and Public Services since it applies to state governance through the newly formed Division. It does not specifically address the use of AI in the Political context, nor does it detail implications related to Healthcare or the Judicial System. There is no emphasis on sectors like Nonprofits or Academic Institutions, thus they are not relevant either. However, it can be considered somewhat relevant to Private Enterprises, Labor, and Employment as the compliance and regulatory aspects could indirectly impact businesses that utilize AI, but this connection is more peripheral. As a result, the strongest affiliations are identified with Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (3), automated (3)
Description: An Act amending the act of December 17, 1968 (P.L.1224, No.387), known as the Unfair Trade Practices and Consumer Protection Law, further providing for definitions and for unlawful acts or practices and exclusions.
Summary: This bill amends Pennsylvania's Consumer Protection Law to include violations related to artificial intelligence, specifically addressing unfair business practices regarding guarantees or warranties generated by AI systems.
Collection: Legislation
Status date: Feb. 5, 2025
Status: Introduced
Primary sponsor: Craig Williams
(7 total sponsors)
Last action: Referred to CONSUMER PROTECTION, TECHNOLOGY AND UTILITIES (Feb. 5, 2025)
Societal Impact
Data Governance (see reasoning)
The text primarily discusses amendments to the Unfair Trade Practices and Consumer Protection Law, focusing on the definitions and implications of AI in consumer protections. It notably defines 'artificial intelligence' and outlines unfair practices involving AI. Regarding 'Social Impact,' the text's provisions include regulations aimed at holding AI-driven businesses accountable and protecting consumers from deceptive practices, making it very relevant. The 'Data Governance' category is also applicable as it pertains to the accurate definitions and compliance regarding AI-generated guarantees and policies, indicating responsibilities related to data usage in these contexts. 'System Integrity' is not quite as applicable as it does not specifically address security measures or oversight beyond the definitions of unfair practices. 'Robustness' is not relevant because it does not cover benchmarks or performance standards for AI systems. Overall, Social Impact and Data Governance are significantly addressed, while System Integrity and Robustness show minimal to no relevance.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text's primary scope includes consumer protection laws that affect general market practices involving AI. 'Politics and Elections' is not relevant as there's no mention of electoral processes or campaigns. 'Government Agencies and Public Services' does not apply as it does not focus on government applications of AI. The 'Judicial System' category is unrelated; it doesn't address legal frameworks within judicial practices. 'Healthcare' does not connect due to no specified mention of health-related technologies. The 'Private Enterprises, Labor, and Employment' sector is moderately relevant as it relates to consumer interactions with businesses, particularly in how they handle AI technologies and their compliance with non-deceptive practices; however, it's more about consumer rights than employment contexts. 'Academic and Research Institutions' is not applicable, as the text doesn't reference these contexts. 'International Cooperation and Standards' is irrelevant as there are no discussions of international agreements. 'Nonprofits and NGOs' is also irrelevant based on the content. Finally, 'Hybrid, Emerging, and Unclassified' is not a fit, as the text falls within established consumer law rather than an emerging sector. Thus, the most relevant sector is Private Enterprises, Labor, and Employment, though at a moderate level, given the bill's focus on protecting consumer interactions with businesses that utilize AI.
Keywords (occurrence): artificial intelligence (2), machine learning (1), neural network (1)
Description: An act relating to an age-appropriate design code
Summary: The bill establishes an age-appropriate design code to protect minors online, ensuring businesses avoid harmful design features and practices that could invade their privacy or cause distress.
Collection: Legislation
Status date: Feb. 13, 2025
Status: Introduced
Primary sponsor: Wendy Harrison
(15 total sponsors)
Last action: Favorable report with recommendation of amendment by Committee on Institutions (March 11, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text contains crucial mentions of 'algorithmic recommendation system,' which indicates it has relevance to the algorithms that may impact social dynamics. Additionally, there is a focus on personal data protection, privacy, and ethical considerations concerning minors. These elements align closely with social impacts of AI, particularly in terms of consumer protections and addressing discrimination or harm that may arise from AI systems. There is also a reference to 'neural data,' which connects to broader implications of AI in assessing mental states but isn't directly related to systematic assessments of societal impacts. Data governance is pertinent due to the emphasis on processing personal data, addressing privacy concerns, and ensuring safe practices given the mentioned 'minimum duty of care.' System integrity is moderately relevant since the text indirectly touches upon AI's responsibility through the regulation of businesses regarding minors' personal data. However, it doesn't delve deeply into transparency or security measures as they relate to AI systems. Robustness receives a lower relevance score because the legislation does not focus on performance benchmarks for AI systems but rather on ethical practices regarding data handling and user interaction.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill has direct implications for minors accessing online services and covers algorithms that might be used to tailor those services. This ties it to Government Agencies and Public Services, which apply technology to public service delivery for youth protection. It may also touch upon the Private Enterprises sector, as businesses must comply with these regulations. However, it does not explicitly engage with the Healthcare or Judicial System sectors based on the text provided, nor does it specifically limit itself to politics, NGOs, or international standards. Academic and Research Institutions could be indirectly affected through how technology shapes minors' learning environments, but that connection remains abstract. Overall, the primary sector impacted is Government Agencies and Public Services, given the bill's emphasis on business accountability and compliance.
Keywords (occurrence): automated (2), recommendation system (2), algorithm (1)
Description: An Act amending Title 18 (Crimes and Offenses) of the Pennsylvania Consolidated Statutes, in computer offenses, providing for artificial intelligence; and imposing a penalty.
Summary: This bill mandates a watermark for AI-generated content in Pennsylvania, labeling it as "Artificial Intelligence Generated Material," and establishes penalties for non-compliance, emphasizing transparency in AI usage.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Johanny Cepeda-Freytiz
(22 total sponsors)
Last action: Referred to COMMUNICATIONS AND TECHNOLOGY (Jan. 27, 2025)
Societal Impact (see reasoning)
The text explicitly addresses the role of artificial intelligence in creating content and mandates the use of watermarks for AI-generated materials. This relates closely to the impact of AI on public perception and trust, tying in with issues of misinformation and the traceability of AI outputs. Thus, it is very relevant to Social Impact, as it seeks to mitigate negative consequences of AI technology on individuals and society. Regarding Data Governance, while the legislation sets guidelines for watermarking, it does not deeply engage with themes around data accuracy, privacy, and management practices in a broader sense. For System Integrity, there is a very limited discussion of security measures or transparency metrics, reducing its relevance in this category. Lastly, the Robustness category, which revolves around performance benchmarks and continuous compliance standards, does not link well to the provided text. Therefore, Social Impact will receive a high score, while the other categories score lower due to limited relevance.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation addresses the implications of AI within a societal context, particularly focusing on the requirements for AI-generated content and the penalties related to misuse. Its relevance to the sectors includes implications for media and entertainment production, which hints at potential overlaps with Private Enterprises, Labor, and Employment. However, it does not directly address politics, government operations, or public services. Therefore, while there is relevance to Private Enterprises, the connection to the other sectors such as Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, etc., is minimal. Thus, Private Enterprises scores solidly, while the other sectors score lower.
Keywords (occurrence): artificial intelligence (10)
Description: Requiring a certain developer of, and a certain deployer who uses, a certain high-risk artificial intelligence system to use reasonable care to protect consumers from known and reasonably foreseeable risks of certain algorithmic discrimination in a certain high-risk artificial intelligence system; regulating the use of high-risk artificial intelligence systems by establishing certain requirements for disclosures, impact assessments, and other consumer protection provisions; authorizing the At...
Summary: Senate Bill 936 mandates developers and deployers of high-risk AI systems in Maryland to mitigate algorithmic discrimination risks and provide transparency and consumer rights against potential biases in AI decisions.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Katie Hester
(3 total sponsors)
Last action: Hearing 2/27 at 1:00 p.m. (Feb. 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses legislation aimed at protecting consumers from the risks associated with high-risk artificial intelligence systems, mainly focusing on algorithmic discrimination and requiring developers and deployers to implement care and documentation for consumer protection. This legislation demonstrates relevance to the Social Impact category as it directly addresses potential harms caused by AI systems. It calls for oversight and requires information that consumers need to understand risks, enhancing accountability within AI systems. The Data Governance category is also relevant since it encompasses mandates for developers to ensure accuracy and fairness in the algorithms used, with emphasis on rectifying biases in AI data sets. The System Integrity category is relevant as it stipulates requirements for documentation and processes intended to ensure the operational reliability and safety of high-risk AI systems. Finally, while the Robustness category may be tangentially relevant due to performance evaluations, the bill does not primarily focus on benchmarks or certification, making its applicability weaker compared to the other categories.
Sector:
Government Agencies and Public Services
Judicial System
Healthcare
Private Enterprises, Labor, and Employment
International Cooperation and Standards (see reasoning)
The text specifically outlines regulations for the deployment and development of high-risk AI systems, which have implications across various sectors. The Consumer Protection focus indicates an application in Private Enterprises, Labor, and Employment, as it discusses the consequences of AI in employment-related decisions such as lending and housing. These issues are also significant for Government Agencies and Public Services, particularly in the context of regulatory compliance and the ethical use of AI in public interactions. Additionally, the implications of algorithmic discrimination resonate with diverse sectors, such as Healthcare and Judicial Systems, although they are not explicitly detailed in the text. The legislation does not directly relate to the remaining sectors such as Politics and Elections or Nonprofits and NGOs, thus they may receive lower relevance scores. Overall, the strongest associations to the relevant sectors appear to be with Private Enterprises, Labor, and Employment, followed by Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (12), autonomous vehicle (1)
Description: An act relating to consumer data privacy and online surveillance
Summary: The bill aims to establish strong consumer data privacy and online surveillance protections for Vermonters, regulating how personal data is collected, processed, and used by businesses.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Monique Priestley
(51 total sponsors)
Last action: Read first time and referred to the Committee on Commerce and Economic Development (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text relates to various aspects of consumer data privacy and online surveillance, significantly impacting individuals and society by emphasizing consumer rights over personal data. Regarding 'Social Impact', the bill addresses accountability for data processing practices, consent for data usage, and the protection of especially vulnerable individuals (such as minors or individuals needing gender-affirming health services), demonstrating a strong societal relevance. In 'Data Governance', the legislation sets clear frameworks for data processing, including definitions of personal data, biometric data, and consumer health data, enforcing secure management of these data types. 'System Integrity' is relevant as it emphasizes the need for controlled processing of data and consent requirements to ensure transparency and accountability, while 'Robustness' is moderately relevant, as the framework outlines requirements for consent that indirectly relate to AI performance metrics in data processing but does not explicitly mention benchmarks or auditing for AI systems. Overall, the legislation primarily impacts data governance and social norms in the data landscape.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The bill pertains to 'Government Agencies and Public Services' since it establishes how government entities may interact with consumer data and ensures data privacy; it also relates to 'Private Enterprises, Labor, and Employment' as it impacts businesses that handle consumer data. Additionally, aspects of 'Healthcare' are implied due to the focus on consumer health data, particularly in the context of gender-affirming health care. There is a moderate connection to 'Nonprofits and NGOs', as they often work on data privacy issues, although this is less pronounced. The bill does not specifically address the other sectors as it is primarily centered on consumer privacy and data management.
Keywords (occurrence): artificial intelligence (2) automated (3)
Description: A bill to require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Summary: The TAKE IT DOWN Act requires digital platforms to remove nonconsensual intimate visual depictions and establishes penalties for intentional disclosures of such content, protecting individuals' privacy rights.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Engrossed
Primary sponsor: Ted Cruz
(22 total sponsors)
Last action: Held at the desk. (Feb. 14, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The TAKE IT DOWN Act explicitly addresses the issue of nonconsensual intimate visual depictions, which are often produced or manipulated by using technologies related to AI, such as machine learning and deepfake technology. The relevance to 'Social Impact' is significant as the legislation seeks to safeguard individuals from psychological and reputational harm, targeting exploitation through digital forgeries—a form of AI manipulation. The act prompts a response to the societal consequences of AI technology in terms of safety and privacy. For 'Data Governance', the act touches upon the management and reporting processes when dealing with nonconsensual content, highlighting potential implications related to user data handling and consent, though it doesn’t delve deeply into data privacy or rectification mandates. 'System Integrity' is relevant due to the act’s focus on requiring platforms to maintain mechanisms for users to report and remove harmful content, indicating a need for secure and transparent process enforcement. Finally, 'Robustness' is not as directly applicable, as the act does not focus heavily on performance benchmarks or systemic compliance measures. Overall, the act is very relevant to the societal impacts of AI manipulation, while also moderately addressing data governance and system integrity.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The text primarily addresses how nonconsensual intimate visual depictions should be handled, which has direct implications in the sectors of 'Government Agencies and Public Services' as it involves legal structures and enforcement mechanisms. As the act mandates accountability from platforms, it also indirectly touches on 'Judicial System' issues related to legal proceedings against violators and the protection of identifiable individuals. However, there is limited direct reference to specific applications in 'Healthcare', 'Private Enterprises', or 'Academic Institutions', which diminishes their relevance. 'Politics and Elections' is similarly not relevant since the act does not discuss AI regulation in political contexts. Thus, the relevant sectors primarily revolve around the governmental and regulatory aspects, with a lower likelihood of application in other sectors.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: Creates a state office of algorithmic innovation to set policies and standards to ensure algorithms are safe, effective, fair, and ethical, and that the state is conducive to promoting algorithmic innovation.
Summary: The bill establishes a state Office of Algorithmic Innovation to create policies and standards ensuring algorithms are safe, effective, fair, and ethical, promoting innovation in New York.
Collection: Legislation
Status date: Jan. 9, 2025
Status: Introduced
Primary sponsor: Jenifer Rajkumar
(4 total sponsors)
Last action: referred to science and technology (Jan. 9, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the creation of a state office tasked with setting policies and standards for algorithms, which directly relates to the social impact of AI. Ensuring that algorithms are safe, effective, fair, and ethical indicates a focus on accountability and the prevention of harm to individuals and society, which is a key aspect of social impact legislation. Additionally, the use of algorithms in decision-making raises concerns regarding fairness, bias, and discrimination, which are integral to the category's focus. The mention of auditing algorithms also supports the idea of measuring and managing the social implications of AI systems. Moreover, the establishment of a dedicated office indicates a commitment to ongoing oversight and regulation concerning the social implications of algorithmic technologies. Regarding data governance, while the text refers to the regulation of algorithmic use, it does not explicitly focus on data management practices or privacy concerns, making this category less relevant. With respect to system integrity, the focus on safe and effective algorithms suggests a degree of importance; however, actual implementations of human oversight or security measures are not mentioned. Robustness is moderately relevant as the establishment of standards implies an intention to create benchmarks for algorithm performance, but it is not the primary focus of the legislation. Thus, the emphasis on social accountability places high relevance on the social impact category, while other categories reflect varying degrees of relevance.
Sector:
Government Agencies and Public Services (see reasoning)
The text involves the regulation of algorithms within state governance, which aligns closely with the 'Government Agencies and Public Services' sector, as the creation of a state office suggests direct application of AI in public service contexts. The legislation may also have implications for transparency and accountability in the use of AI systems within government functions. There are less clear connections to sectors like Politics and Elections or the Judicial System, as the text does not specifically address these areas. While the regulation of algorithms may affect 'Private Enterprises, Labor, and Employment', it seems to be more focused on governmental oversight rather than direct implications on employment practices. Overall, the relevance to the 'Government Agencies and Public Services' sector stands out, with minimal relevance to others.
Keywords (occurrence): artificial intelligence (1) algorithm (1)
Description: An Act To Enact The "ensuring Likeness, Voice And Image Security (elvis) Act Of 2025"; To Define Terms; To Stipulate That Every Individual Has A Property Right In The Use Of That Individual's Name, Photograph, Voice Or Likeness For The Purpose Of Expanding Artificial Intelligence Protections To Individuals' Said Property Rights; To Stipulate That Property Rights Provided In This Act Are Exclusive To The Individual; To Implement Commercial Exploitation Guidelines; To Create A Civil Action Upon...
Summary: The ELVIS Act of 2025 establishes property rights for individuals over their likeness, voice, and image, expanding protections against unauthorized use, particularly regarding AI-generated deep fakes. It allows civil actions and imposes criminal penalties for violations.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Other
Primary sponsor: Jill Ford
(sole sponsor)
Last action: Died In Committee (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The ELVIS Act of 2025 directly addresses the implications of artificial intelligence, particularly with regard to deep fakes, and how they affect individual rights over their likeness, voice, and image. The relevance to the 'Social Impact' category is clear, as it seeks to protect individuals from harm that can be caused by unauthorized uses of their likenesses in the context of AI-generated content. It also has implications for 'Data Governance', since the act mentions the unauthorized use of algorithms and software that can produce deep fakes, linking it to the collection and management of data involving personal likenesses. The 'System Integrity' category may also apply, given the act's focus on the need for legal frameworks governing AI-generated likenesses and enforcing individuals' rights. However, while robustness of AI performance isn't explicitly mentioned, there is a concern regarding the integrity and ethical compliance of AI technologies that are used to create or exploit individuals' likenesses, which can relate it to the 'Robustness' category as well. Overall, the act primarily aligns with 'Social Impact' due to its protective nature against AI misuse, while also touching upon other categories to a moderate degree.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)
Individuals' rights over their likeness, name, and voice intersect with several sectors. In 'Politics and Elections', there is a potential for AI-generated deep fakes to influence public opinion and electoral integrity, thus linking this act to concerns around personal rights in political contexts. The 'Government Agencies and Public Services' sector is relevant as government entities might need to enforce or facilitate the protections outlined in this act. The 'Judicial System' is also directly connected, as the act provides provisions for civil action and how courts can enforce individual rights against unauthorized use, reinforcing the need for legal clarity concerning AI and likeness rights. The 'Healthcare' sector seems less relevant, unless we consider situations where AI impacts data use for patient imaging and representations. The 'Private Enterprises, Labor, and Employment' sector could be engaged as companies must comply with new rights regarding likeness in marketing and employment processes. 'Academic and Research Institutions' could potentially relate if the act influences AI studies around likeness rights or ethics. 'International Cooperation and Standards' may apply if states need to harmonize legislation regarding likeness rights across borders. 'Nonprofits and NGOs' could also be relevant if they advocate for individual rights affected by AI misuse. However, 'Hybrid, Emerging, and Unclassified' fits well considering the unique intersection of newer AI technologies with existing rights frameworks.
Keywords (occurrence): algorithm (1)
Description: As introduced, provides for a new definition for the term "artificial intelligence" for the Tennessee artificial intelligence advisory council act; expands, from 24 to at least 24 and at most 27, the number of members on the council; directs the council to make various other changes. - Amends TCA Title 4, Chapter 3, Part 31.
Summary: The bill amends the Tennessee Code to refine the artificial intelligence advisory council's structure, responsibilities, and membership requirements, aiming to enhance data privacy and streamline AI-related regulations.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Greg Martin
(sole sponsor)
Last action: Assigned to s/c Departments & Agencies Subcommittee (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text explicitly mentions 'artificial intelligence (AI)' and outlines the formation and responsibilities of an advisory council pertaining to AI within Tennessee state law. The definition of AI is provided, indicating its implications on decision-making, which relates strongly to how AI impacts society (Social Impact). The advisory council is charged with making recommendations on best practices for data privacy and security and identifying overlaps in laws concerning AI, which connects to Data Governance. System integrity is also an aspect of the advisory council's role, particularly in recommending best practices for handling AI-related data and ensuring compliance with existing legislation, linking to the idea of maintaining the integrity of AI implementations. However, there is less emphasis on performance benchmarks or auditing which would be directly aligned with Robustness. Overall, the document is highly relevant across multiple categories, although the primary focus seems to lean more towards Social Impact and Data Governance.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The text references the establishment of a council specifically targeting artificial intelligence, indicating its implications for governmental oversight and policy relating to AI. This links specifically to Government Agencies and Public Services because it establishes a formal mechanism for the state to manage AI's role across various sectors and ensure appropriate oversight. While other sectors such as Healthcare or Private Enterprises may intersect with AI's applications discussed in this text, they are not directly targeted or detailed in this particular legislation. Thus, the council's mission is primarily governmental in nature, leading to the conclusion that Government Agencies and Public Services is the most relevant sector, while others receive lower scores due to a lack of specificity.
Keywords (occurrence): artificial intelligence (6) automated (1)
Description: As introduced, imposes requirements for health insurance issuers using artificial intelligence, algorithms, or other software for utilization review or utilization management functions. - Amends TCA Title 8, Chapter 27; Title 56 and Title 71.
Summary: The bill amends Tennessee health insurance laws to regulate the use of artificial intelligence in utilization review, ensuring it respects patient rights, avoids discrimination, and requires human oversight in medical necessity decisions.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Jeff Yarbro
(sole sponsor)
Last action: Passed on Second Consideration, refer to Senate Commerce and Labor Committee (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation explicitly addresses artificial intelligence used in health insurance, which will affect various aspects related to social impact such as fairness and discrimination. The bill emphasizes requirements for health insurance issuers to ensure that AI does not discriminate against enrollees and does not cause harm, thus making it highly relevant to 'Social Impact'. The 'Data Governance' category is also relevant because it mandates that AI systems used for healthcare must adhere to strict standards regarding privacy and accuracy of patient data. 'System Integrity' is relevant due to the need for transparency in the decision-making processes of AI tools deployed in health insurance. Lastly, 'Robustness' is less relevant here because while it may touch upon performance improvements, the bill primarily emphasizes compliance and oversight rather than benchmarks or auditing specifically. Overall, the legislation directly relates to societal implications of AI, data management in AI systems, and the need for integrity in AI applications within the healthcare insurance sector.
Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily deals with the regulation of artificial intelligence in the health insurance sector, where AI's role in utilization review and management is of central importance. It specifically addresses how these AI applications impact enrollees, healthcare providers, and regulatory compliance, thus making it extremely relevant to 'Healthcare'. There's an explicit connection to health insurance practices, their governance, and patient treatment that aligns directly with healthcare regulations. Additionally, it has moderate relevance to 'Private Enterprises, Labor, and Employment' as it affects how insurance companies operate and make decisions driven by AI; however, it doesn't delve into broader labor implications. Other sectors like 'Government Agencies and Public Services' might see some reference given that this involves state legislation, but the primary focus remains on health insurance. Hence, the primary categorization sits solidly within healthcare.
Keywords (occurrence): artificial intelligence (9) automated (1) algorithm (8)
Description: Creates the Light Detection and Ranging Technology Security Act. Provides that all State infrastructure located within or serving Illinois shall be constructed so as not to include any light detection and ranging (LIDAR) equipment manufactured in or by, including any equipment whose critical or necessary components are manufactured in or by, a company domiciled within a country of concern, or a company owned by a company domiciled in a country of concern. Provides that all State infrastructur...
Summary: The Light Detection and Ranging Technology Security Act prohibits the use of LIDAR equipment from companies based in specific countries of concern for Illinois state infrastructure, enhancing national security.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Jason Plummer
(sole sponsor)
Last action: Referred to Assignments (Feb. 7, 2025)
Data Governance
System Integrity
Data Robustness (see reasoning)
The Light Detection and Ranging Technology Security Act primarily deals with the regulation and security of LIDAR technology within Illinois infrastructure. The text explicitly mentions LIDAR equipment, which embodies aspects of automation and secured technology utilized in critical infrastructure. However, it does not provide a comprehensive discussion on the broader societal impacts of AI, such as bias or consumer protection, making the relevance to Social Impact somewhat limited. It touches on the management of LIDAR data and its implications for national security, which can relate to Data Governance, but does not delve into core issues such as data rights or privacy concerns typically defined under that category. System Integrity is relevant as the Act enforces transparency in procurement processes to avoid foreign technology in critical infrastructure. The specific mandates regarding procurement and operational standards for autonomous vehicles create a moderate connection to Robustness, as they indicate a regulation of technical standards for LIDAR use. However, the main focus is on exclusion based on national security concerns rather than establishing benchmarks or standards for AI performance per se.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation explicitly addresses critical infrastructure and security concerns regarding LIDAR technology, making it most relevant to Government Agencies and Public Services. Since the Act outlines how state procurements must adhere to specific guidelines about LIDAR systems based on national security, it directly impacts government operations. It does not significantly address the Judicial System, Healthcare, or Private Enterprises directly, nor does it deal with matters of academic research or the work of nonprofits. There are some implications for international cooperation considering the mention of countries of concern but not a direct alignment with the International Cooperation and Standards sector. Thus, the focus is primarily on regulations specific to how government agencies utilize and procure technology for public service.
Keywords (occurrence): automated (1) autonomous vehicle (2)
Description: Amend KRS 42.722 to define terms relating to artificial intelligence; amend KRS 42.726 to require the Commonwealth Office of Technology to establish and implement policy standards for the use of artificial intelligence; create a new section of KRS 42.720 to 42.742to create the Artificial Intelligence Governance Committee; task the committee with the establishment of responsible, ethical, and transparent procedures for the allowable use, development, and approval of artificial intelligence for...
Summary: This bill establishes guidelines and governance for the ethical and secure use of artificial intelligence systems in Kentucky, emphasizing data protection and transparency while creating a central committee to oversee compliance.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Josh Bray
(sole sponsor)
Last action: to State Government (H) (Feb. 26, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation contains several provisions that explicitly relate to the social implications of AI, data governance, system integrity, and the robustness of AI systems. The establishment of an Artificial Intelligence Governance Committee showcases efforts to address ethical considerations, responsible use, and transparency in AI systems, which align closely with the 'Social Impact' category. The regulations regarding data protection, oversight of high-risk AI systems, and measures for ensuring security and privacy demonstrate strong relevance to 'Data Governance.' The bill outlines explicit requirements for human oversight and establishes standards for high-risk AI applications, making it relevant to 'System Integrity.' The focus on developing benchmarks and standards for AI systems, including the accountability measures set forth for evaluation and approval, strongly supports the 'Robustness' category. As such, the text addresses multiple components across these categories significantly, demonstrating a comprehensive approach to regulating AI-related technologies.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The text encompasses various sectors related to the implementation of AI, particularly within government operations as articulated through the roles of the Commonwealth Office of Technology and the creation of an Artificial Intelligence Governance Committee. It discusses the ethical use of AI by governmental entities and sets standards that would likely apply to public agencies, thereby making it highly relevant to 'Government Agencies and Public Services.' Moreover, it aims to create safeguards for personal data used in these systems, suggesting direct relevance to the 'Judicial System' as legal compliance is tied to data governance in public services. However, it does not specifically address other sectors such as healthcare, private enterprises, or political contexts, making it less relevant to those areas. Therefore, 'Government Agencies and Public Services' is highlighted as most fitting.
Keywords (occurrence): artificial intelligence (65) machine learning (4) neural network (1) synthetic media (4) foundation model (1) algorithm (1)
Description: Provides for notice requirements where an insurer authorized to write accident and health insurance in this state, a corporation organized pursuant to article forty-three of this chapter, or a health maintenance organization certified pursuant to article forty-four of the public health law uses artificial intelligence-based algorithms in the utilization review process.
Summary: This bill mandates New York insurers to disclose their use of artificial intelligence in utilization reviews, submit AI algorithms to prevent bias, and establishes penalties for violations.
Collection: Legislation
Status date: Jan. 9, 2025
Status: Introduced
Primary sponsor: Pamela Hunter
(sole sponsor)
Last action: referred to insurance (Jan. 9, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses the use of artificial intelligence-based algorithms in the context of health insurance and utilization review, which is crucial in assessing the social impacts of AI, particularly concerning bias mitigation and consumer protection. The text includes provisions for transparency about AI use, ensuring that consumers are informed about AI practices that may affect them. Hence, Social Impact is rated 5 due to the focus on accountability and consumer awareness regarding AI applications. The Data Governance category is also highly relevant as it discusses the requirement to submit algorithms and data sets to the department, emphasizing the need for oversight to minimize biases, thus scoring 4. System Integrity is relevant as well, as it mandates human oversight (clinical peer reviewers) in the utilization review process and the need for compliance with evidence-based standards, receiving a score of 4 as well. Robustness is less relevant here, as the legislation doesn’t directly address performance benchmarks for AI but instead focuses more on usage, so it scores a 2.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text pertains primarily to Healthcare, as it discusses the use of AI algorithms within the context of health insurance and utilization review processes; accordingly, the Healthcare sector is rated a 5. Government Agencies and Public Services is somewhat relevant, as the act mentions the department's role in managing compliance and oversight of these algorithms, leading to a score of 3. Private Enterprises, Labor, and Employment is somewhat applicable, but AI usage in this text is not primarily oriented towards employment issues or business concerns, so it scores 2. The other sectors, such as Politics and Elections, Judicial System, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified, are not relevant, receiving scores of 1.
Keywords (occurrence): artificial intelligence (4) algorithm (1)