4828 results:
Description: Concerning offenses involving child sex dolls.
Summary: The bill establishes criminal offenses and penalties regarding the possession, manufacture, trafficking, and distribution of child sex dolls in Washington State, aiming to protect minors from sexual exploitation.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Tina Orwall
(2 total sponsors)
Last action: Passed to Rules Committee for second reading. (Feb. 7, 2025)
This text primarily addresses offenses involving child sex dolls, focusing explicitly on their creation, distribution, and possession. The mention of 'artificial intelligence' relates specifically to the use of AI in 'digitization,' implying a potential intersection with AI's role in creating fabricated depictions. However, the legislative intent is to penalize the trafficking and possession of child sex dolls, not to analyze the implications of AI technology itself. Its relevance to categories like Social Impact, Data Governance, System Integrity, and Robustness is therefore minimal, as these categories deal with broader issues surrounding AI's societal implications, data management, system security, and performance benchmarks. While there is a surface-level connection to AI, the primary focus of this legislation is protection against exploitation and criminal activity rather than AI's impact or governance.
Sector: None (see reasoning)
The legislation addresses offenses related to child sex dolls, with a brief mention of AI in the context of digital creation. Given this limited mention, relevance to the various sectors is also low. Specifically, it does not address political implications (Politics and Elections), has little to do with operational use in government agencies (Government Agencies and Public Services), and does not engage with the judicial system's application of AI (Judicial System). Nor is there any explicit mention of implications for healthcare, private enterprises, or academic settings. The closest connection would be the regulatory aspect touching private enterprises, but the underlying focus remains criminality rather than sectoral integration of AI in business. The scores therefore reflect minimal relevance across all sectors.
Keywords (occurrence): artificial intelligence (1) automated (1)
Description: Modifying elements in the crimes of sexual exploitation of a child, unlawful transmission of a visual depiction of a child and breach of privacy to prohibit certain acts related to visual depictions in which the person depicted is indistinguishable from a real child, morphed from a real child's image or generated without any actual child involvement, provide an exception for cable services in the crime of breach of privacy and prohibit dissemination of certain items that appear to depict or p...
Summary: This bill modifies definitions and penalties related to the sexual exploitation of children, unlawful transmission of visual depictions, and privacy breaches, specifically addressing artificial intelligence and digitally manipulated images resembling children.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Engrossed
Primary sponsor: Judiciary
(sole sponsor)
Last action: Senate Committee Report recommending bill be passed as amended by Committee on Judiciary (March 7, 2025)
Societal Impact
Data Governance (see reasoning)
The text specifically addresses the implications of artificial intelligence in the context of visual depictions, particularly concerning criminal offenses related to sexual exploitation. It discusses modifying definitions of crimes to include images altered or generated by AI and establishes nuanced legal ramifications for such depictions. This indicates a direct concern about the social impact of AI as it relates to crime, exploitation, and the protection of minors, thus linking strongly to the category of Social Impact. The inclusion of measures regarding privacy and ethics within the context of AI further aligns it with discussions on Data Governance, addressing how data (in this case, visual depictions) is managed and used in these new legal considerations. However, while the text touches upon issues related to system integrity and robustness in terms of preventing harm and outlining legal structures, it does not delve into technical specifics about system validation or integrity benchmarks, resulting in lower relevance scores for those categories.
Sector:
Government Agencies and Public Services
Judicial System (see reasoning)
The legislation's focus on criminal offenses that involve AI-generated content, specifically concerning the potential harm to minors, positions it strongly within the realm of government regulation as it pertains to public safety. It reflects societal concerns and legal responses to the risks posed by emerging technologies, which is indicative of the Government Agencies and Public Services sector. Given the nature of the offenses and the involvement of minors, there is a moderate connection to the Judicial System sector as well, though it focuses more on enforcement than on the legal framework surrounding AI application. It does not fit squarely into other sectors like Healthcare or Private Enterprises, as it is not addressing those specific domains. Thus, the scores reflect this concentrated focus.
Keywords (occurrence): artificial intelligence (5) automated (1)
Description: An act to amend Section 1367.01 of the Health and Safety Code, and to amend Section 10123.135 of the Insurance Code, relating to health care coverage.
Summary: Assembly Bill 512 amends California health care laws to shorten prior authorization timelines for health care services to 48 hours for standard requests and 24 hours for urgent requests, aiming to improve patient care accessibility and timely decision-making.
Collection: Legislation
Status date: Feb. 10, 2025
Status: Introduced
Primary sponsor: John Harabedian
(sole sponsor)
Last action: From committee chair, with author's amendments: Amend, and re-refer to Com. on Health. Read second time and amended. (April 11, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly mentions the use of artificial intelligence and algorithms in health care service plans, particularly for utilization review and management functions. This makes it highly relevant to the Social Impact category, as it discusses the potential impacts of AI on health care decision-making and patient care. The text outlines requirements for AI systems to ensure they do not discriminate, supplant health care providers' decision-making, or cause harm, directly addressing the broader societal implications of AI in health care. The Data Governance category is relevant because the text emphasizes compliance with privacy laws, fair application of AI, and maintaining the confidentiality of medical information. The System Integrity category applies due to mentions of audits and compliance reviews of AI tools to ensure their effective and secure functioning, and the Robustness category is relevant because the text discusses performance evaluation of AI systems in health care contexts. All categories are therefore scored positively for their relevance to the AI aspects represented in the text.
Sector:
Healthcare (see reasoning)
The text relates significantly to the Healthcare sector, as it directly addresses health care coverage and the role of algorithms in health care decisions. It also speaks to the regulation of AI within health care services, ensuring that AI tools do not make decisions that compromise patients' health. This clear focus on health care regulation, coupled with explicit references to AI systems in health care, renders it highly relevant to this sector. All other sectors, including Politics and Elections, Government Agencies and Public Services, Judicial System, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified, have no clear mention or connection in the text, which is framed entirely within the health care sector.
Keywords (occurrence): artificial intelligence (32) algorithm (30)
Description: Relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.
Summary: The Texas Responsible Artificial Intelligence Governance Act regulates AI systems, demands consumer disclosure, prohibits harmful practices, and imposes civil penalties for violations to promote ethical AI development and protect individuals.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Giovanni Capriglione
(sole sponsor)
Last action: Committee report distributed (April 10, 2025)
Description: Amends the Illinois Vehicle Code. Provides that the University of Illinois Chicago Urban Transportation Center shall conduct a study that includes the following: (1) a comprehensive review of the City of Chicago's website multi-year crash data on North and South DuSable Lake Shore Drive; (2) the available research on potential effectiveness of cameras powered by artificial intelligence in improving compliance and reducing crashes and road fatalities on North and South DuSable Lake Shore Drive...
Summary: The bill introduces automated speed enforcement systems in safety zones around schools and parks in Illinois, aiming to enhance road safety by recording speeding violations and imposing civil penalties on vehicle owners. It mandates use of proceeds for public safety initiatives.
Collection: Legislation
Status date: April 10, 2025
Status: Engrossed
Primary sponsor: Sara Feigenholtz
(3 total sponsors)
Last action: Referred to Rules Committee (April 11, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly mentions the use of cameras powered by Artificial Intelligence (AI) to improve compliance and reduce road fatalities. This directly links to the category of Social Impact, as it discusses potential improvements to public safety and the implications of technology use in areas frequented by people, especially minors near schools. Furthermore, automated speed enforcement systems relate to System Integrity as they involve transparency and operational regulations governing the use of AI technologies. The study proposed by the University of Illinois Chicago Urban Transportation Center, exploring the effects and implications of using AI in traffic monitoring also ties into Data Governance, as effective data management and analysis are crucial for studying crash patterns. The legislation promotes accountability and safety through technology, thus touching numerous aspects of AI application beyond mere functionality and focusing on societal implications and ethical use in urban safety. However, it lacks a direct commitment to auditing standards or performance benchmarks, making Robustness less relevant to the text. Therefore, the relevant categories were scored based on the explicit connection to AI and its societal implications.
Sector:
Government Agencies and Public Services (see reasoning)
In terms of sector applicability, there is a clear reference to the use of AI technologies in a public safety context, thereby making it largely relevant to Government Agencies and Public Services, which deal with public safety regulations and operational standards. The text does not particularly address or relate to Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic Institutions, International Cooperation, Nonprofits, or Hybrid sectors. The focus is primarily on transportation and safety regulations in urban planning and development, thus receiving a relevant score mainly under Government Agencies. Other sectors do not find a strong connection based on the content of the text.
Keywords (occurrence): artificial intelligence (3) automated (25)
Description: Elections; political campaign advertisements; synthetic media; penalty. Prohibits electioneering communications containing synthetic media, as those terms are defined in the bill, from being published or broadcast without containing the following conspicuously displayed statement: "This message contains synthetic media that has been altered from its original source or artificially generated and may present conduct or speech that did not occur." The bill creates a civil penalty not to exceed $...
Summary: The bill establishes regulations and penalties for political campaign advertisements using synthetic media in Virginia, requiring clear disclosure of such alterations to prevent misleading information during elections.
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Scott Surovell
(2 total sponsors)
Last action: Bill text as passed Senate and House (SB775ER) (March 7, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the dissemination of artificial audio and visual media in elections, indicating a significant social impact by trying to mitigate misinformation and deception in political campaigns. It discusses penalties for using synthetic media misleadingly, directly correlating with accountability measures and consumer protections, as it aims to protect voters from being misled by AI-generated content. As such, it is highly relevant to the Social Impact category. The Data Governance category is relevant because the bill also involves disclosure regulations for AI-generated media, ensuring that voters receive accurate information about the nature of the media they encounter; however, it does not delve deeply into data management, which limits its relevance. The System Integrity category pertains to the requirement of clear disclosures and the regulations for online platforms regarding synthetic media, focusing more on the integrity of the media presented in political contexts than on inherent security or control issues. The Robustness category is less relevant here, since the text makes little mention of benchmarks, auditing, or performance measures for AI systems.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text is highly relevant to the Politics and Elections sector as it specifically discusses regulations concerning the use of AI-generated content in electoral contexts. The legislation aims to combat misinformation and help maintain electoral integrity through the regulation of synthetic media, making it extremely relevant to this sector. The Government Agencies and Public Services sector could be moderately relevant, as it involves the regulation and enforcement of these new media guidelines by governmental bodies, but it primarily focuses on political campaign contexts. The Judicial System sector does not appear to directly relate here, as the text primarily addresses legislation rather than its application in legal proceedings. Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not seem to relate to the text at all, as it doesn't address the use or regulation of AI outside of the electoral context.
Keywords (occurrence): synthetic media (5)
Description: Prohibit distributing deepfakes under the Nebraska Political Accountability and Disclosure Act
Summary: The bill prohibits the distribution of deceptive deepfakes targeting political candidates 90 days before elections, providing exceptions for disclosures and certain media types, while allowing candidates to seek legal relief.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: John Cavanaugh
(sole sponsor)
Last action: Referred to Government, Military and Veterans Affairs Committee (Jan. 24, 2025)
Societal Impact (see reasoning)
The text primarily addresses the distribution of deepfakes and synthetic media within the electoral process, focusing on preventing misinformation and protecting political candidates' reputations. Its direct discussion of defined terms like 'deepfake' and 'synthetic media' demonstrates AI's impact on society, particularly in political contexts. This indicates significant relevance to the Social Impact category, as the bill touches on misinformation's effects on public trust and electoral integrity. It does not directly discuss data governance, system integrity, or robustness, so those categories receive lower scores.
Sector:
Politics and Elections (see reasoning)
The text is highly relevant to the Politics and Elections sector, given its focus on the regulation of deepfakes in political campaigns and electoral processes. It highlights legal considerations specific to election-related misinformation, which places it squarely within this sector. Although it may touch on aspects relevant to other sectors, they are not explicitly addressed or central to the text's purpose, leading to low scores for those sectors.
Keywords (occurrence): artificial intelligence (1) deepfake (7) synthetic media (4)
Description: Synthetic media; penalty. Expands the applicability of provisions related to defamation, slander, and libel to include synthetic media, defined in the bill. The bill makes it a Class 1 misdemeanor for any person to use any synthetic media for the purpose of committing any criminal offense involving fraud, constituting a separate and distinct offense with punishment separate and apart from any punishment received for the commission of the primary criminal offense. The bill also authorizes the ...
Summary: The bill introduces penalties for using synthetic media to commit fraud or other criminal offenses in Virginia, allowing for civil actions and establishing a work group to study enforcement related to such technology.
Collection: Legislation
Status date: Feb. 7, 2024
Status: Engrossed
Primary sponsor: Michelle Maldonado
(5 total sponsors)
Last action: Continued to 2025 in Courts of Justice (11-Y 2-N) (Feb. 19, 2024)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions synthetic media, generative artificial intelligence, and penalties related to their misuse in committing fraud. This is highly relevant to Social Impact, as it addresses the potential ramifications of synthetic media for personal rights and fraud, reflecting societal issues like misinformation and defamation. It also touches on accountability measures for the production and use of AI technologies with a direct societal effect. Data Governance is somewhat relevant, since the definition of synthetic media intersects with data accuracy and potential restrictions on data usage, although the text does not delve deeply into those governance aspects. System Integrity receives a lower relevance score, as the bill deals not with the security or operational integrity of AI systems but with legal definitions and implications. Robustness is only somewhat relevant, as the text lacks any focus on performance benchmarking or auditing of AI systems. Thus, Social Impact takes precedence, considering the societal consequences outlined in the text.
Sector:
Government Agencies and Public Services
Judicial System
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily concerns the use of synthetic media in the context of legal actions, specifically pertaining to defamation and fraud. This ties closely to the Judicial System, which deals with legal implications and case management regarding crimes committed using AI technologies. Furthermore, the text touches on findings and recommendations which could guide future legislative actions, suggesting relevance to Government Agencies and Public Services as they may be involved in the enforcement of such regulations and legal standards. The focus on synthetic media's misuse in fraudulent circumstances may also implicate Private Enterprises, Labor, and Employment indirectly, particularly regarding employment practices influenced by such technologies. Academic and Research Institutions may have a slight relevance due to the potential study of AI impacts raised in the text, but it is not direct. Overall, the strongest categories here are Judicial System and Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (7) synthetic media (7) foundation model (1)
Description: Creates the Wellness and Oversight for Psychological Resources Act. Defines terms. Provides that an individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services to the public in the State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional. Provides that a licensed professional may use an artificial intelligence system only to the extent the use of the artificial intelligence system m...
Summary: The Wellness and Oversight for Psychological Resources Act ensures therapy services in Illinois are provided only by qualified professionals, protecting consumers from unlicensed providers, including unregulated AI systems.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Bob Morgan
(8 total sponsors)
Last action: Added Co-Sponsor Rep. Anne Stava-Murray (March 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses primarily on the regulation of therapy services that involve the use of artificial intelligence (AI). It outlines how licensed professionals may utilize AI systems for administrative and supplementary support in the therapy context, ensuring that client interactions remain under professional oversight. This highlights the legislative intent to integrate AI with accountability and safety, addressing concerns around consumer protection and the ethical use of AI in mental health services. Given the specific references to AI and the structured governance around its use, the 'Social Impact' category is very relevant due to consumer protections and mental health implications. The 'Data Governance' category is also quite relevant as it involves confidentiality and the management of client data, particularly concerning consent and the administrative roles of AI in therapy. 'System Integrity' pertains moderately as it concerns oversight and responsibility concerning AI outputs and interactions. The 'Robustness' category appears to be of limited relevance, as there is no focus on performance benchmarks or auditing compliance for AI systems beyond their use in therapy settings.
Sector:
Healthcare (see reasoning)
This legislation primarily falls under the Healthcare sector as it directly regulates the application and oversight of AI in therapy and psychotherapy services within a clinical setting. It seeks to ensure that only qualified professionals provide treatment, which is critical for patient care and safety. While there are indirect implications for government oversight and public services, the core focus on therapy and mental health clearly positions it within Healthcare. Other sectors such as Politics and Elections or Nonprofits and NGOs are not directly relevant as this text does not address electoral processes or the use of AI by NGOs.
Keywords (occurrence): artificial intelligence (9)
Description: An act to add Chapter 8 (commencing with Section 17370) to Part 2 of Division 7 of the Business and Professions Code, relating to business regulations.
Summary: The California Preventing Algorithmic Collusion Act of 2025 aims to regulate pricing algorithms to prevent collusion by prohibiting their use or distribution with competitor data, enforcing penalties for violations.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Melissa Hurtado
(2 total sponsors)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on JUD. (April 10, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text pertains to the regulation of pricing algorithms, explicitly mentioning the terms 'pricing algorithm' and 'computational process derived from machine learning or other artificial intelligence techniques.' This indicates a focus on how these technologies might operate in the commercial context and what regulations are set to prevent misuse and ensure accountability. The legislation's provisions on reporting, transparency, and prohibiting the use of competitor data highlight concerns about social consequences of AI, particularly regarding fairness, competition, and consumer protection, all of which fit within the Social Impact category. Additionally, the requirement for companies to provide detailed information about their algorithms corresponds to Data Governance, as it seeks to ensure transparency and accuracy in data management related to AI systems. System Integrity is also relevant, as the text addresses certification and accountability for actions involving AI systems, emphasizing oversight and the need for proper checks and balances. Robustness, which focuses more on performance benchmarks and compliance auditing specific to AI's technical performance, is less relevant here, as the text does not delve into performance standards but rather emphasizes equitable use and regulatory compliance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This bill specifically addresses issues that arise from AI technologies deployed in business environments, particularly how they can contribute to competition or harm consumer interests. The focus on algorithms used for pricing indicates a clear link to Private Enterprises, Labor, and Employment. The bill's provisions can impact the way businesses operate regarding algorithmic regulation, especially for those with significant annual revenues, indicating a broader concern for corporate governance and responsible practices in AI application. While the bill does indeed deal with aspects that may involve government oversight, such as reporting to the Attorney General and the legal ramifications of improper algorithmic use, it remains primarily anchored in the realm of private enterprise rather than government agencies or public services. Thus, the most relevant sectors are primarily centered around Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (1) machine learning (1) algorithm (35)
Description: As enacted, enacts the "Modernization of Towing, Immobilization, and Oversight Normalization (MOTION) Act." - Amends TCA Title 4; Title 5; Title 6; Title 7; Title 39; Title 47; Title 48; Title 55; Title 56; Title 62; Title 66 and Title 67.
Summary: This bill amends multiple sections of Tennessee law to modernize regulations on towing and parking practices, establishing the "MOTION Act" to enhance oversight and protect consumers against unfair towing and booting practices.
Collection: Legislation
Status date: May 31, 2024
Status: Passed
Primary sponsor: Jake McCalmon
(22 total sponsors)
Last action: Comp. became Pub. Ch. 1017 (May 31, 2024)
The text primarily addresses revisions to parking regulations in Tennessee and does not explicitly mention AI, algorithms, or any related technology associated with the categories of Social Impact, Data Governance, System Integrity, or Robustness. The single mention of an 'automatic license plate reader' describes a tool that utilizes an algorithm but does not engage with any concepts directly related to AI ethics or governance as outlined in the categories. Overall, the core content of the act focuses on parking enforcement rather than the implications of AI technology.
Sector: None (see reasoning)
The text does not address specific sectors such as Politics and Elections, Government Agencies and Public Services, or any others that involve the use or regulation of AI technology. Instead, it proposes amendments relevant to parking enforcement and vehicle management, which do not inherently involve AI applications in any sector. The mention of an 'automatic license plate reader' does not align with the broader discussions typically associated with the defined sectors.
Keywords (occurrence): automated (1)
Description: To amend the Energy Independence and Security Act of 2007 to direct research, development, demonstration, and commercial application activities in support of supercritical geothermal and closed-loop geothermal systems in supercritical various conditions, and for other purposes.
Summary: The Supercritical Geothermal Research and Development Act aims to enhance research, development, and commercialization of supercritical and closed-loop geothermal systems to improve geothermal energy utilization in various conditions.
Collection: Legislation
Status date: June 7, 2024
Status: Introduced
Primary sponsor: Frank Lucas
(2 total sponsors)
Last action: Subcommittee Hearings Held (July 23, 2024)
Data Governance
System Integrity (see reasoning)
The text does include references to AI through terms such as 'machine learning algorithms,' showing a connection to the use of AI in optimizing and enhancing geothermal research and applications. However, the primary focus of the legislation is geothermal energy rather than AI's social impacts or regulatory frameworks. The mention of machine learning suggests some relevance to the Data Governance and System Integrity categories, but it is not the primary thrust of the bill, and absent broader societal or performance implications, the scores for Social Impact and Robustness remain lower.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The focus of this legislation is primarily geothermal energy research and development, with only tangential mentions of AI, which appear in the context of enhancing geothermal technology rather than of any particular sector. While there is a slight connection to the Government Agencies and Public Services sector due to the bill's regulatory nature, sectors like Healthcare or Private Enterprises are not implicated. The relevance is limited to applications within the energy sector, and the bill's broader impact does not clearly extend to other sectors.
Keywords (occurrence): machine learning (1)
Description: Allowing bargaining over matters related to the use of artificial intelligence.
Summary: The bill allows collective bargaining regarding the adoption and modification of artificial intelligence technologies affecting employee wages or performance evaluations at Washington's higher education institutions. It aims to protect employee interests in an evolving technological landscape.
Collection: Legislation
Status date: March 8, 2025
Status: Engrossed
Primary sponsor: Lisa Parshley
(47 total sponsors)
Last action: First reading, referred to Labor & Commerce. (March 11, 2025)
Societal Impact (see reasoning)
The provided text focuses on legislation governing the use of artificial intelligence in the context of collective bargaining agreements. Its references to 'artificial intelligence' and related technologies go directly to the legislation's aim: ensuring that the adoption and modification of AI technologies are subject to collective bargaining when they affect employee wages, hours, or working conditions. The categories can therefore be evaluated in terms of AI's implications for labor relations. While the act mentions technology, it aligns structurally with social implications rather than data governance, system integrity, or robustness; its relevance to the societal aspects of AI usage, fairness, and employee rights is very strong.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text explicitly addresses the role of artificial intelligence in collective bargaining, which is most relevant to the workforce, labor relations, and government interactions with employees. Provisions governing decisions to adopt AI that affect employee conditions tie directly to the labor market. Broader areas such as healthcare and other specific sectors are not discussed and therefore score lower. The text does suggest some engagement with public services, since government agencies oversee employment matters, giving that sector slight relevance. Ultimately, the legislation's core focus is the intersection of AI and labor relations.
Keywords (occurrence): artificial intelligence (10) machine learning (1)
Description: A bill to enhance bilateral defense cooperation between the United States and Israel, and for other purposes.
Summary: The United States-Israel Defense Partnership Act of 2025 aims to strengthen defense cooperation between the U.S. and Israel, enhancing joint initiatives, technology development, and addressing mutual security threats, particularly those related to unmanned systems.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Dan Sullivan
(19 total sponsors)
Last action: Read twice and referred to the Committee on Foreign Relations. (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity
Robustness (see reasoning)
The text explicitly addresses the integration of artificial intelligence in defense technologies, particularly in relation to counter-unmanned systems and emerging technologies. It also highlights development, testing, and evaluation processes related to AI. The language used reveals a strong connection to AI-related legislative discussions, particularly concerning its impacts on national security and military innovation. Thus, the categories of Social Impact, Data Governance, System Integrity, and Robustness all have significant relevance to the bill's focus on AI in defense contexts, particularly through the lens of technological development and security considerations.
Sector:
Government Agencies and Public Services
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
The text discusses defense partnership initiatives and technological enhancements involving AI, particularly in military contexts. It is directly relevant to sectors like Government Agencies and Public Services due to the involvement of the Department of Defense and the program's intent to bolster national security. Additionally, it touches on Academic and Research Institutions through proposals for joint research and development initiatives between the US and Israeli entities. The nature of the defense emphasis means that it does not directly address sectors like Healthcare, Private Enterprises, etc., receiving lower relevance scores.
Keywords (occurrence): artificial intelligence (1)
Description: Relating to the use of artificial intelligence-based algorithms in utilization review conducted for certain health benefit plans.
Summary: This bill mandates health insurance issuers to disclose the use of AI algorithms in utilization reviews, ensuring they minimize bias and comply with clinical guidelines. It aims to enhance transparency and accountability.
Collection: Legislation
Status date: April 9, 2025
Status: Engrossed
Primary sponsor: Nathan Johnson
(sole sponsor)
Last action: Received from the Senate (April 10, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation centers on the use of artificial intelligence-based algorithms in health benefit plans, particularly in utilization review. It sets requirements for transparency and accountability in the deployment of AI algorithms, aiming to minimize bias and ensure compliance with clinical guidelines. It therefore ties strongly to Social Impact through its focus on bias reduction and ethical use, and to Data Governance through its mandates on data management and adherence to evidence-based guidelines. System Integrity is relevant because the bill requires oversight and regular reporting on the AI algorithms in use, while Robustness is less relevant since the text does not address performance benchmarks or auditing requirements beyond compliance. Overall, the legislation has clear implications for fairness in healthcare, consistent data management practices, and the integrity of AI implementations.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text directly pertains to the healthcare sector by outlining regulations concerning the use of AI algorithms in the review process of health benefit plans. It mandates transparency, bias reduction, and adherence to clinical guidelines, which are critical in the healthcare context. Additionally, the legislation's focus on health benefit plans implies regulations that affect how healthcare services are delivered, further anchoring its relevance to the Healthcare sector. While it does touch upon accountability and oversight, which could align with Government Agencies, the primary focus is on healthcare providers and insurers. Therefore, Healthcare receives a high score, while others remain lower due to lack of direct implications.
Keywords (occurrence): artificial intelligence (9) algorithm (10)
Description: Establishes the artificial intelligence training data transparency act requiring developers of generative artificial intelligence models or services to post on the developer's website information regarding the data used by the developer to train the generative artificial intelligence model or service, including a high-level summary of the datasets used in the development of such system or service.
Summary: The Artificial Intelligence Training Data Transparency Act mandates developers to publicly disclose data sources used to train generative AI models, ensuring transparency and user awareness about potential biases and data origins.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Alex Bores
(sole sponsor)
Last action: referred to science and technology (March 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly focuses on the use of artificial intelligence in generative AI models, emphasizing transparency around the training data developers use. This aligns strongly with accountability and bias concerns about the impact of AI systems on society, so Social Impact receives a high relevance score; the data-transparency requirements reflect recognition of the societal risks AI systems pose, including potential discrimination and misinformation. Data Governance is also very relevant, since the legislation governs the management of data used in AI models and addresses data integrity, attribution, claims of ownership, and the handling of personal data. System Integrity is moderately relevant, given the accountability measures for model development and the documentation needed to establish trust in AI systems. Robustness receives a low relevance score because the bill does not discuss performance benchmarks or auditing and compliance mechanisms, which are central to that category.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The legislation sets development and transparency requirements for generative AI models, placing it primarily within Private Enterprises given the commercial nature of AI development. It also implicates Government Agencies and Public Services, since the transparency requirements would affect public accountability and governance of AI systems in services offered to New Yorkers. Academic and Research Institutions could also find the act relevant if they evaluate the implications of such transparency for AI research. Politics and Elections is not addressed, so it receives a low score, and the Judicial System is likewise not covered, as the legislation does not target legal processes or AI's role in them. Nonprofits and NGOs might benefit from the transparency requirements, but they are not a focus of the act. The remaining sectors receive low relevance scores due to their indirect connection to the text's primary themes.
Keywords (occurrence): artificial intelligence (19) automated (1)
Description: Altering the selection of the membership and chair of the Maryland Cybersecurity Council; and requiring the Council, working with certain entities, to assess and address cybersecurity threats and associated risks from artificial intelligence and quantum computing.
Summary: The bill modifies the Maryland Cybersecurity Council's membership and leadership structure, requiring the Council to address cybersecurity threats from artificial intelligence and quantum computing.
Collection: Legislation
Status date: April 7, 2025
Status: Enrolled
Primary sponsor: Brian Feldman
(7 total sponsors)
Last action: Passed Enrolled (April 7, 2025)
Description: To (1) limit the use of electronic monitoring by an employer, and (2) establish various requirements concerning the use of artificial intelligence systems by employers.
Summary: The bill establishes protections for employees regarding artificial intelligence and electronic monitoring, ensuring transparency, limiting invasive surveillance, and regulating the deployment of high-risk AI systems in the workplace.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Labor and Public Employees Committee
(sole sponsor)
Last action: File Number 546 (April 7, 2025)
Description: To implement the Governor's budget recommendations.
Summary: The bill establishes a framework for managing state data and AI technologies. It designates a Chief Data Officer, promotes data sharing, and creates an AI regulatory sandbox to enhance innovation and economic development in Connecticut.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Last action: File Number 606 (April 9, 2025)
Summary: The bill involves the Rules Committee’s functions and leadership, focusing on bipartisan efforts in improving electoral processes, security, and representation within Congress while addressing various operational challenges.
Collection: Congressional Record
Status date: Dec. 16, 2024
Status: Issued
Source: Congress
The text primarily recounts a wide range of legislative activities and experiences of members of the Rules Committee, with little substantive focus on AI. Although artificial intelligence is briefly mentioned in connection with elections, the reference does not address specific impacts of, or regulations governing, the use of AI. The text's relevance to Social Impact, Data Governance, System Integrity, and Robustness is therefore minimal. The only notable mention comes near the end, where AI is identified as a focus for election-related legislation, but even that reference does not elaborate on its implications for society, governance, or the robustness of AI systems.
Sector:
Politics and Elections (see reasoning)
The text discusses the Rules Committee's work, which revolves heavily around election legislation and operational improvements in the Senate, with no substantive references to other sectors such as healthcare or the judicial system. While it touches on elections and briefly mentions artificial intelligence, it does not detail AI's role in political campaigns or any legislation-driven implications. Given content that primarily reflects governance, 'Politics and Elections' bears the most relevance, while other sectors are minimally touched upon or entirely absent.
Keywords (occurrence): artificial intelligence (1)