4799 results:
Description: As enacted, enacts the "Modernization of Towing, Immobilization, and Oversight Normalization (MOTION) Act." - Amends TCA Title 4; Title 5; Title 6; Title 7; Title 39; Title 47; Title 48; Title 55; Title 56; Title 62; Title 66 and Title 67.
Summary: This bill amends multiple sections of Tennessee law to modernize regulations on towing and parking practices, establishing the "MOTION Act" to enhance oversight and protect consumers against unfair towing and booting practices.
Collection: Legislation
Status date: May 31, 2024
Status: Passed
Primary sponsor: Jake McCalmon
(22 total sponsors)
Last action: Comp. became Pub. Ch. 1017 (May 31, 2024)
The text primarily addresses revisions to parking regulations in Tennessee and does not explicitly mention AI, algorithms, or any related technology associated with the categories of Social Impact, Data Governance, System Integrity, or Robustness. The single mention of an 'automatic license plate reader' describes a tool that utilizes an algorithm but does not engage with any concepts directly related to AI ethics or governance as outlined in the categories. Overall, the core content of the act focuses on parking enforcement rather than the implications of AI technology.
Sector: None (see reasoning)
The text does not address specific sectors such as Politics and Elections, Government Agencies and Public Services, or any others that involve the use or regulation of AI technology. Instead, it proposes amendments relevant to parking enforcement and vehicle management, which do not inherently involve AI applications in any sector. The mention of an 'automatic license plate reader' does not align with the broader discussions typically associated with the defined sectors.
Keywords (occurrence): automated (1)
Description: Prohibit distributing deepfakes under the Nebraska Political Accountability and Disclosure Act
Summary: The bill prohibits the distribution of deceptive deepfakes targeting political candidates 90 days before elections, providing exceptions for disclosures and certain media types, while allowing candidates to seek legal relief.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: John Cavanaugh
(sole sponsor)
Last action: Referred to Government, Military and Veterans Affairs Committee (Jan. 24, 2025)
Societal Impact (see reasoning)
The text primarily addresses the distribution of deepfakes and synthetic media within the electoral process, focusing on preventing misinformation and protecting political candidates' reputations. It directly discusses identified terms like 'deepfake' and 'synthetic media,' demonstrating the impact of AI on society, particularly in political contexts. This indicates a significant relevance to the Social Impact category, as it touches on misinformation's effects on public trust and electoral integrity. It does not discuss data governance, system integrity, or robustness directly, so those categories receive lower scores.
Sector:
Politics and Elections (see reasoning)
The text is highly relevant to the Politics and Elections sector, given its focus on the regulation of deepfakes in political campaigns and electoral processes. It highlights legal considerations specific to election-related misinformation, which places it squarely within this sector. Although it may touch on aspects relevant to other sectors, they are not explicitly addressed or central to the text's purpose, leading to low scores for those sectors.
Keywords (occurrence): artificial intelligence (1) deepfake (7) synthetic media (4)
Description: Synthetic media; penalty. Expands the applicability of provisions related to defamation, slander, and libel to include synthetic media, defined in the bill. The bill makes it a Class 1 misdemeanor for any person to use any synthetic media for the purpose of committing any criminal offense involving fraud, constituting a separate and distinct offense with punishment separate and apart from any punishment received for the commission of the primary criminal offense. The bill also authorizes the ...
Summary: The bill introduces penalties for using synthetic media to commit fraud or other criminal offenses in Virginia, allowing for civil actions and establishing a work group to study enforcement related to such technology.
Collection: Legislation
Status date: Feb. 7, 2024
Status: Engrossed
Primary sponsor: Michelle Maldonado
(5 total sponsors)
Last action: Continued to 2025 in Courts of Justice (11-Y 2-N) (Feb. 19, 2024)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions synthetic media, generative artificial intelligence, and penalties related to their misuse in committing fraud. This is highly relevant to Social Impact, as it addresses the potential ramifications of synthetic media on personal rights and fraud, reflecting societal issues like misinformation and defamation. It also touches on accountability measures for the production and use of AI technologies with a direct societal effect. Data Governance is somewhat relevant since the definition of synthetic media intersects with data accuracy and potential restrictions on data usage, although the text doesn't delve deeply into those governance aspects. System Integrity receives a lower relevance score, as it doesn't deal much with security or operational integrity of AI systems, but rather with legal definitions and implications. Robustness only somewhat pertains as it lacks focus on performance benchmarking or auditing of AI systems. Thus, Social Impact likely takes precedence, considering the societal consequences outlined in the text.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily concerns the use of synthetic media in the context of legal actions, specifically pertaining to defamation and fraud. This ties closely to the Judicial System, which deals with legal implications and case management regarding crimes committed using AI technologies. Furthermore, the text touches on findings and recommendations which could guide future legislative actions, suggesting relevance to Government Agencies and Public Services as they may be involved in the enforcement of such regulations and legal standards. The focus on synthetic media's misuse in fraudulent circumstances may also implicate Private Enterprises, Labor, and Employment indirectly, particularly regarding employment practices influenced by such technologies. Academic and Research Institutions may have a slight relevance due to the potential study of AI impacts raised in the text, but it is not direct. Overall, the strongest categories here are Judicial System and Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (7) synthetic media (7) foundation model (1)
Description: Allowing bargaining over matters related to the use of artificial intelligence.
Summary: The bill allows collective bargaining regarding the adoption and modification of artificial intelligence technologies affecting employee wages or performance evaluations at Washington's higher education institutions. It aims to protect employee interests in an evolving technological landscape.
Collection: Legislation
Status date: March 8, 2025
Status: Engrossed
Primary sponsor: Lisa Parshley
(47 total sponsors)
Last action: First reading, referred to Labor & Commerce. (March 11, 2025)
Societal Impact (see reasoning)
The text focuses on legislation governing the use of artificial intelligence in the context of collective bargaining agreements. Its references to 'artificial intelligence' and related technologies serve the bill's aim of ensuring that the adoption and modification of AI technologies are subject to collective bargaining when they affect employee wages, hours, or working conditions. The categories should therefore be weighed against the implications of AI for labor relations. While the act mentions technology, it aligns more closely with social implications than with data governance, system integrity, or robustness, making it very relevant to the societal aspects of AI usage, fairness, and employee rights.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text explicitly addresses the role of artificial intelligence in collective bargaining, which is primarily relevant to the workforce, labor relations, and government interactions with employees. Decisions about adopting AI that affect employee conditions tie directly to the labor market, so Private Enterprises, Labor, and Employment is highly relevant. Healthcare and other specific sectors are not directly discussed and therefore score lower. The text does suggest some engagement with public services, since government agencies oversee employment matters, which gives it slight relevance there. Ultimately, the legislation's core focus is the intersection of AI and labor relations.
Keywords (occurrence): artificial intelligence (10) machine learning (1)
Description: To amend the Energy Independence and Security Act of 2007 to direct research, development, demonstration, and commercial application activities in support of supercritical geothermal and closed-loop geothermal systems in various supercritical conditions, and for other purposes.
Summary: The Supercritical Geothermal Research and Development Act aims to enhance research, development, and commercialization of supercritical and closed-loop geothermal systems to improve geothermal energy utilization in various conditions.
Collection: Legislation
Status date: June 7, 2024
Status: Introduced
Primary sponsor: Frank Lucas
(2 total sponsors)
Last action: Subcommittee Hearings Held (July 23, 2024)
Data Governance
System Integrity (see reasoning)
The text references AI through terms such as 'machine learning algorithms,' showing a connection to the use of AI in optimizing and enhancing geothermal research and applications. However, the primary focus of the legislation is geothermal energy rather than AI-related social impacts or regulatory frameworks. The mention of machine learning suggests some relevance to the Data Governance and System Integrity categories, but it is not the primary thrust of the bill. Because the text does not engage with the broader societal implications of AI or with performance benchmarking, Social Impact and Robustness score lower.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The focus of this legislation is primarily on geothermal energy research and development, with only tangential mentions of AI. The references to AI are included within the context of enhancing geothermal technology rather than addressing specific sectors significantly. Therefore, while there is a slight connection to the Government Agencies and Public Services sector due to its regulatory nature, the overall impact on sectors like Healthcare or Private Enterprises does not seem relevant. The understanding is limited to applications within the energy sector, and its broader impact does not clearly translate to other sectors.
Keywords (occurrence): machine learning (1)
Description: Nursing Practice Changes
Summary: House Bill 178 amends New Mexico's Nursing Practice Act to clarify nursing scopes of practice, enhance the Board of Nursing's powers, modify licensing processes, and ensure confidentiality in disciplinary actions.
Collection: Legislation
Status date: Jan. 28, 2025
Status: Introduced
Primary sponsor: Janelle Anyanonu
(10 total sponsors)
Last action: HHHC: Reported by committee with Do Pass recommendation with amendment(s) (Feb. 13, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions 'artificial intelligence' and defines it as a broad category of digital technologies involving algorithms that drive software and robotics behavior, highlighting its relevance to nursing practice. It also includes a mandate for the board to establish standards for the use of AI in nursing, which signifies a focus on the implications of AI in healthcare and nursing practice. This directly ties the legislation to the Social Impact category as it addresses the integration and implications of AI in a healthcare context. Data Governance is moderately relevant as it may imply considerations of data management and accuracy within AI systems used in nursing but lacks specifics in the text. System Integrity is slightly relevant because the mention of AI standards implies some degree of oversight but does not explicitly address security or transparency. Lastly, Robustness is also slightly relevant since the text calls for new standards but doesn't focus on certification or auditing of AI systems.
Sector:
Healthcare (see reasoning)
The text directly addresses the use of AI in nursing, clearly placing it within the healthcare sector. The definition of AI and the requirement to develop standards reflect a focused application of AI in clinical settings, impacting nursing practices. Therefore, Healthcare is assigned a high relevance score. Other sectors such as Government Agencies and Public Services might receive slight relevance because the board of nursing functions somewhat like a government agency, but the focus remains predominantly on healthcare. The legislation does not address AI in contexts like Politics and Elections, Judicial System, or Academic and Research Institutions, thereby scoring them as not relevant.
Keywords (occurrence): artificial intelligence (2)
Description: Enacts into law major components of legislation necessary to implement the state public protection and general government budget for the 2024-2025 state fiscal year; establishes the crime of assault on a retail worker (Part A); establishes the crime of fostering the sale of stolen goods as a class A misdemeanor (Part B); adds to the list of specified offenses that constitutes a hate crime (Part C); authorizes the governor to close correctional facilities upon notice to the legislature (Part D...
Summary: The bill implements components of New York's 2024-2025 budget, introducing new crimes like assault on retail workers and fostering the sale of stolen goods, while enhancing safety measures.
Collection: Legislation
Status date: Jan. 17, 2024
Status: Introduced
Primary sponsor: Budget
(sole sponsor)
Last action: SUBSTITUTED BY A8805C (April 18, 2024)
The provided text primarily focuses on implementing state legislation aimed at public protection and adjustments to the penal code. The text does not engage with AI systems, their impacts on society, or legislation directly related to AI governance or integrity. As such, it appears to be completely unrelated to AI-specific issues, making it irrelevant for all categories concerning AI.
Sector: None (see reasoning)
The text outlines various legal amendments and public protection measures but lacks any discussion or reference to AI-related use cases or regulations within specific sectors. This absence of AI content likewise renders the text non-relevant to the identified sectors, leading to a score of 1 across all sectors.
Keywords (occurrence): automated (2)
Description: Use of tenant screening software that uses nonpublic competitor data to set rent prohibited, and use of software that is biased against protected classes prohibited.
Summary: This bill prohibits the use of tenant screening software that relies on nonpublic competitor data for setting rent and bans algorithms biased against protected classes, aiming to enhance fair housing practices in Minnesota.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Michael Howard
(sole sponsor)
Last action: Introduction and first reading, referred to Housing Finance and Policy (Feb. 19, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This bill addresses two significant issues of AI usage: the prohibition of tenant screening software that employs nonpublic competitor data to set rents and the restriction on algorithms or AI utilized for background screening that may have a biased impact on protected classes. Consequently, it is extremely relevant to Social Impact due to its emphasis on preventing discrimination and bias within AI systems, thereby safeguarding social equity. The Data Governance category is also highly relevant as it concerns the use of data (public and nonpublic) in algorithms and the implications of bias in data used for algorithmic decision-making. The relevance to System Integrity is moderate since the bill does touch upon accountability and responsible AI usage in a specific context. Robustness receives a lower relevance score because the bill does not address performance benchmarking or auditing; instead, it directly protects vulnerable classes against biased AI decisions.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily impacts the housing sector due to its focus on tenant screening algorithms and the implications for renters. The bill intersects with topics in Government Agencies and Public Services through potential regulation enforcement regarding how public services are delivered in the housing market. However, it does not directly address the political process or electoral integrity, nor does it focus on AI's role in the judicial system or healthcare. As a result, the most relevant sector for categorization is Private Enterprises, Labor, and Employment, as the bill pertains to landlord practices in the rental market. The presence of AI in tenant applications also aligns with academic discussions about algorithm bias. Overall, the strongest connections lie with Private Enterprises and Government Agencies.
Keywords (occurrence): algorithm (2)
Description: Creates the Artificial Intelligence Safety and Security Protocol Act. Provides that a developer shall produce, implement, follow, and conspicuously publish a safety and security protocol that includes specified information. Provides that, no less than every 90 days, a developer shall produce and conspicuously publish a risk assessment report that includes specified information. Provides that, at least once every calendar year, a developer shall retain a reputable third-party auditor to produc...
Summary: The Artificial Intelligence Safety and Security Protocol Act mandates developers to establish, publish, and regularly assess safety protocols for AI systems, aiming to mitigate risks and enhance public safety.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Daniel Didech
(sole sponsor)
Last action: Referred to Rules Committee (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses various aspects of artificial intelligence, particularly in the context of safety and security protocols for developers. This direct focus on AI systems and their management indicates a high relevance across almost all categories. The emphasis on risk assessment, third-party audits, and safety protocols implies significant implications for societal impacts, data governance, system integrity, and robustness. Each section discusses critical risks that AI could pose, necessitating human oversight and transparent protocols, thus correlating well with the themes within the categories. As such, all categories are expected to receive a high relevance score based on the content of the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The proposed legislation targets AI developers directly and outlines compliance protocols that apply to the AI sector as a whole. It does not focus on specific sectors like healthcare, politics, or nonprofits; rather, it applies broadly to any sector where AI is used, especially in public safety and security contexts. Because the act functions as a general framework of governance, data management, and public accountability rather than being confined to one sector, neutral scores are appropriate for individual sectors, which therefore score lower than the overarching categories.
Keywords (occurrence): artificial intelligence (8) foundation model (9)
Description: Relating to the disclosure of information with regard to artificial intelligence.
Summary: The bill mandates disclosure of information by companies using artificial intelligence services, detailing model usage and third-party inputs, and prohibits discrimination against whistleblowers. It aims to increase transparency regarding AI operations.
Collection: Legislation
Status date: Dec. 19, 2024
Status: Introduced
Primary sponsor: Bryan Hughes
(sole sponsor)
Last action: Referred to Business & Commerce (Feb. 3, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text discusses the disclosure of information related to artificial intelligence, specifically the obligations to inform individuals about AI models used in services. It addresses the societal impact of AI by advocating transparency, which could lead to better accountability and trust among users. This aligns with the Social Impact category. Additionally, the requirements for disclosure, data management, and cooperation with regulatory authorities pertain to Data Governance. The focus on ensuring compliance and the integrity of the AI systems through oversight relates to System Integrity. However, the text does not prioritize performance benchmarks or auditing measures that would be associated with Robustness. Therefore, these categories are scored accordingly.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily addresses the regulation of AI use within private enterprises that serve individuals, particularly those generating substantial revenue. While it implicates business practices, it does not specifically target areas like political campaigns or judicial applications. The focus on service delivery pertains significantly to Government Agencies and Public Services. However, the legislation does not explicitly mention sectors like Healthcare, Academic Institutions, or others specified, leading to limited relevance for those sectors.
Keywords (occurrence): artificial intelligence (9)
Description: Establish the Commonwealth Artificial Intelligence Consortium Task Force to design the needs, collect data, develop artificial intelligence solutions, foster innovation and competitiveness, promote artificial intelligence literacy, and ensure trusted artificial intelligence development and governance; establish task force membership; require the task force to meet as needed; require the task force to submit its findings and recommendations to the Legislative Research Commission by November 21...
Summary: The bill establishes the Commonwealth Artificial Intelligence Consortium Task Force in Kentucky to foster collaboration among stakeholders, develop AI solutions tailored to local needs, and promote innovation and literacy in AI.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Amanda Mays Bledsoe
(sole sponsor)
Last action: to Committee on Committees (S) (March 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the establishment of a task force focused on artificial intelligence (AI) and its implications for the Commonwealth. It discusses the potential of AI to revolutionize industries, improve lives, drive economic growth, and enhance services, which speaks to the social impact category. The mention of promoting AI literacy and ensuring trusted development aligns with governance considerations under social impact. The task force is also tasked with collecting data and developing AI solutions, which can relate to data governance, especially regarding the management of data used in AI systems. The focus on industry collaboration and innovation touches upon aspects of system integrity. However, there are no specific mentions of benchmarking or auditing AI systems, which could tie into robustness. Overall, categories related to social impact and data governance are highly relevant due to the focus on how AI affects society and how data should be managed in AI contexts.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text emphasizes forming a consortium task force that brings together state and local governments, educational institutions, healthcare providers, and industry stakeholders to address AI's implementation and impact, indicating relevance to several sectors. Healthcare is specifically mentioned, including rural healthcare challenges. Government agencies are directly involved in establishing the task force, showing implications for governance. The focus on innovation and competitiveness pertains to private enterprises, and the inclusion of educational institutions gives the text relevance to Academic and Research Institutions. Hence, Government Agencies and Public Services, Healthcare, Private Enterprises, and Academic and Research Institutions are highly relevant based on the direct implications of AI in these domains. Sectors such as Politics and Elections, the Judicial System, Nonprofits and NGOs, and International Cooperation have lesser relevance because the text does not explicitly address them. The scores reflect this context.
Keywords (occurrence): artificial intelligence (16) machine learning (1)
Description: A BILL to be entitled an Act to amend Part 2 of Article 6 of Chapter 2 of Title 20 of the Official Code of Georgia Annotated, relating to competencies and core curriculum under the "Quality Basic Education Act," so as to provide that, beginning in the 2031-2032 school year, a computer science course shall be a high school graduation requirement; to provide for certain computer science courses to be substituted for units of credit graduation requirements in certain other subject areas; to prov...
Summary: The Quality Basic Education Act mandates computer science as a high school graduation requirement starting in 2031, addressing critical workforce needs and promoting technology education in Georgia.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Bethany Ballard
(5 total sponsors)
Last action: House Hopper (Feb. 18, 2025)
Societal Impact (see reasoning)
The text explicitly addresses the need to include computer science in the education curriculum, emphasizing the importance of programming, algorithmic processes, artificial intelligence, and the development of logical critical thinking skills. It tackles issues such as the low percentage of high school graduates taking computer science courses, which directly pertains to the impact of AI on educational standards and workforce readiness. Thus, the discussions around AI's role in education and the need for skills relevant to the current job market support strong relevance to 'Social Impact'. However, while it does mention computer science concepts such as AI, it does not delve into data management, system integrity, or benchmarks that are central to 'Data Governance,' 'System Integrity,' or 'Robustness,' resulting in a lower relevance for these categories.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The focus on education reform through the mandatory inclusion of computer science courses implies a significant impact on 'Academic and Research Institutions' as it highlights necessary skills for future generations. The emphasis on computer science also indicates potential implications for 'Private Enterprises, Labor, and Employment', as it relates to preparing students for a tech-driven job market. The relevance to the other sectors like 'Healthcare', 'Government Agencies and Public Services', etc., is minimal as they do not directly engage with the content of this text. Therefore, scores reflect this focus on education and labor preparedness.
Keywords (occurrence): artificial intelligence (1) algorithm (1)
Description: As enacted, enacts the "Tennessee Artificial Intelligence Advisory Council Act." - Amends TCA Title 4.
Summary: The bill establishes the Tennessee Artificial Intelligence Advisory Council to create an action plan for effective AI use in state government, enhancing service delivery and economic growth while ensuring responsible practices.
Collection: Legislation
Status date: May 29, 2024
Status: Passed
Primary sponsor: Patsy Hazlewood
(3 total sponsors)
Last action: Effective date(s) 05/21/2024 (May 29, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text discusses the creation of the Tennessee Artificial Intelligence Advisory Council, which focuses on guiding the state's use of artificial intelligence to improve government services and leverage AI for economic benefit. The emphasis on ethical use, economic implications, and transparency aligns closely with the impact of AI on society and individuals (Social Impact) and with the governance and accuracy of the data AI systems use (Data Governance). References to governance frameworks and the evaluation of AI risks speak to System Integrity. The document does not delve deeply into performance benchmarking or compliance standards, so Robustness is less relevant than the other categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The bill is highly relevant to several sectors, particularly Government Agencies and Public Services, as it directly pertains to the use of AI in government. It focuses on how AI can improve the efficiency of state and local government services and also addresses workforce development and the economic implications of AI, suggesting some relevance to Private Enterprises, Labor, and Employment. Touchpoints on AI-related education and research lend some relevance to Academic and Research Institutions. Other sectors, such as Politics and Elections, Healthcare, and the Judicial System, do not apply directly to the text's content and receive lower scores.
Keywords (occurrence): artificial intelligence (22)
Description: Creates the Illinois High-Impact AI Governance Principles and Disclosure Act. Makes findings. Defines terms. Requires the Department of Innovation and Technology to adopt rules regulating businesses that use AI systems to ensure compliance with the 5 principles of AI governance. Lists the 5 principles of AI governance. Requires the Department to adopt rules to ensure that a business that uses an AI system publishes a report on the business's website, with certain requirements. Provides for a ...
Summary: The Illinois High-Impact AI Governance Principles and Disclosure Act establishes regulations for businesses using AI, focusing on safety, transparency, accountability, fairness, and contestability, while requiring public disclosure of compliance.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Janet Yang Rohr
(sole sponsor)
Last action: Referred to Rules Committee (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text explicitly pertains to AI governance principles, highlighting the societal concerns around AI, including biases, transparency, and accountability. The principles outlined, such as Safety, Transparency, and Fairness, align well with the objectives of the Social Impact category, as they focus on protecting individuals and communities from potential harms caused by AI systems. The requirement for public disclosures and the establishment of civil penalties for violations indicate strong considerations for both accountability and consumer protections in the usage of AI. Therefore, it is very relevant to the Social Impact category. Data governance is moderately relevant because the text mandates compliance with AI governance principles and the need for public disclosures regarding the design and operation of AI systems, which indirectly relates to data management and accuracy, although it does not directly address data collection or permissions. The emphasis on accountability and transparency suggests a relevance to System Integrity, as this ensures secure practices in AI operations. Robustness is less relevant since the text does not delve into performance benchmarks or auditing structures for AI systems, resulting in a lower score for this category.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The legislation has implications across multiple sectors, primarily Government Agencies and Public Services, as it regulates AI use and governance in business contexts that likely involve public interaction and oversight. It indirectly connects to the Judicial System through its accountability provisions and civil penalties, which may necessitate legal processes. However, it does not explicitly address politics, healthcare, or academic implications. The mention of businesses indicates a connection to Private Enterprises, Labor, and Employment as well, but the bill is more about governance than employment conditions or competitive practices. Therefore, the most relevant sectors are Government Agencies and Public Services and, to a lesser extent, the Judicial System and Private Enterprises.
Keywords (occurrence): artificial intelligence (2) machine learning (1)
Description: Creates a commission on AI to be a central resource on the use of AI in this state. Directs the SCIO to hire a Chief Artificial Intelligence Officer. (Flesch Readability Score: 65.7). Establishes the Oregon Commission on Artificial Intelligence to serve as a central resource to monitor the use of artificial intelligence technologies and systems in this state and report on long-term policy implications. Directs the commission to provide an annual report to the Legislative Assembly. Allows the ...
Summary: The bill establishes the Oregon Commission on Artificial Intelligence to monitor AI use, assess its impacts, and make policy recommendations to foster innovation while ensuring safety and equity for Oregonians.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Daniel Nguyen
(2 total sponsors)
Last action: First reading. Referred to Speaker's desk. (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text relates to the establishment of a commission focused on the oversight and integration of artificial intelligence (AI) technologies within the state of Oregon. It discusses the need for equitable policies, monitoring the societal impacts of AI, and ensuring protection against risks such as discrimination and privacy violations. Given the range of implications addressed, notably concerning ethical concerns, equity, data protection, and the risks posed by AI systems, the bill is highly relevant to the Social Impact category. It mentions assessing economic opportunities and impacts on jobs, which solidifies its relevance to Data Governance as well, particularly regarding the management and protection of individual rights. Furthermore, the focus on ensuring transparency and safety in the deployment of AI points to implications for System Integrity. Notably, the bill does not specifically mention performance benchmarks or auditing standards, so it’s not as directly tied to the Robustness category. Overall, the text is most critical for the Social Impact, Data Governance, and System Integrity categories based on the AI-related content present.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The proposed legislation explicitly involves the creation of a commission that oversees AI technologies and systems, affecting various sectors. The scope of the bill extends to several areas, including the impact of AI on labor, privacy rights, ethics in technology, and equity considerations. Therefore, it has wide-ranging implications that influence a number of sectors, most notably Government Agencies and Public Services due to the regulatory oversight it will provide. The healthcare sector is touched upon through privacy and ethical considerations, but it is not the primary focus. As such, while it could tangentially relate to Healthcare, it is more directly relevant to Government Agencies and Public Services. Additionally, the bill may touch upon Private Enterprises through workforce impacts but lacks sufficient detail about specific corporate regulations, placing it at a lower relevance. The intersections with academic and research communities are also implied as the bill discusses education on AI, but again, it is not a primary focus. The other sectors, such as Politics and Elections, Judicial System, International Cooperation, Nonprofits, and Hybrid/Emerging, seem less relevant based on the content provided.
Keywords (occurrence): artificial intelligence (30)
Description: An act to add Section 38760 to the Vehicle Code, relating to vehicles.
Summary: The bill requires manufacturers of autonomous vehicles in California to report collisions and disengagements when operating in autonomous mode, enhancing transparency and safety regulations for such vehicles.
Collection: Legislation
Status date: Aug. 28, 2024
Status: Enrolled
Primary sponsor: Matt Haney
(2 total sponsors)
Last action: Senate amendments concurred in. To Engrossing and Enrolling. (Ayes 65. Noes 4.). (Aug. 28, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text focuses on autonomous vehicles and their regulation, particularly incident reporting. Key terms related to AI, such as 'autonomous mode', indicate relevance to AI's social impact through implications for safety, liability, and potential discrimination against vulnerable road users. The reporting and oversight requirements suggest a framework for accountability and safety in AI operations, affecting individuals and society as a whole; understanding how AI technologies can harm or benefit users aligns with the Social Impact category, indicating strong relevance. The Data Governance category is also relevant, as the bill addresses the collection and management of incident data, including mandates for transparent reporting. System Integrity is relevant because the provisions specify operational requirements and a manual override in problematic situations; however, the focus is primarily on reporting and regulation rather than internal security measures for the AI systems themselves, which limits relevance in this category. Robustness is less applicable since the text does not address performance benchmarks for AI systems and instead focuses on reporting mechanisms.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text addresses legislation concerning autonomous vehicles and is directly relevant to several sectors. Politics and Elections has limited relevance, as the text does not discuss AI's role in elections or political campaigns. Government Agencies and Public Services is highly relevant because the DMV and other agencies must manage incident reports and data for autonomous vehicles. The Judicial System is only slightly relevant; the bill touches on accountability but focuses on vehicle regulation rather than judicial applications. Healthcare is not applicable, as there is no mention of healthcare applications. Private Enterprises, Labor, and Employment is implicated through manufacturers' operational obligations, but the bill does not strongly address employment or corporate governance. Academic and Research Institutions have minor relevance, since the legislation does not engage educational contexts specifically, even though innovations may come from research. International Cooperation and Standards receives little mention and scores low. Nonprofits and NGOs have little relevance unless involved in related advocacy, and Hybrid, Emerging, and Unclassified could apply given the innovative nature of autonomous vehicles but lacks a strong basis here.
Keywords (occurrence): automated (2) autonomous vehicle (41)
Description: Artificial Intelligence Act
Summary: The Artificial Intelligence Act mandates documentation, risk assessment, and transparency for high-risk AI systems to prevent algorithmic discrimination, ensuring accountability for developers and deployers in New Mexico.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Christine Chandler
(4 total sponsors)
Last action: HCPAC: Reported by committee with Do Pass recommendation (Feb. 3, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is the 'Artificial Intelligence Act' and directly pertains to various aspects of AI regulation. It addresses algorithmic discrimination, requiring developers to be accountable for their AI outputs, and mandates risk management policies which influence social dynamics. For Data Governance, it emphasizes the need for complete documentation regarding data used in AI systems, addressing any potential biases or infringements, aligning with consumer privacy and accurate data management standards. System Integrity is a key focus as it outlines obligations for transparency in AI usage and oversight policies. Robustness is present as the Act sets frameworks for impact assessment and performance evaluation of AI, ensuring adherence to necessary benchmarks for safety and effectiveness. Each category pertains to the themes present in the text, reflecting the broader implications of the legislation on society, data handling, system reliability, and standardization in AI performance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
This Act has extensive implications across multiple sectors. In 'Politics and Elections', it sets the stage for how AI can be regulated within electoral contexts, safeguarding against algorithmic biases that can influence outcomes. 'Government Agencies and Public Services' is relevant, as it establishes regulatory frameworks that could affect AI deployment by public institutions. The 'Judicial System' is implicated due to provisions for citizens to seek civil action based on AI-related grievances, reflecting a concern for legal accountability in AI use. 'Healthcare' is significantly addressed, given the definitions and implications surrounding AI in delivering health services, ensuring ethical application. The Act also speaks to 'Private Enterprises, Labor, and Employment' by enforcing standards that affect corporate governance and labor practices in the face of AI implementation. 'Academic and Research Institutions' would also be directly relevant due to the emphasis on transparency and rigorous testing protocols that can inform research and advancements in AI. International cooperation issues may arise due to the multi-state implications of implementing such standards. Thus, the Act is of considerable relevance across most sectors, particularly those intersecting with AI's influence on society.
Keywords (occurrence): artificial intelligence (79)
Description: Camera usage prohibited for traffic safety enforcement, and previous appropriation cancelled.
Summary: The bill prohibits the use of traffic safety cameras for enforcing traffic laws in Minnesota, cancels funding for related programs, and repeals existing regulations on such systems.
Collection: Legislation
Status date: March 12, 2025
Status: Introduced
Primary sponsor: Drew Roach
(6 total sponsors)
Last action: Introduction and first reading, referred to Transportation Finance and Policy (March 12, 2025)
System Integrity (see reasoning)
The text primarily addresses regulations related to the use of traffic safety cameras, specifically prohibiting their use and outlining associated appropriations and definitions. Although there are mentions of 'automated license plate readers' and a 'traffic safety camera system' that could imply relevance to AI, the context does not deeply explore how these systems utilize AI technology, algorithms, or machine learning. Therefore, while it touches upon automation and data capture within the laws, the overarching focus is on prohibitory regulations rather than the social impact of AI, data governance, system integrity, or robustness in AI systems in a comprehensive manner.
Sector: None (see reasoning)
This legislation does not clearly relate to any specific sector that employs AI as defined in the sector descriptions, as its focus is on traffic safety enforcement mechanisms rather than broader applications across sectors. The mention of cameras and automated systems could initially suggest relevance to public services or law enforcement, but the bill prohibits their use rather than delineating guidelines or standards for AI application in these sectors. The core intention is regulatory in nature, centering on prohibition.
Keywords (occurrence): automated (3)
Description: An act to amend Section 1384 of the Health and Safety Code, and to amend Section 10127.19 of the Insurance Code, relating to health care coverage.
Summary: Assembly Bill 682 mandates health care service plans and insurers in California to report detailed monthly claims data, including denials and reasons for them. It aims to enhance transparency and accountability in health care coverage.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Liz Ortega
(2 total sponsors)
Last action: From printer. May be heard in committee March 17. (Feb. 15, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses the use of Artificial Intelligence (AI) in the processing and adjudication of health care claims within the scope of health care coverage reporting. It requires health care service plans to report the number of claims processed using AI. This connection suggests implications for consumer protections and accountability in the context of AI. Given that AI can impact individuals through automated decisions in health care, issues relating to fairness, bias, and consumer protections are pertinent. Hence, the Social Impact category has significant relevance. The Data Governance category is also relevant due to its focus on reporting accuracy and the inclusion of claims processing data that may involve AI, addressing data collection protocols. The System Integrity category is relevant as it involves measures of transparency and oversight of AI use in claims processing. However, the Robustness category appears less relevant since the text primarily focuses on reporting rather than the performance benchmarks or certification of AI systems. Overall, the text mainly pertains to social implications, governance of data, and system integrity related to health care AI applications.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text is highly relevant to the Healthcare sector since it specifically deals with health care coverage, reporting requirements, and the incorporation of AI into claims processing and adjudication. The legislation aims to regulate health care service plans and insurers, thus directly impacting the management and delivery of health services. Given that the use of AI specifically mentioned refers to its application in health care claims, the relevance to this sector is quite pronounced. Other sectors, such as Politics and Elections, Government Agencies and Public Services, and Private Enterprises, Labor, and Employment, could have tangential relevance but lack explicit references in the text. Therefore, the Healthcare sector receives a high score.
Keywords (occurrence): artificial intelligence (4)
Description: Relative to prohibiting the unlawful distribution of misleading synthetic media.
Summary: The bill prohibits the unlawful distribution of misleading synthetic media, defining penalties for unauthorized and misleading use, particularly related to elections, to protect individuals and electoral integrity.
Collection: Legislation
Status date: Dec. 11, 2023
Status: Introduced
Primary sponsor: Linda Massimilla
(11 total sponsors)
Last action: Refer for Interim Study: Motion Adopted Voice Vote 03/14/2024 House Journal 8 P. 5 (March 14, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation centers on the unlawful distribution of misleading synthetic media and explicitly links the definition of synthetic media to artificial intelligence algorithms. This directly relates to the 'Social Impact' category as it addresses potential harm from misleading AI-generated content and its implications for public trust and election integrity. It also connects to 'Data Governance' since unauthorized usage of AI to create misleading content can involve management of data rights and personal consent. The accountability provisions and penalties in the bill align with 'System Integrity,' as they establish clear rules for AI systems that could mislead individuals in significant ways, which involves transparency and control. These measures also signal an effort to comply with standards for distributing AI-generated content, giving some relevance to Robustness. Overall, this legislation addresses both the societal consequences of AI-generated media and accountability within AI governance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
The text is closely related to the sector of politics and elections, as it explicitly speaks about misleading synthetic media that can influence election outcomes. It addresses the deployment of AI in creating media that could harm electoral integrity, reflecting legislative intent in regulating AI's role in politics. Furthermore, it implicates government agencies and public services as the enforcement and compliance measures would likely involve public bodies. However, less direct relevance to other sectors such as healthcare or private enterprises suggests that while the bill intersects with several sectors, its core focus remains on political implications and public governance.
Keywords (occurrence): artificial intelligence (1) synthetic media (22)