Description: Relating to the disclosure of information with regard to artificial intelligence.
Summary: The bill requires Texas-based companies using artificial intelligence to disclose specific information about their AI models, including functions, third-party inputs, and any modifications. It aims to enhance transparency and accountability in AI services.
Collection: Legislation
Status date: April 24, 2025
Status: Engrossed
Primary sponsor: Bryan Hughes (sole sponsor)
Last action: Received from the Senate (April 25, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses the disclosure of information related to artificial intelligence, specifically the obligations to inform individuals about AI models used in services. It addresses the societal impact of AI by advocating transparency, which could lead to better accountability and trust among users. This aligns with the Social Impact category. Additionally, the requirements for disclosure, data management, and cooperation with regulatory authorities pertain to Data Governance. The focus on ensuring compliance and the integrity of the AI systems through oversight relates to System Integrity. However, the text does not prioritize performance benchmarks or auditing measures that would be associated with Robustness. Therefore, these categories are scored accordingly.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation primarily addresses the regulation of AI use within private enterprises that serve individuals, particularly those generating substantial revenue. While it implicates business practices, it does not specifically target areas like political campaigns or judicial applications. The focus on service delivery also pertains significantly to Government Agencies and Public Services. However, the legislation does not explicitly mention sectors such as Healthcare or Academic Institutions, so its relevance is limited to a few sectors.


Keywords (occurrence): artificial intelligence (9)

Description: Establish the Commonwealth Artificial Intelligence Consortium Task Force to design the needs, collect data, develop artificial intelligence solutions, foster innovation and competitiveness, promote artificial intelligence literacy, and ensure trusted artificial intelligence development and governance; establish task force membership; require the task force to meet as needed; require the task force to submit its findings and recommendations to the Legislative Research Commission by November 21...
Summary: The bill establishes the Commonwealth Artificial Intelligence Consortium Task Force in Kentucky to foster collaboration among stakeholders, develop AI solutions tailored to local needs, and promote innovation and literacy in AI.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Amanda Mays Bledsoe (sole sponsor)
Last action: to Committee on Committees (S) (March 6, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the establishment of a task force focused on artificial intelligence (AI) and its implications for the Commonwealth. It discusses the potential of AI to revolutionize industries, improve lives, drive economic growth, and enhance services, which speaks to the social impact category. The mention of promoting AI literacy and ensuring trusted development aligns with governance considerations under social impact. The task force is also tasked with collecting data and developing AI solutions, which can relate to data governance, especially regarding the management of data used in AI systems. The focus on industry collaboration and innovation touches upon aspects of system integrity. However, there are no specific mentions of benchmarking or auditing AI systems, which could tie into robustness. Overall, categories related to social impact and data governance are highly relevant due to the focus on how AI affects society and how data should be managed in AI contexts.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The text emphasizes forming a consortium task force that brings together state and local governments, educational institutions, healthcare providers, and industry stakeholders to address AI's implementation and its impact, which indicates relevance to several sectors. Healthcare is specifically mentioned, including attention to rural healthcare challenges. Government agencies are directly involved in establishing the task force, showing its implications for governance, and the focus on innovation and competitiveness pertains to private enterprises. The involvement of educational institutions also lends relevance to Academic and Research Institutions. Hence, Government Agencies and Public Services, Healthcare, Private Enterprises, and Academic and Research Institutions are relevant based on the direct implications of AI in these domains. Sectors such as Politics and Elections, Judicial System, Nonprofits and NGOs, and International Cooperation have lesser relevance because the text does not explicitly address these areas. The scores reflect this context.


Keywords (occurrence): artificial intelligence (16) machine learning (1)

Description: To amend sections 3517.153, 3517.154, 3517.155, 3517.993, and 3599.40 and to enact section 3517.24 of the Revised Code to regulate the dissemination of deceptive and fraudulent synthetic media for the purpose of influencing the results of an election.
Summary: This bill aims to regulate the use and dissemination of deceptive synthetic media designed to influence election outcomes in Ohio, ensuring transparency and accountability in election-related communications.
Collection: Legislation
Status date: June 12, 2025
Status: Introduced
Primary sponsor: Joseph Miller (10 total sponsors)
Last action: Introduced (June 12, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The legislation clearly addresses the implications of synthetic media, with a specific focus on its potential to mislead individuals during elections, which aligns with the Social Impact category. The text discusses consumer protections regarding deceptive media and accountability measures for those who disseminate such media, indicating that it is relevant to societal harm and misinformation caused by AI-generated content. The Data Governance category is also relevant since it addresses requirements for transparency in synthetic media, focusing on maintaining the integrity of data presented to the public by mandating disclosures about AI manipulation. System Integrity is somewhat relevant as it emphasizes the need for oversight in the release of AI-generated synthetic media, but it is less focused on security and technical standards. Robustness is not particularly relevant as the legislation does not discuss performance benchmarking of AI systems but focuses on regulation of deceptive practices. Overall, the strongest relevance is found in the Social Impact and Data Governance categories, followed by System Integrity.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

This legislation directly pertains to the Politics and Elections sector, as it regulates the use of AI in electoral processes by addressing the dissemination of deceptive synthetic media in political campaigns. The implications of synthetic media for misleading voters are crucial to ensuring fair electoral practices. The relevance to Government Agencies and Public Services is moderate: enforcement and oversight involve state bodies like the Ohio elections commission, but these mechanisms are fundamentally tied to electoral integrity rather than general public services. The legislation does not specifically address the Judicial System; Healthcare; Private Enterprises, Labor, and Employment; Academic and Research Institutions; International Cooperation and Standards; Nonprofits and NGOs; or Hybrid, Emerging, and Unclassified sectors. Thus, the most significant relevance is in the Politics and Elections sector.


Keywords (occurrence): artificial intelligence (2) synthetic media (12)

Description: A BILL to be entitled an Act to amend Part 2 of Article 6 of Chapter 2 of Title 20 of the Official Code of Georgia Annotated, relating to competencies and core curriculum under the "Quality Basic Education Act," so as to provide that, beginning in the 2031-2032 school year, a computer science course shall be a high school graduation requirement; to provide for certain computer science courses to be substituted for units of credit graduation requirements in certain other subject areas; to prov...
Summary: The Quality Basic Education Act mandates computer science as a high school graduation requirement starting in 2031, addressing critical workforce needs and promoting technology education in Georgia.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Bethany Ballard (5 total sponsors)
Last action: House Hopper (Feb. 18, 2025)

Category:
Societal Impact (see reasoning)

The text explicitly addresses the need to include computer science in the education curriculum, emphasizing the importance of programming, algorithmic processes, artificial intelligence, and the development of logical critical thinking skills. It tackles issues such as the low percentage of high school graduates taking computer science courses, which directly pertains to the impact of AI on educational standards and workforce readiness. Thus, the discussions around AI's role in education and the need for skills relevant to the current job market support strong relevance to 'Social Impact'. However, while it does mention computer science concepts such as AI, it does not delve into data management, system integrity, or benchmarks that are central to 'Data Governance,' 'System Integrity,' or 'Robustness,' resulting in a lower relevance for these categories.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The focus on education reform through the mandatory inclusion of computer science courses implies a significant impact on 'Academic and Research Institutions', as it highlights skills needed by future generations. The emphasis on computer science also has implications for 'Private Enterprises, Labor, and Employment', since it relates to preparing students for a tech-driven job market. Relevance to other sectors, such as 'Healthcare' and 'Government Agencies and Public Services', is minimal, as the text does not directly engage with them. The scores therefore reflect this focus on education and labor preparedness.


Keywords (occurrence): artificial intelligence (1) algorithm (1)

Description: To implement the recommendations of the Department of Transportation concerning the Connecticut Plan Coordinate System, an autonomous vehicle pilot program, crosswalks, light rail transit signals, highway service signs, federal surface transportation urban program funding, rail facilities and transit districts.
Summary: The bill implements various transportation recommendations, focusing on regulations for transportation network companies, traffic enforcement, laser projection at aircraft, and improving small harbor projects, enhancing overall transportation safety and infrastructure in Connecticut.
Collection: Legislation
Status date: June 10, 2025
Status: Passed
Primary sponsor: Transportation Committee (4 total sponsors)
Last action: Public Act 25-65 (June 10, 2025)

Category:
Societal Impact (see reasoning)

The text addresses automated traffic enforcement safety devices and includes an autonomous vehicle pilot program, which are directly related to AI technologies in traffic management. The mention of 'automated' suggests a use of AI for traffic control and enforcement, relevant to the Social Impact category due to implications for public safety and the influence on transportation systems. The need for safety, accountability, and the potential effects of these technologies on society resonate with topics such as fairness, public trust, and automated decision-making. However, specific references to data governance, system integrity, or robustness are minimal. Therefore, it connects primarily with the social impacts of AI, making it very relevant to that category while moderately relevant to others based on the implications of automated systems.


Sector:
Government Agencies and Public Services (see reasoning)

This text primarily engages with Government Agencies and Public Services through its focus on the Department of Transportation and state legislation concerning transportation networks and safety. The regulation of traffic signals and other controlled environments connects to governmental oversight and operational improvements. It does not closely relate to the other sectors since there are no explicit mentions of political campaigns, healthcare provisions, or specific provisions for judicial systems, private enterprises, or NGOs. Therefore, Government Agencies and Public Services holds the most relevance among the sectors named.


Keywords (occurrence): automated (2)

Description: As introduced, creates a civil and criminal action for individuals who are the subject of an intimate digital depiction that is disclosed without the individual's consent under certain circumstances. - Amends TCA Title 28 and Title 39, Chapter 17.
Summary: The "Preventing Deepfake Images Act" amends Tennessee law to prohibit unauthorized use of deepfake images or likeness, establishing civil and criminal penalties for violations without consent, particularly in intimate contexts.
Collection: Legislation
Status date: May 15, 2025
Status: Passed
Primary sponsor: William Lamberth (29 total sponsors)
Last action: Comp. became Pub. Ch. 466 (May 15, 2025)

Category:
Societal Impact (see reasoning)

The text explicitly focuses on the unauthorized use of deepfake technology, which falls directly under concerns about AI and its societal implications. The legislation addresses issues such as consent, the potential for emotional and reputational harm caused by AI-generated content, and encourages accountability among individuals and entities that create or distribute such content. Due to its strong association with AI's impact on society, it is particularly relevant to the 'Social Impact' category. The legislation does not specifically relate to data governance, system integrity, or robustness, as these aspects are not directly addressed in the context of deepfakes or AI-generated content manipulation.


Sector: None (see reasoning)

This legislation pertains to the use of deepfake technology and its implications for individual rights, primarily affecting the personal and social realms. It does not specifically address political processes, government services, judicial systems, healthcare applications, employment practices, academic contexts, international standards, or nonprofit use. Although it could tangentially connect to politics due to mention of legislative proceedings, the core function of the bill revolves around personal safety and consent rather than these sectors. Therefore, it aligns best with the concept of personal rights rather than any formalized sector listed.


Keywords (occurrence): artificial intelligence (1) deepfake (2)

Description: Creates the Illinois High-Impact AI Governance Principles and Disclosure Act. Makes findings. Defines terms. Requires the Department of Innovation and Technology to adopt rules regulating businesses that use AI systems to ensure compliance with the 5 principles of AI governance. Lists the 5 principles of AI governance. Requires the Department to adopt rules to ensure that a business that uses an AI system publishes a report on the business's website, with certain requirements. Provides for a ...
Summary: The Illinois High-Impact AI Governance Principles and Disclosure Act establishes regulations for businesses using AI, focusing on safety, transparency, accountability, fairness, and contestability, while requiring public disclosure of compliance.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Janet Yang Rohr (sole sponsor)
Last action: Referred to Rules Committee (Feb. 18, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text explicitly pertains to AI governance principles, highlighting the societal concerns around AI, including biases, transparency, and accountability. The principles outlined, such as Safety, Transparency, and Fairness, align well with the objectives of the Social Impact category, as they focus on protecting individuals and communities from potential harms caused by AI systems. The requirement for public disclosures and the establishment of civil penalties for violations indicate strong considerations for both accountability and consumer protections in the usage of AI. Therefore, it is very relevant to the Social Impact category. Data governance is moderately relevant because the text mandates compliance with AI governance principles and the need for public disclosures regarding the design and operation of AI systems, which indirectly relates to data management and accuracy, although it does not directly address data collection or permissions. The emphasis on accountability and transparency suggests a relevance to System Integrity, as this ensures secure practices in AI operations. Robustness is less relevant since the text does not delve into performance benchmarks or auditing structures for AI systems, resulting in a lower score for this category.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The legislation has implications across multiple sectors, primarily Government Agencies and Public Services, as it regulates AI use and governance in business contexts that likely involve public interaction and oversight. It indirectly connects to the Judicial System through the accountability provisions and civil penalties for noncompliance, which may necessitate legal processes. However, it does not explicitly address political, healthcare, or academic applications. The mention of businesses also indicates a connection to Private Enterprises, Labor, and Employment, but the bill concerns governance more than employment conditions or competitive practices. Therefore, the most relevant sectors are Government Agencies and Public Services, followed by the Judicial System and Private Enterprises.


Keywords (occurrence): artificial intelligence (2) machine learning (1)

Description: As enacted, enacts the "Tennessee Artificial Intelligence Advisory Council Act." - Amends TCA Title 4.
Summary: The bill establishes the Tennessee Artificial Intelligence Advisory Council to create an action plan for effective AI use in state government, enhancing service delivery and economic growth while ensuring responsible practices.
Collection: Legislation
Status date: May 29, 2024
Status: Passed
Primary sponsor: Patsy Hazlewood (3 total sponsors)
Last action: Effective date(s) 05/21/2024 (May 29, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses the creation of the Tennessee Artificial Intelligence Advisory Council, which focuses on guiding the state's use of artificial intelligence to improve government services and leverage AI for economic benefits. The emphasis on ethical use, economic implications, and transparency aligns closely with the impact of AI on society and individuals (Social Impact), as well as with the governance and accuracy of AI data handling (Data Governance). Furthermore, references to governance frameworks and the evaluation of AI risks speak to System Integrity. The document does not delve deeply into performance benchmarking or compliance standards, so Robustness is less relevant than the other categories.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The bill is highly relevant to several sectors, particularly Government Agencies and Public Services, as it directly pertains to the use of AI in government. The focus on improving the efficiency of state and local government services, together with attention to workforce development and the economic implications of AI, suggests a somewhat relevant connection to Private Enterprises, Labor, and Employment. There are also touchpoints on education and research related to AI, which lend some relevance to Academic and Research Institutions. Other sectors, such as Politics and Elections, Healthcare, and the Judicial System, do not apply directly to the text's content and thus receive lower scores.


Keywords (occurrence): artificial intelligence (22)

Description: An act to add and repeal Section 12817 to the Government Code, relating to artificial intelligence.
Summary: Senate Bill No. 579 establishes a working group to assess the role, benefits, and risks of artificial intelligence in mental health, ensuring ethical use and producing reports for legislative guidance by 2030.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Steve Padilla (sole sponsor)
Last action: Read second time and amended. Re-referred to Com. on APPR. (March 26, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses mental health in relation to artificial intelligence, focusing on how AI can improve mental health outcomes, as well as assessing ethical standards and potential risks of using AI in mental health settings. This direct emphasis on the societal implications and individual impact of AI technologies places it strongly within the 'Social Impact' category. Additionally, the act involves the evaluation and management of data and frameworks concerning AI tools in mental health, which suggests relevance to 'Data Governance' as well. The mention of appointing a working group implies a level of oversight regarding system integrity, but this is less explicit than the implications for social impact and data governance. While there are elements that could touch on robustness, such as references to best practices, they are not as prominent. Overall, the text lends itself best to the 'Social Impact' category for its clear focus on individual well-being and ethical implications of AI in mental health.


Sector:
Healthcare
Academic and Research Institutions (see reasoning)

The legislation targets the use of artificial intelligence in mental health, which directly relates to the healthcare sector. By evaluating AI's role in treatment and diagnosis, addressing potential risks, and proposing training frameworks for mental health professionals, it highlights its importance in healthcare settings. The focus on stakeholder engagement and input suggests the bill's aim to inform healthcare practices and regulatory measures. Although there are components that could overlap with potential implications for government agencies, the primary emphasis remains within the healthcare sector, thus solidifying its classification in that field.


Keywords (occurrence): artificial intelligence (15) automated (1)

Description: An act to add Section 38760 to the Vehicle Code, relating to vehicles.
Summary: The bill requires manufacturers of autonomous vehicles in California to report collisions and disengagements when operating in autonomous mode, enhancing transparency and safety regulations for such vehicles.
Collection: Legislation
Status date: Aug. 28, 2024
Status: Enrolled
Primary sponsor: Matt Haney (2 total sponsors)
Last action: Senate amendments concurred in. To Engrossing and Enrolling. (Ayes 65. Noes 4.). (Aug. 28, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text focuses on autonomous vehicles and their regulation, particularly incident reporting. Key terms related to AI, such as 'autonomous mode', indicate relevance to AI's social impact through implications for safety, liability, and discrimination, particularly regarding vulnerable road users. The reporting and oversight requirements suggest a framework for accountability and safety in AI operations that affects individuals and society as a whole; understanding how AI technologies can harm or benefit users aligns with the Social Impact category, indicating strong relevance. The Data Governance category is also relevant, as the bill governs the collection and management of incident data, including mandates for transparent reporting. System Integrity is relevant because the provisions describe operational performance requirements and mandate a manual override in problematic situations; however, the focus is primarily on reporting and regulation rather than internal security measures for the AI systems themselves, which limits relevance in this category. The Robustness category is less applicable, since the text does not address performance benchmarks for AI systems and instead focuses on reporting mechanisms.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily addresses legislation concerning autonomous vehicles, which is directly relevant to several sectors. Relevance to Politics and Elections is limited, as the text does not discuss AI's role in elections or political campaigns. Government Agencies and Public Services is highly relevant, since the DMV and other agencies must manage reports and data under the incident-reporting requirements for autonomous vehicles. The Judicial System is slightly relevant insofar as the bill touches on accountability, though it primarily focuses on vehicle regulation rather than judicial applications. The Healthcare sector is not applicable, as there is no mention of healthcare applications. Within Private Enterprises, Labor, and Employment, the bill reflects implications for manufacturers and their operational obligations, but it does not strongly address employment or corporate governance. Academic and Research Institutions have minor relevance, as the legislation does not engage educational contexts specifically, even though innovations may come from research. International Cooperation and Standards is not meaningfully mentioned and thus scores low. Nonprofits and NGOs have little relevance unless they are involved in advocacy or disability issues related to the legislation, while Hybrid, Emerging, and Unclassified could apply given the innovative nature of autonomous vehicles but lacks a strong basis here.


Keywords (occurrence): automated (2) autonomous vehicle (41)

Description: Artificial Intelligence Act
Summary: The Artificial Intelligence Act mandates documentation, risk assessment, and transparency for high-risk AI systems to prevent algorithmic discrimination, ensuring accountability for developers and deployers in New Mexico.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Christine Chandler (4 total sponsors)
Last action: HCPAC: Reported by committee with Do Pass recommendation (Feb. 3, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text is the 'Artificial Intelligence Act' and directly pertains to various aspects of AI regulation. It addresses algorithmic discrimination, requiring developers to be accountable for their AI outputs, and mandates risk management policies which influence social dynamics. For Data Governance, it emphasizes the need for complete documentation regarding data used in AI systems, addressing any potential biases or infringements, aligning with consumer privacy and accurate data management standards. System Integrity is a key focus as it outlines obligations for transparency in AI usage and oversight policies. Robustness is present as the Act sets frameworks for impact assessment and performance evaluation of AI, ensuring adherence to necessary benchmarks for safety and effectiveness. Each category pertains to the themes present in the text, reflecting the broader implications of the legislation on society, data handling, system reliability, and standardization in AI performance.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards (see reasoning)

This Act has extensive implications across multiple sectors. In 'Politics and Elections', it sets the stage for how AI can be regulated within electoral contexts, safeguarding against algorithmic biases that can influence outcomes. 'Government Agencies and Public Services' is relevant, as it establishes regulatory frameworks that could affect AI deployment by public institutions. The 'Judicial System' is implicated due to provisions for citizens to seek civil action based on AI-related grievances, reflecting a concern for legal accountability in AI use. 'Healthcare' is significantly addressed, given the definitions and implications surrounding AI in delivering health services, ensuring ethical application. The Act also speaks to 'Private Enterprises, Labor, and Employment' by enforcing standards that affect corporate governance and labor practices in the face of AI implementation. 'Academic and Research Institutions' would also be directly relevant due to the emphasis on transparency and rigorous testing protocols that can inform research and advancements in AI. International cooperation issues may arise due to the multi-state implications of implementing such standards. Thus, the Act is of considerable relevance across most sectors, particularly those intersecting with AI's influence on society.


Keywords (occurrence): artificial intelligence (79)

Description: Camera usage prohibited for traffic safety enforcement, and previous appropriation cancelled.
Summary: The bill prohibits the use of traffic safety cameras for enforcing traffic laws in Minnesota, cancels funding for related programs, and repeals existing regulations on such systems.
Collection: Legislation
Status date: March 12, 2025
Status: Introduced
Primary sponsor: Drew Roach (6 total sponsors)
Last action: Introduction and first reading, referred to Transportation Finance and Policy (March 12, 2025)

Category:
System Integrity (see reasoning)

The text primarily addresses regulations related to traffic safety cameras, specifically prohibiting their use and outlining associated appropriations and definitions. Although it mentions 'automated license plate readers' and a 'traffic safety camera system', which could imply relevance to AI, the context does not deeply explore how these systems use AI technology, algorithms, or machine learning. While the text touches on automation and data capture within the law, its overarching focus is on prohibitory regulation rather than a comprehensive treatment of the social impact of AI, data governance, system integrity, or robustness in AI systems.


Sector: None (see reasoning)

This legislation does not relate distinctly to any specific sector that employs AI as defined in the sector descriptions, since its focus is on traffic safety enforcement mechanisms rather than broader applications across sectors. The mention of cameras and automated systems could initially suggest relevance to public services or law enforcement, but the bill prohibits their use rather than delineating guidelines or standards for applying AI in these sectors. The core intention is regulatory in nature, centering on prohibition.


Keywords (occurrence): automated (3)

Description: Relative to classified workers.
Summary: The bill urges federal legislation to secure rights for classified workers, ensuring safe working conditions, competitive wages, job security, and access to benefits and professional development opportunities.
Collection: Legislation
Status date: March 3, 2025
Status: Introduced
Primary sponsor: Sabrina Cervantes (3 total sponsors)
Last action: Read second time and amended. Ordered to third reading. (March 27, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses the rights and conditions of classified workers, focusing on compensation, working conditions, job security, and the impact of electronic monitoring, data, algorithms, and artificial intelligence technology on their jobs. While it mentions AI and technology, it does so mainly in the context of seeking worker rights and protections related to these technologies, and it does not detail how AI specifically affects their roles or the systems they work with. The relevance of the categories is therefore assessed as follows: the Social Impact category is relevant because the bill concerns workers' rights and safety, which is crucial given AI's potential impact on these areas. Data Governance is relevant given the mention of algorithms and data collection in the workplace, although it is not a primary focus. System Integrity has some relevance due to the mention of monitoring and the need for safeguards, but it is not deeply explored. Robustness is not applicable, since the text does not address performance benchmarks or compliance for AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text discusses classified workers in educational settings, emphasizing their rights and how AI and technology may impact their work environments. The sector of Government Agencies and Public Services is relevant as it concerns employees working in public education systems. Private Enterprises, Labor, and Employment is relevant due to the discussion of worker rights and employment conditions; however, it is more focused on public sector workers. Other sectors like Politics and Elections or Healthcare do not find a strong connection in the context of this resolution.


Keywords (occurrence): artificial intelligence (1)

Description: An act to amend Section 1384 of the Health and Safety Code, and to amend Section 10127.19 of the Insurance Code, relating to health care coverage.
Summary: Assembly Bill 682 mandates health care service plans and insurers in California to report detailed monthly claims data, including denials and reasons for them. It aims to enhance transparency and accountability in health care coverage.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Liz Ortega (2 total sponsors)
Last action: From printer. May be heard in committee March 17. (Feb. 15, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the use of Artificial Intelligence (AI) in the processing and adjudication of health care claims within the scope of health care coverage reporting. It requires health care service plans to report the number of claims processed using AI. This connection suggests implications for consumer protections and accountability in the context of AI. Given that AI can impact individuals through automated decisions in health care, issues relating to fairness, bias, and consumer protections are pertinent. Hence, the Social Impact category has significant relevance. The Data Governance category is also relevant due to its focus on reporting accuracy and the inclusion of claims processing data that may involve AI, addressing data collection protocols. The System Integrity category is relevant as it involves measures of transparency and oversight of AI use in claims processing. However, the Robustness category appears less relevant since the text primarily focuses on reporting rather than the performance benchmarks or certification of AI systems. Overall, the text mainly pertains to social implications, governance of data, and system integrity related to health care AI applications.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text is highly relevant to the Healthcare sector, since it specifically deals with health care coverage, reporting requirements, and the incorporation of AI into claims processing and adjudication. The legislation aims to regulate health care service plans and insurers, directly impacting the management and delivery of health services. Because the AI use mentioned refers specifically to its application in health care claims, the relevance to this sector is pronounced. Other sectors, such as Politics and Elections, Government Agencies and Public Services, and Private Enterprises, Labor, and Employment, could have tangential relevance but lack explicit references in the text. Therefore, the Healthcare sector receives a high score.


Keywords (occurrence): artificial intelligence (4)

Description: Relative to prohibiting the unlawful distribution of misleading synthetic media.
Summary: The bill prohibits the unlawful distribution of misleading synthetic media, defining penalties for unauthorized and misleading use, particularly related to elections, to protect individuals and electoral integrity.
Collection: Legislation
Status date: Dec. 11, 2023
Status: Introduced
Primary sponsor: Linda Massimilla (11 total sponsors)
Last action: Refer for Interim Study: Motion Adopted Voice Vote 03/14/2024 House Journal 8 P. 5 (March 14, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The legislation centers on the unlawful distribution of misleading synthetic media and explicitly links the definition of synthetic media to artificial intelligence algorithms. This directly relates to the 'Social Impact' category, as it addresses potential harm from misleading AI-generated content and its implications for public trust and election integrity. It also connects to 'Data Governance', since unauthorized use of AI to create misleading content involves the management of data rights and personal consent. The accountability and penalty provisions align with 'System Integrity', as the bill seeks to establish clear rules for AI systems that could significantly mislead individuals, which involves transparency and control. These measures also amount to a compliance effort toward standards for AI content distribution, supporting the Robustness category. Overall, the legislation addresses both the societal consequences of AI-generated media and accountability within AI governance.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)

The text is closely related to the Politics and Elections sector, as it explicitly addresses misleading synthetic media that can influence election outcomes. It targets the use of AI to create media that could harm electoral integrity, reflecting legislative intent to regulate AI's role in politics. It also implicates Government Agencies and Public Services, since enforcement and compliance measures would likely involve public bodies. The lesser direct relevance to other sectors, such as healthcare or private enterprises, indicates that while the bill intersects with several sectors, its core focus remains on political implications and public governance.


Keywords (occurrence): artificial intelligence (1) synthetic media (22)

Description: Lower Healthcare Costs
Summary: The bill aims to reduce healthcare costs and increase price transparency in North Carolina, requiring providers to disclose pricing information to help consumers make informed choices and promote competition.
Collection: Legislation
Status date: March 27, 2025
Status: Engrossed
Primary sponsor: Jim Burgin (22 total sponsors)
Last action: Engrossed (March 27, 2025)

Category: None (see reasoning)

The text is primarily focused on lowering healthcare costs and increasing price transparency within the healthcare system in North Carolina. While it discusses transparency and efficiency in healthcare delivery, it does not explicitly reference Artificial Intelligence (AI) or related technologies like algorithms, automated decision systems, or data processing techniques related to AI. Thus, the relevance to the categories of Social Impact, Data Governance, System Integrity, and Robustness is minimal. The discussion on healthcare transparency does suggest potential indirect applications of data analysis and management but does not align strongly with the specified AI categories.


Sector:
Healthcare (see reasoning)

The text is most relevant to the Healthcare sector, as it directly addresses issues pertaining to healthcare costs, price transparency, and regulatory measures for health service facilities. However, it does not mention AI specifically in relation to healthcare applications or technologies that could be categorized under the legislation affecting healthcare. Given the lack of AI-related content, it scores a moderate relevance in the healthcare context but does not cross into relevance for other sectors outlined.


Keywords (occurrence): artificial intelligence (1) algorithm (1)

Description: An act relating to the use of synthetic media in elections
Summary: This bill requires the disclosure of deceptive synthetic media related to elections within 90 days of voting, aiming to protect electoral integrity and inform voters about manipulated content.
Collection: Legislation
Status date: March 20, 2025
Status: Engrossed
Primary sponsor: Ruth Hardy (7 total sponsors)
Last action: Read 3rd time & passed (March 20, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily discusses the use of synthetic media in the electoral process, with a specific emphasis on the disclosure of deceptive and fraudulent uses of such media. This clearly pertains to the social impact of AI, as it addresses potential misinformation and the integrity of elections, which are significant societal issues. Data governance is relevant as the bill discusses the management of information integrity in synthetic media, emphasizing accountability and transparency in communications that can mislead voters. System integrity is involved due to the need for regulatory measures that ensure transparent disclosure concerning the use of AI-generated content in political communications, while robustness is less applicable as the text deals primarily with disclosure rather than performance metrics or benchmarks. Overall, the relevance of social impact is highest, followed closely by data governance and system integrity, while robustness is less emphasized.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text explicitly concerns the regulation of synthetic media within an electoral context, making it directly relevant to the Politics and Elections sector. It also relates to Government Agencies and Public Services, as the enforcement of this legislation would fall under state or federal authorities that oversee elections, ensuring compliance with these regulations. Other sectors such as Healthcare, Judicial System, and Private Enterprises, Labor, and Employment, don't apply in this context, as they do not deal with synthetic media in elections. The relevance of Politics and Elections is thus very high, followed by a moderate association with Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (1) synthetic media (16)

Description: An act to add Chapter 40 (commencing with Section 22949.82) to Division 8 of the Business and Professions Code, relating to consumer protection.
Summary: The Fair Online Pricing Act prohibits businesses from setting prices based on specific device data, requiring clear disclosures for prices determined by algorithmic systems. It aims to enhance consumer protection in online pricing practices.
Collection: Legislation
Status date: May 28, 2025
Status: Engrossed
Primary sponsor: Aisha Wahab (sole sponsor)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on P. & C.P. (June 19, 2025)

Keywords (occurrence): artificial intelligence (2)

Description: Schools; subject matter standards; computer science courses; curriculum; rules; effective date; emergency.
Summary: House Bill 1304 mandates the inclusion of a computer science course in public school curriculums, ensures such courses count towards graduation requirements, and requires the State Department of Education to establish related rules.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Dick Lowe (2 total sponsors)
Last action: Referred to Common Education (Feb. 4, 2025)

Category:
Societal Impact (see reasoning)

The text focuses on establishing standards for computer science courses within school curricula. Since it explicitly refers to computer science, which is closely related to AI, it contains relevant content about the role of technology education in shaping future competencies in AI and its applications. However, the text does not directly address the ethical or social implications of AI, data governance, or system integrity in AI development, which diminishes its relevance to those categories.


Sector:
Academic and Research Institutions (see reasoning)

The text primarily discusses educational standards and curricula related to computer science. While it supports the education sector's adaptation to emerging technologies, it does not directly address how AI is regulated within the education framework or its implications for other sectors. Thus, while it touches on technology's role in education, its relevance to the specific sectors considered is minimal.


Keywords (occurrence): artificial intelligence (1)

Description: Requiring the National Center for School Mental Health at the University of Maryland School of Medicine, in consultation with the State Department of Education, to develop and publish a student technology and social media resource guide by the 2027-2028 school year; requiring the Governor to include an appropriation of $100,000 for fiscal year 2027 and $125,000 for fiscal years 2028 and 2029 in the annual budget bill; and requiring the Center to report on the expenditure of funds on or before...
Summary: House Bill 1316 mandates the creation of a youth-centric technology and social media resource guide for public school students, aimed at promoting safe and informed technology usage.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Sarah Wolek (4 total sponsors)
Last action: Second Reading Passed with Amendments (March 15, 2025)

Category:
Societal Impact (see reasoning)

The text relates to technology and social media in an educational context, with a specific mention of 'Artificial Intelligence products'. However, it primarily focuses on developing resources for students regarding technology use rather than on broader societal AI implications, data governance, system integrity, or benchmarking. While AI is acknowledged, its relevance is limited to the context of mental health and education rather than to extensive impacts or governance structures. Consequently, the Social Impact and Data Governance categories appear relevant, but the others are not significant given the text's emphasis on education and mental health rather than on those broader factors. On balance, the emphasis on technology usage in an educational setting rather than on AI governance supports only mid-level significance.


Sector:
Government Agencies and Public Services (see reasoning)

The text makes a pertinent mention of AI products within an educational resource guide, indicating potential relevance to the education sector, especially as it seeks to educate students on safe technology usage. However, the focus is more on student resources and mental health than on legislative actions that govern or regulate AI specifically. References to AI are limited to specific applications without broad application across multiple areas of governance or legislation, so the scoring reflects limited sector relevance. The text indicates a marginal connection to educational guidelines rather than a robust policy framework for integrating AI.


Keywords (occurrence): algorithm (1)