5015 results:


Description: Crimes and punishments; sexual obscenity; making certain acts unlawful; effective date.
Summary: The bill amends Oklahoma's obscenity laws to criminalize nonconsensual dissemination of private sexual images and artificially generated sexual depictions, establishing penalties for violations.
Collection: Legislation
Status date: April 30, 2025
Status: Enrolled
Primary sponsor: Toni Hasenbeck (4 total sponsors)
Last action: Sent to Governor (April 30, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text deals with the regulation of artificial intelligence in relation to obscenity and nonconsensual dissemination of private sexual images, specifically focusing on the implications of generative artificial intelligence. This legislation addresses the societal impacts of using AI-generated content in harmful ways, which ties directly into the Social Impact category. There are references to responsible use and labeling of AI, implying a need for accountability and ethical considerations regarding outputs, aligning with the Data Governance category as well. Additionally, the obligations imposed on dissemination processes introduce an aspect of System Integrity, ensuring that there are mechanisms for oversight and compliance in AI-generated content. However, there is limited emphasis on performance benchmarks or robustness of AI systems, which makes the Robustness category less relevant.


Sector:
Judicial system (see reasoning)

The text primarily addresses the implications of AI in the context of obscenity and creation of synthetic content, particularly within the realm of personal rights and privacy concerns. It discusses the legal frameworks surrounding the dissemination of both real and AI-generated sexual depictions. This aligns directly with the Judicial System sector, as it introduces legal definitions and consequences related to the use of technology in crimes against individuals. Conversely, while the text likely has implications for the healthcare and public services sectors regarding the wellbeing of individuals, it does not directly address their specific regulatory frameworks, leading to lower relevance scores for those sectors. Overall, the core concerns appear most relevant to the Judicial System.


Keywords (occurrence): artificial intelligence (4) automated (1)

Description: Criminalizing and creating a private right of action for the facilitation, encouragement, offer, solicitation, or recommendation of certain acts or actions through a responsive generative communication to a child, and relative to the termination of tenancy at the expiration of the tenancy or lease term.
Summary: The bill criminalizes and establishes a private right of action against entities that use AI communication to solicit harmful actions from children, ensuring accountability and protection for minors.
Collection: Legislation
Status date: March 28, 2025
Status: Engrossed
Primary sponsor: Sharon Carson (6 total sponsors)
Last action: Ought to Pass with Amendment 2025-2567h: Motion Adopted Regular Calendar 202-168 06/05/2025 House Journal 16 (June 5, 2025)

Keywords (occurrence): artificial intelligence (6) large language model (2)

Description: Generally revise usage of artificial intelligence in certain health insurance
Summary: The bill regulates the use of artificial intelligence by health insurance issuers, ensuring compliance with medical necessity criteria and protecting against discrimination, while promoting transparency and oversight.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Jill Cohenour (sole sponsor)
Last action: (H) Fiscal Note Requested (Feb. 18, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text is predominantly centered on the regulation of artificial intelligence usage in health insurance contexts. It explicitly outlines restrictions and requirements for health insurance issuers that employ AI and algorithmic systems, particularly in utilization review or management related to medical necessity. This focus places it directly under the Social Impact category, as it addresses consumer protections and discrimination concerns related to AI utilization in healthcare. The Data Governance category also applies as it emphasizes the compliance of AI with legal standards and proper handling of patient data. System Integrity and Robustness are less directly relevant since the text does not specifically tackle security protocols or performance benchmarks for AI systems; instead, it focuses on compliance and operational standards in the specific context of health insurance. Thus, Social Impact and Data Governance are scored higher due to their pivotal role in shaping the AI landscape in healthcare, while System Integrity and Robustness receive lower scores for their limited applicability here.


Sector:
Healthcare (see reasoning)

This bill is highly relevant to healthcare as it explicitly discusses the usage of AI within health insurance practices. It outlines the responsibilities and obligations that health insurance issuers have regarding AI, including ensuring equitable treatment, appropriate data use, and disallowing AI-driven bias. Given that the text specifically addresses AI applications in health insurance utilization review, it receives a high score under the Healthcare sector. It touches on regulations affecting interactions with AI, but it does not address any political, judicial, or research-oriented aspects, hence the lower scores in other sectors.


Keywords (occurrence): artificial intelligence (17) algorithm (14)

Description: To direct the Secretary of Agriculture to establish centers of excellence for agricultural security research, extension, and education, and for other purposes.
Summary: The American Agricultural Security Act of 2024 aims to establish centers of excellence for agricultural security research, education, and extension to enhance the U.S. agricultural sector's resilience against threats.
Collection: Legislation
Status date: May 17, 2024
Status: Introduced
Primary sponsor: Don Bacon (3 total sponsors)
Last action: Referred to the House Committee on Agriculture. (May 17, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text highlights the establishment of centers of excellence for research, extension, and education in agricultural security, with a specific inclusion of 'artificial intelligence' in the realm of 'digital agriculture.' This signals the recognition of AI's role in enhancing agricultural practices. Consequently, the relevance of Social Impact stems from potential societal benefits tied to AI in agriculture, including workforce development and community engagement. For Data Governance, while the text does not explicitly address data management or privacy associated with AI, the application of AI in agriculture would require careful data policies and governance. System Integrity is moderately relevant due to the emphasis on cybersecurity, which intersects with AI's capabilities in safeguarding agricultural data and processes. Robustness is less relevant as the text does not focus on performance benchmarks for AI systems directly. Overall, Social Impact and Data Governance receive higher relevance scores, whereas System Integrity is acknowledged as relevant due to the cybersecurity emphasis.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The mention of 'artificial intelligence' in the context of digital agriculture makes it relevant to the sector of Private Enterprises, Labor, and Employment, as AI technologies are poised to influence agricultural labor practices and industry standards. Government Agencies and Public Services sees relevance through the involvement of governmental departments and educational institutions in establishing excellence centers. Additionally, there might be connections to Academic and Research Institutions due to the emphasis on research and education activities. However, the text does not delve deeply into political implications, the judicial framework around AI, or specific healthcare applications, resulting in lower scores for those sectors. Overall, Private Enterprises, Labor, and Employment has significant applicability, while Government Agencies and Public Services and Academic and Research Institutions receive moderate scores.


Keywords (occurrence): artificial intelligence (1)

Description: Relating to the disclosure and use of artificial intelligence.
Summary: The bill establishes regulations for transparency in the use of artificial intelligence, requiring users to provide explanations for AI decisions, maintain best practices, and avoid bias, effective September 1, 2025.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Salman Bhojani (sole sponsor)
Last action: Filed (March 14, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

This text pertains directly to artificial intelligence, specifically with a focus on transparency, accountability, and implications for society in the context of AI usage. Elements such as detecting AI usage, explaining AI-based decisions, and preventing bias and discrimination within AI systems show a strong emphasis on the social impact of AI technologies. Additionally, this legislation addresses best practices and standards, indicating a degree of concern for system integrity. However, it does not delve deeply into data governance or performance benchmarks, focusing more on the transparency and ethical usage of AI.


Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The legislation outlines the use of AI in various sectors including business, social media, and political advertising. This crosses into several relevant sectors: the regulation of AI in advertising (which ties into Politics and Elections), the provision of goods and services through AI (which relates to Government Agencies and Public Services), and concerns over bias impacting individuals, which connect to the Judicial System and carry healthcare implications. The clarity of AI's use and the related accountability to users are also crucial for Private Enterprises, Labor, and Employment to ensure ethical practices. Overall, it covers a broad array of impacts across multiple sectors, though not deeply rooted in any single area.


Keywords (occurrence): artificial intelligence (10)

Description: AN ACT relating to health; prohibiting certain uses of artificial intelligence in public schools; requiring the Department of Education to develop a policy concerning certain uses of artificial intelligence; imposing certain restrictions relating to the marketing and programming of artificial intelligence systems; prohibiting certain persons from representing themselves as qualified to provide mental or behavioral health care; imposing certain restrictions relating to the use of artificial in...
Summary: This bill establishes restrictions on the use of artificial intelligence in public schools and mental health services, prohibiting certain AI applications to protect students and ensure qualified care, while developing related policies.
Collection: Legislation
Status date: June 1, 2025
Status: Enrolled
Primary sponsor: Jovan Jackson (2 total sponsors)
Last action: Enrolled and delivered to Governor. (June 1, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text addresses specific prohibitions on the use of artificial intelligence in public schools, particularly in the context of mental health support from school counselors, psychologists, and social workers. This indicates a direct impact of AI on social structures and individual experiences within educational settings, which ties into the social impact category by highlighting concerns related to mental health and the autonomy of educational professionals in dealing with such issues. Additionally, it imposes restrictions on how AI can be marketed and programmed, suggesting a regulatory approach to AI's role and influence, further supporting its relevance to the social implications of AI technology. Therefore, the Social Impact category should be scored highly due to its explicit discussion on mental health and AI. The Data Governance category could be considered relevant due to the implications of managing sensitive mental health information in the context of AI usage, but it's not the text's primary focus. The System Integrity category is of moderate relevance as it discusses oversight of AI functions within the educational context but lacks extensive regulatory detail. The Robustness category is less relevant as the text does not address performance benchmarks or compliance frameworks for AI systems. Overall, the text predominantly emphasizes social impact, with moderate relevance to system integrity and slight relevance to data governance.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text has a strong connection to the educational sector as it discusses the use of AI in public schools and the implications for mental health services provided to students. It clearly specifies limitations on AI's application within educational institutions, which ties closely to regulations affecting school operations. The relevance to Politics and Elections is low, as there is no mention of AI's role in political campaigns or electoral processes. The Government Agencies and Public Services sector holds moderate importance, as the bill shapes AI regulation within public education systems. The remaining sectors, such as the Judicial System and Healthcare, are not directly relevant in the context given, especially as the legislation refers primarily to public school settings. In summary, the strongest linkage is with the Academic and Research Institutions sector due to the bill's focus on school environments, with moderate mention of Government Agencies and Public Services, and low relevance for all other sectors.


Keywords (occurrence): artificial intelligence (31)

Description: An Act to Ensure Human Oversight in Medical Insurance Payment Decisions
Summary: The bill mandates that health insurance carriers cannot deny claims based solely on artificial intelligence decisions, requiring human physician oversight in review processes starting January 1, 2026.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Joseph Martin (9 total sponsors)
Last action: Received by the Secretary of the Senate on March 6, 2025 and REFERRED to the Committee on HEALTH COVERAGE, INSURANCE AND FINANCIAL SERVICES pursuant to Joint Rule 308.2 (March 6, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text directly addresses the use of artificial intelligence in the context of medical insurance decisions, focusing on how AI can or cannot influence coverage determinations. This brings forth implications regarding social impact, particularly concerning health outcomes and consumer protections. The requirement for human oversight and physician involvement exemplifies the importance of data governance regarding the accuracy and fairness of AI-driven decisions. The legislation's provisions also speak to system integrity by ensuring transparency in how AI is used in the decision-making process. However, it does not emphasize robustness in the development of benchmarks or performance metrics for AI applications. Overall, the legislation is highly relevant to the Social Impact, Data Governance, and System Integrity categories, with limited relevance to Robustness.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The text pertains particularly to the healthcare sector, as it details regulations on AI usage in medical insurance specifically. It also indirectly relates to government agencies since it mandates reporting and oversight by the state's Bureau of Insurance. While one could stretch its relevance to other sectors (like private enterprises through insurance carriers), its primary focus remains solidly within healthcare regulations. The explicit references to healthcare providers and the specifics of managing medical claims denote a strong alignment with the healthcare sector, whereas other sectors receive limited relevance.


Keywords (occurrence): artificial intelligence (8)

Description: An act to add Title 23 (commencing with Section 3273.72) to Part 4 of Division 3 of the Civil Code, relating to social media platforms.
Summary: Senate Bill No. 771 holds social media platforms liable for civil penalties if they violate personal rights laws related to hate crimes and discrimination, aiming to protect vulnerable communities.
Collection: Legislation
Status date: June 4, 2025
Status: Engrossed
Primary sponsor: Henry Stern (sole sponsor)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on P. & C.P. (June 19, 2025)

Keywords (occurrence): artificial intelligence (3) algorithm (2)

Description: Artificial Intelligence Developer Act established; civil penalty. Creates operating standards for developers and deployers, as those terms are defined in the bill, relating to artificial intelligence, including (i) avoiding certain risks, (ii) protecting against discrimination, (iii) providing disclosures, and (iv) conducting impact assessments and provides that the Office of the Attorney General shall enforce the provisions of the bill. The provisions of the bill related to operating standar...
Summary: The Artificial Intelligence Developer Act establishes regulations for developers and deployers of high-risk AI systems in Virginia, aiming to prevent algorithmic discrimination and ensure transparency through disclosures and risk assessments, with civil penalties for violations.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Michelle Maldonado (sole sponsor)
Last action: Continued to 2025 with substitute in Communications, Technology and Innovation by voice vote (Feb. 5, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text addresses various operating standards for developers and deployers of high-risk artificial intelligence systems. It discusses implications for algorithmic discrimination, risk management, impact assessments, and transparency, indicating a direct relevance to AI's social impact and system integrity. The legislation establishes a framework for addressing the risks AI poses to consumers, particularly regarding fairness and accountability, making it very relevant to the Social Impact category. Additionally, the focus on assessing and managing risks related to AI systems aligns with the System Integrity category. While data governance aspects are touched upon, the emphasis is mostly on operational standards and risk management rather than data collection and management regulations. Robustness focuses on performance and auditing standards, which are less covered in this text. Therefore, Social Impact and System Integrity categories garner higher relevance scores.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation strongly centers on the implications of AI systems in society, especially regarding consumer protections, algorithmic discrimination, and regulations on developers and deployers of these systems. Hence, it is highly relevant to the sector of Government Agencies and Public Services as it falls within the realm of public safety and consumer protection involving government oversight. Additionally, it has implications for Private Enterprises due to the expectations set on developers and deployers in the business context. While AI's role in the Judicial System and Healthcare sectors could connect, the text does not specifically address these areas, leading to lower relevance for those sectors. Therefore, the Government Agencies and Public Services sector scores high, while Private Enterprises receives a moderate score due to its implications in regulating business practices.


Keywords (occurrence): artificial intelligence (65) machine learning (2) foundation model (2)

Description: Ai Legislative Task Force
Summary: The bill establishes a Joint Legislative Task Force in Alaska to assess artificial intelligence's impact, evaluate its applications, address ethical concerns, and recommend policies for its responsible use.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: George Rauscher (2 total sponsors)
Last action: REFERRED TO FINANCE (April 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily discusses the establishment of a task force focused on artificial intelligence (AI) and its implications for various sectors and legislative oversight. The relevance of the categories is evaluated as follows: For Social Impact, the text explicitly mentions ethical considerations and the potential societal impacts of AI, making it extremely relevant. Data Governance is also very relevant as it addresses concerns about data privacy and security related to AI. System Integrity is moderately relevant given the focus on oversight and regulatory aspects but lacks direct mentions of security measures. Robustness is less relevant since the document doesn't significantly address performance benchmarks or auditing processes for AI systems, though some points about the responsible use of AI are implied.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation outlines the establishment of a task force concerning AI in various sectors, making it relevant across multiple sectors. For Politics and Elections, it is slightly relevant as it pertains to legislative activities regarding AI. Government Agencies and Public Services is highly relevant due to the focus on AI's applications in state government operations and public services. The healthcare sector is slightly relevant as healthcare is mentioned as a sector where AI could be integrated, but it doesn't delve deeply into healthcare-specific challenges or regulations. Other sectors, such as the Judicial System; Private Enterprises, Labor, and Employment; Academic and Research Institutions; International Cooperation and Standards; and Nonprofits and NGOs, are not explicitly addressed in this text, leading to low relevance scores. Overall, the highest relevance is noted in Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (10) machine learning (1)

Description: Creating the Right to Compute Act and requiring shutdowns of AI controlled critical infrastructure
Summary: The bill establishes the Right to Compute Act, safeguarding citizens' rights to use computational resources and mandating risk management policies for AI-controlled critical infrastructure to address public health and safety concerns.
Collection: Legislation
Status date: April 16, 2025
Status: Passed
Primary sponsor: Daniel Zolnikov (sole sponsor)
Last action: (S) Signed by Governor (April 16, 2025)

Keywords (occurrence): artificial intelligence (12) machine learning (3)

Description: Criminalize disclosure of certain explicit AI-generated media
Summary: The bill establishes penalties for disclosing explicit AI-generated media without consent, aimed at protecting individuals from emotional distress and harassment while defining explicit synthetic media and related offenses.
Collection: Legislation
Status date: April 24, 2025
Status: Enrolled
Primary sponsor: Laura Smith (sole sponsor)
Last action: (S) Sent to Enrolling (April 24, 2025)

Keywords (occurrence): artificial intelligence (1) synthetic media (18)

Description: Creates the Louisiana Atmospheric Protection Act (EG NO IMPACT See Note)
Summary: The Louisiana Atmospheric Protection Act prohibits weather modification activities, establishes penalties for violations, and creates the Atmospheric Protection Fund for collected fines, enhancing environmental oversight and enforcement.
Collection: Legislation
Status date: April 4, 2025
Status: Introduced
Primary sponsor: Kimberly Coates (2 total sponsors)
Last action: Scheduled for floor debate on 06/02/2025. (May 29, 2025)

Keywords (occurrence): machine learning (4)

Description: An act to add Chapter 25.1 (commencing with Section 22757.20) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: The LEAD for Kids Act aims to regulate AI systems used by or affecting children in California, establishing standards for their development, risk assessment, and safeguarding personal information to mitigate harm.
Collection: Legislation
Status date: June 2, 2025
Status: Engrossed
Primary sponsor: Rebecca Bauer-Kahan (2 total sponsors)
Last action: In Senate. Read first time. To Com. on RLS. for assignment. (June 3, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly establishes regulations specifically governing AI systems intended for children. Due to its focus on adverse impacts, risk assessments, and protections for minors, the relevance to the Social Impact category is extremely high. The regulation of AI systems underscores the importance of managing psychological and material harm caused by these technologies, directly aligning it with issues of fairness, accountability, and consumer protection in AI applications. The Data Governance category is also highly relevant, as the act discusses criteria for AI system classification and risk evaluation related to personal information, and establishes compliance requirements to ensure children's data privacy. The System Integrity category is moderately relevant, as it touches on oversight mechanisms but is not as focused on the inherent security or transparency of AI systems. Robustness is slightly relevant, mainly because the act implies performance benchmarks without specifically detailing any new benchmarks or audit standards for AI systems. Overall, this act is focused on ethical development and ensuring safety for children using AI technology, making it relevant for the Social Impact and Data Governance categories primarily.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

This legislation is particularly focused on children's interactions with AI technology, impacting several sectors related to their welfare. The most direct connection is with Government Agencies and Public Services, as it mandates the establishment of the LEAD for Kids Standards Board and outlines responsibilities for developers and deployers regulated by state authorities. The Healthcare sector is somewhat less relevant, though indirectly related to children's health impacts due to AI technology. The Private Enterprises, Labor, and Employment sector is relevant since it discusses developer obligations and business practices concerning AI products intended for children. Academic and Research Institutions relate to the act in terms of gathering relevant expertise for standards development. Other sectors like Politics and Elections, Judicial System, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not directly pertain to the focus of this legislation, resulting in lower relevance scores. This bill primarily influences sectors that are involved directly with child welfare and business practices governing AI technology aimed at minors.


Keywords (occurrence): artificial intelligence (24) chatbot (6)

Description: High-risk artificial intelligence; development, deployment, and use; civil penalties. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill has a delayed effective date of July 1, 2026.
Summary: The bill regulates high-risk artificial intelligence systems in Virginia, defining algorithmic discrimination, establishing operational standards for developers and deployers, and imposing civil penalties for non-compliance. It aims to protect consumers from discriminatory outcomes.
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Michelle Maldonado (24 total sponsors)
Last action: Fiscal Impact Statement from Department of Planning and Budget (HB2094) (March 7, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This text establishes requirements and standards for the development, deployment, and use of high-risk artificial intelligence systems, emphasizing accountability for algorithmic discrimination, consumer protection, and operational standards. The references to 'high-risk artificial intelligence systems' and 'algorithmic discrimination' highlight the potential social impact and regulatory measures required to prevent discrimination and protect individuals. Furthermore, it outlines safety and responsibility frameworks for developers and deployers of AI, making it highly relevant to all categories specified. The need for documentation, risk management plans, and standards compliance directly impacts social welfare, data governance, system integrity, and the robustness of AI systems.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)

The text mentions developers and deployers of high-risk AI, which could include various sectors like healthcare, public services, or private enterprises but does not specifically restrict itself to any single sector. The focus on algorithmic discrimination and consumer rights suggests relevance to various sectors, especially those directly interfacing with consumers (like healthcare and public services) and risk management in business environments. However, since the language is broad and does not focus exclusively on any one sector, the scores reflect general applicability rather than direct regulation within specific sectors.


Keywords (occurrence): artificial intelligence (138) machine learning (2) automated (1) algorithm (1) autonomous vehicle (1)

Description: Relating to an automated artificial intelligence review of library material purchased by public schools; providing an administrative penalty.
Summary: The bill mandates public schools to use automated AI to review library materials for sexual content before purchase, requiring parental consent for certain materials and imposing penalties for non-compliance.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Hillary Hickland (sole sponsor)
Last action: Filed (March 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses an automated artificial intelligence review process for evaluating library materials purchased by public schools. It details how AI will be used to assess whether materials should be flagged as sexually explicit or sexually relevant. Therefore, this text strongly intersects with the social implications of AI technology in education and libraries, dealing with issues such as censorship, the role of AI in decision-making about educational content, and the concerns surrounding children's access to material. This lends significant relevance to the Social Impact category. The text also directly references oversight and transparency protocols related to the AI review process, implicating aspects of System Integrity, as it mandates that the AI system undergo human verification and audits. Meanwhile, Data Governance is relevant due to the emphasis on the accuracy and bias mitigation requirements in the use and management of data by the AI system. Robustness appears less relevant since the focus is primarily on the operational procedures rather than performance benchmarks or evaluations of AI technology. Based on this reasoning, scores will reflect the substantial relevance of the text to the Social Impact, Data Governance, and System Integrity categories.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text's main application is within public education, focusing on how AI is deployed in libraries associated with public schools. Since it outlines the responsibilities of schools and the administrative oversight involved in the implementation of AI reviews, this primarily pertains to Government Agencies and Public Services. The sector of Academic and Research Institutions also has relevance, as it considers educational implications, guidelines around library materials, and the involvement of educators. However, there are only marginal mentions related to Politics and Elections, and other sectors such as Healthcare, Private Enterprises, and NGOs are not relevant to this particular bill. Therefore, the scores reflect the primary relevance to Government Agencies and Public Services, with a secondary yet notable connection to Academic and Research Institutions.


Keywords (occurrence): artificial intelligence (10) automated (12)

Description: An Act relating to elections; relating to voters; relating to voting; relating to voter registration; relating to election administration; relating to the Alaska Public Offices Commission; relating to campaign contributions; relating to the crimes of unlawful interference with voting in the first degree, unlawful interference with an election, and election official misconduct; relating to synthetic media in electioneering communications; relating to campaign signs; relating to voter registrat...
Summary: The bill addresses various aspects of election administration in Alaska, including voter registration, residency requirements, and voter roll maintenance, while enhancing election integrity and security measures.
Collection: Legislation
Status date: May 12, 2025
Status: Engrossed
Primary sponsor: Rules (sole sponsor)
Last action: REFERRED TO FINANCE (May 13, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text specifically mentions 'synthetic media in electioneering communications', which is directly related to AI technologies such as deepfakes and automated content generation. This indicates a clear concern regarding the implications of AI for the integrity of electoral processes and misinformation. Furthermore, it addresses voter registration procedures that are relevant to how AI might impact voting systems or voter privacy. Because this legislation falls within the realm of AI applications in public discourse, it is significantly relevant to social implications, specifically misinformation and public trust concerning electoral integrity.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text directly pertains to the political process, specifically in the context of elections and voter regulation. This connection to electoral procedures and concerns around the use of AI in campaign communications is critical. Additionally, the mention of synthetic media suggests implications for election integrity and public trust. While some elements touch upon the administration and management of voter information, the primary focus aligns with political processes, thus making the legislation especially pertinent to the Politics and Elections sector.


Keywords (occurrence): artificial intelligence (2) synthetic media (7)

Description: Specifies that hurricane mitigation grants funded through My Safe Florida Home Program may be awarded only under certain circumstances; requires DFS to require that certain mitigation improvements be made as condition of reimbursing homeowner approved for grant; increases surpluses required for certain insurers applying for their original certificates of authority & maintaining their certificates of authority; specifies prohibitions for persons who were officers or directors of insolvent insu...
Summary: The bill establishes hurricane mitigation grants for homeowners, stipulating conditions for inspections and improvements. It also regulates insurers regarding financial stability and claim processes, aiming to enhance hurricane preparedness and property safety in Florida.
Collection: Legislation
Status date: Feb. 28, 2025
Status: Introduced
Primary sponsor: Insurance & Banking Subcommittee (3 total sponsors)
Last action: CS Filed (April 10, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The legislation includes specific references to the use of AI systems, algorithms, and machine learning in the context of processing insurance claims related to hurricane mitigation. It makes it clear that while these technologies can assist in claims processing, they cannot be the sole basis for claim denials, which ties directly to consumer protection and the accountability of AI usage. As such, the relevance to 'Social Impact' is substantial due to implications on fairness and consumer protection. The mention of algorithms and AI in the insurance context pertains to how data is managed and decisions are made, linking it closely to 'Data Governance'. The stipulation for human oversight to finalize claim decisions establishes a concern for 'System Integrity', emphasizing the necessity of safeguards in AI applications. Although 'Robustness' does touch on performance metrics, the text leans more heavily into social implications and governance rather than setting new benchmarks for AI performance specifically.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation is strongly relevant to the 'Private Enterprises, Labor, and Employment' sector, specifically regarding the insurance industry and the use of AI in processing claims. The bill outlines how insurers can utilize AI technologies while imposing regulations and requirements on their use, which is indicative of the influence of AI in shaping business practices. It also indirectly relates to 'Government Agencies and Public Services' since the Department of Financial Services oversees these functions. However, there is limited mention of 'Healthcare', 'Judicial System', or 'Academic and Research Institutions', as the focus remains on insurance and mitigation rather than healthcare applications or legal processes. Additionally, the connection to 'Politics and Elections' is weak, as it does not address AI's role within that domain.


Keywords (occurrence): artificial intelligence (4) machine learning (8) algorithm (6)

Description: An Act To Create New Section 37-13-215, Mississippi Code Of 1972, To Provide That, Beginning With The Entering Ninth-grade Class Of 2027-2028, A Public High School Student Shall, Before Graduation, Be Required To Earn One Unit Of Credit In A High-school Computer Science Course, Or One Unit Of Credit In An Industry-aligned Career And Technical Education (cte) With Embedded Computer Science Course; To Provide The State Graduation Requirements That May Be Satisfied By Either Of These Courses; To...
Summary: The MS Future Innovators Act mandates that Mississippi high school students must complete a computer science or CTE course with embedded computer science before graduation, starting with the 2027-2028 class.
Collection: Legislation
Status date: Feb. 11, 2025
Status: Engrossed
Primary sponsor: Chris Johnson (2 total sponsors)
Last action: Referred To Education;Accountability, Efficiency, Transparency (Feb. 14, 2025)

Category:
Societal Impact (see reasoning)

The text explicitly mentions the inclusion of Artificial Intelligence (AI) in high school computer science curricula. This directly relates to the educational impact of AI and the necessity for young individuals to understand AI's implications for society. Therefore, relevance to the Social Impact category is strong. Data Governance is not applicable as the legislation does not focus on data management or privacy in AI systems. System Integrity is also not captured in the text as it does not address the security and transparency of AI systems, nor does it talk about regulations ensuring oversight. Robustness is not relevant since there is no focus on benchmarks, auditing, or regulatory compliance specific to AI systems. Overall, only the Social Impact aspect adequately applies, given the educational emphasis on emerging technologies like AI and their societal implications.


Sector:
Academic and Research Institutions (see reasoning)

The text pertains to the educational sector by mandating that high school students in Mississippi learn about foundational computer science concepts, including AI. It does not mention any specific legislation on politics, government services, the judicial system, healthcare, business, academic research, international standards, or nonprofits, as its focus is solely on educational curricula. Hence, Academic and Research Institutions is the most relevant sector. Other sectors do not relate directly to the content of the legislation provided.


Keywords (occurrence): artificial intelligence (3)

Description: A BILL to be entitled an Act to amend Chapter 8 of Title 13 of the Official Code of Georgia Annotated, relating to illegal and void contracts generally, so as to prohibit certain agreements involving rental price-fixing as unenforceable contracts in general restraint of trade with respect to residential rental properties; to provide for a criminal penalty; to provide for statutory construction; to provide for a short title; to provide for an effective date and applicability; to provide for re...
Summary: The "End Rental Price-Fixing Act" prohibits price-fixing agreements among landlords regarding residential rental properties, classifying such contracts as unenforceable and imposing criminal penalties to protect market competition and residents' welfare.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Gabriel Sanchez (5 total sponsors)
Last action: House Hopper (Feb. 27, 2025)

Category:
Societal Impact (see reasoning)

The text discusses illegal rental price-fixing and the utilization of computational processes, including machine learning and artificial intelligence, to manipulate rental prices. This is primarily related to the potential impacts of AI on market fairness and accountability, addressing how automated systems can exacerbate issues like price-fixing in the housing market. Thus, it has significant relevance to the category of Social Impact, as it directly addresses the implications of these AI technologies on individuals and society. The Data Governance category is slightly relevant due to its mention of processes that analyze and process data for price recommendations. However, there is minimal relevance to System Integrity since the text does not focus on security or transparency measures for AI systems and similarly for Robustness, which concerns performance benchmarks that are not discussed here. Overall, the key aspects relate to the social implications of AI in economic contexts.


Sector: None (see reasoning)

The text primarily pertains to housing and economic regulations concerning rental properties and does not explicitly address sectors such as Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, or Nonprofits and NGOs. Although there might be indirect implications for private enterprises due to potential changes in market practices, this text does not provide any direct indication of AI's application within those specific sectors. Thus, the only sector with any relevance is Private Enterprises, Labor, and Employment, due to the mention of coordination among landlords in relation to rental price-setting, which may affect labor and employment dynamics in the housing market, though only minimally. Hence, it rates low on relevance across the board.


Keywords (occurrence): artificial intelligence (1) machine learning (1)