4826 results:
Description: Creates a commission on AI to be a central resource on the use of AI in this state. Directs the SCIO to hire a Chief Artificial Intelligence Officer. (Flesch Readability Score: 65.7). Establishes the Oregon Commission on Artificial Intelligence to serve as a central resource to monitor the use of artificial intelligence technologies and systems in this state and report on long-term policy implications. Directs the commission to provide an annual report to the Legislative Assembly. Allows the ...
Summary: The bill establishes the Oregon Commission on Artificial Intelligence to monitor AI use, assess its impacts, and make policy recommendations to foster innovation while ensuring safety and equity for Oregonians.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Daniel Nguyen
(2 total sponsors)
Last action: First reading. Referred to Speaker's desk. (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text relates to the establishment of a commission focused on the oversight and integration of artificial intelligence (AI) technologies within the state of Oregon. It discusses the need for equitable policies, monitoring the societal impacts of AI, and ensuring protection against risks such as discrimination and privacy violations. Given the range of implications addressed, including ethics, equity, data protection, and the risks posed by AI systems, the bill is highly relevant to the Social Impact category. Its attention to the management and protection of individual rights and data, together with its assessment of economic opportunities and impacts on jobs, supports its relevance to Data Governance as well. Furthermore, the focus on ensuring transparency and safety in the deployment of AI points to implications for System Integrity. The bill does not specifically mention performance benchmarks or auditing standards, so it is not as directly tied to the Robustness category. Overall, the text is most relevant to the Social Impact, Data Governance, and System Integrity categories based on the AI-related content present.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The proposed legislation explicitly involves the creation of a commission that oversees AI technologies and systems, affecting various sectors. The scope of the bill extends to several areas, including the impact of AI on labor, privacy rights, ethics in technology, and equity considerations. Therefore, it has wide-ranging implications that influence a number of sectors, most notably Government Agencies and Public Services due to the regulatory oversight it will provide. The healthcare sector is touched upon through privacy and ethical considerations, but it is not the primary focus. As such, while it could tangentially relate to Healthcare, it is more directly relevant to Government Agencies and Public Services. Additionally, the bill may touch upon Private Enterprises through workforce impacts but lacks sufficient detail about specific corporate regulations, placing it at a lower relevance. The intersections with academic and research communities are also implied as the bill discusses education on AI, but again, it is not a primary focus. The other sectors, such as Politics and Elections, Judicial System, International Cooperation, Nonprofits, and Hybrid/Emerging, seem less relevant based on the content provided.
Keywords (occurrence): artificial intelligence (30)
Description: An act to add and repeal Section 12817 to the Government Code, relating to artificial intelligence.
Summary: Senate Bill No. 579 establishes a working group to assess the role, benefits, and risks of artificial intelligence in mental health, ensuring ethical use and producing reports for legislative guidance by 2030.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Steve Padilla
(sole sponsor)
Last action: Read second time and amended. Re-referred to Com. on APPR. (March 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses mental health in relation to artificial intelligence, focusing on how AI can improve mental health outcomes, as well as assessing ethical standards and potential risks of using AI in mental health settings. This direct emphasis on the societal implications and individual impact of AI technologies places it strongly within the 'Social Impact' category. Additionally, the act involves the evaluation and management of data and frameworks concerning AI tools in mental health, which suggests relevance to 'Data Governance' as well. The mention of appointing a working group implies a level of oversight regarding system integrity, but this is less explicit than the implications for social impact and data governance. While there are elements that could touch on robustness, such as references to best practices, they are not as prominent. Overall, the text lends itself best to the 'Social Impact' category for its clear focus on individual well-being and ethical implications of AI in mental health.
Sector:
Healthcare
Academic and Research Institutions (see reasoning)
The legislation targets the use of artificial intelligence in mental health, which directly relates to the healthcare sector. By evaluating AI's role in treatment and diagnosis, addressing potential risks, and proposing training frameworks for mental health professionals, it highlights its importance in healthcare settings. The focus on stakeholder engagement and input suggests the bill's aim to inform healthcare practices and regulatory measures. Although there are components that could overlap with potential implications for government agencies, the primary emphasis remains within the healthcare sector, thus solidifying its classification in that field.
Keywords (occurrence): artificial intelligence (15) automated (1)
Description: An act to add Section 38760 to the Vehicle Code, relating to vehicles.
Summary: The bill requires manufacturers of autonomous vehicles in California to report collisions and disengagements when operating in autonomous mode, enhancing transparency and safety regulations for such vehicles.
Collection: Legislation
Status date: Aug. 28, 2024
Status: Enrolled
Primary sponsor: Matt Haney
(2 total sponsors)
Last action: Senate amendments concurred in. To Engrossing and Enrolling. (Ayes 65. Noes 4.). (Aug. 28, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text focuses on autonomous vehicles and their regulation, particularly in the context of incident reporting. Key terms related to AI, such as 'autonomous mode', indicate relevance to AI's social impact because of the implications for safety, liability, and potential discrimination against vulnerable road users. The reporting and oversight requirements suggest a framework for accountability and safety in AI operations, affecting individuals and society as a whole. Understanding how AI technologies can harm or benefit users aligns with the Social Impact category, indicating strong relevance. The Data Governance category is also relevant, as the bill discusses the collection and management of incident-related data, including mandates for transparent reporting. System Integrity is considered relevant because the provisions describe specifications for operational performance and the requirement for manual override in problematic situations; however, the focus is primarily on reporting and regulation, without delving into internal security measures for the AI systems themselves, which limits its relevance in this category. The Robustness category is less applicable here since the text does not address performance benchmarks for AI systems and instead focuses on reporting mechanisms.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses legislation concerning autonomous vehicles and is directly relevant to several sectors. In the Politics and Elections sector, there is limited relevance, as the text does not discuss AI's role in elections or political campaigns. Government Agencies and Public Services is highly relevant, however, as the bill addresses how the DMV and other agencies must manage incident reports and data for autonomous vehicles. The Judicial System is slightly relevant as the text touches on accountability, though it primarily focuses on vehicle regulation rather than judicial applications. The Healthcare sector is not applicable, as there is no mention of healthcare applications. Within the Private Enterprises, Labor, and Employment sector, the bill reflects the implications for manufacturers and their operational obligations, but does not strongly address employment or corporate governance perspectives. Academic and Research Institutions have minor relevance as the legislation does not engage educational contexts specifically, even though innovations may come from research. International Cooperation and Standards is not substantively addressed in this text, thus scoring low. Nonprofits and NGOs have little relevance unless involved in advocacy or disability issues related to the legislation, while Hybrid, Emerging, and Unclassified could apply given the innovative nature of autonomous vehicles, but again lacks a strong basis here.
Keywords (occurrence): automated (2) autonomous vehicle (41)
Description: Artificial Intelligence Act
Summary: The Artificial Intelligence Act mandates documentation, risk assessment, and transparency for high-risk AI systems to prevent algorithmic discrimination, ensuring accountability for developers and deployers in New Mexico.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Christine Chandler
(4 total sponsors)
Last action: HCPAC: Reported by committee with Do Pass recommendation (Feb. 3, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is the 'Artificial Intelligence Act' and directly pertains to various aspects of AI regulation. It addresses algorithmic discrimination, requiring developers to be accountable for their AI outputs, and mandates risk management policies which influence social dynamics. For Data Governance, it emphasizes the need for complete documentation regarding data used in AI systems, addressing any potential biases or infringements, aligning with consumer privacy and accurate data management standards. System Integrity is a key focus as it outlines obligations for transparency in AI usage and oversight policies. Robustness is present as the Act sets frameworks for impact assessment and performance evaluation of AI, ensuring adherence to necessary benchmarks for safety and effectiveness. Each category pertains to the themes present in the text, reflecting the broader implications of the legislation on society, data handling, system reliability, and standardization in AI performance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
This Act has extensive implications across multiple sectors. In 'Politics and Elections', it sets the stage for how AI can be regulated within electoral contexts, safeguarding against algorithmic biases that can influence outcomes. 'Government Agencies and Public Services' is relevant, as it establishes regulatory frameworks that could affect AI deployment by public institutions. The 'Judicial System' is implicated due to provisions for citizens to seek civil action based on AI-related grievances, reflecting a concern for legal accountability in AI use. 'Healthcare' is significantly addressed, given the definitions and implications surrounding AI in delivering health services, ensuring ethical application. The Act also speaks to 'Private Enterprises, Labor, and Employment' by enforcing standards that affect corporate governance and labor practices in the face of AI implementation. 'Academic and Research Institutions' would also be directly relevant due to the emphasis on transparency and rigorous testing protocols that can inform research and advancements in AI. International cooperation issues may arise due to the multi-state implications of implementing such standards. Thus, the Act is of considerable relevance across most sectors, particularly those intersecting with AI's influence on society.
Keywords (occurrence): artificial intelligence (79)
Description: Camera usage prohibited for traffic safety enforcement, and previous appropriation cancelled.
Summary: The bill prohibits the use of traffic safety cameras for enforcing traffic laws in Minnesota, cancels funding for related programs, and repeals existing regulations on such systems.
Collection: Legislation
Status date: March 12, 2025
Status: Introduced
Primary sponsor: Drew Roach
(6 total sponsors)
Last action: Introduction and first reading, referred to Transportation Finance and Policy (March 12, 2025)
System Integrity (see reasoning)
The text primarily addresses regulations related to the use of traffic safety cameras, specifically prohibiting their use and outlining associated appropriations and definitions. Although there are mentions of 'automated license plate readers' and a 'traffic safety camera system' that could imply relevance to AI, the context does not deeply explore how these systems utilize AI technology, algorithms, or machine learning. Therefore, while it touches upon automation and data capture within the laws, the overarching focus is on prohibitory regulations rather than the social impact of AI, data governance, system integrity, or robustness in AI systems in a comprehensive manner.
Sector: None (see reasoning)
This legislation does not particularly relate to any specific sector that employs AI as defined in the sector descriptions, as the focus is on traffic safety enforcement mechanisms rather than broader applications across different sectors. The mention of cameras and automated systems could initially suggest relevance to public services or law enforcement, but the bill prohibits their use rather than delineating guidelines or standards for AI application in these sectors. The core intention is regulatory in nature, centering on prohibition.
Keywords (occurrence): automated (3)
Description: An act to amend Section 1384 of the Health and Safety Code, and to amend Section 10127.19 of the Insurance Code, relating to health care coverage.
Summary: Assembly Bill 682 mandates health care service plans and insurers in California to report detailed monthly claims data, including denials and reasons for them. It aims to enhance transparency and accountability in health care coverage.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Liz Ortega
(2 total sponsors)
Last action: From printer. May be heard in committee March 17. (Feb. 15, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses the use of Artificial Intelligence (AI) in the processing and adjudication of health care claims within the scope of health care coverage reporting. It requires health care service plans to report the number of claims processed using AI. This connection suggests implications for consumer protections and accountability in the context of AI. Given that AI can impact individuals through automated decisions in health care, issues relating to fairness, bias, and consumer protections are pertinent. Hence, the Social Impact category has significant relevance. The Data Governance category is also relevant due to its focus on reporting accuracy and the inclusion of claims processing data that may involve AI, addressing data collection protocols. The System Integrity category is relevant as it involves measures of transparency and oversight of AI use in claims processing. However, the Robustness category appears less relevant since the text primarily focuses on reporting rather than the performance benchmarks or certification of AI systems. Overall, the text mainly pertains to social implications, governance of data, and system integrity related to health care AI applications.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text is highly relevant to the Healthcare sector since it specifically deals with health care coverage, reporting requirements, and the incorporation of AI into claims processing and adjudication. The legislation aims to regulate health care service plans and insurers, thus directly impacting the management and delivery of health services. Given that the use of AI mentioned refers specifically to its application in health care claims, the relevance to this sector is pronounced. Other sectors like Politics and Elections, Government Agencies and Public Services, and Private Enterprises, Labor, and Employment could have tangential relevance but lack explicit references in the text. Therefore, the Healthcare sector receives a high score.
Keywords (occurrence): artificial intelligence (4)
Description: Relative to prohibiting the unlawful distribution of misleading synthetic media.
Summary: The bill prohibits the unlawful distribution of misleading synthetic media, defining penalties for unauthorized and misleading use, particularly related to elections, to protect individuals and electoral integrity.
Collection: Legislation
Status date: Dec. 11, 2023
Status: Introduced
Primary sponsor: Linda Massimilla
(11 total sponsors)
Last action: Refer for Interim Study: Motion Adopted Voice Vote 03/14/2024 House Journal 8 P. 5 (March 14, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation centers on the unlawful distribution of misleading synthetic media and explicitly links the definition of synthetic media to artificial intelligence algorithms. This directly relates to the 'Social Impact' category as it addresses potential harm from misleading AI-generated content and its implications for public trust and election integrity. It also connects to 'Data Governance' since unauthorized use of AI to create misleading content can involve the management of data rights and personal consent. The accountability and penalty provisions of the bill align with 'System Integrity,' as they seek to establish clear rules for AI systems that could mislead individuals in significant ways, which involves transparency and control. The bill's measures also reflect an effort to align AI content distribution with standards, giving it some relevance to Robustness. Overall, this legislation addresses both the societal consequences of AI media and accountability within AI governance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
The text is closely related to the sector of politics and elections, as it explicitly speaks about misleading synthetic media that can influence election outcomes. It addresses the deployment of AI in creating media that could harm electoral integrity, reflecting legislative intent in regulating AI's role in politics. Furthermore, it implicates government agencies and public services as the enforcement and compliance measures would likely involve public bodies. However, less direct relevance to other sectors such as healthcare or private enterprises suggests that while the bill intersects with several sectors, its core focus remains on political implications and public governance.
Keywords (occurrence): artificial intelligence (1) synthetic media (22)
Description: Relative to classified workers.
Summary: The bill urges federal legislation to secure rights for classified workers, ensuring safe working conditions, competitive wages, job security, and access to benefits and professional development opportunities.
Collection: Legislation
Status date: March 3, 2025
Status: Introduced
Primary sponsor: Sabrina Cervantes
(3 total sponsors)
Last action: Read second time and amended. Ordered to third reading. (March 27, 2025)
Societal Impact
Data Governance (see reasoning)
The text primarily addresses the rights and conditions of classified workers, focusing on their compensation, working conditions, job security, and the impact of electronic monitoring, data, algorithms, and artificial intelligence technology on their jobs. While it mentions AI and technology, it does so mainly in the context of seeking worker rights and protections related to these technologies, without addressing in detail how AI specifically affects their roles or the systems they work with. Thus, the relevance of the categories is assessed as follows: The Social Impact category is relevant as the resolution relates to workers' rights and safety, which is crucial given AI's potential impact on these areas. Data Governance is relevant given the mention of algorithms and data collection in the workplace, although it is not a primary focus. System Integrity has some relevance due to the mention of monitoring and the need for safeguards but is not deeply explored. Robustness is not applicable since the text does not focus on performance benchmarks or compliance for AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text discusses classified workers in educational settings, emphasizing their rights and how AI and technology may impact their work environments. The sector of Government Agencies and Public Services is relevant as it concerns employees working in public education systems. Private Enterprises, Labor, and Employment is relevant due to the discussion of worker rights and employment conditions; however, it is more focused on public sector workers. Other sectors like Politics and Elections or Healthcare do not find a strong connection in the context of this resolution.
Keywords (occurrence): artificial intelligence (1)
Description: Preventing the dissemination of deepfake materials of political candidates before an election.
Summary: HB 630-FN aims to prevent the distribution of deepfake materials of political candidates within 90 days of an election, ensuring truthful political advertising and protecting candidates from deceptive media.
Collection: Legislation
Status date: Jan. 16, 2025
Status: Introduced
Primary sponsor: Thomas Cormen
(7 total sponsors)
Last action: Minority Committee Report: Ought to Pass (Feb. 6, 2025)
Societal Impact
Data Governance (see reasoning)
This legislation explicitly addresses the impact of deepfakes and synthetic media in the context of political candidates, highlighting concerns about misinformation and consumer protections for voters. The use of deepfake technology poses psychological and material harm by misleading voters about candidates, which fits well within the Social Impact category. The legislation aims to establish fairness and accountability in political advertising by preventing deceptive practices, thus reinforcing the protection of public trust in electoral processes. A connection to accountability for developers of AI technology is also implicit, as individuals creating deepfakes could be held accountable under this law. Therefore, the relevance to Social Impact is strong. Regarding Data Governance, while the bill doesn't directly address data management within AI systems, it does touch upon the use of AI-generated media in political contexts. As such, it is somewhat relevant but not a primary focus, leading to a moderate score. System Integrity does not align well here, as the text primarily addresses content distribution rather than system security or transparency. The legislation does not delve into AI performance benchmarks or auditing compliance, making it irrelevant for Robustness. Overall, the strongest alignment is with Social Impact due to the focus on the effects of AI-generated media on society and elections.
Sector:
Politics and Elections (see reasoning)
The text specifically addresses the use of artificial intelligence in political contexts by focusing on preventing the dissemination of AI-generated deepfakes during elections. This clear association with the manipulation of media representation in politics strongly aligns with the Politics and Elections sector, making it extremely relevant. The legislation does not, however, address other sectors such as Government Agencies, the Judicial System, Healthcare, Private Enterprises, Academic Institutions, or NGOs in a direct manner. While there may be implications for the Judicial System regarding litigation due to the establishment of civil remedies for candidates, this is a marginal connection compared to the primary sector focus on political contexts. Therefore, only the Politics and Elections sector scores highly, with no significant relevance to the other sectors.
Keywords (occurrence): artificial intelligence (6) deepfake (15) synthetic media (8)
Description: Concerning Artificial Intelligence, Algorithms, And Other Automated Technologies; And To Regulate Certain Practices Of Healthcare Insurers.
Summary: This bill regulates the use of artificial intelligence and algorithms by healthcare insurers in Arkansas, ensuring transparency, privacy, and accountability in decision-making processes affecting patient care.
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Lee Johnson
(2 total sponsors)
Last action: Read the first time, rules suspended, read the second time and referred to the Committee on INSURANCE & COMMERCE- HOUSE (Jan. 29, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the regulation of artificial intelligence (AI) algorithms, particularly in the context of healthcare insurers and their practices. It addresses disclosure, transparency, algorithmic biases, and supervision of AI usage in healthcare decision-making processes. These aspects directly impact society and individuals by establishing frameworks and protections regarding AI's implications in healthcare. Thus, it is relevant to the Social Impact category. Data governance is also tightly linked due to its focus on data management practices, compliance with privacy standards, and the prevention of biases during algorithm training. System integrity is relevant here due to the mandates for transparency and human oversight in AI-based decision-making within healthcare insurers. Robustness may be considered due to the establishment of quality assurance metrics and standards for AI algorithms in healthcare practices. However, the primary emphasis lies on its social implications and data governance. Overall, the bill strongly pertains to these categories because it addresses both the individual and systemic ramifications of AI in healthcare.
Sector:
Healthcare (see reasoning)
The text directly addresses the application of AI within healthcare settings, particularly its regulation and the practices of healthcare insurers. As it discusses algorithms tailored specifically for healthcare contexts, it is highly relevant to the Healthcare sector. Moreover, given its implications on decision-making processes in health benefit plans, it influences the practices adopted by healthcare providers and their interactions with health insurers. It does not directly pertain to other sectors like Politics and Elections or the Judicial System, as it lacks references to legislation affecting those areas. Thus, the primary scoring reflects the text's relevance to healthcare.
Keywords (occurrence): artificial intelligence (26) automated (12) algorithm (13)
Description: Relating to the regulation of autonomous vehicles; creating a criminal offense.
Summary: The bill regulates the operation and registration of autonomous vehicles in Texas, establishing permit requirements and penalties for violations, while creating the Autonomous Vehicle Commission for oversight.
Collection: Legislation
Status date: March 5, 2025
Status: Introduced
Primary sponsor: Terry Canales
(sole sponsor)
Last action: Filed (March 5, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The bill primarily addresses the regulation and operation of autonomous vehicles, focusing on their registration, operation, and compliance with traffic laws. It uses terms such as 'autonomous' and 'automated driving system' and references various levels of automation. This directly connects to the System Integrity category, as the bill mandates clear regulations for the operation of these systems. It also aligns with the Social Impact category by discussing how autonomous vehicles should interact with emergency services, which pertains to societal safety and public trust. Data Governance is also relevant since the bill addresses registration and the permit system for autonomous vehicles, implicating standards for safety records and personal data management. Robustness is less relevant as the text does not focus on benchmarks or performance metrics for AI systems. Hence, System Integrity and Social Impact are the most relevant, with Data Governance following closely due to the registration details but with less prominence than the first two categories.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text significantly pertains to Government Agencies and Public Services, as it involves regulations directly affecting the transportation sector managed by state agencies, specifically related to the operation and permitting of autonomous vehicles. It also has implications for Private Enterprises, Labor, and Employment, considering the business model of transportation network companies that leverage autonomous technology. The Judicial System is slightly relevant as the text creates offenses related to the operation of these vehicles, which could affect legal considerations. No significant tie to the other sectors was identified, so the evaluations cluster around these primary sectors. Hence, the two most relevant sectors are Government Agencies and Public Services and Private Enterprises, Labor, and Employment.
Keywords (occurrence): automated (37) autonomous vehicle (28)
Description: Concerning limiting the use of automated analysis of intimate personal data to make inferences that impact a person's financial position.
Summary: The bill prohibits using surveillance data to set individualized prices and wages, aiming to prevent discrimination against consumers and workers based on their personal data analyzed by automated systems.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Lorena Garcia
(4 total sponsors)
Last action: Introduced In House - Assigned to Judiciary (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses the use of automated decision systems, particularly those derived from AI and machine learning, to set individualized prices and wages based on surveillance data. The societal implications of surveillance-based discrimination mark a clear intersection with the Social Impact category. The legislation addresses consumer protection and potential discrimination rooted in the automated processing of personal data, which is highly relevant to social equity. Data Governance is relevant due to the mention of automated systems and the requirement for the accuracy and fairness of the data they utilize. System Integrity is also pertinent because the bill deals with the security of these automated systems and the necessity for transparency in how data is used in decisions about wages and prices. The requirements to establish procedures for accuracy and to allow individuals to challenge the data used in automated decisions further strengthen the relevance. Robustness, while it may apply in a broader context of AI performance and benchmarks, is less central than the other three categories because its focus is on performance metrics rather than the direct regulatory implications of this text.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The legislation mainly pertains to consumer rights and protections and directly impacts how AI is used in financial and employment processes, making it particularly relevant to Private Enterprises, Labor, and Employment. This is evident through the focus on individualized pricing and wages, which are vital concerns in employment law and consumer protection. Government Agencies and Public Services may also be considered slightly relevant since the enforcement of the law involves the attorney general or district attorney, though this is not its primary focus. The bill may have implications for Judicial System if there are civil actions regarding non-conformity or violations of this act, but this is more of a secondary implication than a primary categorization. The other sectors, such as Politics and Elections, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not particularly applicable to the text, so they receive a score of 1. Overall, the relevance primarily lies in the intersection of automated decision systems with labor and enterprise regulations.
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (12)
Description: Amend The South Carolina Code Of Laws By Adding Article 9 To Chapter 5, Title 39 So As To Provide Definitions; To Provide That A Social Media Company May Not Permit Certain Minors To Be Account Holders; To Provide Requirements For Social Media Companies; To Provide That A Social Media Company Shall Provide Certain Parents Or Guardians With Certain Information; To Provide That A Social Media Company Shall Restrict Social Media Access To Minors During Certain Hours; To Provide For Consumer Comp...
Summary: The South Carolina Social Media Regulation Act aims to restrict minors' access to social media by implementing age verification, parental consent, and limiting features that promote excessive use.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Engrossed
Primary sponsor: Weston Newton
(17 total sponsors)
Last action: Scrivener's error corrected (Feb. 21, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation is highly relevant to Social Impact, as it specifically addresses the use of AI-driven features in social media platforms, particularly concerning minors. It aims to protect minors from potential harms that can arise from engaging with social media, including compulsive usage, exposure to harmful content, and data privacy concerns. The provisions aimed at regulating how social media companies interact with minors inherently connect to concerns about AI-driven personalized recommendations and targeted advertising, making this category extremely relevant. Data Governance is also relevant, as the bill includes strict regulations on how personal data of minors should be collected, used, and shared, emphasizing the need for accuracy and transparency in data practices, particularly in AI systems that process minors' information. System Integrity has moderate relevance; while it doesn't focus on security protocols, it emphasizes protecting minors from exploitative practices, which can connect to broader notions of system integrity in AI design. Robustness has minimal relevance as there is no direct focus on benchmarks or performance metrics for AI systems established in this legislation. Overall, the bill addresses significant concerns regarding AI's impact on minors and data governance, thus categorizing it under Social Impact and Data Governance as the most relevant categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill has clear implications for the sector of Government Agencies and Public Services, particularly as it pertains to regulating and overseeing social media companies in their interactions with minors. While it does not explicitly address political campaigns or electoral processes (Politics and Elections), it does touch on the regulation of public services for youth safety from online platforms, making it relevant to public service delivery. The Judicial System is not explicitly mentioned, and thus it receives a lower score. In Healthcare, there's no direct focus on AI regulations in that sector, so that receives a lower score too. The bill indirectly affects Private Enterprises, Labor, and Employment, as it requires social media companies to adjust their operational strategies concerning minors, but this doesn't make it highly relevant. Academic and Research Institutions may fall under some implications related to minors but are not a primary focus, thus scoring low. International Cooperation and Standards are not addressed here either. Nonprofits and NGOs, given their potential interest in child protection, could also be slightly relevant. However, no explicit collaborations or regulations outlined in the bill warrant high scores. The emphasis on minors' security positions this bill primarily within Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (1) automated (3) recommendation system (1)
Description: Amend KRS 186.450 to allow persons who are at least 15 years of age to apply for a motor vehicle instruction permit; establish that an instruction permit is valid for four years; amend KRS 186.410, 186.452 and 159.051 to conform; EMERGENCY.
Summary: The bill amends Kentucky's motor vehicle instruction permit laws, lowering the age to apply to 15, establishing supervision requirements, and enhancing penalties for driving violations among minors.
Collection: Legislation
Status date: March 13, 2025
Status: Enrolled
Primary sponsor: Steven Rudy
(32 total sponsors)
Last action: delivered to Governor (March 13, 2025)
The text primarily concerns the issuance of and regulations surrounding motor vehicle instruction permits and related requirements for minors in Kentucky. While 'automated driving system' is mentioned in the context of licensing, the text does not explore the implications of AI for society, nor does it address data governance, system integrity, or robustness related to AI. The involvement of AI is minimal, and the bill does not address broader impacts, data collection, or performance benchmarks. Therefore, the relevance to the categories is low.
Sector: None (see reasoning)
The text is focused on the regulations for instruction permits and does not engage extensively with the sectors defined. Although there is a mention of 'automated driving systems', it does not address political processes, public services, or any application of AI in healthcare, legal systems, employment, or education. Its primary focus is on driver's education regulations and licensing, thus rendering it irrelevant to the specified sectors.
Keywords (occurrence): automated (2) autonomous vehicle (1)
Description: Prohibiting the use of motor vehicle kill switches; providing exceptions; providing a minimum mandatory sentence for attempted murder of specified justice system personnel; providing correctional probation officers with the same firearms rights as law enforcement officers; prohibiting a person from depriving certain officers of digital recording devices or restraint devices, etc.
Summary: The bill prohibits the use of motor vehicle kill switches, establishes penalties for related offenses, enhances protections for law enforcement personnel, and sets requirements for testing infectious diseases in arrestees.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Criminal Justice
(2 total sponsors)
Last action: CS by Criminal Justice read 1st time (April 3, 2025)
Description: Amends the Consumer Fraud and Deceptive Business Practices Act. Provides that it is an unlawful practice for any person to engage in a commercial transaction or trade practice with a consumer in which: (1) the consumer is communicating or otherwise interacting with a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation; (2) the communication may mislead or deceive a reasonable consumer to believe that the consumer is comm...
Summary: The bill prohibits deceptive practices in commercial interactions where consumers may mistakenly believe they are communicating with a human rather than AI, requiring clear disclosure of AI usage.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Abdelnasser Rashid
(sole sponsor)
Last action: Referred to Rules Committee (Feb. 6, 2025)
Societal Impact
Data Governance (see reasoning)
The text directly addresses issues related to the social impact of AI, particularly concerning consumer rights and the ethical implications of AI systems such as chatbots deceiving users. It emphasizes transparency and fairness, highlighting the need for consumers to be aware that they are interacting with AI and not a human. This aligns the text significantly with social impact, since it deals with accountability and the ethical use of AI in commercial practices. Data governance is moderately relevant as it touches on managing data related to consumer interactions but is primarily focused on consumer deception rather than data management itself. System Integrity is less relevant, primarily because the text deals with ethical behaviors in communication rather than security or transparency of AI systems. Robustness is not relevant as it does not address system performance or benchmarks.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text relates strongly to the sector of Private Enterprises, Labor, and Employment, as it touches on consumer interactions within commercial contexts. It does not specifically address politics, government agencies, the judicial system, healthcare, academic institutions, international cooperation, nonprofits, or emerging sectors, making the relevance to these sectors lower. While there is a mention of consumers, the legislation does not focus on their role as employees or in broader organizational structures, thus narrowing the focus of applicable sectors.
Keywords (occurrence): artificial intelligence (4) chatbot (2)
Description: Concerning tools to protect minor users of social media.
Summary: The bill mandates social media companies in Colorado to implement protective measures for minor users, including age verification, user control settings, and privacy enhancements, aimed at safeguarding minors' mental health and data.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Jarvis Caldwell
(4 total sponsors)
Last action: Introduced In House - Assigned to Health & Human Services (Feb. 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily addresses the impact of social media on minors, which ties closely to how AI can influence user experiences through algorithms. The mention of algorithms indicates concerns about the social impact of AI, particularly regarding the mental health and safety of minors in social media contexts. While it does touch upon data privacy, its focus is more on the implications for youth than on robust governance of AI data practices. Because the legislation addresses how AI-driven features influence minors and sets requirements for social media platforms, it demonstrates a clear concern for societal outcomes and fits well in the 'Social Impact' category. The document does suggest a governance framework for data utilization with implications for data privacy and security, giving it some relevance to the 'Data Governance' category, but not as prominently as 'Social Impact'. The requirements for oversight and the principles behind algorithmic engagement suggest a minor connection to 'System Integrity'. 'Robustness' is not directly addressed since no benchmarks or performance measures are specified in the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation directly affects the 'Government Agencies and Public Services' sector, particularly as it pertains to regulations around social media use for minors, suggesting a role for governmental bodies in establishing protections. It also has implications for 'Private Enterprises, Labor, and Employment' because it mandates requirements for social media companies, including the operation of algorithmic systems that could affect user experiences and company practices concerning minors. This legislation does not specifically align with the 'Judicial System', 'Healthcare', 'Academic and Research Institutions', or 'Politics and Elections' sectors. Given the focus on AI as utilized by social media companies, the relevance to 'Hybrid, Emerging, and Unclassified' is minimal compared to more defined sectors. Overall, it mainly concerns government regulation directly impacting the operations of social media platforms.
Keywords (occurrence): automated (1) recommendation system (2) algorithm (2)
Description: The purpose of this bill is to prohibit the use of synthetic media and artificial intelligence to influence an election.
Summary: The bill prohibits using synthetic media and artificial intelligence to influence elections in West Virginia, imposing penalties for violations, and mandates disclosures to ensure transparency regarding manipulated content.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Jack Woodrum
(3 total sponsors)
Last action: Laid over on 1st reading 2/28/2025 (Feb. 28, 2025)
Societal Impact
Data Governance (see reasoning)
The text of the bill explicitly relates to the use of synthetic media and artificial intelligence in the context of influencing elections. It outlines the definition and regulation of AI-generated content, focusing on its implications for misinformation and potential harm to democratic processes. This positions the bill primarily within the realm of Social Impact, as it addresses the societal issues posed by AI misuse. While the regulation of data in synthetic media is touched upon, its focus is more on prohibiting the misuse rather than comprehensive data governance. System Integrity and Robustness do not significantly apply as the bill is more concerned with the ethical implications and penalties related to AI usage. Overall, the primary emphasis is on the social implications of AI in political contexts.
Sector:
Politics and Elections (see reasoning)
The bill specifically addresses the regulation of synthetic media and AI in the context of influencing elections, aligning closely with the political sector. It sets out rules about what is permissible in political advertising and establishes penalties for violations, directly impacting the political landscape. The implications for Government Agencies and Public Services are minimal since it primarily deals with electoral processes rather than public services at large. Other sectors such as Judicial System, Healthcare, etc., are not relevant as they do not pertain to the content of the bill. Therefore, the clear relevance to Politics and Elections is strong, while other sectors receive much lower scores.
Keywords (occurrence): artificial intelligence (6) synthetic media (20)
Description: Concerning offenses involving child sex dolls.
Summary: The bill establishes criminal offenses and penalties regarding the possession, manufacture, trafficking, and distribution of child sex dolls in Washington State, aiming to protect minors from sexual exploitation.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Tina Orwall
(2 total sponsors)
Last action: Passed to Rules Committee for second reading. (Feb. 7, 2025)
This text primarily addresses offenses involving child sex dolls, focusing explicitly on their creation, distribution, and possession. The mention of 'artificial intelligence' specifically relates to the use of AI in the process of 'digitization,' implying a potential intersection with AI's role in creating fabricated depictions. However, the overall legislative intent is to penalize the trafficking and possession of child sex dolls rather than a focused analysis on the implications of AI technology itself. Thus, its relevance to categories like Social Impact, Data Governance, System Integrity, and Robustness is minimal, as these categories primarily deal with broader issues surrounding AI technology's societal implications, data management, system security, and performance benchmarks. Therefore, while there is a surface-level connection to AI, the primary focus of this legislation does not center around AI's impact or governance but rather on protection against exploitation and criminal activity.
Sector: None (see reasoning)
The legislation addresses offenses related to child sex dolls, with a brief mention of AI in the context of digital creation. Given this limited mention, the relevance to the various sectors is also low. Specifically, it does not address political implications (Politics and Elections), has little to do with operational use in government agencies (Government Agencies and Public Services), and does not engage with the judicial system concerning AI application (Judicial System). Furthermore, there is no explicit mention of implications for healthcare, private enterprises, or academic settings. The closest relevance would be to the regulation of private enterprises, but the underlying focus remains on criminality rather than the sectoral integration of AI in businesses. Therefore, the scores reflect minimal relevance across all sectors.
Keywords (occurrence): artificial intelligence (1) automated (1)
Description: As introduced Bill 25-930 would require regulated entities to establish and make publicly available, a consumer health data privacy policy governing the collection, use, sharing, and sale of consumer health data with the consumer’s consent. It would establish additional protections and consumer authorizations for the sale of personal health data. It also establishes that regulated entities can only collect health data that is necessary for the purposes disclosed to the consumers and makes vio...
Summary: The Consumer Health Information Privacy Protection Act (CHIPPA) of 2024 mandates consumer consent for the collection and sharing of health data, ensuring transparency and accountability for entities handling such information.
Collection: Legislation
Status date: July 12, 2024
Status: Introduced
Primary sponsor: Phil Mendelson
(sole sponsor)
Last action: Referred to Committee on Health (Sept. 17, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Consumer Health Information Privacy Protection Act (CHIPPA) clearly addresses the secure and responsible collection, use, and sharing of consumer health data, which naturally intersects with data governance. The legislation focuses on ensuring consent, data privacy, and transparency in the management of health data, which is crucial in the context of AI collecting and processing personal health information. While the text contains some principles related to system integrity through its consent and transparency requirements, it does not explicitly mandate security protocols or oversight measures applicable to AI systems, leading to a lower relevance score for that category. The robustness category is less applicable since the text does not directly address performance benchmarks or auditing processes for AI systems. The social impact category is more pertinent since the legislation seeks to protect consumers from potential harm arising from AI-related health data misuse and assures the ethical handling of personal information, which can influence societal trust in digital health platforms.
Sector:
Healthcare (see reasoning)
The CHIPPA Act specifically addresses consumer health data, which is inherently tied to the healthcare sector. The legislation outlines critical protections for consumer health data, requiring organizations to establish privacy policies and ensure informed consent. Its implications are significant for healthcare institutions and related entities that utilize AI technologies for processing health data. While it touches on aspects relevant to government agencies through the regulatory framework it sets, the primary focus remains on healthcare, thus making it most pertinent to that sector. Other sectors such as politics and elections or academic institutions are not directly addressed, and while the implications of data governance can influence various sectors, the clear focus of the legislation confines its primary relevance to healthcare.
Keywords (occurrence): machine learning (1)