Description: By Representative Tarsky of Needham, a petition of Joshua Tarsky for an investigation by a special commission (including members of the General Court) to promote safe social media use, identify best practices for social media platforms to safeguard children’s mental health, and develop guidelines for safe social media use. Mental Health, Substance Use and Recovery.
Summary: The bill establishes a special commission in Massachusetts to investigate social media's impact on children's mental health and recommend guidelines and practices for safe usage.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Joshua Tarsky
(sole sponsor)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity
The text discusses the establishment of a commission to investigate the risks social media poses to children's mental health and the development of safe practices for social media use. While it does mention algorithms in relation to audits of social media platforms, the broader focus is primarily on social impact, particularly in promoting safe usage and assessing harm. There is some relevance to data governance through the mention of data management practices and privacy concerns. System integrity appears relevant due to oversight and audits needed to ensure platform transparency and safeguard children's interests, but robustness does not directly apply as the emphasis is on immediate safety and ethical guidelines rather than performance benchmarks.
Sector:
Politics and Elections
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs
The legislation fundamentally addresses how AI might be used or regulated within the context of social media's impact on children, implicating multiple sectors. The primary focus is on the political and public service sectors, as it seeks to regulate practices that affect the public, particularly vulnerable populations like children. While algorithm audits touch indirectly on employment and labor practices within social media firms, that is not the primary focus of the text. Therefore, the Politics and Elections and Government Agencies and Public Services sectors score higher than the others.
Keywords (occurrence): algorithm (6)
Description: For legislation to regulate the use of artificial intelligence. Advanced Information Technology, the Internet and Cybersecurity.
Summary: The bill aims to regulate artificial intelligence use, particularly regarding automated employment decision tools, by ensuring ethical standards, transparency in data collection, and protecting worker privacy in Massachusetts.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Tricia Farley-Bouvier
(15 total sponsors)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The text explicitly addresses artificial intelligence within the context of regulating automated decision systems (ADS), indicating a clear focus on social impacts, data governance, system integrity, and robustness. AI is at the core of this legislation as it aims to regulate how automated systems interact with and impact individuals in the workplace, ensuring fairness, accountability, and transparency. This connects directly to Social Impact, as it deals with impacts on individuals and discrimination concerns, as well as Data Governance due to its focus on proper handling and management of data used by these AI systems. Additionally, it touches on aspects of System Integrity by emphasizing transparency and oversight in automated decisions and Robustness through the need for audits and compliance with established benchmarks.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
The legislation primarily impacts the workplace, which aligns it closely with Private Enterprises, Labor, and Employment as it regulates how AI affects employment decisions. It discusses the use of AI in hiring, monitoring, and managing employee data, directly addressing concerns for workers' rights and employer responsibilities. It also relates to Government Agencies and Public Services in terms of compliance and oversight by government entities involved in enforcing these regulations. Although it does touch on technological aspects applicable to healthcare or academia, the direct focus is on employment and regulatory processes in business contexts, giving it the highest relevance in the private sector.
Keywords (occurrence): artificial intelligence (9) machine learning (2) automated (66) algorithm (4)
Description: Requires the division of criminal justice services to formulate a protocol for the regulation of the use of artificial intelligence and facial recognition technology in criminal investigations; restricts the use of artificial intelligence-generated outputs in court.
Summary: The bill mandates the creation of a protocol for using artificial intelligence and facial recognition in criminal investigations while prohibiting AI-generated evidence in court, addressing reliability and transparency concerns.
Collection: Legislation
Status date: March 21, 2025
Status: Introduced
Primary sponsor: Rodneyse Bichotte Hermelyn
(3 total sponsors)
Last action: referred to codes (March 21, 2025)
Societal Impact
Data Governance
System Integrity
The text discusses the regulation of artificial intelligence and facial recognition technology specifically in the domain of criminal justice. It highlights significant impacts on social fairness, accountability, and the integrity of the judicial process, which falls squarely under the Social Impact category. The mention of protocols for transparency, record-keeping, audits, and training for law enforcement using AI aligns with the System Integrity category, as it emphasizes the importance of oversight and control over AI applications. Data governance is relevant due to provisions addressing data accuracy, potential biases, and the implications for evidence in court, specifically regarding AI-generated outputs. However, the text does not focus on robustness as its primary concern, as it does not mention performance metrics or certifications for AI systems outside of compliance with established protocols. Thus, scores reflect strong relevance to Social Impact, System Integrity, and Data Governance but a lesser relevance to Robustness.
Sector:
Government Agencies and Public Services
Judicial system
The legislation prominently addresses the use of artificial intelligence in the Judicial System, regulating the admissibility of AI-generated evidence in court and ensuring protections for defendants regarding AI outputs. Additionally, it involves protocols for law enforcement agencies, clearly linking it to Government Agencies and Public Services as they will be responsible for implementing these regulations. The emphasis on auditing, transparency, and training related to AI usage in law enforcement indicates significant relevance to this sector. However, other sectors like Healthcare, Private Enterprises, and Nonprofits do not have relevance in this context given that the text is strictly focused on criminal justice and law enforcement. Therefore, the scoring reflects high relevance to the Judicial System and Government Agencies but negligible relevance to the other sectors.
Keywords (occurrence): artificial intelligence (7) machine learning (1)
Description: Crimes and punishments; sexual obscenity; making certain acts unlawful; effective date.
Summary: This bill criminalizes the nonconsensual dissemination of private sexual images and artificially generated sexual depictions in Oklahoma, establishing penalties for offenders and defining relevant terms. It aims to protect individuals from privacy violations.
Collection: Legislation
Status date: March 26, 2025
Status: Engrossed
Primary sponsor: Toni Hasenbeck
(3 total sponsors)
Last action: First Reading (March 26, 2025)
Societal Impact
Data Governance
System Integrity
This text deals with the regulation of artificial intelligence in relation to obscenity and nonconsensual dissemination of private sexual images, specifically focusing on the implications of generative artificial intelligence. This legislation addresses the societal impacts of using AI-generated content in harmful ways, which ties directly into the Social Impact category. There are references to responsible use and labeling of AI, implying a need for accountability and ethical considerations regarding outputs, aligning with the Data Governance category as well. Additionally, the obligations imposed on dissemination processes introduce an aspect of System Integrity, ensuring that there are mechanisms for oversight and compliance in AI-generated content. However, there is limited emphasis on performance benchmarks or robustness of AI systems, which makes the Robustness category less relevant.
Sector:
Judicial system
The text primarily addresses the implications of AI in the context of obscenity and creation of synthetic content, particularly within the realm of personal rights and privacy concerns. It discusses the legal frameworks surrounding the dissemination of both real and AI-generated sexual depictions. This aligns directly with the Judicial System sector, as it introduces legal definitions and consequences related to the use of technology in crimes against individuals. Conversely, while the text likely has implications for the healthcare and public services sectors regarding the wellbeing of individuals, it does not directly address their specific regulatory frameworks, leading to lower relevance scores for those sectors. Overall, the core concerns appear most relevant to the Judicial System.
Keywords (occurrence): artificial intelligence (3) automated (1)
Description: A resolution expressing the sense of the Senate that paraprofessionals and education support staff should have fair compensation, benefits, and working conditions.
Summary: The bill expresses the Senate's support for fair compensation, benefits, and working conditions for paraprofessionals and educational support staff, advocating for their rights and job security in schools.
Collection: Legislation
Status date: Nov. 6, 2023
Status: Introduced
Primary sponsor: Edward Markey
(12 total sponsors)
Last action: Referred to the Committee on Health, Education, Labor, and Pensions. (text: CR S5361-5362) (Nov. 6, 2023)
Societal Impact
Data Governance
The text discusses issues related to the compensation, working conditions, and rights of paraprofessionals and education support staff. It does mention 'algorithms' and 'artificial intelligence technology' in the context of notification and opportunities for involvement in their use within the school environments. This indicates a potential awareness and need for governance regarding the implications of AI tools in education. As such, this aspect relates to the Data Governance and Social Impact categories due to the mention of AI alongside the repercussions of its implementation on staff rights and responsibilities. The other two categories (System Integrity and Robustness) are less applicable as the primary focus is on the rights and working conditions rather than on system metrics or performance. Therefore, scores assigned reflect significant but not overarching relevance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text primarily addresses the working conditions and rights of paraprofessionals and education support staff. The mention of AI and algorithms, while relevant, suggests a focus on the intersection of employment and AI that does not extend to sectors like Politics and Elections or Healthcare. The Government Agencies and Public Services sector is the most applicable, since the text concerns the public education system and the staff involved in it. Private Enterprises, Labor, and Employment shares some overlapping concerns related to employment rights, and Academic and Research Institutions could relate slightly through the educational context, but both are lesser focuses. Thus, the highest score goes to Government Agencies and Public Services, with secondary relevance to Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (1)
Description: An act to amend Section 17075.10 of, and to add Section 17254 to, the Education Code, relating to school facilities.
Summary: Senate Bill 539 amends school facility funding regulations, enabling expedited approval and construction for health and safety projects during emergencies, while streamlining design processes utilizing machine learning and regular reviews for improvements.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Christopher Cabaldon
(sole sponsor)
Last action: Read second time and amended. Re-referred to Com. on APPR. (April 10, 2025)
Societal Impact
The text explicitly mentions the use of machine learning twice in relation to automating aspects of the school facilities permitting process. This aligns closely with the Social Impact category as it discusses the implications of AI technology (in this case, machine learning) on health and safety projects in schools, potentially affecting students and communities. The mention of health and safety, alongside automated decision-making processes for facilitating construction and permitting, also suggests a concern for accountability and bias that could arise from AI applications in public services. There's little focus on data governance, system integrity, or robustness as standalone themes beyond the mention of machine learning, which does not imply broader legislative perspectives on data management or system quality. Therefore, Social Impact is rated highly relevant, while the other categories receive lower relevance scores.
Sector:
Government Agencies and Public Services
The relevance of the sectors varies based on the content of the text. The most direct relevance is to Government Agencies and Public Services, as the bill involves state agencies (the Department of Education, the State Architect, and the State Allocation Board) in the implementation of machine learning technologies for public safety projects in schools. While there is an implication of impact on the education sector, it does not specifically reference legislation concerning educational policy, thereby limiting the rating for Academic and Research Institutions. Healthcare has no direct connection, and the other sectors are not pertinent in the context of this text. Thus, only Government Agencies and Public Services receives a high score.
Keywords (occurrence): machine learning (4)
Description: An act to add Chapter 25.1 (commencing with Section 22757.20) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: The LEAD for Kids Act establishes standards for ethical AI systems used by children, requiring risk assessments, regulation compliance, and the creation of an oversight board to ensure child safety in AI interactions.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Rebecca Bauer-Kahan
(sole sponsor)
Last action: From committee chair, with author's amendments: Amend, and re-refer to Com. on P. & C.P. Read second time and amended. (April 10, 2025)
Societal Impact
Data Governance
System Integrity
The text explicitly establishes regulations specifically governing AI systems intended for children. Due to its focus on adverse impacts, risk assessments, and protections for minors, the relevance to the Social Impact category is extremely high. The regulation of AI systems underscores the importance of managing psychological and material harm caused by these technologies, directly aligning it with issues of fairness, accountability, and consumer protection in AI applications. The Data Governance category is also highly relevant, as the act discusses criteria for AI system classification and risk evaluation related to personal information, and establishes compliance requirements to ensure children's data privacy. The System Integrity category is moderately relevant, as it touches on oversight mechanisms but is not as focused on the inherent security or transparency of AI systems. Robustness is slightly relevant, mainly because the act implies performance expectations without specifically detailing any new benchmarks or audit standards for AI systems. Overall, this act is focused on ethical development and ensuring safety for children using AI technology, making it relevant primarily to the Social Impact and Data Governance categories.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
This legislation is particularly focused on children's interactions with AI technology, impacting several sectors related to their welfare. The most direct connection is with Government Agencies and Public Services, as it mandates the establishment of the LEAD for Kids Standards Board and outlines responsibilities for developers and deployers regulated by state authorities. The Healthcare sector is somewhat less relevant, though indirectly related to children's health impacts due to AI technology. The Private Enterprises, Labor, and Employment sector is relevant since it discusses developer obligations and business practices concerning AI products intended for children. Academic and Research Institutions relate to the act in terms of gathering relevant expertise for standards development. Other sectors like Politics and Elections, Judicial System, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not directly pertain to the focus of this legislation, resulting in lower relevance scores. This bill primarily influences sectors that are involved directly with child welfare and business practices governing AI technology aimed at minors.
Keywords (occurrence): artificial intelligence (24) chatbot (3)
Description: Establishes and appropriates funds for a data and artificial intelligence governance and decision intelligence center and necessary positions to improve data quality and data sharing statewide.
Summary: The bill establishes a statewide Data and Artificial Intelligence Governance and Decision Intelligence Center in Hawaii, aiming to enhance data sharing, quality, transparency, and evidence-based decision-making across state agencies.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Ikaika Hussey
(6 total sponsors)
Last action: Referred to ECD, FIN, referral sheet 2 (Jan. 21, 2025)
Societal Impact
Data Governance
System Integrity
The text explicitly includes terms like 'data and artificial intelligence governance', 'machine learning', and 'decision intelligence', indicating a strong focus on the implications of AI on data quality and sharing. It discusses the responsible use of AI, which is closely linked to the social impact that AI systems can have on citizens, such as promoting transparency and efficiency in government operations, thus aligning strongly with the Social Impact category. It also emphasizes the need for accurate data management and governance of AI technologies across state agencies, which is related to Data Governance. There are provisions for system integrity such as secured data sharing and proper access control, which relate to ensuring the robustness and accountability of AI systems, thus aligning moderately with System Integrity. However, the text does not directly address benchmarks or performance evaluation of AI technologies, making the relevance to Robustness less pronounced. Overall, the text falls strongly within the Social Impact and Data Governance categories, with some relevance noted for System Integrity.
Sector:
Government Agencies and Public Services
The text is highly relevant to the Government Agencies and Public Services sector, as it focuses on establishing a governance center directly linked to the use of AI by state agencies for improving data collection and sharing for public services. The emphasis on increasing citizen satisfaction and improving government performance aligns closely with this sector. Although there are mentions of potential impacts on other sectors, such as labor and public interest concerns from AI applications, the primary focus remains within the governmental context. Therefore, the Government Agencies and Public Services sector receives a high score, whereas the connections to Politics and Elections, the Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Cooperation, Nonprofits, or Hybrid sectors are less pronounced and not directly applicable to this text.
Keywords (occurrence): artificial intelligence (19) machine learning (3)
Description: Nursing Practice Changes
Summary: The Nursing Practice Changes bill clarifies the scope of licensed nurses' practices regarding anesthesia administration, modifies licensing processes, expands the Board of Nursing's authority, and ensures confidentiality in disciplinary actions.
Collection: Legislation
Status date: April 8, 2025
Status: Passed
Primary sponsor: Janelle Anyanonu
(10 total sponsors)
Last action: Signed by Governor - Chapter 101 - Apr. 8 (April 8, 2025)
Societal Impact
Data Governance
The text explicitly mentions 'artificial intelligence' and defines it as a broad category of digital technologies involving algorithms that drive software and robotics behavior, highlighting its relevance to nursing practice. It also includes a mandate for the board to establish standards for the use of AI in nursing, which signifies a focus on the implications of AI in healthcare and nursing practice. This directly ties the legislation to the Social Impact category as it addresses the integration and implications of AI in a healthcare context. Data Governance is moderately relevant as it may imply considerations of data management and accuracy within AI systems used in nursing, though the text lacks specifics. System Integrity is slightly relevant because the mention of AI standards may imply some aspects of oversight, but the text does not explicitly address security or transparency. Lastly, Robustness is also slightly relevant since the text calls for new standards but does not focus on certification or auditing of AI systems.
Sector:
Healthcare
The text directly addresses the use of AI in nursing, clearly placing it within the healthcare sector. The definition of AI and the requirement to develop standards reflect a focused application of AI in clinical settings, impacting nursing practices. Therefore, Healthcare is assigned a high relevance score. Other sectors such as Government Agencies and Public Services might receive slight relevance because the board of nursing functions somewhat like a government agency, but the focus remains predominantly on healthcare. The legislation does not address AI in contexts like Politics and Elections, Judicial System, or Academic and Research Institutions, thereby scoring them as not relevant.
Keywords (occurrence): artificial intelligence (2)
Description: To Create The Arkansas Kids Online Safety Act.
Summary: The Arkansas Kids Online Safety Act aims to enhance online safety for minors by requiring platforms to implement safeguards, parental controls, and responsible advertising practices.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Zack Gramlich
(5 total sponsors)
Last action: WITHDRAWN BY AUTHOR (March 11, 2025)
Societal Impact
Data Governance
System Integrity
The Arkansas Kids Online Safety Act primarily addresses the need for protective measures and regulations for minors using online platforms and technologies, which inherently includes AI-based systems such as personalized recommendation algorithms and automated decision-making processes. The content touches on psychological distress caused by compulsive behaviors, which may be driven by algorithmic manipulation. AI systems that recommend content and advertisements targeted at minors are directly impacted by the guidelines outlined in this act. However, while this legislation could have some implications for robustness and system integrity due to its goals of ensuring safe and responsible use of technology, the primary focus appears to be on social accountability toward minors. This makes 'Social Impact' the most relevant category, with modest relevance to the others.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The act particularly focuses on protecting minors, implying its relevance to sectors such as 'Government Agencies and Public Services' due to the need for oversight and enforcement of regulations. It is also related to 'Private Enterprises, Labor, and Employment' because it affects the responsibilities of businesses that utilize AI and data-driven technology in their platforms. However, it does not notably address 'Healthcare', 'Judicial System', or 'Politics and Elections' directly, hence those sectors receive lower scores. The direct relevance of the act to educational or academic institutions is limited as well, resulting in an overall focus on the aforementioned sectors. Furthermore, while it intersects with various sectors, it does not fit into hybrid or unclassified categories effectively.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (2) recommendation system (9) algorithm (18)
Description: A BILL for an Act to provide for a legislative management study relating to the development of advanced technologies.
Summary: The bill mandates a legislative management study in North Dakota to analyze the development of advanced technologies, exploring funding sources and potential grant programs for innovation.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Engrossed
Primary sponsor: Josh Christy
(12 total sponsors)
Last action: Reported back amended, do pass, amendment placed on calendar 16 0 0 (April 10, 2025)
Societal Impact
System Integrity
Data Robustness
The text contains provisions related to the establishment of an advanced technology grant program, specifically highlighting artificial intelligence, machine learning, and similar technologies. This indicates a direct focus on the social implications and development of AI, giving the text strong relevance to the Societal Impact category. The text does not deal explicitly with data governance practices, system integrity concerns, or robustness measures, but does indicate oversight and compliance considerations somewhat indirectly through the review process, making those categories less relevant.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
The text mentions advanced technology in contexts that broadly touch the private sector but does not specifically outline provisions for particular sectors like healthcare or government agencies. Its focus on entrepreneurship and small-business innovation ties it most closely to the Private Enterprises sector, though the bill's scope is broader. Therefore, while relevant, specific legislative considerations for individual sectors are not the primary focus here.
Keywords (occurrence): machine learning (1)
Description: Enacts the "political artificial intelligence disclaimer (PAID) act"; requires political communications that use synthetic media to disclose that they were created with the assistance of artificial intelligence; requires committees that use synthetic media to maintain records of such usage.
Summary: The "Political Artificial Intelligence Disclaimer (PAID) Act" requires political communications using synthetic media to disclose AI assistance, mandating committees to maintain records of such usage for transparency.
Collection: Legislation
Status date: Jan. 17, 2025
Status: Introduced
Primary sponsor: Kevin Parker
(sole sponsor)
Last action: REFERRED TO ELECTIONS (Jan. 17, 2025)
Societal Impact
Data Governance
The text specifically addresses the impact of AI in the context of political communications, especially focusing on synthetic media and the necessity for disclosure regarding AI assistance. This touches on issues of accountability and transparency in AI use in political discourse, which are key elements of the Social Impact category. Given its focus on ensuring that political communications made with AI tools do not mislead voters, the relevance to Social Impact is extremely high. It also indirectly relates to Data Governance as it implies the management of records related to the use of AI in political contexts, but its primary emphasis is clearly on social implications. Although it discusses some aspects of oversight and record-keeping, it does not touch directly on system integrity or robustness in a significant way. Therefore, only Social Impact is rated as highly relevant, while Data Governance is considered moderately relevant due to its connection to record-keeping, though it lacks a direct focus on data management issues.
Sector:
Politics and Elections
The text deals specifically with the regulation of synthetic media in political communications, indicating a clear tie to the politics and elections sector. The need for disclosure about the use of artificial intelligence highlights the efforts to ensure transparency in political processes, making it extremely relevant to the political sector. It does not pertain to government agencies, the judicial system, healthcare, or other sectors because its primary focus lies within the political domain. Thus, it receives a high score in the Politics and Elections sector. Other sectors do not apply as they do not involve direct relevance to the AI's use in political contexts.
Keywords (occurrence): artificial intelligence (3) automated (1) synthetic media (5)
Description: An act to add Sections 16729 and 16756.1 to the Business and Professions Code, relating to business regulations.
Summary: Assembly Bill 325 amends the Cartwright Act to simplify complaint requirements for antitrust violations and prohibits the use of certain pricing algorithms that incorporate nonpublic competitor data, enhancing consumer protection.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Cecilia Aguiar-Curry
(sole sponsor)
Last action: Re-referred to Com. on P. & C.P. (April 10, 2025)
Societal Impact
Data Governance
System Integrity
The text specifically discusses the regulation of pricing algorithms, which directly relates to the use of AI techniques in those algorithms. The key term 'pricing algorithm' is defined to include 'a computational process... derived from machine learning or other artificial intelligence techniques,' which makes this text highly relevant to both AI system integrity and its social impact. The legislation seeks to prohibit certain uses of algorithms that leverage nonpublic competitor data, thus touching on data governance aspects as well. However, it does not directly address performance benchmarks or compliance standards, limiting its relevance to robustness. The implications for societal fairness and potential discrimination from algorithmic pricing further enhance its relevance to social impact. Therefore, I assign scores that reflect this nuanced connection.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
While the text does not uniquely identify its application within a specific sector such as Politics, Healthcare, or Education, it is likely to influence Private Enterprises due to its focus on pricing algorithms. It also relates to Government Agencies and Public Services, as enforcement may fall within the purview of the Attorney General and local agencies. Given that it addresses regulatory guidelines that affect how businesses operate concerning pricing, it is relevant to Private Enterprises. I scored the sectors based on the extent to which they are affected or involved, recognizing that the primary focus remains on the legality and application of AI in pricing, affecting both businesses and regulatory frameworks.
Keywords (occurrence): artificial intelligence (1) machine learning (1) algorithm (21)
Description: Relates to the training and use of artificial intelligence frontier models; defines terms; establishes remedies for violations.
Summary: The RAISE Act establishes regulations for the training and use of advanced AI models in New York. It mandates transparency, safety protocols, employee protections, and remedies for violations to mitigate risks associated with AI deployment.
Collection: Legislation
Status date: March 27, 2025
Status: Introduced
Primary sponsor: Andrew Gounardes
(sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (March 27, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The text establishes definitions, protections, and obligations related to the training and use of artificial intelligence frontier models. It explicitly includes terms like 'Artificial Intelligence,' 'frontier model,' and discusses obligations for developers, emphasizing transparency, safety, and the potential risk of critical harm associated with AI systems. This makes it relevant to all categories: Social Impact, Data Governance, System Integrity, and Robustness, as it touches upon the societal implications of AI, management of data related to these models, the integrity of the AI systems themselves, and the performance benchmarks that should be employed.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The legislation addresses various sectors significantly, particularly Government Agencies and Public Services, since it is likely that oversight and compliance mechanisms will involve state enforcement through the Attorney General. It is also relevant to Private Enterprises, Labor, and Employment, as it discusses employee protections and obligations for developers, which directly relates to business practices in AI deployment. The legislation does not focus on sectors like Healthcare, Politics and Elections, or Judicial System specifically, but it contains elements relevant to how AI is used more generally in business and public services, without strictly fitting into other sectors. Therefore, relevant sectors are Government Agencies and Public Services, and Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (7) automated (1) foundation model (1)
Description: To amend the Federal Election Campaign Act of 1971 to provide further transparency and accountability for the use of content that is generated by artificial intelligence (generative AI) in political advertisements by requiring such advertisements to include a statement within the contents of the advertisements if generative AI was used to generate any image or video footage in the advertisements, and for other purposes.
Summary: The REAL Political Advertisements Act mandates transparency in political ads featuring AI-generated content, requiring clear disclaimers and promoting accountability to combat misinformation and protect democratic processes.
Collection: Legislation
Status date: May 2, 2023
Status: Introduced
Primary sponsor: Yvette Clarke
(sole sponsor)
Last action: Referred to the House Committee on House Administration. (May 2, 2023)
Societal Impact
System Integrity
The text explicitly addresses the use of artificial intelligence (AI) in political advertisements and emphasizes the importance of transparency and accountability in these ads. This directly relates to the social implications of AI on public discourse, particularly concerning misinformation and the integrity of democratic processes. The legislation aims to mitigate the risks associated with the misuse of AI-generated content in political contexts, establishing a clear connection to the social impact of AI. Therefore, 'Social Impact' scores high. In terms of 'Data Governance', while the act implies indirect concerns about data through its transparency requirements, it does not directly tackle data management issues like accuracy or bias in datasets. Hence, it scores low here. For 'System Integrity', the requirement for clear disclaimers enhances the integrity of the information presented, but it does not delve into comprehensive measures for system security or human oversight. Consequently, it receives a moderate score. Lastly, 'Robustness' deals with the specific benchmarks and compliance for AI systems, which is not the primary focus of this act, resulting in a low score.
Sector:
Politics and Elections
The text is specifically about political advertisements and their regulation in the context of AI, directly aligning it with the 'Politics and Elections' sector. The act mandates transparency regarding the use of AI in these advertisements, enhancing the democratic process. While there may be elements affecting 'Government Agencies and Public Services', it is not primarily focused on government operations outside of election processes, so that score is lower. The bill does not attempt to address the use of AI in healthcare, the judicial system, employment, research, international cooperation, or the operation of nonprofits. Nor does it fit into 'Hybrid, Emerging, and Unclassified', since it explicitly pertains to a defined sector. Therefore, the strongest match remains with 'Politics and Elections', resulting in a high score.
Keywords (occurrence): artificial intelligence (6)
Description: Creates standards for independent bias auditing of automated employment decision tools.
Summary: The bill establishes standards for independent bias auditing of automated employment decision tools (AEDTs) used by employers, ensuring compliance with anti-discrimination laws and promoting transparency in employment practices.
Collection: Legislation
Status date: March 18, 2024
Status: Introduced
Primary sponsor: Andrew Zwicker
(sole sponsor)
Last action: Introduced in the Senate, Referred to Senate Labor Committee (March 18, 2024)
Societal Impact
Data Governance
System Integrity
This legislation is most relevant to Social Impact, as it addresses biases in employment decisions made by automated employment decision tools (AEDTs), which can affect individuals' job opportunities. The legislation mandates independent bias auditing, which aims to ensure fairness and accountability in AI systems used in hiring, thus addressing potential discrimination and promoting equitable treatment. Given the specific mention of bias audits and compliance with anti-discrimination laws, this score is high. Data Governance is also highly relevant, as it involves ensuring that the data used by AEDTs is accurate and free from biases, as indicated by the legislation's requirements for audits based on historical and test data. System Integrity is moderately relevant: the legislation doesn't explicitly mention system security, though the requirement for transparency aligns with core integrity concepts. Robustness is less relevant here, as the focus of this bill is on auditing and compliance rather than performance benchmarks for AI. Overall, the strongest connections are with Social Impact and Data Governance, which are crucial given that the bill directly relates to preventing bias in employment scenarios.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The primary sectors impacted by this legislation are Private Enterprises, Labor, and Employment, as it concerns the use of AEDTs in workforce hiring practices, promoting fairness and accountability in employment decisions. The legislation requires employers to be transparent about their use of these tools and their assessment processes, directly affecting how businesses operate concerning employment. Government Agencies and Public Services may also be relevant to some extent, as the regulation could involve public-sector employment entities, but the bill is primarily targeted toward private employers. The Judicial System is not significantly addressed, as the text focuses on employment auditing rather than any legal adjudication process. The remaining sectors, such as Politics and Elections, Healthcare, Academic and Research Institutions, and Nonprofits and NGOs, are not directly applicable here, as they do not pertain to the scope of automated employment decisions. Thus, the most pertinent sector for this legislation is Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (2) machine learning (1) automated (7)
Description: To require a strategy to defend against the economic and national security risks posed by the use of artificial intelligence in the commission of financial crimes, including fraud and the dissemination of misinformation, and for other purposes.
Summary: The AI PLAN Act mandates a strategy to address economic and national security risks from AI-driven financial crimes, including fraud and misinformation, requiring reports and recommendations from key federal officials.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Zachary Nunn
(2 total sponsors)
Last action: Referred to the House Committee on Financial Services. (March 14, 2025)
Societal Impact
Data Governance
The proposed legislation focuses heavily on the risks associated with AI, especially in relation to financial crimes and misinformation. It explicitly mentions the use of AI in the commission of fraud and the dissemination of misinformation, which directly ties to the social implications AI poses. The need to develop interagency policies and procedures to mitigate these risks indicates a strong concern for the societal impact of AI technologies. Additionally, the legislation addresses the critical concern around AI and misinformation, which has significant consequences for public trust and discourse. Given these factors, the Social Impact category clearly aligns with the text. Regarding Data Governance, while there are indirect references to the management and control of AI to tackle fraud and misinformation, this is not the primary focus of the text. System Integrity and Robustness relate to underlying AI system characteristics that this text does not directly address, making them less relevant.
Sector:
Politics and Elections
Government Agencies and Public Services
This legislation primarily focuses on the impact of AI on national security and economics, specifically looking at financial crimes and misinformation, which are critical in the sector of Government Agencies and Public Services. The bill is intended for implementation by various governmental departments (like the Treasury and Homeland Security) in addressing the risks posed by AI. While the aspects of fraud and misinformation could be seen in the context of the Judicial System, the primary emphasis of this legislation is more about managing and regulating AI use within government operations rather than within courts. Politics and Elections is somewhat relevant due to the mention of foreign election interference through AI but is not the main theme of the bill. Other sectors such as Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, and Nonprofits and NGOs are not directly mentioned or relevant in the context of this text. Thus, Government Agencies and Public Services receives the highest scoring.
Keywords (occurrence): artificial intelligence (8)
Description: Requires generative artificial intelligence providers to include provenance data on synthetic content produced or modified by a generative artificial intelligence system that the generative artificial intelligence provider makes available; provides for the repeal of certain provisions relating thereto.
Summary: The bill mandates generative AI providers to include provenance data for synthetic content, enhancing transparency and accountability regarding AI-generated material, and aims to combat deepfakes in New York.
Collection: Legislation
Status date: March 5, 2025
Status: Introduced
Primary sponsor: Alex Bores
(sole sponsor)
Last action: referred to science and technology (March 5, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The AI-related portions of the text focus significantly on the implementation of provenance data for synthetic content created or modified by generative artificial intelligence systems, with implications across all four categories. The Social Impact category is relevant due to the law's implications for misinformation, trust, and accountability regarding AI-generated content, a significant societal concern. Data Governance is also highly relevant, as the legislation establishes requirements for documenting the origins and modifications of content produced by AI systems, which directly relates to accurate data management. System Integrity is implicated because the bill establishes protocols for how provenance data is applied and maintained, emphasizing the need for oversight of AI outputs. Robustness is relevant because the bill sets standards and benchmarks for the generation of synthetic media by AI systems. Overall, all categories have a meaningful connection to the text due to its emphasis on the regulation of generative AI systems and their societal implications.
Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs
The text relates to several sectors, especially those concerned with misinformation and societal trust in synthetic media. Implications for politics and elections arise from the potential for misinformation during campaigns. Government agencies and public services are also clearly relevant, given their role in implementing and adapting to the regulations imposed by the law. The healthcare sector's direct relevance is lower here, while academic and research institutions may benefit through research on AI governance and ethics. Overall, although several sectors are touched upon, the most relevance lies with government oversight and potential impacts on political processes.
Keywords (occurrence): artificial intelligence (21) automated (1)
Description: Revise election laws regarding disclosure requirements for the use of AI in elections
Summary: The bill regulates the use of AI-generated deepfakes in election communications, establishing disclosure requirements and penalties to protect against misinformation and ensure fair elections in Montana.
Collection: Legislation
Status date: March 7, 2025
Status: Engrossed
Primary sponsor: Janet Ellis
(sole sponsor)
Last action: (H) Committee Report--Bill Concurred as Amended (H) State Administration (April 3, 2025)
Description: Relating to the deceptive trade practice of failure to disclose information regarding the use of artificial intelligence system or algorithmic pricing systems for setting of price.
Summary: This bill establishes that failing to disclose the use of artificial intelligence or algorithmic pricing in setting prices is a deceptive trade practice in Texas. It aims to enhance transparency and consumer protection.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Royce West
(sole sponsor)
Last action: Filed (March 13, 2025)
Societal Impact
Data Governance
The text is primarily focused on the deceptive practices associated with the lack of disclosure related to artificial intelligence systems and algorithmic pricing. This falls under the 'Social Impact' category as it aims to protect consumers from misinformation regarding AI applications that could affect their purchasing decisions. Additionally, it addresses issues such as consumer protection and potentially the biases that may arise in automated pricing systems. The relevance to 'Data Governance' is present but less explicit, as it tangentially relates to the management and transparency of AI systems but does not focus on data security or accuracy. 'System Integrity' and 'Robustness' are less relevant as the legislation does not address specific security, oversight, or performance benchmarks of AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text's emphasis on the disclosure of AI usage in pricing directly relates to 'Private Enterprises, Labor, and Employment,' as it affects how businesses operate regarding consumer transactions and pricing strategies. It is also relevant to 'Government Agencies and Public Services,' since regulations of this nature often stem from governmental oversight intended to protect consumers. There are only minor implications for the 'Judicial System': enforcement could involve legal interpretations of deceptive practices, but this is an indirect association. Other sectors, such as 'Politics and Elections,' 'Healthcare,' and 'Academic and Research Institutions,' have no direct connection to this text. The text does not fit into 'Nonprofits and NGOs' or 'International Cooperation and Standards,' as it focuses on local business practices.
Keywords (occurrence): artificial intelligence (4) machine learning (1)