4827 results:
Description: Requiring that candidates, campaign finance entities, and specified other persons, or agents of candidates, campaign finance entities, or specified other persons, that publish, distribute, or disseminate, or cause to be published, distributed, or disseminated, to another person in the State campaign materials that use or contain synthetic media include a specified disclosure in a specified manner; and defining "synthetic media" as an image, an audio recording, or a video recording that has be...
Summary: House Bill 740 mandates disclosure of synthetic media in campaign materials, requiring explicit statements to inform voters when images, audio, or videos have been digitally altered, enhancing transparency in elections.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Anne Kaiser
(11 total sponsors)
Last action: Hearing 2/11 at 1:00 p.m. (Jan. 28, 2025)
Societal Impact (see reasoning)
The text explicitly involves synthetic media, which is created using artificial intelligence technologies. The legislation requires disclosure of synthetic media used in campaign materials to ensure transparency and inform the public about alterations made to images, audio, and video. This connects directly to the broader implications of AI for society, particularly misinformation, public understanding, and deception in political contexts, so the bill fits squarely within the 'Social Impact' category. The other categories, such as Data Governance, System Integrity, and Robustness, do not apply strongly, since the focus is on social and ethical implications rather than data handling, system security, or performance metrics.
Sector:
Politics and Elections (see reasoning)
This legislation is primarily focused on the political sector, regulating how synthetic media is used within election campaigns to ensure that voters are informed about the nature of the media they encounter. It specifically addresses the role of AI in crafting potentially misleading campaign materials, making it pertinent to the political context. Though there may be tangential relevance to government and public services because campaign laws affect public interactions with government operations, the legislation's focus on election materials distinctly categorizes it within 'Politics and Elections' rather than the broader government sector.
Keywords (occurrence): synthetic media (4)
Description: A bill to improve the resilience of critical supply chains, and for other purposes.
Summary: The Promoting Resilient Supply Chains Act of 2025 aims to enhance the resilience and stability of critical supply chains by establishing a working group and new responsibilities for commerce officials, while supporting U.S. manufacturing and reducing reliance on specific countries.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Maria Cantwell
(3 total sponsors)
Last action: Committee on Banking, Housing, and Urban Affairs. Hearings held. (Feb. 11, 2025)
Data Robustness (see reasoning)
The Promoting Resilient Supply Chains Act of 2025 primarily focuses on enhancing the resilience and stability of critical supply chains, especially in relation to critical and emerging technologies. Although the text does not explicitly foreground AI technologies, it does imply relevance to AI in the context of 'emerging technologies'. AI is often categorized as an emerging technology critical to supply chains, especially concerning automation and efficiency in production and supply chain management. There is therefore likely significant overlap between the objectives of this bill and the transformative potential of AI within supply chains. However, since AI is not explicitly acknowledged in the text, the relevance is somewhat diminished. Thus, scores across the categories reflect limited but potential relevance to AI systems in supply chain operations, fairness, and associated economic impacts.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)
The text discusses improvements to supply chains specifically, without directly referencing applications of AI in any identified sector such as healthcare or government services. However, critical supply chains are inherently tied to sectors like manufacturing, which increasingly utilize AI for process optimization, forecasting, and analytics. Hence, while direct references to AI applications in these sectors are lacking, the implied improvement of critical industries may touch on their dependence on AI systems to enhance efficiency and resilience. Because the bill does not explicitly address any of the defined sectors, it is scored with lower relevance across all of them, though the scores acknowledge a potential connection to sectors where AI may play a role.
Keywords (occurrence): artificial intelligence (1) automated (1)
Description: Requires the owner, licensee or operator of a generative artificial intelligence system to conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.
Summary: This bill mandates that generative AI systems must display warnings about the potential inaccuracies of their outputs, ensuring users are informed. Non-compliance incurs monetary penalties.
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Clyde Vanel
(2 total sponsors)
Last action: referred to science and technology (Jan. 27, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly pertains to the implications of AI on user experience, particularly highlighting the potential inaccuracies and appropriateness of outputs generated by generative AI systems. This aligns closely with Social Impact, as it addresses user protection and the information users receive when interacting with AI systems. Data Governance is also relevant since the warning implies a management responsibility towards data outputs generated by these systems. System Integrity might be relevant as it pertains to the transparency of the AI's outputs and oversight, while Robustness is less applicable since the focus doesn’t primarily deal with performance metrics or benchmarks. In summary, the legislation is significantly concerned with how AI impacts users and society, particularly in managing expectations and providing warnings regarding generated outputs. This raises vital issues around accountability and the ethical use of AI, particularly generative models and their content creation capabilities.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation relates most closely to Government Agencies and Public Services as it involves the requirement for system operators to provide warnings about AI outputs, which is a public service to ensure safe interaction with AI technology. There are also implications for Private Enterprises, Labor, and Employment, as businesses deploying generative AI systems will be affected by these regulations regarding consumer safety. However, it does not clearly pertain to the other sectors like Politics and Elections, Healthcare, or the Judicial System. In consideration of the specific context provided, this legislation is primarily directed at enhancing user knowledge and safety in public and business environments.
Keywords (occurrence): artificial intelligence (5) machine learning (1) automated (1)
Description: Amends the Election Code. Provides that, if a person, committee, or other entity creates, originally publishes, or originally distributes a qualified political advertisement, the qualified political advertisement shall include, in a clear and conspicuous manner, a statement that the qualified political advertisement was generated in whole or substantially by artificial intelligence that satisfies specified requirements. Provides for civil penalties and exceptions to the provision.
Summary: The bill mandates that political advertisements generated by artificial intelligence must disclose this fact clearly. It establishes requirements for the disclosure format and specifies penalties for non-compliance.
Collection: Legislation
Status date: Jan. 17, 2025
Status: Introduced
Primary sponsor: Steve Stadelman
(2 total sponsors)
Last action: Added as Chief Co-Sponsor Sen. Sally J. Turner (Feb. 25, 2025)
Societal Impact
Data Governance (see reasoning)
The legislation explicitly addresses the implications of AI-generated content in the context of political advertising, focusing on its social impact, particularly accountability and transparency in communication. It mandates the disclosure of AI involvement in political advertisements, implicating concerns about misinformation and the erosion of trust in public discourse. This aligns strongly with the Social Impact category, as it considers how AI affects individuals and society through its influence on elections and related processes. Data Governance is also relevant, as enforcing accurate disclosures may imply that data management practices must ensure truthful representation of AI-generated ads. However, there is no direct reference to the operation or integrity of the AI systems themselves, which limits the relevance to System Integrity and Robustness. Overall, because the focus is on AI-generated advertising and its effect on societal accountability, Social Impact scores highly and Data Governance has moderate relevance, while System Integrity and Robustness score lower.
Sector:
Politics and Elections (see reasoning)
The text directly relates to politics, particularly concerning the regulation of AI in political advertising, making it extremely relevant to the sector of Politics and Elections. It sets out clear guidelines for disclosing AI involvement in political communications, aiming to maintain integrity in electoral processes. Although it speaks to issues that could have broader relevance in Government Agencies and Public Services in terms of transparency and accountability, it does not explicitly concern government operations or service delivery. Therefore, the scores reflect the primary focus on the electoral process rather than government services more broadly. The other sectors experience even less relevance as they do not pertain to the specifics of the text.
Keywords (occurrence): artificial intelligence (8) automated (1)
Description: As introduced, defines "human being," "life," and "natural person" for statutory construction purposes; excludes from the definition of "person," "life," and "natural person" artificial intelligence, a computer algorithm, a software program, computer hardware, or any type of machine. - Amends TCA Title 1.
Summary: This bill amends Tennessee's definition of "person" to exclude artificial intelligence and machines while affirming personhood for humans, including the unborn from fertilization to full gestation.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Mark Pody
(sole sponsor)
Last action: Passed on Second Consideration, refer to Senate Judiciary Committee (Feb. 12, 2025)
Societal Impact (see reasoning)
The text explicitly excludes artificial intelligence and related technologies from the definitions of 'person,' 'life,' and 'natural person.' This directly impacts the understanding of AI's role in legal and societal contexts, particularly with respect to personhood and human rights. However, it does not delve deeper into the potential social implications of AI itself, data governance around AI, or systems that ensure integrity of AI technologies. Therefore, while the act is relevant to the Social Impact category due to its implications for how society views AI, it lacks direct relevance to Data Governance, System Integrity, and Robustness, as it does not address the management, control, or performance evaluation of AI systems. The relevance is primarily philosophical and definitional in nature.
Sector: None (see reasoning)
The text discusses the definitions of 'human being,' 'life,' and 'natural person' in the context of Tennessee law but does not specify the application of AI in any particular sector such as politics, healthcare, or employment. It is primarily about legal definitions rather than the use or regulation of AI in practical contexts. As such, the legislation does not pertain to the specified sectors because it does not outline the role of AI in political campaigns, healthcare settings, government services, or private enterprises. The lack of reference to sector-specific AI applications significantly reduces its relevance across all sectors.
Keywords (occurrence): artificial intelligence (3) algorithm (3)
Description: Regulates the development and use of certain artificial intelligence systems to prevent algorithmic discrimination; requires independent audits of high risk AI systems; provides for enforcement by the attorney general as well as a private right of action.
Summary: The New York AI Act regulates AI development and implementation to prevent algorithmic discrimination, mandates audits of high-risk AI systems, and allows enforcement by the attorney general and private individuals.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Kristen Gonzalez
(sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (Jan. 8, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The legislation explicitly targets the development and use of artificial intelligence to prevent algorithmic discrimination and mandates independent audits for high-risk AI systems. It addresses key concerns related to social impact by seeking to eliminate biases in AI that could harm disadvantaged groups or infringe upon civil rights. Additionally, the requirement for regular audits and developer responsibilities emphasizes system integrity, focusing on accountability and oversight. While the law does incorporate aspects of data governance, particularly regarding personal data management in audits, the primary focus on preventing algorithmic discrimination and ensuring equitable treatment makes it more aligned with the Social Impact category. Therefore, 'Social Impact' receives a high score, and both 'System Integrity' and 'Data Governance' are also relevant, albeit with slightly lower scores.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation has strong relevance to several sectors due to its implications for AI's application in fields like healthcare, employment, and public services. In healthcare, it addresses how algorithmic decision-making can affect access to services and lead to discrimination. In terms of government services, it specifies how AI should operate fairly and transparently to ensure equitable treatment for the public. However, while it touches on these sectors, the text is primarily concerned with overarching regulations rather than specific applications within them, hence the scores for 'Healthcare' and 'Government Agencies and Public Services' are moderately high but not maximal. Other sectors, such as 'Politics and Elections', are less relevant.
Keywords (occurrence): artificial intelligence (7) automated (4)
Description: Telehealth program; homeless; recovery services
Summary: The bill establishes a pilot telehealth program in Arizona to provide mental health and addiction recovery services for homeless individuals, integrating telehealth into shelters and healthcare facilities while tracking patient outcomes.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Catherine Miranda
(6 total sponsors)
Last action: Senate read second time (Jan. 23, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text addresses the establishment of a telehealth program specifically focused on mental health, addiction, and recovery services for homeless individuals. While it is mainly focused on healthcare services, the mention of 'machine learning algorithms' and 'predictive modeling' indicates a direct integration of AI technologies for enhancing healthcare outcomes. Given this integration, the 'Social Impact' category is highly relevant as the program aims to support marginalized populations and addresses critical social issues like homelessness and health recovery. 'Data Governance' is also relevant due to the emphasis on secure data management within the program, particularly with respect to patient tracking and compliance with privacy regulations. 'System Integrity' also holds relevance as the program includes provisions for secure databases and continuous monitoring of data. However, 'Robustness' is less relevant as there is limited discussion on performance benchmarks or compliance auditing for AI systems within the text.
Sector:
Government Agencies and Public Services
Healthcare
Academic and Research Institutions (see reasoning)
The text outlines a telehealth program that integrates AI technologies to enhance mental health and recovery services for homeless individuals. This initiative clearly aligns with the 'Healthcare' sector due to its focus on health services and the use of telehealth technologies to address health issues among a vulnerable population. It lacks specificity in discussions on legislative impacts in other sectors, such as 'Politics and Elections' or 'Judicial System,' suggesting those are not relevant. The connection to 'Government Agencies and Public Services' is present, as the program involves state departments and efforts to improve public health services. However, 'Academic and Research Institutions' is only indirectly relevant due to the potential for data analysis and research outcomes; thus, it would receive a lower score. Other sectors such as 'Private Enterprises, Labor, and Employment' and 'International Cooperation and Standards' are irrelevant as they don't relate directly to the purposes of the bill.
Keywords (occurrence): machine learning (1)
Description: A BILL for an Act to create and enact a new section to chapter 12.1-31 of the North Dakota Century Code, relating to prohibiting deepfake videos and images; and to provide a penalty.
Summary: The bill prohibits the creation and distribution of deepfake videos and images without consent, classifying violations as a class A misdemeanor to combat deception and protect individuals' rights.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Josh Christy
(12 total sponsors)
Last action: Second reading, failed to pass, yeas 17 nays 69 (Jan. 21, 2025)
Societal Impact
Data Governance (see reasoning)
The text clearly focuses on the prohibition of deepfake videos and images generated using artificial intelligence. This aligns strongly with the Social Impact category, as it addresses concerns about misinformation, deception, and the potential harm caused by AI technologies, particularly how they can exploit individuals' likenesses. There are also implications related to Data Governance, as the proper and ethical use of data is crucial to prevent the misuse of AI-generated content. Although there is some mention of legal penalties, the text does not emphasize system integrity or robustness in its regulations, leading to lesser relevance in those areas.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text is particularly relevant to the sector of Politics and Elections, as deepfake technology can have substantial implications for misinformation in political contexts, potentially influencing public opinion and election outcomes. It also touches on Government Agencies and Public Services due to the potential need for public regulation and oversight regarding the distribution of such content. The Judicial System is concerned with enforcement measures for these laws but isn't directly addressed here. Given that deepfakes can affect various sectors, but no direct mention is made of healthcare, private enterprises, or NGOs, the remaining sectors are less relevant.
Keywords (occurrence): artificial intelligence (2) deepfake (5)
Description: Schools; subject matter standards; computer science; personal financial literacy; math; updating references; permitting alternate diploma for certain students; repealer; effective date; emergency.
Summary: This bill updates Oklahoma's school curriculum standards, incorporating computer science and personal financial literacy, allowing alternate diplomas for certain students, and refining graduation requirements for math and other subjects.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Dick Lowe
(2 total sponsors)
Last action: Authored by Senator Pugh (principal Senate author) (March 5, 2025)
None (see reasoning)
The text predominantly discusses educational standards related to computer science but does not delve into social impacts of AI as it relates to community or societal issues. Although it mentions computer science and technology, it does not consider ethical considerations, accountability, or the broader societal implications that may arise from AI implementation in education. It also does not touch on data governance, such as data privacy or management within AI systems, nor does it address system integrity and robustness regarding AI functionalities. Therefore, it does not squarely fit into any of the established categories, leading to low relevance scores across the board.
Sector: None (see reasoning)
The text mentions computer science in the context of educational standards but does not specifically address the regulation or application of AI in any of the highlighted sectors. There is no mention of AI’s role in politics, public services, healthcare, or any other sector listed. The connection to academic and research institutions is nominal since it only highlights technology in schools rather than addressing deeper aspects of AI in academia. Therefore, all sectors receive low relevance scores.
Keywords (occurrence): artificial intelligence (1)
Description: Relates to privacy rights involving digitization; provides that such right of privacy and action for injunction and damages shall include a portrait, picture, likeness or voice created or altered by digitization.
Summary: The bill establishes privacy rights for individuals regarding the use of their digitized likenesses or voices, requiring consent for commercial use and allowing for legal action against unauthorized usage.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Alex Bores
(sole sponsor)
Last action: referred to judiciary (Jan. 30, 2025)
Societal Impact (see reasoning)
The text explicitly addresses privacy rights related to the use of AI technologies in creating or altering likenesses (e.g., portrait, picture, voice) through digitization methods such as software and machine learning. This connection to AI makes it highly relevant to the Social Impact category because of the implications for individual privacy and consent when using AI-generated representations. It highlights potential psychological and material harm from unauthorized use, which correlates directly with social impact. Moreover, because it mandates consent and gives individuals the right to take legal action against misuse, it stresses the importance of accountability in AI systems that alter personal likenesses. In contrast, the relevance to Data Governance is less pronounced, as there is no explicit focus on data management or collection; the bill focuses more on the resulting outputs of AI technology. System Integrity and Robustness are also less directly applicable, as the text does not address the security, transparency, or benchmark aspects of AI systems, though they could be inferred in broader terms of ethical AI implementation. Therefore, Social Impact is extremely relevant, while the relevance to the other categories is less compelling.
Sector: None (see reasoning)
The legislation mentions the use of AI technologies primarily within the context of privacy rights concerning likenesses, which has implications both for individuals and for companies involved in advertising or media production. However, it does not apply directly to sectors like Politics and Elections, Government Agencies and Public Services, or others, as there is no mention of or regulation specific to these sectors. The closest connection would be with Private Enterprises, Labor, and Employment, where the misuse of likenesses could affect marketing practices; however, the bill is not comprehensive enough to be centrally relevant to that sector. Therefore, lower scores are assigned across the board, with relevance to Government Agencies and Public Services and the other sector categories particularly limited.
Keywords (occurrence): artificial intelligence (2) machine learning (1)
Description: Artificial intelligence; AI devices in health care; qualified end-user; deployer; quality assurance program; State Department of Health; effective date.
Summary: House Bill 1915 regulates the use of artificial intelligence devices in healthcare, ensuring they're utilized by qualified users under strict quality assurance guidelines, and are subject to ongoing performance evaluations.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Arturo Alonso-Sandoval
(sole sponsor)
Last action: Second Reading referred to Rules (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation explicitly discusses the deployment, governance, and quality assurance of artificial intelligence (AI) devices used in healthcare. It requires that AI devices adhere to regulations and be managed by qualified professionals, and it ensures oversight through a governance group, highlighting accountability measures. These elements reflect a significant social impact given the implications for patient care and safety. The specified governance structures indicate concerns about system integrity. Because the text emphasizes quality assurance and regulatory compliance, it also connects to robustness, particularly in creating a structured approach to performance evaluation and risk management of AI devices within the healthcare context. Therefore, Social Impact and System Integrity are highly relevant, while Data Governance, despite its potential relevance, is addressed only indirectly through performance monitoring rather than being the core focus. Robustness pertains to establishing benchmarks for AI performance in healthcare but does not reach the same level of relevance as the other categories.
Sector:
Healthcare (see reasoning)
The legislation directly outlines the use of artificial intelligence, particularly in medical devices and healthcare settings. It prescribes regulations for deployment and usage by qualified end-users, emphasizing safety and effectiveness. This aligns with the Healthcare sector, as the text relates explicitly to AI applications in clinical environments, underscoring the importance of compliance with medical and regulatory standards. As a result, it is highly pertinent to the Healthcare sector, while it seems less applicable to other sectors such as Politics and Elections or Private Enterprises, Labor, and Employment. Consequently, it warrants a high score for the Healthcare sector and does not significantly touch upon the aspects that would pertain to other sectors.
Keywords (occurrence): artificial intelligence (6) machine learning (2)
Description: High-risk artificial intelligence; development, deployment, and use by public bodies; work group; report. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, as defined in the bill, by public bodies. The bill also directs the Chief Information Officer of the Commonwealth (CIO) to develop, publish, and maintain policies and procedures concerning the development, procurement, implementation, utilization, and ongoing assessment of systems t...
Summary: This bill establishes regulations for the development, deployment, and use of high-risk artificial intelligence systems by public bodies in Virginia, aiming to ensure data security, prevent algorithmic discrimination, and set guidelines for compliance.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Engrossed
Primary sponsor: Bonita Anthony
(8 total sponsors)
Last action: Failed to pass (Feb. 22, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is primarily focused on the governance and regulation of high-risk artificial intelligence systems used by public bodies. This includes provisions for the ethical development, deployment, and ongoing assessment of AI technologies. Accordingly, the legislation affects various societal dynamics and ethical considerations, linking it to 'Social Impact'. Additionally, there are clear mandates regarding the secure management of data and the integrity of the AI systems, which are highly relevant to the 'Data Governance' and 'System Integrity' categories. 'Robustness' is indirectly referenced through compliance and ongoing assessment frameworks but is not addressed through performance benchmarks directly. Overall, the strong emphasis on accountability, ethical AI use, and governance highlights the relevance of these categories.
Sector:
Government Agencies and Public Services
Judicial System
Private Enterprises, Labor, and Employment (see reasoning)
The text relates to multiple sectors primarily through its examination of AI use in public administration. The legislation is pertinent to 'Government Agencies and Public Services' as it establishes guidelines for how public bodies should manage high-risk AI systems. It has implications for 'Judicial System' in terms of legal standards for AI deployment that could affect consequential decisions. It also touches on 'Private Enterprises, Labor, and Employment' due to potential impacts on public-private partnerships and contractual oversight. The other sectors, while potentially relevant, do not receive enough emphasis in the text to be rated highly. The strongest relevance is to public service agencies.
Keywords (occurrence): artificial intelligence (123) machine learning (2) foundation model (2) algorithm (1)
Description: Establishes provisions relating to autonomous vehicles
Summary: House Bill 1166 establishes regulations for the operation of fully autonomous vehicles in Missouri, allowing them to operate without a human driver under certain conditions and governing their registration and compliance with safety standards.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Don Mayhew
(sole sponsor)
Last action: Read Second Time (H) (Feb. 4, 2025)
System Integrity (see reasoning)
This text establishes provisions specific to autonomous vehicles and their operation, predominantly discussing automated driving systems. Each section addresses critical aspects of the functionality, classification, and operational guidelines of fully autonomous vehicles. However, it does not explicitly touch upon the broader societal implications or ethical considerations associated with AI, which would be necessary to score highly on the Social Impact category. While data governance implications arise — e.g., regarding the collection and management of data for autonomous systems — the primary focus is not on data security or management mandates, hence lower relevance for Data Governance. The text emphasizes safety standards and regulations applicable to these vehicles, which relates to System Integrity features like the requirement for operational safety compliance; therefore, it scores higher here. The focus on autonomous vehicle benchmarks and compliance does not adequately align with the Robustness category's wider set of benchmarks for AI performance, which are more general in nature. Thus, the highest score in this context should be assigned to System Integrity.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation clearly pertains to the functioning and regulatory requirements associated with autonomous vehicles as it lays out rules for their operation. It discusses aspects that govern the use of automated driving systems in a transportation context, as well as the exemptions granted under certain conditions. However, the references are focused more on vehicular operations rather than broader implications for societal governance or AI applications in diverse sectors. The nature of the text suggests a significant relevance to Government Agencies and Public Services since it outlines conditions under which state laws can regulate these automated systems within public road use. While the text could tangentially relate to other sectors, none receive sufficient direct reference to warrant higher scores outside Government Agencies and Public Services.
Keywords (occurrence): automated (3) autonomous vehicle (3) show keywords in context
Description: HEALTH AND SAFETY -- THE RHODE ISLAND CLEAN AIR PRESERVATION ACT - Establishes the Rhode Island Clean Air Preservation Act that establishes a regulatory process to prohibit polluting atmospheric experimentation.
Summary: The Rhode Island Clean Air Preservation Act prohibits atmospheric experiments releasing pollutants, including solar radiation modification and cloud seeding, to protect public health and environmental safety.
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Evan Shanley
(5 total sponsors)
Last action: Committee recommended measure be held for further study (Feb. 6, 2025)
Societal Impact
Data Governance (see reasoning)
The Rhode Island Clean Air Preservation Act includes references to artificial intelligence (AI) and machine learning within the context of atmospheric experiments and interventions. Specifically, it denotes how AI can be involved in activities that may harm public health and safety, indicating a significant social impact as it touches on human welfare and environmental integrity. It also outlines AI's role in monitoring, regulating, and enforcing restrictions on atmospheric activities, hinting at the need for data governance concerning AI applications. However, while it discusses AI, it does not deeply explore aspects of system integrity or robustness, leading to lower relevance in those categories. Therefore, the Act shows substantial relevance to Social Impact and Data Governance, albeit less so to System Integrity and Robustness.
Sector:
Government Agencies and Public Services (see reasoning)
The Act's primary focus on environmental health and safety, particularly pollution and atmospheric experiments, suggests limited direct relevance to most of the defined sectors. It does mention AI's role in atmospheric activities but does not specify applications within political campaigning, public services, judicial systems, healthcare, labor and employment, or academic research. However, the explicit mention of government oversight and public safety suggests some connection to Government Agencies and Public Services, while the overall context bears less relevance to other sectors. Thus, the Act scores highest in Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (3) machine learning (3) show keywords in context
Description: Prohibits the use of external consumer data and information sources being used when determining insurance rates; provides that no insurer shall unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression; or use any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discrimina...
Summary: The bill prohibits insurers from using external consumer data that may lead to unfair discrimination in determining insurance rates based on various protected characteristics, aiming to enhance equity in insurance practices.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Brian Cunningham
(sole sponsor)
Last action: referred to insurance (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly prohibits the use of external consumer data and algorithms in determining insurance rates where it may lead to unfair discrimination. Keywords related to AI, such as 'algorithm' and 'predictive model,' are prominently mentioned. The legislation addresses potential biases in AI-driven decision-making processes, aiming to protect consumers from unfair treatment based on these factors. This clearly links the text to the Social Impact category as it aims to mitigate AI-induced discrimination and protect vulnerable demographics. There is also a significant focus on data governance, requiring insurers to ensure their algorithms are free of bias, which is tightly aligned with data-related legal standards. System Integrity is relevant, as oversight and accountability mechanisms for the use of algorithms and external data are integrated into the legislation. The Robustness category is less relevant because, while performance benchmarks might be implied, they are not directly addressed in this text.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation speaks directly to the insurance sector, focusing on how algorithms and data can lead to discrimination in insurance practices. The mention of external data and algorithms makes this text extremely relevant to the Private Enterprises, Labor, and Employment sector, as insurers are private entities that must now navigate these newly defined regulations. The prohibition on unfair discrimination based on consumer data also resonates with broader societal concerns about fairness in essential market operations. However, the text does not specifically pertain to Politics and Elections, Healthcare, or other sectors such as Government Agencies and Public Services or Academic and Research Institutions, thus scoring lower on those. Overall, this text is most relevant to the sector concerned with fairness in employment and business practices.
Keywords (occurrence): machine learning (1) algorithm (6) show keywords in context
Description: An Act; Relating to: state finances and appropriations, constituting the executive budget act of the 2025 legislature. (FE)
Summary: This bill outlines the executive budget for the 2025-2027 fiscal biennium, detailing state finances and appropriations, including various programs in agriculture, economic development, and broadband service enhancements.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Finance
(sole sponsor)
Last action: Read first time and referred to Joint Committee on Finance (Feb. 18, 2025)
The text mainly discusses state finances and appropriations under the executive budget act for the 2025 legislature. Although AI-related terms appear in the appropriations text, the bill does not substantively address AI's societal impact, data governance practices, system integrity, or performance benchmarks for AI applications. Consequently, none of the categories fit well with the content of the text.
Sector: None (see reasoning)
The text does not address the use of AI in any sectors such as politics, healthcare, or private enterprises. It focuses on appropriations and budgeting for various departments without mentioning AI applications, regulations, or impacts. Thus, it falls outside all defined sectors presented.
Keywords (occurrence): artificial intelligence (21) automated (12) show keywords in context
Description: Revised for 1st Substitute: Concerning sexually explicit depictions of minors.Original: Concerning offenses involving fabricated depictions of minors.
Summary: The bill addresses the creation and regulation of sexually explicit depictions of minors, particularly focusing on fabricated images produced through digital technology. It updates existing laws to include non-identifiable minors and establishes penalties for offenses related to these depictions.
Collection: Legislation
Status date: Feb. 5, 2025
Status: Engrossed
Primary sponsor: Tina Orwall
(7 total sponsors)
Last action: First reading, referred to Community Safety. (Feb. 7, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily addresses the challenge posed by advancements in AI technologies in relation to the creation and alteration of fabricated depictions of minors engaging in sexually explicit conduct. These concerns align closely with the categories presented. For Social Impact, the text is very relevant as it discusses the potential harm to minors and the societal implications of AI-generated explicit materials. In terms of Data Governance, the text mentions issues related to the management of data that might include illicit materials, but does not focus on data collection and management practices sufficiently to warrant a high score. System Integrity is pertinent due to the need for human oversight in detecting AI-generated content, but the focus on security measures is secondary. Robustness is less relevant because the text focuses on legislative regulation rather than performance benchmarks or compliance standards for AI systems. Thus, Social Impact receives a very high score, while Data Governance, System Integrity, and Robustness have less direct relevance.
Sector:
Government Agencies and Public Services
Judicial System
Nonprofits and NGOs (see reasoning)
The text has strong implications for several sectors. Particularly, it affects Government Agencies and Public Services, as law enforcement agencies will utilize AI detection methods to combat crimes involving fabricated child depictions. The Judicial System is impacted as it addresses legal definitions and penalties surrounding such crimes. Nonprofits and NGOs, particularly those focused on child protection, will also find this legislation relevant as it may inform their advocacy and prevention programs. Although the text does not principally focus on the academic or healthcare sectors, some AI applications in those areas may intersect with the issues raised regarding exploitation and the use of AI in generating content. Thus, Government Agencies and Public Services and the Judicial System receive high relevance scores, while Nonprofits and NGOs also see a moderate connection. Other sectors like Politics and Elections, Private Enterprises, Academic and Research Institutions, and International Cooperation receive low relevance due to lack of direct mention or implication.
Keywords (occurrence): artificial intelligence (6) automated (1) show keywords in context
Description: For legislation to ensure accountability and transparency in artificial intelligence systems. Advanced Information Technology, the Internet and Cybersecurity.
Summary: The bill establishes regulations for accountability and transparency in AI systems in Massachusetts, including developer responsibilities, consumer protections, and enforcement by the Attorney General to mitigate algorithmic discrimination.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Francisco Paulino
(sole sponsor)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses accountability and transparency in artificial intelligence systems through outlined responsibilities for developers and deployers of AI. It emphasizes algorithmic discrimination and essential provisions for consumer protection, demonstrating a strong link to social impact. The legislation lays out structures for documenting and managing risks associated with AI systems, which ties to data governance. The requirements for transparency and risk management indicate significant attention to system integrity, ensuring that AI systems operate securely and transparently. While it indirectly touches on performance benchmarks by mandating risk assessments and compliance with recognized frameworks, the primary focus is on regulatory compliance and consumer rights rather than on establishing new performance benchmarks. Therefore, Social Impact, Data Governance, and System Integrity are clearly relevant, while Robustness receives a lower score given its limited explicit treatment in the text.
Sector:
Politics and Elections
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses elements relevant to multiple sectors. For Politics and Elections, AI's influence on consequential decisions is relevant, but the text does not address campaigns or elections directly, yielding a moderately relevant score. The Government Agencies and Public Services sector is highly affected due to the defined roles of state-managed AI oversight and the public education campaigns mandated in the legislation, indicating high relevance here. The Judicial System is less relevant as it pertains more to legal decisions than to AI usage directly. In Healthcare, while there are mentions of AI impacting consequential decisions, the text is more about regulation than specific healthcare applications. The Private Enterprises sector is implicated through mandatory disclosures for businesses utilizing AI, indicating moderate relevance. The Academic and Research Institutions and International Cooperation sectors are not explicitly addressed, though research compliance may be involved tangentially. Nonprofits and NGOs are mentioned only slightly and given no specific provisions, marking low relevance. The Hybrid, Emerging, and Unclassified sector could apply given the wider implications of AI across sectors, but this text is narrowly targeted. The scoring reflects those distinctions.
Keywords (occurrence): artificial intelligence (6) show keywords in context
Description: Relative to artificial intelligence disclosures. Advanced Information Technology, the Internet and Cybersecurity.
Summary: The bill mandates clear disclosure of AI-generated content, requiring identification of the creator's AI system and details about the content. It aims to promote transparency and accountability in AI usage in Massachusetts.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Steven Howitt
(sole sponsor)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text discusses the 'Massachusetts Artificial Intelligence Disclosure Act', which addresses AI disclosures relevant to content generated by AI systems. This is significant for 'Social Impact', as it aims to ensure transparency concerning AI-generated content, directly impacting how society interacts with such media. The legislation emphasizes accountability for AI-generated content and implies a need to protect consumers from misinformation and manipulation. The potential for consumer protection and accountability aligns closely with concerns over the social impact of AI systems. Additionally, the focus on the identification and disclosure of AI-generated content pertains to 'Data Governance', ensuring accurate metadata is included and maintained. 'System Integrity' may also be relevant as the text mandates disclosures that enhance transparency and may include measures for human oversight in ensuring compliance. However, 'Robustness' is less relevant here as this bill does not primarily address performance benchmarks or auditing standards for AI systems. Overall, the text outlines a clear initiative to regulate and disclose AI's outputs, particularly regarding consumer impact, which is essential for 'Social Impact' and 'Data Governance'.
Sector:
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation primarily focuses on the regulation of AI-generated content, making it relevant to multiple sectors. For 'Politics and Elections', while there are implications for misinformation in campaign contexts, the text does not directly address election-related AI use, leading to a score of 2. 'Government Agencies and Public Services' sees slight relevance due to the involvement of the Office of Consumer Affairs; therefore, a score of 2 is justified. The 'Judicial System' is not directly relevant as there is no mention of legal applications or implications, so it scores a 1. The 'Healthcare' sector is not addressed at all, also scoring a 1. For 'Private Enterprises, Labor, and Employment', there is indirect relevance in that businesses produce AI-generated content, but the text does not focus on employment practices explicitly, resulting in a score of 2. 'Academic and Research Institutions' scores a 1 as there is no mention of educational applications of AI. 'International Cooperation and Standards' is likewise irrelevant, scoring a 1. Given the emphasis on regulating AI-generated content, the text could inform practices in 'Nonprofits and NGOs' concerned with AI ethics, but the connection is loose, so it scores a 2. Finally, 'Hybrid, Emerging, and Unclassified' captures the broader implications of AI regulation across contexts, warranting a score of 3. Overall, the text's focus on transparency and accountability positions it strongly within the discourse surrounding AI governance.
Keywords (occurrence): artificial intelligence (8) machine learning (1) show keywords in context
Description: To provide for Department of Energy and Department of Agriculture joint research and development activities, and for other purposes.
Summary: The bill establishes joint research and development efforts between the Department of Energy and the Department of Agriculture, focusing on enhancing agriculture and energy sectors through innovative technologies and collaboration.
Collection: Legislation
Status date: Feb. 13, 2025
Status: Introduced
Primary sponsor: Frank Lucas
(3 total sponsors)
Last action: Referred to the Committee on Science, Space, and Technology, and in addition to the Committee on Agriculture, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (Feb. 13, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly references 'artificial intelligence' and 'machine learning' in the context of joint research and development between the Department of Energy and the Department of Agriculture. These technologies are integral to the optimization of algorithms related to agriculture and energy, indicating a significant intersection with the social impact of AI, particularly concerning efficiency improvements and environmental considerations. However, the text does not delve into broader societal implications, which may limit relevance to certain aspects of social impact. Data governance is relevant because it emphasizes secure data sharing and the integration of large datasets. System integrity scores lower as the text does not address transparency, oversight, or security measures for AI applications directly. Robustness is marginally relevant due to the mention of performance optimization but lacks focus on benchmarks or certification standards. Overall, the obligations regarding AI use in agriculture and energy anchor the relevance predominantly in terms of social impact and data governance.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The text discusses cooperative research and development involving AI within the agriculture and energy sectors. The prominence of AI applications suggests strong relevance to agricultural technology and data analysis, while the reference to energy systems introduces a dual relevance to energy as well. The mention of federal agencies hints at possible relevance to Government Agencies and Public Services, but that is not a dominant theme, and other sectors such as Politics and Elections are not a focus. Overall, the AI implications pertain mainly to technological improvement in agriculture and energy, so the focus falls primarily on those two areas.
Keywords (occurrence): artificial intelligence (1) show keywords in context