4160 results:
Description: To direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.
Collection: Legislation
Status date: Sept. 21, 2023
Status: Introduced
Primary sponsor: Yvette Clarke
(16 total sponsors)
Last action: Referred to the Subcommittee on Innovation, Data, and Commerce. (Sept. 22, 2023)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Algorithmic Accountability Act of 2023 explicitly addresses impact assessments of automated decision systems, which by definition include AI techniques such as machine learning. The definitions within the text acknowledge automated decision systems and augmented critical decision processes, clarifying their relationship with AI. This forms a basis for accountability and regulatory assessments directly connected to the implications of AI on various critical decisions, underscoring the societal impact and the necessity for ethical guidelines around AI deployment. Accordingly, it is very relevant to both the Social Impact and System Integrity categories. Data Governance is relevant as the Act entails performance documentation and potential consumer protections through assessments, including aspects of data correctness and citizen engagement. Robustness is less relevant since the document leans more towards assessment and accountability than toward explicit performance benchmarks or compliance standards for AI systems, so it garners a lower score.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text involves applications across various sectors, particularly impacting Government Agencies and Public Services by necessitating regulatory compliance and assessments of automated decision systems used by public authorities. The implications for Healthcare also arise given that healthcare decisions are classified under critical decisions within the text. The potential implications for Private Enterprises are notable since they may also have to comply with this regulation if deploying automated systems that affect consumer decisions. Academic and Research Institutions may find relevance in the collaborative aspects of developing best practices for AI governance, but academic settings are not the main focus. Other sectors have tenuous connections. Politics and Elections could be touched upon indirectly through automated decision systems used in campaign strategies, but the legislation does not directly address political mechanisms. Therefore, the strongest relevance lies in Government Agencies and Public Services and Healthcare, while other sectors score lower due to less direct connections.
Keywords (occurrence): artificial intelligence (3), machine learning (1), automated (44)
Description: To amend title XI of the Social Security Act to establish a pilot program for testing the use of a predictive risk-scoring algorithm to provide oversight of payments for durable medical equipment and clinical diagnostic laboratory tests under the Medicare program.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: David Schweikert
(sole sponsor)
Last action: Referred to the Subcommittee on Health. (Feb. 2, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation focuses on the implementation of a predictive risk-scoring algorithm as part of oversight for Medicare transactions, which clearly ties into the realm of artificial intelligence and machine learning. The bill pertains to the accountability of the algorithm used to prevent fraud and mitigate harm to beneficiaries (Social Impact), to risk assessment and appropriate data handling (Data Governance), to human oversight and security checks (System Integrity), and to standards for the algorithm's performance (Robustness), so each category warrants thorough evaluation.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
This text deals with the use of AI specifically in the context of Medicare services, which falls primarily under the Healthcare sector. Additionally, considerations for potential fraud prevention and algorithm implementation involve elements of Government Agencies and Public Services, as the bill outlines how government oversight will be exercised via this pilot program. There is a slight mention of collaboration with industry representatives that could invoke aspects of the Private Enterprises sector. Hence, the Healthcare and Government Agencies and Public Services sectors are most relevant here.
Keywords (occurrence): algorithm (11)
Description: Relates to the use of automated decision tools by landlords for making housing decisions; sets conditions and rules for use of such tools.
Collection: Legislation
Status date: Nov. 3, 2023
Status: Introduced
Primary sponsor: Cordell Cleare
(sole sponsor)
Last action: PRINT NUMBER 7735A (April 2, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses on the use of automated decision tools in the context of housing decisions made by landlords, which directly relates to how AI might impact fairness, bias, and accountability. The legislation mandates the conduct of disparate impact analyses, thereby addressing potential discrimination in housing applications influenced by AI. This aligns strongly with the Social Impact category. Because it also involves the regulation and oversight of systems (automated decision tools), it is relevant to System Integrity, although to a lesser degree. Data Governance is also applicable as the legislation dictates how data used in these tools must be treated, but the emphasis isn't solely on data management. Robustness has minimal relevance since the text doesn’t address AI performance benchmarking or regulatory compliance measures explicitly. Overall, the explicit focus on AI's societal implications earns a high score for the Social Impact category, while the other categories receive moderate to low scores based on their relevance to the text.
Sector:
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text relates primarily to the application of automated decision tools by landlords which are influential in housing decisions, placing it directly in the Private Enterprises, Labor, and Employment sector due to landlord-tenant dynamics. This can indirectly touch on Government Agencies and Public Services due to regulatory oversight, but the primary context lies in the landlord-tenant relationship rather than public sector applications. The Judicial System can be linked through the legal implications of enforcement and compliance under the oversight of the attorney general, although this is less direct. Healthcare, Politics and Elections, Academic Institutions, International Cooperation, Nonprofits, and the emerging sectors are not applicable as there are no direct mentions or implications in these areas within the text.
Keywords (occurrence): machine learning (1), automated (11)
Description: Establishes the offenses of virtual token fraud, illegal rug pulls, private key fraud and fraudulent failure to disclose interest in virtual tokens.
Collection: Legislation
Status date: Jan. 4, 2023
Status: Introduced
Primary sponsor: Kevin Thomas
(sole sponsor)
Last action: REFERRED TO CODES (Jan. 3, 2024)
The text primarily addresses fraud related to virtual tokens, with a specific focus on the legal definitions and penalties associated with such actions. Given this focus, its relevance to the AI-related categories is limited. While the text mentions concepts like algorithms (in the context of private keys and transactions), it does not engage significantly with AI-related themes like system integrity, robustness, or the social implications of AI's use in fraud. The indirect mention of algorithms suggests a potential relevance to technology but lacks the depth or emphasis needed for higher categorization under any of the defined categories. As a result, the relevance to the categories is minimal, primarily due to the absence of explicit AI focus in the text.
Sector: None (see reasoning)
The text addresses the establishment of legal frameworks around cryptocurrencies and virtual tokens rather than focusing specifically on AI applications. While cryptocurrencies may utilize algorithmic processes and some AI technologies in broader contexts, the legislation itself is primarily concerned with fraud prevention in virtual transactions. This places the text outside the specific focus required to score highly in any sector related to AI applications. The mention of algorithms and blockchain technology has some linkage to data governance but is not sufficient to warrant a score above 2. Other sectors, including healthcare, judicial systems, and public services, are not related closely enough to warrant any scoring.
Keywords (occurrence): algorithm (1)
Description: Amends the Freedom of Information Act. Provides that, for a public body that is a HIPAA-covered entity, "private information" includes electronic medical records and all information, including demographic information, contained within or extracted from an electronic medical records system operated or maintained by the public body in compliance with State and federal medical privacy laws and regulations, including, but not limited to, the Health Insurance Portability and Accountability Act and...
Collection: Legislation
Status date: Aug. 11, 2023
Status: Passed
Primary sponsor: Sara Feigenholtz
(4 total sponsors)
Last action: Public Act 103-0554 (Aug. 11, 2023)
Data Governance (see reasoning)
The text primarily revolves around amendments to the Freedom of Information Act (FOIA), clarifying the definition of private information, especially concerning electronic medical records and compliance with medical privacy laws like HIPAA. As such, this legislation directly pertains to data privacy and access, but there are limited explicit references to AI technologies. Although certain phrases suggest the automation of data processing, the text lacks depth in addressing the complexity of AI's role, its implications for social structures or data practices, or technological mechanisms such as algorithms or automated decision-making. Thus, it touches on automation only in passing and does not engage directly with AI's social implications, governance, or robustness.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text focuses on the legislative framework surrounding public records and privacy laws, primarily related to the healthcare sector and the management of medical data. However, while it addresses potential database automation and privacy impacts, it does not explicitly delineate how AI could be utilized or governed in these contexts. Thus, relevant sectors relate predominantly to healthcare and governmental oversight of public information but lack broader implications for AI in politics or other sectors.
Keywords (occurrence): automated (2)
Description: An Act amending Titles 18 (Crimes and Offenses) and 61 (Prisons and Parole) of the Pennsylvania Consolidated Statutes, in sexual offenses, further providing for the offense of unlawful dissemination of intimate image; in minors, further providing for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor; and making editorial changes to replace references to the term "child pornography" with references to the term "child sexual abuse m...
Collection: Legislation
Status date: June 10, 2024
Status: Engrossed
Primary sponsor: Tracy Pennycuick
(19 total sponsors)
Last action: Signed in Senate (Oct. 9, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation contains explicit references to 'Artificial Intelligence' and its role in the criminalization of the unlawful dissemination of intimate images and depictions, particularly in the context of sexually explicit material generated by AI. It establishes definitions that clarify the implications of AI technology in potentially harmful contexts and establishes legal consequences for its misuse. As such, the relevance to the Social Impact category is significant due to the societal issues associated with AI-generated intimate depictions. The role of accountability for developers and the implications for minors further bolster this relevance. Similarly, it pertains to Data Governance due to mentions of the need to manage the data used to generate such imagery responsibly. System Integrity also comes into play with measures for human oversight and security as it relates to accountability for AI systems being misused. Lastly, the discussion of defining standards for AI-generated content aligns with the Robustness category, although it is not as strongly detailed within the text. Therefore, I would assign high relevance to Social Impact (5) and Data Governance (4), moderate to System Integrity (3) and Robustness (3).
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
This legislation addresses the implications of AI in laws pertaining to sexual offenses, particularly in how AI technology can facilitate harmful behaviors like the unlawful dissemination of intimate images. This directly impacts the Politics and Elections sector due to discussions around the use of AI in potentially influencing electoral processes when targeting young individuals. It also indirectly impacts the Government Agencies and Public Services sector as it presents a need for regulation by government entities concerning AI and its application in law enforcement. While it mentions elements relevant to the Judicial System in terms of enforcement and legal definitions, it does not explicitly address judicial applications of AI. Sectors such as Healthcare, Private Enterprises, Labor, Education, and others are less relevant on the face of this text as they do not prominently feature discussions about AI applications. Overall, the most relevant sectors would be categorized as Politics and Elections (3) due to potential implications for governance and regulation, and Government Agencies and Public Services (4) for the necessity of oversight in applying this legislation. The Judicial System sees moderate relevance (3) due to enforcement aspects, while other sectors rank lower.
Keywords (occurrence): artificial intelligence (11), automated (1)
Description: This Act creates a new elections crime: use of deep fake technology to influence an election. Under this statute it would be a crime to distribute within 90 days of an election a deep fake that is an audio or visual depiction that has been manipulated or created with generative adversarial network techniques, with the intent of harming a party or candidate or otherwise deceiving voters. It is not a crime, nor is there a penalty, if the altered media contains a disclaimer stating This audio/v...
Collection: Legislation
Status date: June 27, 2024
Status: Enrolled
Primary sponsor: Cyndie Romer
(10 total sponsors)
Last action: Passed By House. Votes: 40 YES 1 NO (June 30, 2024)
Societal Impact (see reasoning)
The text primarily addresses the implications of deep fake technology in the context of elections. It outlines laws regarding the distribution of deep fakes that could mislead voters, highlighting accountability and consumer protection in electoral processes. This connection with misinformation, deception in public discourse, and implications for trust in democratic institutions aligns closely with the 'Social Impact' category. The legislation indirectly addresses potential biases arising from AI-generated content affecting political representation, which reinforces its relevance to social impact. In contrast, the other categories, such as Data Governance, System Integrity, and Robustness, do not relate as strongly to the core focus of the legislation on election integrity and the social consequences of misinformation. The measures proposed do not delve into data handling practices, system security, or benchmarking of the technologies used, which are the crux of the other categories, so those categories score lower on relevance.
Sector:
Politics and Elections
Judicial system (see reasoning)
The legislation specifically addresses how deep fake technology can be used in political contexts to influence election outcomes, making it extremely relevant to the 'Politics and Elections' sector. It establishes clear legal ramifications for the distribution of misleading media that could alter voter perception or election integrity. While the implications of AI in relation to public service delivery or nonprofit operations may tangentially touch on the legislation, the primary emphasis on election integrity and the role of misleading information directly aligns the text with the political sector. Consequently, sectors such as Government Agencies and Public Services or Nonprofits and NGOs do not receive higher scores, as the focus remains squarely on electoral processes and misinformation rather than broader social governance or nonprofit applications.
Keywords (occurrence): deepfake (11), synthetic media (5)
Description: To provide for the establishment of a program to certify artificial intelligence software used in connection with producing agricultural products.
Collection: Legislation
Status date: Dec. 14, 2023
Status: Introduced
Primary sponsor: Randy Feenstra
(4 total sponsors)
Last action: Referred to the House Committee on Agriculture. (Dec. 14, 2023)
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is highly relevant to the category of Robustness because it establishes a program to certify artificial intelligence software used in agriculture, which implies a focus on performance benchmarks and compliance standards. The mention of adherence to the AI Risk Management Framework indicates an emphasis on operational robustness and safety in AI systems within agricultural applications. Similarly, it relates to System Integrity as certification inherently involves measures for ensuring the accuracy and reliability of AI software, fostering accountability and oversight. However, its relevance to Social Impact is limited because while the legislation addresses AI's application in agriculture, it does not focus on its broader societal effects or ethical implications. Data Governance is somewhat relevant as the certification process might involve data management concerns but is not the primary focus of the text. Overall, the text strongly emphasizes certification related to performance standards and operational integrity for specific AI applications.
Sector:
Government Agencies and Public Services (see reasoning)
The sector relevance of the text is primarily tied to the Agriculture sector because it explicitly addresses the use of artificial intelligence within agricultural practices. It emphasizes the certification of AI software in performing tasks related to agricultural products, which is critical for enhancing agricultural operations. While it might slightly touch upon regulatory concerns that could involve Government Agencies and Public Services, the primary focus remains on agriculture, giving less relevance to other sectors listed. Therefore, Agriculture should receive a high score, while other sectors receive lower scores due to their minimal impact or relevance to the text.
Keywords (occurrence): artificial intelligence (4), automated (1)
Description: To provide for the future information technology needs of Massachusetts
Collection: Legislation
Status date: Jan. 10, 2024
Status: Introduced
Last action: New draft substituted, see H4642 (May 15, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text discusses the FutureTech Act, which encompasses various aspects of information technology development in Massachusetts. The AI-related sections specifically mention funding for AI projects and the implementation of AI and machine learning systems for state agencies. This clearly indicates a focus on the societal impacts of AI (e.g., enhancing public services and efficiency) and implies a need for oversight and governance regarding data usage in these systems. Thus, there is a significant relevance to both the Social Impact and Data Governance categories, suggesting strong considerations around the implications of AI on community dynamics and the responsibilities tied to managing information technology and data. The calls for security and efficiency also indicate a relevant connection to System Integrity, while benchmarks related to technology performance hint at potential relevance to Robustness, although possibly to a lesser extent. Overall, the legislation is deeply connected to the role and implications of AI in state governance and service provision.
Sector:
Government Agencies and Public Services
Healthcare
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses various sectors, including government operations and public services directly. It discusses the implementation of AI and machine learning within state agencies, enhances user experience across governmental services, and promotes transparency and efficiency in public service delivery. The work with municipal fiber broadband infrastructure also implicates government efficiency and citizen engagement. However, it does not strongly center on sectors like healthcare or judicial systems, which can lead to lower scores in those areas. The focus on AI applications broadly affects multiple facets of society, including economic aspects related to labor and employment, which could also connect to Private Enterprises. Given these observations, there are strong connections to Government Agencies and Public Services, with moderate to slight relevance to other sectors.
Keywords (occurrence): machine learning (1), chatbot (1)
Description: Requesting The Hawaii Professional Chapter Of The Society Of Professional Journalists To Recommend A Process That Individuals Can Utilize To Evaluate And Identify Whether Or Not News Sources Adhere To Ethical And Objective Standards.
Collection: Legislation
Status date: April 4, 2024
Status: Passed
Primary sponsor: Chris Lee
(8 total sponsors)
Last action: Certified copies of resolution sent, 05-31-24 (May 31, 2024)
Societal Impact (see reasoning)
The text discusses the relationship between AI advancements and the spread of misinformation, highlighting the need for ethical standards in news sourcing. This directly ties into the social impact of AI, specifically regarding misinformation and its effects on public understanding. Although the text touches on certain aspects of data governance indirectly, the main focus is on the implications of AI in journalism and the importance of ethical practices. Therefore, Social Impact is given a high relevance score while Data Governance, System Integrity, and Robustness scores remain low as they do not align directly with the core focus of the resolution.
Sector:
Government Agencies and Public Services
Nonprofits and NGOs (see reasoning)
The text primarily relates to the media sector and the dissemination of information. It discusses how AI relates to societal issues around misinformation, which can impact political discourse and public trust. However, it does not focus explicitly on legislative measures across sectors such as healthcare or judicial systems. The emphasis is drawn mainly towards media literacy and journalistic responsibility rather than a specific sector-based approach, such as Private Enterprises or Academic Institutions. Thus, while there are faint implications for governmental roles, the scores reflect the text's central focus on media ethics and standards impacted by AI without deep engagement with other sector-specific applications.
Keywords (occurrence): artificial intelligence (1)
Description: A bill to require the Administrator of the Environmental Protection Agency to carry out a study on the environmental impacts of artificial intelligence, to require the Director of the National Institute of Standards and Technology to convene a consortium on such environmental impacts, and to require the Director to develop a voluntary reporting system for the reporting of the environmental impacts of artificial intelligence, and for other purposes.
Collection: Legislation
Status date: Feb. 1, 2024
Status: Introduced
Primary sponsor: Edward Markey
(7 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (Feb. 1, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text centers on the environmental impacts of artificial intelligence, which ties directly into societal concerns regarding how AI affects the environment. This means it has implications for various elements relevant to 'Social Impact,' such as influencing energy consumption, pollution, and e-waste. Thus, this category is extremely relevant. 'Data Governance' is less central here, since the bill concerns environmental lifecycle impacts rather than data management, but it does touch on managing and reporting data about those impacts, leading to a score that reflects moderate relevance. The text also emphasizes the need for transparency and accountability in measuring AI's environmental impact, aligning it more closely with 'System Integrity,' but it is not about security or control directly, resulting in a moderate score. 'Robustness' is not particularly addressed in terms of AI performance benchmarks, so it scores lower on relevance as it does not fit within this legislative focus.
Sector:
Government Agencies and Public Services
Nonprofits and NGOs (see reasoning)
The legislation focuses on the environmental implications of AI, making it highly relevant to environmental policy concerns. While it does not specifically touch on sectors like Politics and Elections or the Judicial System, its relevance extends to Government Agencies and Public Services given the collaboration with agencies like the EPA and NIST. It does not address sectors like Healthcare or Private Enterprises specifically, so only the government sector accumulates a moderate score, as those agencies will carry out or be affected by the implementations discussed. Nonprofits may find relevance as well through potential involvement in the consortium mentioned, but that connection is indirect. There is no significant coverage of the Academic sector, nor does the bill address international cooperation directly, allowing for lower scores in those areas.
Keywords (occurrence): artificial intelligence (32)
Description: Creates a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation; repeals such commission.
Collection: Legislation
Status date: March 20, 2024
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to science and technology (March 20, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text outlines the establishment of a temporary commission focused on investigating the regulation of artificial intelligence, robotics, and automation. The commission’s responsibilities will include examining current laws, potential liabilities, employment impacts, information confidentiality, weapon usage restrictions, and public sector applications. Therefore, it is closely aligned with issues that relate to social impact as the commission will assess how AI and related technologies affect employment and societal structures. Furthermore, the text implicates system integrity as it aims to ensure proper accountability and oversight of AI technologies' use and regulation. Given the nature of the commission’s work, data governance also comes into play, especially regarding the handling of confidential information. Robustness is less relevant here as the focus is not primarily on performance benchmarks or compliance, but instead on regulatory frameworks and ethical considerations. Thus, categories of Social Impact, Data Governance, and System Integrity are particularly relevant.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text speaks primarily to a public sector initiative as it creates a commission that will investigate how government regulation can handle AI and related technologies. This involves public considerations, governmental oversight, and societal impacts—categorizing it primarily under Government Agencies and Public Services. It touches on potential impacts on employment and legal aspects, suggesting some relevance to the Private Enterprises, Labor, and Employment sector but not strongly enough to elevate it beyond moderate relevance. The mention of confidentiality and regulations related to AI processing could link to aspects of the Judicial System or Healthcare, but those are not the focus of this text. Therefore, the most fitting category here is Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (9)
Description: Providing for the award of grants to school districts to implement artificial intelligence in support of students and teachers; providing requirements for the use of such artificial intelligence; eligible expenses for
Collection: Legislation
Status date: May 13, 2024
Status: Passed
Primary sponsor: Education & Employment Committee
(10 total sponsors)
Last action: Chapter No. 2024-162 (May 13, 2024)
Societal Impact
Data Governance (see reasoning)
The text primarily discusses legislative measures related to artificial intelligence in education. It specifically addresses the implementation of AI to support learning for students and assist teachers, which directly connects to social implications of AI in an educational context. The text also outlines the requirements for the use of AI platforms, indicating consideration for security and functionality, which can tie into both system integrity and data governance. It outlines grants to facilitate the use of AI in education, implying the potential for broad societal impacts and requirements for data handling and secure interactions. However, the primary focus on grants and educational enhancements aligns the text more closely with the 'Social Impact' category than with the other categories. The aspects of the legislation addressing data security might have a slight overlap with 'Data Governance', though it is not the main focus. System integrity and robustness are not explicitly addressed in the text. Overall, the emphasis on educational enhancements through AI demonstrates a significant social impact. Thus, the 'Social Impact' category is deemed very relevant, while 'Data Governance' has moderate relevance, as the data handling and security implications are present but less emphasized in the overall narrative.
Sector:
Academic and Research Institutions (see reasoning)
The text primarily pertains to educational settings, focusing on the implementation of AI to enhance learning experiences for students and provide support for educators. It discusses grants for school districts to adopt AI technologies, indicating a targeted approach towards integrating AI within educational frameworks. While there might be implications for other sectors, such as government agencies involved in funding or educational standards, the primary focus remains on education. Consequently, the 'Academic and Research Institutions' sector is rated as significantly relevant due to the clear focus on educational enhancements via AI. Other sectors, such as 'Government Agencies and Public Services' and 'Private Enterprises, Labor, and Employment', have less direct relevance as the text does not address broader governmental or business implications regarding AI. Thus, the sector primarily aligns with education.
Keywords (occurrence): artificial intelligence (3), automated (1)
Description: An Act amending Title 50 (Mental Health) of the Pennsylvania Consolidated Statutes, providing for protection of minors on social media; and imposing penalties.
Collection: Legislation
Status date: May 8, 2024
Status: Engrossed
Primary sponsor: Brian Munroe
(20 total sponsors)
Last action: Referred to COMMUNICATIONS AND TECHNOLOGY (May 28, 2024)
Societal Impact
Data Governance (see reasoning)
The text primarily addresses legislation concerning the protection of minors on social media. It focuses on issues such as the monitoring of chats for flagged content and the requirement for parental consent when minors create social media accounts. These aspects indicate a focus on the social impact of AI systems as they relate to young users, particularly regarding emotional and psychological risk factors associated with social media. Therefore, it is deemed very relevant to the Social Impact category. The Data Governance category is also moderately relevant due to the references to data protection, consent, and the mining of data concerning minors. However, fewer elements connect directly to System Integrity and Robustness, primarily because this act does not explicitly reference the security or performance metrics of AI systems, so those categories receive lower or negligible relevance scores. Overall, it can be concluded that the legislation is aimed at addressing the social implications of AI-driven social media, rather than the operational integrity or robustness of AI systems themselves.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill discusses AI's relevance concerning minors engaging with social media platforms, which is closely related to the Private Enterprises, Labor, and Employment sector, as social media platforms are business enterprises that operate under specific regulations concerning user data and protection. It mentions social media companies explicitly, emphasizing their accountability and the need for regulations to safeguard minors accessing their services. The relevance to Government Agencies and Public Services is also noticeable as the bill impacts public welfare, particularly concerning minors' safety online, thus earning a moderately high relevance score. However, sectors like Politics and Elections, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified do not find a strong connection to the text, resulting in low scores.
Keywords (occurrence): automated (5), recommendation system (4)
Description: Establishes the position of chief artificial intelligence officer and such person's functions, powers and duties; including, but not limited to, developing statewide artificial intelligence policies and governance, coordinating the activities of any and all state departments, boards, commissions, agencies and authorities performing any functions using artificial intelligence tools; makes related provisions.
Collection: Legislation
Status date: June 4, 2024
Status: Engrossed
Primary sponsor: Kristen Gonzalez
(3 total sponsors)
Last action: referred to governmental operations (June 4, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation very directly relates to the implementation and oversight of artificial intelligence systems at the state level, which inherently involves considerations of social impact, data governance, system integrity, and robustness. It creates a dedicated role (Chief Artificial Intelligence Officer) to oversee the policies, governance, and usage of AI tools across various governmental departments. Such an office is critical for ensuring that societal concerns, including safety, privacy, and discrimination, are appropriately managed (Social Impact). The legislation outlines the duties of the AI officer to ensure data security, privacy, and compliance with laws (Data Governance). It also stipulates oversight measures and audits for AI systems, supporting transparency and accountability within systems (System Integrity). Lastly, it mandates the development of standards and metrics for AI performance, contributing to benchmarks for robustness. All these aspects are closely intertwined with AI governance and responsible usage, demanding a comprehensive approach to scoring across the categories.
Sector:
Government Agencies and Public Services
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
The text addresses the role of AI across various government activities, ensuring that its application is ethical, secure, and effective. Consequently, it fits under 'Government Agencies and Public Services' as it centers on establishing a framework for AI governance within state operations and enhances the efficiency of services delivered. While it does not directly mention the judicial system or healthcare, the consideration for fairness and audit within government functions can hint at a broader impact affecting those sectors too. However, its primary focus remains on government functions, thereby warranting a higher relevance score in this sector.
Keywords (occurrence): artificial intelligence (36), machine learning (1), automated (12)
Description: Requiring each unit of State government to conduct certain inventories and assessments by December 1, 2024, and annually thereafter; prohibiting the Department of Information Technology from making certain information publicly available under certain circumstances; prohibiting a unit of State government from deploying or using a system that employs artificial intelligence under certain circumstances; etc.
Collection: Legislation
Status date: April 4, 2024
Status: Engrossed
Primary sponsor: Jazz Lewis
(24 total sponsors)
Last action: Referred Rules (April 5, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The Artificial Intelligence Governance Act of 2024 is highly relevant to the category of Social Impact as it directly addresses the implications and ethical concerns related to the use of AI systems in state government. It emphasizes responsible and trustworthy AI use, which highlights societal accountability and potential impacts on civil rights and liberties (found in the definitions of high-risk AI). Moreover, it discusses the need for assessments to guard against AI-driven discrimination, a direct concern about the social implications of AI technology. This alignment with principles of fairness and equity underscores its significant relevance to Social Impact. Data Governance is also very relevant as the Act mandates regular data inventories and impact assessments of AI systems used by state government. It ensures that data necessary for AI operation is collected accurately and responsibly, thus addressing potential pitfalls of data mismanagement and bias in AI. The reference to compliance with regulations, collection, and sharing of data further solidifies this category's relevance. System Integrity is relevant as the legislation sets forth requirements for human oversight and the monitoring of AI systems to ensure their safe and effective operation. It outlines the necessity of policies and procedures for the use of AI, discussing the integrity and the transparency of those implementations in state government. Robustness is somewhat relevant due to its focus on establishing performance evaluations and audits for AI systems, ensuring compliance with new benchmarks and standards. However, it is less pronounced compared to the other three categories, making it marginally relevant in the context of this legislation.
Sector:
Government Agencies and Public Services (see reasoning)
This legislation is closely tied to the sector of Government Agencies and Public Services since it explicitly involves requirements for state government units to conduct assessments and inventories regarding their AI systems. The intent is to enhance the operational efficiency of public services through proper governance of AI technologies, ensuring safe deployment in state functions. It is less relevant to sectors like Politics and Elections or Healthcare because there is no direct mention of AI's role in electoral processes or healthcare applications. The emphasis is firmly within public administration and governance contexts, highlighting the relevance of AI regulation in governmental operations.
Keywords (occurrence): artificial intelligence (49), machine learning (1), automated (5)
Description: An act to add Chapter 22.6 (commencing with Section 22602) to Division 8 of the Business and Professions Code, and to add Sections 11547.6 and 11547.6.1 to the Government Code, relating to artificial intelligence.
Collection: Legislation
Status date: Aug. 29, 2024
Status: Enrolled
Primary sponsor: Scott Wiener
(4 total sponsors)
Last action: Enrolled and presented to the Governor at 3 p.m. (Sept. 9, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text primarily discusses regulations and requirements for the development, safety, and management of artificial intelligence models, specifically 'covered models.' This clearly relates to all four categories: 1) Social Impact covers the potential risks and harms that AI systems can pose to safety, which the bill addresses through regulations on AI-driven innovations; 2) Data Governance highlights the importance of accurate data and compliance for AI models; 3) System Integrity relates to implementing safety protocols and the ability to shut down models; and 4) Robustness emphasizes compliance benchmarks and independent auditing for the AI models. Given the strong focus on safety, compliance, and societal implications tied to AI's usage and development, all four categories demonstrate relevance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses how AI is regulated in various contexts, such as government operations and compliance measures for AI models, suggesting a wide-ranging impact across several sectors. Specifically, 1) Politics and Elections is pertinent as regulations might influence electoral technologies; 2) Government Agencies and Public Services is relevant since the legislation pertains directly to government operations; 3) Judicial System could relate to legal frameworks governing AI use; 4) Healthcare may also be relevant where AI health technologies are employed, and the risk of harm mitigation is crucial; 5) Private Enterprises, Labor, and Employment ties to how businesses manage compliance for AI technologies; 6) Academic and Research Institutions are included due to provisions promoting equitable access for universities; 7) International Cooperation and Standards is relevant when discussing compliance across jurisdictions; 8) Nonprofits and NGOs, where applicable, can be influenced by these AI governance frameworks; while 9) Hybrid, Emerging, and Unclassified captures potential intersections of AI with new domains. However, the primary emphasis in the text is on government regulation and safety protocols rather than specific impacts on individual sectors. Therefore, while all sectors could be considered, the strongest relevance is primarily found in the Government Agencies and Public Services sector.
Keywords (occurrence): artificial intelligence (39)
Description: An act to add Chapter 25 (commencing with Section 22756) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Collection: Legislation
Status date: May 21, 2024
Status: Engrossed
Primary sponsor: Rebecca Bauer-Kahan
(sole sponsor)
Last action: Read third time and amended. Ordered to second reading. (Aug. 28, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text primarily concerns automated decision tools which utilize artificial intelligence. It highlights requirements for impact assessments related to AI systems and addresses algorithmic discrimination, consumer rights concerning automated decisions, and establishes regulations to mitigate potential negative ramifications of AI decisions. The emphasis on algorithmic discrimination and consumer protections links strongly to Social Impact, while discussions of impact assessments and data management relate directly to Data Governance. System Integrity is relevant to the requirement of transparency and the ability for users to correct data used in decision-making. Robustness is addressed through guidelines for impact assessments and performance evaluations of AI systems. Overall, the legislation significantly pertains to issues surrounding AI’s societal implications, governance of data handling in AI applications, maintaining system integrity, and ensuring robustness during the deployment of AI tools. Therefore, I assign high relevance to all categories due to their extensive implications.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)
The text outlines provisions related to the deployment of automated decision tools across a range of critical sectors such as government services, healthcare, housing, and employment. The implications for consumer rights and the regulation of AI in these diverse applications show strong relevance to Government Agencies and Public Services and Healthcare. It addresses protections in the context of algorithmic discrimination, indicating potential implications for the Judicial System as well due to the enforcement mechanisms described. The broader impacts on labor and employment processes highlight relevance to Private Enterprises, Labor, and Employment. The provisions may also carry implications for Academic and Research Institutions, as they require evaluations of AI tools that can influence educational settings. The text thus reflects significance across multiple sectors due to the multifaceted use of AI in decision-making processes. Accordingly, I assign high relevance primarily to Government Agencies and Public Services, Healthcare, and Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (5), machine learning (1), automated (89)
Description: To provide for Federal civilian agency laboratory development for testing and certification of artificial intelligence for civilian agency use, and for other purposes.
Collection: Legislation
Status date: July 15, 2024
Status: Introduced
Primary sponsor: Sheila Jackson-Lee
(sole sponsor)
Last action: Referred to the Committee on Homeland Security, and in addition to the Committee on Oversight and Accountability, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (July 15, 2024)
System Integrity
Data Robustness (see reasoning)
The text explicitly mentions 'artificial intelligence' in the context of Federal civilian agency use, establishing direct relevance to AI. Given that the legislation focuses on testing and certification, it implies a concern for the integrity and performance of AI systems used in government agencies. However, without specific details, the extent of its relevance to the broader impacts, governance, and robustness of AI systems cannot be thoroughly assessed. The title suggests considerations of both social and governance aspects, primarily focusing on System Integrity and potentially Robustness.
Sector:
Government Agencies and Public Services (see reasoning)
The text pertains to Federal civilian agencies, indicating that it is closely related to the use and regulation of AI by government entities. The mention of laboratory development implies a focus on ensuring that AI meets certain standards and regulations necessary for public services. However, without specific applications or implications mentioned, the relevance to the broader governmental and public service sector remains limited, making it moderately relevant.
Keywords (occurrence): artificial intelligence (15), automated (2)
Description: Enacts the New York privacy act to require companies to disclose their methods of de-identifying personal information, to place special safeguards around data sharing and to allow consumers to obtain the names of all entities with whom their information is shared.
Collection: Legislation
Status date: June 8, 2023
Status: Engrossed
Primary sponsor: Kevin Thomas
(11 total sponsors)
Last action: referred to consumer affairs and protection (June 3, 2024)
Societal Impact
Data Governance (see reasoning)
The New York privacy act primarily addresses data governance by requiring companies to improve transparency in how they manage personal data, including de-identification methods, consumer rights regarding their data, and penalties for violations. It implicitly touches upon social impact as it emphasizes consumer rights, privacy as a fundamental right, and potential harms from opaque data processing policies. However, it does not specifically focus on system integrity or robustness, as these concepts pertain more to the technical implementation and standards of AI systems, which are less emphasized in the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation has clear implications for data governance, as it directly regulates the collection, processing, and sharing of personal data, ensuring that entities handle consumer data responsibly. It is also relevant to Government Agencies and Public Services because it involves the ways in which government entities may manage and protect personal data, and to Private Enterprises, Labor, and Employment given the obligations it places on companies that handle consumer data. There are fewer direct implications for the other sectors, as the text does not address specific use cases in healthcare, judicial systems, or political contexts; those sectors therefore receive lower scores.
Keywords (occurrence): automated (1)