4163 results:
Description: Enacts the "political artificial intelligence disclaimer (PAID) act"; requires political communications that use synthetic media to disclose that they were created with the assistance of artificial intelligence; requires committees that use synthetic media to maintain records of such usage.
Collection: Legislation
Status date: Jan. 17, 2025
Status: Introduced
Primary sponsor: Kevin Parker
(sole sponsor)
Last action: REFERRED TO ELECTIONS (Jan. 17, 2025)
Societal Impact
Data Governance (see reasoning)
The text specifically addresses the impact of AI in the context of political communications, especially focusing on synthetic media and the necessity for disclosure regarding AI assistance. This touches on issues of accountability and transparency in AI use in political discourse, which are key elements of the Social Impact category. Given its focus on ensuring that political communications made with AI tools do not mislead voters, the relevance to Social Impact is extremely high. It also indirectly relates to Data Governance as it implies the management of records related to the use of AI in political contexts, but its primary emphasis is clearly on social implications. Although it discusses some aspects of oversight and record-keeping, it does not touch directly on system integrity or robustness in a significant way. Therefore, only Social Impact is rated as highly relevant, while Data Governance is considered moderately relevant due to its connection to record-keeping, though it lacks a direct focus on data management issues.
Sector:
Politics and Elections (see reasoning)
The text deals specifically with the regulation of synthetic media in political communications, indicating a clear tie to the politics and elections sector. The need for disclosure about the use of artificial intelligence highlights the efforts to ensure transparency in political processes, making it extremely relevant to the political sector. It does not pertain to government agencies, the judicial system, healthcare, or other sectors because its primary focus lies within the political domain. Thus, it receives a high score in the Politics and Elections sector. Other sectors do not apply as they do not involve direct relevance to the AI's use in political contexts.
Keywords (occurrence): artificial intelligence (3) automated (1) synthetic media (5)
Description: Requires corporations, organizations, or individuals engaging in commercial transactions or trade practices to clearly and conspicuously notify consumers when the consumer is interacting with an artificial intelligence chatbot or other technology capable of mimicking human behaviors. Authorizes private rights of action. Establishes statutory penalties.
Collection: Legislation
Status date: Jan. 17, 2025
Status: Introduced
Primary sponsor: Jarrett Keohokalole
(8 total sponsors)
Last action: Passed First Reading. (Jan. 17, 2025)
Societal Impact
System Integrity (see reasoning)
The text explicitly addresses several social impacts of AI, particularly concerning consumer protection and transparency in interactions involving AI chatbots. It highlights the risks of deception that arise when consumers unknowingly interact with chatbots, including the potential for misinformation and manipulation. This directly relates to the category of Social Impact, as it emphasizes accountability for the outputs of AI systems and consumer rights related to AI interactions. The requirement for transparency also affects public trust in technology. In terms of Data Governance, there is limited relevance here; while the text discusses the management of consumer expectations around AI chatbots, it does not delve into data management issues such as data accuracy or ownership. The text touches on System Integrity by mentioning the necessity for regulation ensuring informed consumer interaction, but it does not specifically address security or oversight measures. Finally, Robustness is less relevant as there are no benchmarks or performance metrics discussed for AI systems. Overall, the primary focus is on the social responsibility surrounding the use of AI, particularly chatbots, and the implications for consumer rights.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text is highly relevant to the sector of Private Enterprises, Labor, and Employment, as it dictates how businesses must inform consumers about interactions with AI chatbots during commercial transactions, thus affecting business practices. It indirectly touches on Government Agencies and Public Services, as consumer notifications may influence how government services communicate with constituents regarding AI usage. However, it does not explicitly address the use of AI in judicial contexts or healthcare settings, which would be relevant for the Judicial System and Healthcare sectors. The text does not address educational contexts enough to classify under Academic and Research Institutions. International Cooperation and Standards, Nonprofits and NGOs, and Hybrid or Emerging sectors have no strong relevance here. Therefore, the legislation largely sits within the nexus of consumer protection in private enterprise.
Keywords (occurrence): artificial intelligence (5) chatbot (10)
Description: Digital Content Authenticity and Transparency Act established; civil penalty. Requires a developer of an artificial intelligence system or service to apply provenance data to synthetic digital content that is generated by such developer's generative artificial intelligence system or service and requires a developer to make a provenance application tool and a provenance reader available to the public. The bill requires a controller of an online service, product, or feature to retain any availa...
Collection: Legislation
Status date: Jan. 16, 2025
Status: Introduced
Primary sponsor: Adam Ebbin
(sole sponsor)
Last action: Referred to Committee on General Laws and Technology (Jan. 16, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text clearly addresses legislation related to artificial intelligence, particularly focusing on requirements for developers of generative AI systems to apply provenance data to synthetic content. This directly relates to accountability and consumer protection, aligning with the Social Impact category. It discusses measures to ensure transparency in AI-generated content, addressing potential harms related to misinformation and public trust. Furthermore, it deals with the management of data, particularly provenance data, which ties into Data Governance. The mention of maintaining accuracy and transparency falls closely under System Integrity, as well. Lastly, the criteria for evaluating AI systems and the emphasis on compliance align with Robustness. Overall, the legislation covers themes critical to all four categories, albeit with a stronger focus on Social Impact, Data Governance, and System Integrity, which concern ethical implications and data management of AI outputs.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation significantly impacts the use of AI in digital content creation and transparency, which is particularly relevant to Private Enterprises, Labor, and Employment as it affects how businesses, especially those creating or managing AI-generated content, will operate. The requirements laid out for developers make it clear that it will affect commercial practices and potentially the labor market related to content creation and management. This legislation might not directly address healthcare, the judicial system, or nonprofit sectors, making those categories less relevant. It may have indirect implications for Government Agencies and Public Services given the call for public availability of tools and data, but these are less pronounced than in the private enterprise context. Overall, the strongest sector involvement appears to be in Private Enterprises, given the commercial focus of the legislation.
Keywords (occurrence): artificial intelligence (20) machine learning (2) foundation model (2)
Description: Establishes the New York workforce stabilization act; requires certain businesses to conduct artificial intelligence impact assessments on the application and use of such artificial intelligence and to submit such impact assessments to the department of labor prior to the implementation of the artificial intelligence; establishes a surcharge on certain corporations that use artificial intelligence or data mining or have greater than fifteen employees displaced by artificial intelligence of a ...
Collection: Legislation
Status date: Jan. 14, 2025
Status: Introduced
Primary sponsor: Michelle Hinchey
(3 total sponsors)
Last action: REFERRED TO LABOR (Jan. 14, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text outlines specific legislative measures related to the conduct of artificial intelligence impact assessments and the imposition of surcharges on corporations that utilize AI in ways that might displace workers or involve data mining. The focus on accountability, displacement assessments, and the potential psychological, material, or social harm of AI applications heavily relates to the 'Social Impact' category. The requirement for data handling and privacy considerations, as well as control over sensitive data, aligns closely with 'Data Governance', thus enhancing its relevance. 'System Integrity' receives a moderate score due to the mention of transparency through impact assessments, but it does not address broader security or oversight mandates. 'Robustness' is less relevant here, given that the focus is primarily on workforce impacts rather than performance benchmarks or certifications.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation significantly pertains to 'Private Enterprises, Labor, and Employment' as it regulates how businesses must assess and report AI's impact on employees and organizational structure, particularly in respect to worker displacement. The measures in this text are not directed toward 'Politics and Elections', 'Judicial System', 'Healthcare', or others. The 'Government Agencies and Public Services' sector relates slightly due to the involvement of the Department of Labor in overseeing the implementation of these policies, but it is not the primary focus. Given the implications for business operations and labor markets, 'Private Enterprises, Labor, and Employment' scores the highest, while other sectors receive minimal relevance.
Keywords (occurrence): artificial intelligence (12)
Description: Artificial Intelligence Transparency Act established. Requires developers of generative artificial intelligence systems made available in the Commonwealth to ensure that any generative artificial intelligence system that produces audio, images, text, or video content includes on such AI-generated content a clear and conspicuous disclosure that meets certain requirements specified in the bill. The bill also requires developers of generative artificial intelligence systems to implement reasonab...
Collection: Legislation
Status date: Jan. 11, 2025
Status: Introduced
Primary sponsor: Rozia Henson
(sole sponsor)
Last action: Fiscal Impact Statement from Department of Planning and Budget (HB2554) (Jan. 22, 2025)
Societal Impact
Data Governance (see reasoning)
The AI-related portions of the text explicitly discuss the requirement for developers of generative AI systems to ensure transparency and accountability by clearly disclosing when content is generated by AI. This addresses issues of consumer protection and the potential for misinformation, which directly relates to societal impacts of AI. The legislation also implies accountability for outputs of AI systems, which further aligns with challenges pertaining to social trust and the influence of AI-generated content on public discourse. Therefore, the Social Impact category is rated as very relevant. The text also emphasizes the importance of accurate disclosures and could relate to data governance indirectly, but mainly focuses on societal implications rather than on the secure management of data itself. System Integrity and Robustness are less emphasized, as the text does not delve into security measures, human oversight, or performance benchmarks; therefore both categories score lower.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses the implications of generative AI for consumer protection and misinformation, which could be relevant across several sectors. Politics and Elections is not applicable since there is no mention of AI use in elections. Government Agencies and Public Services and Healthcare may be affected indirectly but are not directly addressed. The impact of AI on Private Enterprises is also notable given the implications for commercial practices. Academic and Research Institutions might be considered because of the transparency required of AI, but the text focuses more on commercial than educational aspects. These indirect references do not establish strong relevance for those sectors in the context of the legislation.
Keywords (occurrence): artificial intelligence (25) foundation model (1) chatbot (2)
Description: Artificial Intelligence Transparency Act established. Requires developers of generative artificial intelligence systems made available in the Commonwealth to ensure that any generative artificial intelligence system that produces audio, images, text, or video content includes on such AI-generated content a clear and conspicuous disclosure that meets certain requirements specified in the bill. The bill also requires developers of generative artificial intelligence systems to implement reasonab...
Collection: Legislation
Status date: Jan. 11, 2025
Status: Introduced
Primary sponsor: Rozia Henson
(sole sponsor)
Last action: Committee Referral Pending (Jan. 11, 2025)
Societal Impact
Data Governance (see reasoning)
The AI-related portions of the text explicitly discuss the requirement for developers of generative AI systems to ensure transparency and accountability by clearly disclosing when content is generated by AI. This addresses issues of consumer protection and the potential for misinformation, which directly relates to societal impacts of AI. The legislation also implies accountability for outputs of AI systems, which further aligns with challenges pertaining to social trust and the influence of AI-generated content on public discourse. Therefore, the Social Impact category is rated as very relevant. The text also emphasizes the importance of accurate disclosures and could relate to data governance indirectly, but mainly focuses on societal implications rather than on the secure management of data itself. System Integrity and Robustness are less emphasized, as the text does not delve into security measures, human oversight, or performance benchmarks; therefore both categories score lower.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses the implications of generative AI for consumer protection and misinformation, which could be relevant across several sectors. Politics and Elections is not applicable since there is no mention of AI use in elections. Government Agencies and Public Services and Healthcare may be affected indirectly but are not directly addressed. The impact of AI on Private Enterprises is also notable given the implications for commercial practices. Academic and Research Institutions might be considered because of the transparency required of AI, but the text focuses more on commercial than educational aspects. These indirect references do not establish strong relevance for those sectors in the context of the legislation.
Keywords (occurrence): artificial intelligence (25) foundation model (1) chatbot (2)
Description: An Act To Regulate The Operation Of Utility-type Vehicles (utvs) Or Side-by-sides On The Public County And Municipal Roads And Streets Within The State Of Mississippi; To Define Terms Used In This Act; To Require The Registration Of Utvs With The Department Of Revenue In The Same Manner As Passenger Motor Vehicles; To Authorize The Operation Of On County And Municipal Public Roads And Streets With Posted Speed Limit Of 55 Miles Per Hour Or Less; To Require Owners Of Utvs And Side-by-sides To ...
Collection: Legislation
Status date: Jan. 10, 2025
Status: Introduced
Primary sponsor: Steve Massengill
(sole sponsor)
Last action: Referred To Transportation;Ways and Means (Jan. 10, 2025)
The text primarily focuses on the regulation of utility-type vehicles (UTVs) and side-by-sides on public roads. While it includes some safety features that might relate to system integrity, it does not pertain to AI systems or their societal impact, data governance, or robustness in any significant way. AI-specific language such as 'autonomous vehicle' is mentioned briefly; however, it is not the focal point of the legislation. Therefore, relevance to the categories is minimal in all respects, ranging from slightly relevant to not relevant.
Sector: None (see reasoning)
The text does not explicitly involve any of the specified sectors. It does mention the operation and regulation of vehicles, potentially suggesting some relevance to Government Agencies and Public Services, but not to the extent that it falls under a significant legislative change or regulatory framework focused on AI in those areas. It thus rates very low on sector relevance.
Keywords (occurrence): automated (1) autonomous vehicle (5)
Description: Creates a state office of algorithmic innovation to set policies and standards to ensure algorithms are safe, effective, fair, and ethical, and that the state is conducive to promoting algorithmic innovation.
Collection: Legislation
Status date: Jan. 9, 2025
Status: Introduced
Primary sponsor: Jenifer Rajkumar
(4 total sponsors)
Last action: referred to science and technology (Jan. 9, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the creation of a state office tasked with setting policies and standards for algorithms, which directly relates to the social impact of AI. Ensuring that algorithms are safe, effective, fair, and ethical indicates a focus on accountability and the prevention of harm to individuals and society, which is a key aspect of social impact legislation. Additionally, the use of algorithms in decision-making raises concerns regarding fairness, bias, and discrimination, which are integral to the category's focus. The mention of auditing algorithms also supports the idea of measuring and managing the social implications of AI systems. Moreover, the establishment of a dedicated office indicates a commitment to ongoing oversight and regulation concerning the social implications of algorithmic technologies. Regarding data governance, while the text refers to the regulation of algorithmic use, it does not explicitly focus on data management practices or privacy concerns, making this category less relevant. With respect to system integrity, the focus on safe and effective algorithms suggests a degree of importance; however, actual implementations of human oversight or security measures are not mentioned. Robustness is moderately relevant as the establishment of standards implies an intention to create benchmarks for algorithm performance, but it is not the primary focus of the legislation. Thus, the emphasis on social accountability places high relevance on the social impact category, while other categories reflect varying degrees of relevance.
Sector:
Government Agencies and Public Services (see reasoning)
The text involves the regulation of algorithms within state governance, which aligns closely with the 'Government Agencies and Public Services' sector, as the creation of a state office suggests direct application of AI in public service contexts. The legislation may also have implications for transparency and accountability in the use of AI systems within government functions. There are less clear connections to sectors like Politics and Elections or the Judicial System, as the text does not specifically address these areas. While the regulation of algorithms may affect 'Private Enterprises, Labor, and Employment', it seems to be more focused on governmental oversight rather than direct implications on employment practices. Overall, the relevance to the 'Government Agencies and Public Services' sector stands out, with minimal relevance to others.
Keywords (occurrence): artificial intelligence (1) algorithm (1)
Description: To protect consumers in this state from the risks of algorithmic discrimination and unfair treatment posed by artificial intelligence.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Martin Looney
(24 total sponsors)
Last action: Referred to Joint Committee on General Law (Jan. 8, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly addresses algorithmic discrimination and unfair treatment associated with artificial intelligence, making it highly relevant to the Social Impact category, which encompasses issues of fairness and bias in AI systems. The legislation is aimed at consumer protection, particularly against the risks that AI may pose to individuals. This directly aligns with the category's focus on societal harm and fairness metrics. In terms of Data Governance, the text implies the need for accurate and fair AI algorithms to protect consumers but does not provide sufficient detail to warrant a higher score. It does not address data security or management practices explicitly, resulting in a lower score in this category. The System Integrity category is not applicable here, as there are no mentions of security measures or transparency standards related to AI systems within the text. Similarly, while some aspects of the legislation may hint at performance and compliance monitoring, the overarching focus is on safeguarding consumers from discrimination rather than robustness metrics, which results in a moderate relevance to Robustness. Overall, the most pertinent categorization is Social Impact due to the emphasis on accountability for AI systems and their treatment of consumers.
Sector:
Government Agencies and Public Services (see reasoning)
The text's primary focus on protecting consumers from algorithmic discrimination via the regulation of AI technologies suggests strong relevance to the Government Agencies and Public Services sector, as these regulations will likely impact how government agencies work with AI technologies to serve the public. However, it lacks specific mention of government operations or AI usage within these contexts, leading to a slightly lower relevance score. The legislation does not address aspects related to Politics and Elections or the Judicial System directly, consequently receiving lower scores in those sectors. Similarly, the text does not focus on healthcare, labor, or academic institutions, and while nonprofit concerns could be tangentially relevant, they are minimally addressed. Lastly, there's no significant mention of international cooperation, so that category is marked with the lowest score.
Keywords (occurrence): artificial intelligence (3)
Description: Imposes liability for misleading, incorrect, contradictory or harmful information to a user by a chatbot that results in financial loss or other demonstrable harm.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to consumer affairs and protection (Jan. 8, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly addresses the accountability and liability of proprietors who operate chatbots, focusing on the implications of misleading or harmful information generated by these AI systems. This is highly relevant to the Social Impact category, as it pertains to consumer protection, the responsibility for AI outputs, and the potential for harm caused by AI interactions. The text also relates to data governance due to the requirement for chatbots to provide accurate information and adhere to policies, but the primary focus on liability suggests a stronger connection to the social implications of AI use. System Integrity is touched upon in terms of human oversight of chatbot operations, particularly regarding the information provided. However, it lacks explicit mandates for technical methods or transparency standards, ultimately minimizing its relevance in this category. Robustness is not directly addressed as it doesn't delve into performance benchmarks or compliance with standards of AI systems. Overall, the text emphasizes social accountability resulting from AI technology use.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text addresses the liability and operational requirements of chatbots, placing it primarily within the sphere of Private Enterprises, Labor, and Employment, as it deals directly with businesses employing AI in consumer interactions and the implications of those roles. It has moderate relevance to Government Agencies and Public Services, particularly given the mention of liability and the potential impact on government entities utilizing chatbots. However, its focus is on business operations rather than direct regulations related to government services. There's minimal connection to the Judicial System, as while legal accountability is mentioned, it does not specifically pertain to judicial use. The text offers no significant insight into Healthcare, Academic Institutions, or Nonprofits, leading to low relevance scores in those areas. International Cooperation and Standards do not apply here, and the text does not fit within the Hybrid, Emerging, and Unclassified category.
Keywords (occurrence): artificial intelligence (1) chatbot (12)
Description: Establishes the position of chief artificial intelligence officer and such person's functions, powers and duties; including, but not limited to, developing statewide artificial intelligence policies and governance, coordinating the activities of any and all state departments, boards, commissions, agencies and authorities performing any functions using artificial intelligence tools; makes related provisions.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Kristen Gonzalez
(3 total sponsors)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (Jan. 8, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation establishes the role of a Chief Artificial Intelligence Officer, which pertains directly to governance, oversight, and the formulation of policies on the use of AI across state departments. It explicitly mentions the development of policies and guidelines related to AI, automated decision-making systems, and risk management related to the rights, liberties, and welfare of individuals. This clearly fits the 'Social Impact' category as it addresses consumer protections, potential psychological impacts, and AI's implications on discrimination and misinformation. The required oversight and governance also relate strongly to 'System Integrity' since it mandates human oversight and accountability for AI usage in the state's operations. Furthermore, as the legislation outlines the development of standards and procedures for the use of AI systems, it is highly relevant to 'Data Governance', ensuring that AI implementations comply with data privacy laws and mitigate risks of discrimination and misinformation. Lastly, the aspect of developing benchmarks and accountability structures aligns with 'Robustness', focusing on regulatory compliance and performance assessment of AI systems. Overall, the legislation is primarily focused on ensuring responsible AI usage and governance, making it very relevant to all categories, mainly Social Impact, System Integrity, and Data Governance.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
This legislation primarily addresses the use of AI within state government operations, including the establishment of standardized practices for AI usage, data privacy, and governance across state agencies. It explicitly mentions responsibilities that will impact government agencies and public services by coordinating AI activities and ensuring regulations are applied correctly. Because of its emphasis on the central role of AI in public services and governance, it is highly relevant to the 'Government Agencies and Public Services' sector. The role of determining the implications of AI and establishing best practices also suggests relevance to the 'Judicial System' sector, particularly regarding AI's roles in legal decision-making processes in the governmental context. While the mention of AI does not specifically address the other sectors directly, the implications of AI governance and its integration into public services place its relevance in the Government Agencies and Public Services sector above others.
Keywords (occurrence): artificial intelligence (36) machine learning (1) automated (12)
Description: To reauthorize wildlife habitat and conservation programs, and for other purposes.
Collection: Legislation
Status date: Dec. 23, 2024
Status: Passed
Primary sponsor: David Joyce
(13 total sponsors)
Last action: Became Public Law No: 118-159. (Dec. 23, 2024)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The legislation primarily focuses on military activities, personnel management, and the authorization of funds for various defense programs. However, it includes specific references to artificial intelligence, such as Section 221, which addresses defining the AI workforce, and Section 225, which discusses the duties related to AI models and technologies. Given that these references indicate a clear connection to AI's role in defense strategies and personnel requirements, several categories are relevant. The strongest relevance is to 'System Integrity' due to mandates for controlling AI systems, followed closely by 'Robustness' given the focus on AI performance benchmarks and compliance. 'Social Impact' is relevant but to a lesser degree, and 'Data Governance' has weak associations without explicit connections to data management.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation is primarily related to 'Government Agencies and Public Services' as it lays out provisions for military operations, funding, and the enhancement of defense capabilities. It directly involves the Department of Defense and military personnel, making it highly relevant. There are also associations with 'Private Enterprises, Labor, and Employment' concerning workforce definitions and AI integration, but these connections are less direct. Other sectors like Healthcare and Judiciary show little to no relevance.
Keywords (occurrence): artificial intelligence (136) machine learning (8) automated (32) large language model (1) algorithm (1)
Description: An act to add Chapter 22.7 (commencing with Section 22650) to Division 8 of the Business and Professions Code, to amend Section 3344 of the Civil Code, to add Article 2.5 (commencing with Section 1425) to Chapter 1 of Division 11 of the Evidence Code, and to add Chapter 9 (commencing with Section 540) to Title 13 of Part 1 of the Penal Code, relating to artificial intelligence technology.
Collection: Legislation
Status date: Dec. 2, 2024
Status: Introduced
Primary sponsor: Angelique Ashby
(sole sponsor)
Last action: From printer. May be acted upon on or after January 2. (Dec. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses various legal frameworks concerning artificial intelligence (AI) technology, particularly focusing on accountability, consumer protection regarding synthetic content, and implications for legal proceedings. This aligns well with the Social Impact category, as it addresses accountability and harm related to the misuse of AI technology. The Data Governance category is relevant due to mentions of consumer warnings and the handling of AI-generated synthetic content, which connects to data management and consent. System Integrity is also relevant since it highlights the necessity of judicial assessments of AI evidence, indicating concerns over security and control of AI systems. Robustness is less central as the text does not primarily address benchmarks for AI performance but rather legal definitions and implications. Overall, the relevance of Social Impact is strong due to provisions on misuse and consumer rights, while Data Governance and System Integrity are moderately relevant as they touch upon data management and legal standards for evidence, respectively.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text pertains primarily to the sectors of Government Agencies and Public Services, as it discusses legislation that directs state regulatory bodies on AI-related consumer rights and judicial practices. It indirectly touches on the Judicial System due to its focus on evidence and verification processes concerning AI. The discussion about consumer warnings and liabilities is particularly relevant to Private Enterprises, Labor, and Employment, as it relates to businesses dealing with AI technology. While there are implications for healthcare and academic institutions, these are much less pronounced, thus scoring lower. The legislation primarily focuses on governmental and consumer implications arising from AI technology, giving it a distinct connection to governmental functions and legal statutes.
Keywords (occurrence): artificial intelligence (14)
Description: An act to amend Section 38750 of the Vehicle Code, relating to autonomous vehicles.
Collection: Legislation
Status date: Dec. 2, 2024
Status: Introduced
Primary sponsor: Cecilia Aguiar-Curry
(sole sponsor)
Last action: From printer. May be heard in committee January 2. (Dec. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses on the specific legislation concerning the operation and regulation of autonomous vehicles, which directly relates to the development and implications of AI technologies. It addresses aspects such as definitions of autonomous technology and vehicles, requirements for testing, safety standards, and manufacturer responsibilities. The social impact may be relevant, as it touches on public safety and regulatory frameworks governing AI implementations in transportation, potentially affecting individuals' trust and interactions with these technologies. Data governance may also be relevant due to the mention of data collection protocols and compliance with privacy laws. System integrity is highly relevant since it discusses oversight and regulations ensuring the secure operation and accountability of autonomous vehicles. Robustness is somewhat less relevant since the bill mainly outlines operational standards without extensive benchmarks or performance metrics.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
This text is primarily relevant to the sector of Government Agencies and Public Services, as it outlines regulations that will govern the use of autonomous vehicles by the government and the processes established by the Department of Motor Vehicles. It also touches on the implications for private enterprises related to manufacturing and deploying autonomous vehicle technology. The judicial system could be slightly relevant, as regulations may touch on liability issues but the primary focus is not on legal adjudications. Healthcare and politics are not directly referenced or significantly implicated by this legislation. Academic and research institutions might have an indirect connection through research partnerships mentioned, but it is not a primary focus of this bill. International cooperation is not mentioned, making it less relevant.
Keywords (occurrence): automated (1) autonomous vehicle (26)
Description: Creates the Artificial Intelligence Systems Use in Health Insurance Act. Provides that the Department of Insurance's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Provides that any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive model...
Collection: Legislation
Status date: Nov. 25, 2024
Status: Introduced
Primary sponsor: Bob Morgan
(sole sponsor)
Last action: Referred to Rules Committee (Jan. 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly pertains to the use of AI systems within health insurance, directly addressing consumer impacts, oversight by regulatory bodies, and the need for accountability in decision-making processes involving AI. This clearly relates to the Social Impact category, as it addresses consumer protections against adverse outcomes based solely on AI determinations. Data Governance is highly relevant due to its focus on ensuring the accuracy and accountability of data used by insurers in AI systems, emphasizing the need for oversight of predictive models and algorithms. There is also a strong connection to System Integrity, as the legislation mandates human review of AI-driven decisions, ensuring transparency and control. Robustness is less relevant, as the text does not focus significantly on benchmarking AI performance or regulatory compliance assessments for AI outcomes.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The legislation specifically addresses the use of AI within the insurance sector, primarily focusing on health insurance practices. It establishes regulatory oversight for insurers' use of AI systems and predictive models, ensuring these practices adhere to fair standards impacting consumers. This makes it highly relevant to the healthcare sector, as it aims to protect patients and policyholders from adverse decisions made by AI systems. It is less relevant to sectors like Politics and Elections or International Cooperation and Standards, as there's no focus on political activities or global standards in AI regulation presented within the text.
Keywords (occurrence): artificial intelligence (3) machine learning (4) algorithm (1)
Description: Relating to the prosecution and punishment of the offense of unlawful production or distribution of certain sexually explicit media; increasing a criminal penalty.
Collection: Legislation
Status date: Nov. 21, 2024
Status: Introduced
Last action: Filed (Nov. 21, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This bill explicitly addresses the production and distribution of deep fake media, which directly relates to AI technologies such as machine learning and artificial intelligence. The relevance to social impact comes from the implications of deep fake media on individual consent, misinformation, and public trust, thus warranting a high score. Data governance is relevant due to the necessity of managing consent and accurate representation in AI-generated media. System integrity is applicable since it concerns the legality and ethical use of AI-generated content. Robustness receives a lower relevance as it primarily focuses on performance benchmarks, not specifically on the legal ramifications of deep fake technologies.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The bill primarily concerns the regulation of AI-generated deepfake media, making it significantly relevant to the Judicial System sector, particularly regarding how AI can affect judicial outcomes and the legal treatment of such media. While it does not directly address political campaigns, it could influence public perceptions in politics, which also ties to social impact. Government agencies may also need to enforce this legislation, giving it some relevance to Government Agencies and Public Services, but it is less relevant there than to the judicial system. Healthcare and the other listed sectors are not applicable, as they do not deal with deepfake technologies.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (1) deepfake (7)
Description: To Prohibit Deceptive And Fraudulent Deepfakes In Election Communications.
Collection: Legislation
Status date: Nov. 20, 2024
Status: Introduced
Primary sponsor: Andrew Collins
(sole sponsor)
Last action: Filed (Nov. 20, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation explicitly addresses the creation and distribution of deceptive and fraudulent deepfakes, particularly in the context of election communications, highlighting the social impact of such technologies on electoral integrity and misinformation. The proposed law aims to manage psychological and material harm caused by AI-generated content that misrepresents candidates, which is a direct engagement with issues of AI's effects on society, trust in democratic processes, and potential harm to reputations. It also proposes civil penalties, indicating accountability for developers or users of these technologies, further solidifying its relevance to social impact. Consequently, it is extremely relevant to the category of Social Impact. The legislation mentions synthetic media and generative adversarial networks, which are closely related to AI, thus bridging directly to concerns regarding fairness, bias, and the societal effects of algorithmically driven content creation. Its focus on preventing misinformation aligns heavily with the category’s broader description, warranting a very high score here.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The legislation primarily targets the use of deceptive and fraudulent deepfakes in the context of elections and political communications. As such, it is directly relevant to the sector of Politics and Elections due to its explicit focus on safeguarding electoral integrity against AI-generated misinformation. The requirement for clear disclosures and the imposition of civil penalties indicate an engagement with governance within electoral processes, thereby impacting how elections may be conducted and regulated using AI technologies. The mentions of civil penalties and enforcement mechanisms also tie into regulatory frameworks around elections. While there are some implications for Government Agencies and Public Services in terms of enforcement, the primary focus remains on electoral integrity, leading to a more moderate score for that category. As such, Politics and Elections is assigned a high score, while other sectors, including Government Agencies and Public Services, receive moderate relevance due to implications beyond elections.
Keywords (occurrence): artificial intelligence (1) deepfake (11) synthetic media (10)
Collection: Congressional Record
Status date: Nov. 18, 2024
Status: Issued
Source: Congress
Societal Impact
Data Governance
System Integrity (see reasoning)
The text contains references to artificial intelligence (AI) in the context of consumer protection from AI-enabled fraud and scams, as well as the promotion of AI technologies by the Department of Energy. The relevance of these aspects can be evaluated through the provided categories. For 'Social Impact', the text indicates a concern for AI's influence on consumers and the risks associated with AI technologies, justifying a score of 4. 'Data Governance' is moderately relevant, as the discussions around consumer protection imply an underlying need for secure data practices related to AI, leading to a score of 3. 'System Integrity' is again moderately relevant due to implications of ensuring AI systems don't enable fraud, meriting a score of 3. Lastly, 'Robustness' appears slightly relevant, as it relates to promoting the use of AI but lacks direct references to performance benchmarks, earning a score of 2.
Sector:
Government Agencies and Public Services (see reasoning)
The text references committee meetings that specifically tackle AI-related concerns, particularly within the context of commerce and energy, indicating a wide-ranging impact on 'Government Agencies and Public Services', which warrants a high relevance score of 4. 'Politics and Elections' does not feature directly in the provided text regarding AI, resulting in a score of 1. The text does not address the use of AI within 'Judicial System', 'Healthcare', or 'Academic and Research Institutions', each scoring a 1 as well. 'Private Enterprises, Labor, and Employment' has a slight connection due to the consumer protection aspect, leading to a score of 2. 'International Cooperation and Standards' is not addressed, resulting in a score of 1. While 'Nonprofits and NGOs' could relate slightly, it generally does not apply, hence a score of 1. Lastly, no hybrid or emerging sectors are discussed, also resulting in a score of 1.
Keywords (occurrence): artificial intelligence (1)
Collection: Congressional Record
Status date: Nov. 18, 2024
Status: Issued
Source: Congress
The text primarily focuses on the contributions of Roy Hansen in the context of public service and technology, emphasizing his efforts to integrate emergent AI technologies. Since the mention of AI is along the lines of improving governmental efficiency rather than addressing broader social impact, specific governance, or system integrity issues, the relevance to each category varies. The Social Impact category contains elements concerning broader societal effects of AI, which the text does not seemingly address in depth. Data Governance is similarly not applicable as the text does not discuss data privacy, accuracy, or bias in data sets. System Integrity factors are not addressed either, and while there is mention of improving services, there are no explicit references to benchmarks or standards. Robustness is minimally relevant as there's a vague connection to performance through technology advancements, but specifics are lacking. Overall, because the text focuses more on individual contributions and leadership in technology rather than systemic issues within AI legislation, the relevance is low across all categories.
Sector: None (see reasoning)
The text discusses Roy Hansen's involvement in public service through technology, including the integration of AI technologies. However, it largely remains anecdotal and personal, focusing on his career rather than concrete legislation or systemic analysis. There is no mention of how AI affects political processes or governance comprehensively. The Healthcare and Private Enterprises sectors are entirely irrelevant due to the lack of focus on healthcare applications or business-related AI impacts. The relevance to Government Agencies and Public Services is somewhat present due to the mention of technology use in government but remains focused on an individual rather than broader governmental policies. Judicial System, Academic and Research Institutions, Nonprofits and NGOs, International Cooperation and Standards, and Hybrid, Emerging, and Unclassified sectors show no relevance, as those elements are not discussed in the text. Acknowledging the brief focus on technology within government services, the score reflects this low-relevance context.
Keywords (occurrence): artificial intelligence (1)
Description: Establishes Artificial Intelligence Apprenticeship Program and artificial intelligence apprenticeship tax credit program.
Collection: Legislation
Status date: Nov. 18, 2024
Status: Introduced
Primary sponsor: Angela Mcknight
(sole sponsor)
Last action: Introduced in the Senate, Referred to Senate Labor Committee (Nov. 18, 2024)
Societal Impact (see reasoning)
The text establishes an Artificial Intelligence Apprenticeship Program focused on training individuals for roles in the AI industry. It references the 'artificial intelligence industry' multiple times, emphasizing the development of AI technologies and applications. Therefore, this legislation is highly relevant to the Social Impact category, as it aims to equip the workforce with the skills necessary for growth within AI sectors, potentially reducing unemployment and increasing opportunities. Regarding Data Governance, while the legislation touches on training in data analytics, it does not specifically address secure data practices or oversight. For System Integrity, there is no mention of security, transparency, or control mandates relating to AI systems. The Robustness category also seems not applicable, as the focus is primarily on apprenticeships rather than benchmarks or performance standards for AI development. Thus, only Social Impact is robustly relevant.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text focuses on establishing an apprenticeship program within the artificial intelligence sector. It is directly relevant to Private Enterprises, Labor, and Employment by addressing workforce development in AI, creating job opportunities, and incentivizing employers to train apprentices. It does not directly touch upon Government Agencies or the Judicial System, nor does it pertain to Healthcare, Academic Institutions, or International Cooperation explicitly in relation to AI. Therefore, its most significant relevance lies under Private Enterprises, Labor, and Employment. Other categories receive lower scores due to the lack of direct mention or implications.
Keywords (occurrence): artificial intelligence (28)