4180 results:


Description: An Act amending the act of December 17, 1968 (P.L.1224, No.387), known as the Unfair Trade Practices and Consumer Protection Law, further providing for definitions and for unlawful acts or practices and exclusions.
Collection: Legislation
Status date: Feb. 5, 2025
Status: Introduced
Primary sponsor: Craig Williams (7 total sponsors)
Last action: Referred to CONSUMER PROTECTION, TECHNOLOGY AND UTILITIES (Feb. 5, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily discusses amendments to the Unfair Trade Practices and Consumer Protection Law, focusing on the definitions and implications of AI in consumer protection. It notably defines 'artificial intelligence' and outlines unfair practices involving AI. Regarding 'Social Impact,' the text's provisions include regulations aimed at holding AI-driven businesses accountable, which are intended to protect consumers from deceptive practices, making the category very relevant. The 'Data Governance' category is also applicable, as the text concerns accurate definitions and compliance regarding AI-generated guarantees and policies, indicating responsibilities related to data usage in these contexts. 'System Integrity' is less applicable, since the text does not specifically address security measures or oversight beyond the definitions of unfair practices. 'Robustness' is not relevant, because the text does not cover benchmarks or performance standards for AI systems. Overall, Social Impact and Data Governance are significantly addressed, while System Integrity and Robustness show minimal to no relevance.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text's primary scope is consumer protection law as it affects general market practices involving AI. 'Politics and Elections' is not relevant, as there is no mention of electoral processes or campaigns. 'Government Agencies and Public Services' does not apply, because the text does not focus on government applications of AI. The 'Judicial System' category is unrelated; the text does not address legal frameworks within judicial practice. 'Healthcare' does not connect, as no health-related technologies are mentioned. The 'Private Enterprises, Labor, and Employment' sector is moderately relevant, since the text governs consumer interactions with businesses, particularly how they handle AI technologies and comply with non-deceptive practices; however, the focus is on consumer rights rather than employment contexts. 'Academic and Research Institutions' is not applicable, as the text does not reference those contexts. 'International Cooperation and Standards' is irrelevant, as there is no discussion of international agreements. 'Nonprofits and NGOs' is likewise irrelevant based on the content. Finally, 'Hybrid, Emerging, and Unclassified' is not a fit, as the text sits squarely within consumer law rather than an emerging sector. Thus, the most relevant sector is Private Enterprises, Labor, and Employment, though at a lower intensity given the focus on protecting consumer interactions with businesses that utilize AI.


Keywords (occurrence): artificial intelligence (2) machine learning (1) neural network (1) show keywords in context

Description: Creating an artificial intelligence grant program.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Michael Keaton (5 total sponsors)
Last action: First reading, referred to Appropriations. (Feb. 4, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the creation of an artificial intelligence grant program, focusing on economic development through innovative AI applications. Discussions of assessing risks associated with AI, ethical uses of AI, and the prioritization of small businesses all signal a concern for societal impacts that may emerge as AI becomes more integrated into industries; therefore, the 'Social Impact' category is very relevant. The text also discusses grant support related to using data to drive innovation, which aligns with some aspects of 'Data Governance,' although it primarily emphasizes economic development over data management. 'System Integrity' is relevant due to the discussion of ethical uses of AI, risk evaluation, and transparency requirements for the technologies in development. 'Robustness' is less directly emphasized, but the overarching goal of building strong, effective AI systems touches on the need for benchmarks and standards, so it is somewhat relevant, though less so than the other categories.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The text primarily pertains to economic development through the establishment of the artificial intelligence grant program, which directly supports innovation and job creation. Provisions for soliciting input from various stakeholders signal a comprehensive approach to fostering technological advancement, indicating engagement across sectors such as government and private enterprise. As such, there is a strong connection to 'Government Agencies and Public Services' and 'Private Enterprises, Labor, and Employment.' While healthcare applications are mentioned, the text does not specifically address healthcare legislation or policy, indicating a less direct relevance to that sector. The text involves many stakeholders and considerations, including civil rights and transparency, suggesting a general relevance to 'Nonprofits and NGOs' beyond the focal sectors.


Keywords (occurrence): artificial intelligence (26) machine learning (5) foundation model (1) show keywords in context

Description: To amend sections 1331.01, 1331.04, and 1331.16 and to enact sections 1331.05 and 1331.50 of the Revised Code to regulate the use of pricing algorithms.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Louis Blessing (2 total sponsors)
Last action: Introduced (Feb. 4, 2025)

Category:
Societal Impact (see reasoning)

The text primarily discusses the regulation of pricing algorithms, which incorporate artificial intelligence and machine learning techniques. This directly relates to how AI shapes business practices and pricing strategies, which can have social repercussions regarding fairness and bias in pricing decisions. The legislation does not focus explicitly on data governance, system integrity, or the establishment of performance benchmarks that would typically characterize robustness. Therefore, the most relevant category is Social Impact, given the implications for fairness and accountability in pricing decisions influenced by AI algorithms, with only limited relevance to the other categories.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

This text primarily pertains to the private sector's use of AI in business contexts, particularly pricing algorithms that can affect competitive practices. It emphasizes regulations that will shape how commercial entities set prices and interact with market data. It does not specifically address sectors like government agencies, healthcare, academia, or international standards; hence the relevance to the Private Enterprises, Labor, and Employment sector is moderate. There is no significant emphasis on political, legal, or institutional uses of AI, leading to lower relevance scores across those sectors.


Keywords (occurrence): artificial intelligence (1) algorithm (19) show keywords in context

Description: Artificial intelligence; Responsible Deployment of AI Systems Act; AI Council; AI Regulatory Sandbox Program; Artificial Intelligence Workforce Development Program; effective date.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Arturo Alonso-Sandoval (sole sponsor)
Last action: Authored by Representative Alonso-Sandoval (Feb. 3, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly focuses on the development, regulation, and responsible deployment of artificial intelligence systems. It outlines various evaluations, audits, and risk classifications that AI systems must undergo, thereby addressing potential societal impacts, risks to individuals, and calls for transparency. This directly aligns with aspects of Social Impact, especially concerning accountability, bias, and ethical use of AI. The sections on governance and oversight indicate a strong relevance to System Integrity as they require qualified oversight, documentation, and independent audits of AI systems, ensuring security and compliance with existing laws. Additionally, it touches on elements of Data Governance by requiring the identification of potential biases in AI data sets and compliance with data privacy laws. Robustness is also relevant since the bill mandates assessments and audits that function as benchmarks for AI systems' performance. Overall, Social Impact and System Integrity are the two most relevant categories, with Data Governance and Robustness following closely behind.


Sector:
Government Agencies and Public Services
Judicial System
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The text explicitly addresses the use and regulation of artificial intelligence across various sectors. It primarily relates to Government Agencies and Public Services, where AI systems are deployed, monitored, and evaluated according to the established guidelines. It also involves the Judicial System, as it touches on potential impacts on civil liberties and rights and emphasizes accountability in AI systems used in decision-making processes. The legislation is less tied to sectors such as Healthcare and Private Enterprises because it focuses on regulatory frameworks rather than on specific applications of AI within them. It could affect Academic and Research Institutions through the introduction of the Artificial Intelligence Workforce Development Program, which promotes AI-related education and training. However, given its broad scope regarding governance and oversight of AI systems and its implications across public services, its strongest alignment is with Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (13) deepfake (1) show keywords in context

Description: Student data; creating the Oklahoma Education and Workforce Statewide Longitudinal Data System. Effective date. Emergency.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Ally Seifried (sole sponsor)
Last action: Authored by Senator Seifried (Feb. 3, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

In this text, several key aspects related to AI are present, particularly in the mention of 'advanced analytics capabilities including, but not limited to, artificial intelligence, machine learning, forecasting, and data mining.' This indicates a direct involvement with AI systems in the data management process. The legislation outlines the setup of a data system aimed at leveraging AI technologies for improving education and workforce outcomes, reflecting a clear intent to consider social implications and governance around AI usage. Additionally, the focus on privacy and security enhances the relevance to both system integrity and data governance, particularly as it governs data access and management within the new system.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The legislation is highly relevant to the 'Government Agencies and Public Services' sector as it establishes a statewide data system that will be used by various governmental agencies involved in education and workforce development. Moreover, it pertains to the 'Academic and Research Institutions' sector, as the data-sharing agreements pave the way for researchers and educational stakeholders to utilize the system for analysis and improvement of educational outcomes. The structure for oversight and governance also indicates significant engagement with these sectors, focusing on optimizing public service through data integration and analysis.


Keywords (occurrence): artificial intelligence (1) machine learning (1) show keywords in context

Description: Schools; media literacy and cybersecurity to be taught in sixth, seventh, or eighth grades; State Department of Education to adopt curriculum standards; effective date.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Trish Ranson (sole sponsor)
Last action: Authored by Representative Ranson (Feb. 3, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses the implementation of media literacy and cybersecurity education in Oklahoma schools. The relevance to AI derives mainly from the specific mention of identifying deepfake images, videos, audio, and artificial intelligence as part of the curriculum, suggesting a focus on understanding and combating misinformation generated or propagated by AI technologies. Against this backdrop, the categories vary in applicability. Social Impact relates to the societal implications of misinformation and the ethical use of AI tools in media. Data Governance touches on protecting personal information and navigating digital environments. System Integrity aligns with the emphasis on teaching critical evaluation of media content, which bears on the need for reliable information sources. Robustness is less relevant, as the text discusses no benchmarks or performance requirements for AI systems.


Sector:
Academic and Research Institutions (see reasoning)

The sectors potentially affected by this legislation largely relate to academia, given its focus on educational standards and curriculum development for students. The relevance to Government Agencies is minimal, despite the involvement of the State Department of Education. Other sectors, such as Healthcare or the Judicial System, have no connection to the content of this text. Therefore, the scoring reflects a significant focus on Academic and Research Institutions, with only marginal consideration of the others.


Keywords (occurrence): deepfake (1) show keywords in context

Description: An Act amending Title 18 (Crimes and Offenses) of the Pennsylvania Consolidated Statutes, in forgery and fraudulent practices, providing for the offense of unauthorized dissemination of artificially generated impersonation of individual.
Collection: Legislation
Status date: Jan. 31, 2025
Status: Introduced
Primary sponsor: Robert Merski (9 total sponsors)
Last action: Referred to COMMUNICATIONS AND TECHNOLOGY (Jan. 31, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly addresses the unauthorized dissemination of artificially generated impersonations, which falls squarely within the scope of AI applications that have social implications, particularly concerning the misuse of AI-generated media. It highlights potential harm to individuals and considers consent, discrimination, and fraud through AI technologies. This affects societal trust and presents issues of bias and mental health, leading to a strong relevance to the 'Social Impact' category. The text also outlines definitions of artificial intelligence and how it relates to impersonation, which ties into aspects of data correctness and security, making 'Data Governance' relevant. However, issues pertaining specifically to system integrity or robust performance benchmarks of the AI systems do not explicitly emerge from the text, warranting lower relevance for those categories.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)

The legislation specifically addresses concerns regarding AI-generated impersonations, which can have significant implications for personal privacy and fraud. It might influence how AI is regulated in the public sphere and raise questions about employment practices related to digital impersonation, but it is not expressly focused on procedural applications within the political system or government. There is minimal reference to healthcare or judicial systems, lowering the relevance of those sectors. The business and nonprofit realms, however, could see implications, hence the moderate scoring. Because the text describes how the act applies generally rather than narrowing to one sector, 'Hybrid, Emerging, and Unclassified' serves as a catch-all for the overlapping themes.


Keywords (occurrence): artificial intelligence (1) machine learning (1) show keywords in context

Description: Establishes criteria for the use of automated employment decision tools; provides for enforcement for violations of such criteria.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: George Alvarez (sole sponsor)
Last action: referred to labor (Jan. 30, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text relates significantly to AI and its application in automated employment decision tools, particularly addressing issues of fairness and bias through disparate impact analysis. It also emphasizes accountability and regulations governing AI systems used in hiring processes. The relevance of each category can be assessed as follows:
1. Social Impact - The legislation's focus on reducing the potential negative impacts of automated employment decisions on protected classes makes it extremely relevant. It aims to protect individuals from discrimination arising from the use of biased AI systems in employment, scoring a 5.
2. Data Governance - While the text discusses the impact of automated tools on hiring, it does not explicitly emphasize data accuracy, management, or collection processes. However, understanding the data used in these systems is implicit in ensuring fairness; therefore, it scores a 3.
3. System Integrity - The section outlines necessary analyses and transparency related to the tools used, which implies a need for reliability and oversight within these automated systems. It lacks detailed measures for security and transparency beyond the analysis itself, hence a score of 4.
4. Robustness - The act centers primarily on the lawful use of AI tools in employment decisions and does not address benchmarks or performance certifications, so it scores a 2 for its lack of emphasis on auditing or performance standards.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The legislation specifically deals with automated tools used for employment decisions, making it particularly relevant to the Private Enterprises, Labor, and Employment sector. The mention of employment candidates and the criteria for decision-making directly indicates its applicability there. Here is how it scores for each sector:
1. Politics and Elections - Not related, as it does not address political campaigns or electoral processes; scores a 1.
2. Government Agencies and Public Services - The law does not discuss the use of AI by government entities for public services; scores a 1.
3. Judicial System - No mention of AI in legal contexts or the judicial process; scores a 1.
4. Healthcare - No reference to AI's application in healthcare settings; scores a 1.
5. Private Enterprises, Labor, and Employment - Directly addresses how AI is implemented in employment contexts; scores a 5.
6. Academic and Research Institutions - No references to educational or research uses of AI; scores a 1.
7. International Cooperation and Standards - No mention of international agreements or standards; scores a 1.
8. Nonprofits and NGOs - The text does not pertain to nonprofit use of AI; scores a 1.
9. Hybrid, Emerging, and Unclassified - The text fits squarely within employment practices and does not apply to hybrid or emerging sectors; scores a 1.


Keywords (occurrence): machine learning (1) automated (9) show keywords in context

Description: Prohibit the use of a deepfake to influence an election and to provide a penalty therefor.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Liz Larson (9 total sponsors)
Last action: First Reading Senate S.J. 137 (Jan. 30, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text specifically addresses the use of deepfakes, relating directly to social impact through its attempt to mitigate misinformation and potential harm to candidates from AI-generated content. It emphasizes the responsible use of technology, aiming to protect individuals and uphold the integrity of electoral processes. The specifics about legal penalties and defenses indicate strong relevance to legal frameworks, touching on accountability and protection against harm, which makes the bill central to social impact legislation. Data governance is somewhat relevant, as the bill concerns the integrity of the information being disseminated, but it does not directly address data management practices. System integrity is implicated insofar as the legislation concerns the ethical use of AI tools, though the primary focus remains the social implications and the integrity of election processes. Robustness has only marginal relevance and is overshadowed by the more pressing concerns about misinformation and its societal ramifications.


Sector:
Politics and Elections
Judicial System (see reasoning)

The text addresses deepfakes in the context of their influence on elections, falling squarely within the Politics and Elections sector. The implications of AI technology, specifically deepfakes, are discussed in a legislative context designed to regulate their use in the electoral process, aiming to protect candidates from manipulation and misinformation. Government Agencies and Public Services has slight relevance due to possible implications for enforcement by government entities, while the Judicial System pertains moderately because the bill outlines legal instruments for redress. There may be tangential connections to Private Enterprises, Labor, and Employment in terms of marketing or campaigning practices, but the connection is not strong. The text does not explicitly connect to other sectors such as Healthcare or Academic and Research Institutions, though it highlights the need for ethical norms around AI usage in public spheres, making it a clear fit for the Politics and Elections category.


Keywords (occurrence): artificial intelligence (1) deepfake (21) show keywords in context

Description: Restricts the use by an employer or an employment agency of electronic monitoring or an automated employment decision tool to screen a candidate or employee for an employment decision unless such tool has been the subject of an impact assessment within the last year; requires notice to employment candidates of the use of such tools; provides remedies for violations.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: George Alvarez (sole sponsor)
Last action: referred to labor (Jan. 30, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text directly addresses the use of automated employment decision tools, highlighting the need for impact assessments, data access, and accuracy related to these tools. It emphasizes standards of accountability in the use of such tools, which is inherently tied to social implications such as fairness and potential discrimination in employment practices; therefore, both the Social Impact and Data Governance categories are strongly relevant. The System Integrity and Robustness categories are less relevant, because the text, which concentrates on impact and data-handling aspects, does not set out overarching system integrity measures or performance benchmarks.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The legislation regulates the application of AI tools in employment decision-making, impacting the Private Enterprises sector directly due to its implications for employers and employment practices. It also has relevance to the Government Agencies and Public Services sector, as the guidelines could be used by government agencies when dealing with employment practices or public sector hiring. While there are aspects related to Academic and Research Institutions in terms of understanding bias and impact assessments, the primary focus is not on research-specific contexts but rather applied employment practices. Therefore, the highest relevance is for Private Enterprises and Government sectors.


Keywords (occurrence): artificial intelligence (1) automated (30) algorithm (1) show keywords in context

Description: Establishes requirements for the use of artificial intelligence, algorithm, or other software tools in utilization review and management; defines artificial intelligence.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Phara Souffrant Forrest (sole sponsor)
Last action: referred to insurance (Jan. 30, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the use of artificial intelligence in a healthcare context, specifically in utilization review and management. Its provisions on ethical guidelines, accountability for outputs, discrimination, and oversight of AI systems connect strongly to social impact and system integrity. It also includes requirements for handling patient information, which pertains to data governance. The robustness category is less applicable here, since the text focuses on the operational use of AI rather than on performance benchmarks for the technology.


Sector:
Healthcare (see reasoning)

The legislation specifically deals with the use of AI in the healthcare sector, outlining requirements for how AI tools should operate within health care service plans. It discusses ethical standards and accountability, making it highly relevant to healthcare. The text does not address sectors outside of healthcare, so it does not relate to politics, judicial systems, education, or nonprofits, and those sectors score low. Its implications, however, are profound for healthcare and the management of patient information.


Keywords (occurrence): artificial intelligence (9) algorithm (10) show keywords in context

Description: STATE AFFAIRS AND GOVERNMENT -- ARTIFICIAL INTELLIGENCE ACCOUNTABILITY ACT - Requires DOA provide inventory of all state agencies using artificial intelligence (AI); establishes a 13 member permanent commission to monitor the use of AI in state government and makes recommendations for state government policy and other decisions.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: John Lombardi (6 total sponsors)
Last action: Introduced, referred to House Innovation, Internet, & Technology (Jan. 22, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text encompasses legislation aimed at monitoring and regulating the use of artificial intelligence (AI) within state government. It highlights accountability by requiring an inventory of AI systems used across state agencies and establishes a commission to assess their impacts, ensuring they do not lead to unlawful discrimination or other societal harms. This aligns strongly with social impact, given the focus on fairness, discrimination, and the societal effects of AI systems. Data governance is also relevant, given the text's emphasis on data usage, security, and assessment procedures for AI systems. System integrity is significant due to the mandates for transparency and assessment of AI systems and their processes. Robustness is a consideration to a lesser extent: the establishment of benchmarks and policies for continuous assessment of AI performance is only indirectly referenced, though it is critical for ongoing compliance and improvement.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

This legislation is particularly relevant to the 'Government Agencies and Public Services' sector, as it establishes a framework for how artificial intelligence is used within state government, overseeing its implementation, effects, and compliance with ethical guidelines. It is also somewhat relevant to the 'Academic and Research Institutions' sector, considering the commission includes experts from academic backgrounds to inform best practices and guidelines. Other sectors like Politics and Elections, Healthcare, and Private Enterprises do not appear as directly relevant based on the text provided, making this legislation primarily focused on governmental use of AI.


Keywords (occurrence): artificial intelligence (28) machine learning (2) neural network (1) show keywords in context

Description: An Act relating to disclosure of election-related deepfakes; relating to use of artificial intelligence by state agencies; and relating to transfer of data about individuals between state agencies.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Shelley Hughes (sole sponsor)
Last action: REFERRED TO STATE AFFAIRS (Jan. 22, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text outlines legislation that pertains to the use of artificial intelligence in electoral contexts (specifically addressing deepfakes related to elections) and regulates data transfer among state agencies. This suggests significant social implications, highlighting accountability and the need for ethical standards in AI-generated media. Additionally, it emphasizes data governance aspects, including consent and the security of personal data when handling AI systems. Measures for system integrity, including oversight and assessments of AI systems used by state agencies, also comprise an important part of the legislation. Robustness is relevant here as well, but to a lesser extent, since the text does not explicitly discuss performance benchmarks or auditing for compliance. Therefore, the scores across categories vary: Social Impact is highly relevant due to the implications for public trust and electoral integrity; Data Governance is also highly relevant given the focus on personal data and consent; System Integrity retains significance as the legislation addresses oversight, but is slightly less emphasized here; Robustness has limited relevance since the text does not focus on performance benchmarks.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text addresses how artificial intelligence is applied in political contexts (deepfakes in elections) and within state agencies, making it highly relevant to both the Politics and Elections sector and the Government Agencies and Public Services sector. These legislative measures govern the use of AI by state agencies while directly influencing electoral integrity and transparency, hence the high relevance to these sectors. Other sectors, such as the Judicial System, Healthcare, Private Enterprises, Labor, and Employment, and Academic and Research Institutions, are not explicitly mentioned in the text, while International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are only marginally relevant given the lack of clear thematic connections. The scores reflect these connections: Politics and Elections and Government Agencies and Public Services are rated as highly relevant, while the other sectors received lower scores for lack of direct mention or applicability.


Keywords (occurrence): artificial intelligence (12) machine learning (1) deepfake (6) algorithm (1) show keywords in context

Description: Establishes and appropriates funds for a data and artificial intelligence governance and decision intelligence center and necessary positions to improve data quality and data sharing statewide.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Ikaika Hussey (6 total sponsors)
Last action: Referred to ECD, FIN, referral sheet 2 (Jan. 21, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly includes terms like 'data and artificial intelligence governance', 'machine learning', and 'decision intelligence', indicating a strong focus on the implications of AI on data quality and sharing. It discusses the responsible use of AI, which is closely linked to the social impact that AI systems can have on citizens, such as promoting transparency and efficiency in government operations, thus aligning strongly with the Social Impact category. It also emphasizes the need for accurate data management and governance of AI technologies across state agencies, which is related to Data Governance. There are provisions for system integrity such as secured data sharing and proper access control, which relate to ensuring the robustness and accountability of AI systems, thus aligning moderately with System Integrity. However, the text does not directly address benchmarks or performance evaluation of AI technologies, making the relevance to Robustness less pronounced. Overall, the text falls strongly within the Social Impact and Data Governance categories, with some relevance noted for System Integrity.


Sector:
Government Agencies and Public Services (see reasoning)

There is significant relevance of the text to the Government Agencies and Public Services sector, as it focuses on establishing a governance center directly linked to the use of AI by state agencies for improving data collection and sharing for public services. The emphasis on increasing citizen satisfaction and improving government performance aligns closely with this sector. Although there are mentions of potential impacts on other sectors, such as labor and public interest concerns from AI applications, the primary focus remains within the governmental context. Therefore, the Government Agencies and Public Services sector will receive a high score whereas the connections to Political and Elections, Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Cooperation, Nonprofits, or Hybrid sectors are less pronounced and not directly applicable to this text.


Keywords (occurrence): artificial intelligence (19) machine learning (3) show keywords in context

Description: Requires corporations, organizations, or individuals engaging in commercial transactions or trade practices to clearly and conspicuously notify consumers when the consumer is interacting with an artificial intelligence chatbot or other technology capable of mimicking human behaviors. Authorizes private rights of action. Establishes statutory penalties.
Collection: Legislation
Status date: Jan. 17, 2025
Status: Introduced
Primary sponsor: Jarrett Keohokalole (8 total sponsors)
Last action: The committee on CPN deferred the measure. (Feb. 7, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses several social impacts of AI, particularly concerning consumer protection and transparency in interactions involving AI chatbots. It highlights the risks of deception that arise when consumers unknowingly interact with chatbots, including the potential for misinformation and manipulation. This directly relates to the category of Social Impact, as it emphasizes accountability for the outputs of AI systems and consumer rights related to AI interactions. The requirement for transparency also affects public trust in technology. In terms of Data Governance, there is limited relevance here; while the text discusses the management of consumer expectations around AI chatbots, it does not delve into data management issues such as data accuracy or ownership. The text touches on System Integrity by mentioning the necessity for regulation ensuring informed consumer interaction, but it does not specifically address security or oversight measures. Finally, Robustness is less relevant as there are no benchmarks or performance metrics discussed for AI systems. Overall, the primary focus is on the social responsibility surrounding the use of AI, particularly chatbots, and the implications for consumer rights.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text is highly relevant to the sector of Private Enterprises, Labor, and Employment, as it dictates how businesses must inform consumers about interactions with AI chatbots during commercial transactions, thus affecting business practices. It indirectly touches on Government Agencies and Public Services, as consumer notifications may influence how government services communicate with constituents regarding AI usage. However, it does not explicitly address the use of AI in judicial contexts or healthcare settings, which would be relevant for the Judicial System and Healthcare sectors. The text does not address educational contexts enough to classify under Academic and Research Institutions. International Cooperation and Standards, Nonprofits and NGOs, and Hybrid or Emerging sectors show no strong relevance. Therefore, the legislation largely sits within the nexus of consumer protection in private enterprise.


Keywords (occurrence): artificial intelligence (5) chatbot (10) show keywords in context

Description: Digital Content Authenticity and Transparency Act established; civil penalty. Requires a developer of an artificial intelligence system or service to apply provenance data to synthetic digital content that is generated by such developer's generative artificial intelligence system or service and requires a developer to make a provenance application tool and a provenance reader available to the public. The bill requires a controller of an online service, product, or feature to retain any availa...
Collection: Legislation
Status date: Jan. 16, 2025
Status: Introduced
Primary sponsor: Adam Ebbin (sole sponsor)
Last action: Referred to Committee on General Laws and Technology (Jan. 16, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Robustness (see reasoning)

The text clearly addresses legislation related to artificial intelligence, particularly focusing on requirements for developers of generative AI systems to apply provenance data to synthetic content. This directly relates to accountability and consumer protection, aligning with the Social Impact category. It discusses measures to ensure transparency in AI-generated content, addressing potential harms related to misinformation and public trust. Furthermore, it deals with the management of data, particularly provenance data, which ties into Data Governance. The mention of maintaining accuracy and transparency falls closely under System Integrity, as well. Lastly, the criteria for evaluating AI systems and the emphasis on compliance align with Robustness. Overall, the legislation covers themes critical to all four categories, albeit with a stronger focus on Social Impact, Data Governance, and System Integrity, which concern ethical implications and data management of AI outputs.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation significantly impacts the use of AI in digital content creation and transparency, which is particularly relevant to Private Enterprises, Labor, and Employment as it affects how businesses, especially those creating or managing AI-generated content, will operate. The requirements laid out for developers make it clear that it will affect commercial practices and potentially the labor market related to content creation and management. This legislation might not directly address healthcare, the judicial system, or nonprofit sectors, making those categories less relevant. It may have indirect implications for Government Agencies and Public Services given the call for public availability of tools and data, but these are less pronounced than in the private enterprise context. Overall, the strongest sector involvement appears to be in Private Enterprises, given the commercial focus of the legislation.


Keywords (occurrence): artificial intelligence (20) machine learning (2) foundation model (2) show keywords in context

Description: Establishes the New York workforce stabilization act; requires certain businesses to conduct artificial intelligence impact assessments on the application and use of such artificial intelligence and to submit such impact assessments to the department of labor prior to the implementation of the artificial intelligence; establishes a surcharge on certain corporations that use artificial intelligence or data mining or have greater than fifteen employees displaced by artificial intelligence of a ...
Collection: Legislation
Status date: Jan. 14, 2025
Status: Introduced
Primary sponsor: Michelle Hinchey (3 total sponsors)
Last action: REFERRED TO LABOR (Jan. 14, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text outlines specific legislative measures related to the conduct of artificial intelligence impact assessments and the imposition of surcharges on corporations that utilize AI in ways that might displace workers or involve data mining. The focus on accountability, displacement assessments, and the potential psychological, material, or social harm of AI applications heavily relates to the 'Social Impact' category. The requirement for data handling and privacy considerations, as well as control over sensitive data, aligns closely with 'Data Governance', thus enhancing its relevance. 'System Integrity' receives a moderate score due to the mention of transparency through impact assessments, but it does not address broader security or oversight mandates. 'Robustness' is less relevant here, given that the focus is primarily on workforce impacts rather than performance benchmarks or certifications.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation significantly pertains to 'Private Enterprises, Labor, and Employment' as it regulates how businesses must assess and report AI's impact on employees and organizational structure, particularly in respect to worker displacement. The measures in this text are not directed toward 'Politics and Elections', 'Judicial System', 'Healthcare', or others. The 'Government Agencies and Public Services' sector relates slightly due to the involvement of the Department of Labor in overseeing the implementation of these policies, but it is not the primary focus. Given the implications for business operations and labor markets, 'Private Enterprises, Labor, and Employment' scores the highest, while other sectors receive minimal relevance.


Keywords (occurrence): artificial intelligence (12) show keywords in context

Description: Making supplemental transportation appropriations for the 2023-2025 fiscal biennium.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Marko Liias (3 total sponsors)
Last action: First reading, referred to Transportation. (Jan. 13, 2025)

Category: None (see reasoning)

The text primarily focuses on transportation funding, appropriations, and specific projects without a direct mention or substantial discussion about AI-related topics. The nature of the bill does not significantly engage with concepts regarding social impact of AI on individuals or society, data governance within AI systems, integrity of AI operations, or robustness benchmarks for AI systems. There are peripheral mentions of data and project planning, which do not center around AI technology, thus warranting low relevance scores across categories. Overall, the text is mostly concerned with traditional government budgetary processes and infrastructure.


Sector: None (see reasoning)

The text discusses transportation funding and appropriations without addressing specific applications of AI in governmental operations or public services. While there are references to data collection projects, they do not center on AI systems or practices; thus, it lacks the relevance to sectors mentioned. Overall, it does not connect fundamentally with the scope of these sectors, leading to low scores for all indicated areas.


Keywords (occurrence): automated (15) autonomous vehicle (2) show keywords in context

Description: Artificial Intelligence Transparency Act established. Requires developers of generative artificial intelligence systems made available in the Commonwealth to ensure that any generative artificial intelligence system that produces audio, images, text, or video content includes on such AI-generated content a clear and conspicuous disclosure that meets certain requirements specified in the bill. The bill also requires developers of generative artificial intelligence systems to implement reasonab...
Collection: Legislation
Status date: Jan. 11, 2025
Status: Introduced
Primary sponsor: Rozia Henson (sole sponsor)
Last action: Committee Referral Pending (Jan. 11, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The AI-related portions of the text explicitly discuss the requirement for developers of generative AI systems to ensure transparency and accountability by clearly disclosing when content is generated by AI. This addresses issues of consumer protection and the potential for misinformation, which directly relates to societal impacts of AI. The legislation also implies accountability for outputs of AI systems, which further aligns with challenges pertaining to social trust and the influence of AI-generated content on public discourse. Therefore, the Social Impact category is rated as very relevant. The text also emphasizes the importance of accurate disclosures and could relate to data governance indirectly, but mainly focuses on societal implications rather than on the secure management of data itself. System Integrity and Robustness are less emphasized, as the text does not delve into security measures, human oversight, or performance benchmarks; therefore both categories score lower.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily addresses the implications of generative AI in terms of consumer protection and misinformation which could be relevant across several sectors. Politics and Elections is not applicable since there's no mention of AI use in elections. However, Government Agencies and Public Services and Healthcare may be affected indirectly, but not directly considered. The impact of AI on Private Enterprises could also be noted due to implications on commercial practices. Academic and Research Institutions might be considered because of the transparency needed in AI, but the text focuses more on commercial than educational aspects. These non-direct references do not strongly emphasize their relevance in the context of the legislation.


Keywords (occurrence): artificial intelligence (25) foundation model (1) chatbot (2) show keywords in context

Description: Artificial Intelligence Transparency Act established. Requires developers of generative artificial intelligence systems made available in the Commonwealth to ensure that any generative artificial intelligence system that produces audio, images, text, or video content includes on such AI-generated content a clear and conspicuous disclosure that meets certain requirements specified in the bill. The bill also requires developers of generative artificial intelligence systems to implement reasonab...
Collection: Legislation
Status date: Jan. 11, 2025
Status: Introduced
Primary sponsor: Rozia Henson (sole sponsor)
Last action: Left in Communications, Technology and Innovation (Feb. 4, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The AI-related portions of the text explicitly discuss the requirement for developers of generative AI systems to ensure transparency and accountability by clearly disclosing when content is generated by AI. This addresses issues of consumer protection and the potential for misinformation, which directly relates to societal impacts of AI. The legislation also implies accountability for outputs of AI systems, which further aligns with challenges pertaining to social trust and the influence of AI-generated content on public discourse. Therefore, the Social Impact category is rated as very relevant. The text also emphasizes the importance of accurate disclosures and could relate to data governance indirectly, but mainly focuses on societal implications rather than on the secure management of data itself. System Integrity and Robustness are less emphasized, as the text does not delve into security measures, human oversight, or performance benchmarks; therefore both categories score lower.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily addresses the implications of generative AI in terms of consumer protection and misinformation which could be relevant across several sectors. Politics and Elections is not applicable since there's no mention of AI use in elections. However, Government Agencies and Public Services and Healthcare may be affected indirectly, but not directly considered. The impact of AI on Private Enterprises could also be noted due to implications on commercial practices. Academic and Research Institutions might be considered because of the transparency needed in AI, but the text focuses more on commercial than educational aspects. These non-direct references do not strongly emphasize their relevance in the context of the legislation.


Keywords (occurrence): artificial intelligence (25) foundation model (1) chatbot (2) show keywords in context