4827 results:
Description: An Act amending Titles 18 (Crimes and Offenses) and 61 (Prisons and Parole) of the Pennsylvania Consolidated Statutes, in sexual offenses, further providing for the offense of unlawful dissemination of intimate image; in minors, further providing for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor; and making editorial changes to replace references to the term "child pornography" with references to the term "child sexual abuse m...
Summary: This bill amends Pennsylvania laws regarding sexual offenses, enhancing penalties for unlawful dissemination of intimate images and redefining child pornography to "child sexual abuse material," addressing modern technology implications, including AI-generated content.
Collection: Legislation
Status date: June 10, 2024
Status: Engrossed
Primary sponsor: Tracy Pennycuick
(19 total sponsors)
Last action: Signed in Senate (Oct. 9, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation contains explicit references to 'Artificial Intelligence' and its role in the criminalization of the unlawful dissemination of intimate images and depictions, particularly in the context of sexually explicit material generated by AI. It defines terms that clarify the implications of AI technology in potentially harmful contexts and establishes legal consequences for its misuse. As such, the relevance to the Social Impact category is significant due to the societal issues associated with AI-generated intimate depictions. The role of accountability for developers and the implications for minors further bolster this relevance. Similarly, it pertains to Data Governance due to mentions of the need to manage the data used to generate such imagery responsibly. System Integrity also comes into play with measures for human oversight and security as it relates to accountability for AI systems being misused. Lastly, the discussion of defining standards for AI-generated content aligns with the Robustness category, although it is not as strongly detailed within the text. Therefore, I would assign high relevance to Social Impact (5) and Data Governance (4), moderate to System Integrity (3) and Robustness (3).
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
This legislation addresses the implications of AI in laws pertaining to sexual offenses, particularly in how AI technology can facilitate harmful behaviors like the unlawful dissemination of intimate images. This directly impacts the Politics and Elections sector due to discussions around the use of AI in potentially influencing electoral processes when targeting young individuals. It also indirectly impacts the Government Agencies and Public Services sector as it presents a need for regulation by government entities concerning AI and its application in law enforcement. While it mentions elements relevant to the Judicial System in terms of enforcement and legal definitions, it does not explicitly address judicial applications of AI. Sectors such as Healthcare, Private Enterprises, Labor, Education, and others are less relevant on the face of this text as they do not prominently feature discussions about AI applications. Overall, the most relevant sectors would be categorized as Politics and Elections (3) due to potential implications for governance and regulation, and Government Agencies and Public Services (4) for the necessity of oversight in applying this legislation. The Judicial System sees moderate relevance (3) due to enforcement aspects, while other sectors rank lower.
Keywords (occurrence): artificial intelligence (11) automated (1) show keywords in context
Description: This Act creates a new elections crime: use of deep fake technology to influence an election. Under this statute it would be a crime to distribute within 90 days of an election a deep fake that is an audio or visual depiction that has been manipulated or created with generative adversarial network techniques, with the intent of harming a party or candidate or otherwise deceiving voters. It is not a crime, nor is there a penalty, if the altered media contains a disclaimer stating This audio/v...
Summary: The bill amends Delaware election law to prohibit the distribution of deep fakes intended to mislead voters about candidates, establishing penalties and allowing for injunctive relief.
Collection: Legislation
Status date: June 27, 2024
Status: Enrolled
Primary sponsor: Cyndie Romer
(10 total sponsors)
Last action: Passed By House. Votes: 40 YES 1 NO (June 30, 2024)
Societal Impact (see reasoning)
The text primarily addresses the implications of deep fake technology in the context of elections. It outlines laws regarding the distribution of deep fakes that could mislead voters, highlighting accountability and consumer protection in electoral processes. This connection with misinformation, deception in public discourse, and implications for trust in democratic institutions aligns closely with the 'Social Impact' category. The legislation indirectly addresses potential biases arising from AI-generated content affecting political representation, which reinforces its relevance to addressing social impact. In contrast, the other categories, such as Data Governance, System Integrity, and Robustness, do not relate as strongly to the core focus of the legislation on election integrity and the social consequences of misinformation. The measures proposed do not delve into data handling practices, system security, or benchmarking the technologies used, which are the crux of the other categories, and therefore score lower on relevance.
Sector:
Politics and Elections
Judicial system (see reasoning)
The legislation specifically addresses how deep fake technology can be used in political contexts to influence election outcomes, making it extremely relevant to the 'Politics and Elections' sector. It establishes clear legal ramifications for the distribution of misleading media that could alter voter perception or election integrity. While the implications of AI in relation to public service delivery or nonprofit operations may tangentially touch on the legislation, the primary emphasis on election integrity and the role of misleading information directly aligns the text with the political sector. Consequently, sectors such as Government Agencies and Public Services or Nonprofits and NGOs do not receive higher scores, as the focus remains squarely on electoral processes and misinformation rather than broader social governance or nonprofit applications.
Keywords (occurrence): deepfake (11) synthetic media (5) show keywords in context
Description: Relating to the use of an automated employment decision tool by an employer to assess a job applicant's fitness for a position.
Summary: The bill regulates the use of automated employment decision tools by requiring bias audits and applicant notification, promoting fair hiring practices and preventing discrimination in Texas.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Salman Bhojani
(sole sponsor)
Last action: Filed (March 13, 2025)
Societal Impact
Data Governance (see reasoning)
The text discusses the use of an automated employment decision tool, which directly involves artificial intelligence and algorithmic processes in assessing job applicants. It addresses AI technologies and their impact on employment practices, and emphasizes fairness and bias audits, making it particularly relevant to the Social Impact category. Also, it addresses the auditing of the algorithms for biases, linking closely to Data Governance. The legislation does not address system integrity or robustness in terms of compliance or benchmarking but focuses on employment decisions, fairness, disclosure, and bias reduction in automated tools. Hence, it is most relevant to Social Impact and Data Governance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is substantially oriented toward the use of AI in hiring processes, which clearly ties into Private Enterprises, Labor, and Employment. It specifies how AI tools should be used by employers to evaluate job applicants, which directly engages with labor market regulations and corporate governance regarding fairness in hiring practices. While there could be a tangential connection to Government Agencies through the involvement of the Texas Workforce Commission for oversight, the primary relevance lies in the employment sector. Therefore, the strongest sector fit is with Private Enterprises, Labor, and Employment, with lesser relevance to Government Agencies.
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (9) algorithm (1) show keywords in context
Description: To provide for the establishment of a program to certify artificial intelligence software used in connection with producing agricultural products.
Summary: The Farm Tech Act establishes a certification program for artificial intelligence software used in agriculture, ensuring compliance with federal and state standards for safety and accuracy in agricultural production.
Collection: Legislation
Status date: Dec. 14, 2023
Status: Introduced
Primary sponsor: Randy Feenstra
(4 total sponsors)
Last action: Referred to the House Committee on Agriculture. (Dec. 14, 2023)
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is highly relevant to the category of Robustness because it establishes a program to certify artificial intelligence software used in agriculture, which implies a focus on performance benchmarks and compliance standards. The mention of adherence to the AI Risk Management Framework indicates an emphasis on operational robustness and safety in AI systems within agricultural applications. Similarly, it relates to System Integrity as certification inherently involves measures for ensuring the accuracy and reliability of AI software, fostering accountability and oversight. However, its relevance to Social Impact is limited because while the legislation addresses AI's application in agriculture, it does not focus on its broader societal effects or ethical implications. Data Governance is somewhat relevant as the certification process might involve data management concerns but is not the primary focus of the text. Overall, the text strongly emphasizes certification related to performance standards and operational integrity for specific AI applications.
Sector:
Government Agencies and Public Services (see reasoning)
The sector relevance of the text is primarily tied to the Agriculture sector because it explicitly addresses the use of artificial intelligence within agricultural practices. It emphasizes the certification of AI software in performing tasks related to agricultural products, which is critical for enhancing agricultural operations. While it might slightly touch upon regulatory concerns that could involve Government Agencies and Public Services, the primary focus remains on agriculture, giving less relevance to other sectors listed. Therefore, Agriculture should receive a high score, while other sectors receive lower scores due to their minimal impact or relevance to the text.
Keywords (occurrence): artificial intelligence (4) automated (1) show keywords in context
Description: Amends the Freedom of Information Act. Defines "automated request" as a request that a public body has a reasonable belief was drafted with the assistance of artificial intelligence or submitted without any specific, affirmative action taken by a human. Provides that a public body shall respond to an automated request within 5 business days after receipt and provide certain types of notice to the requester. Provides procedures for the requester to dispute having the request treated as an auto...
Summary: The bill amends the Freedom of Information Act in Illinois to define and establish procedures for "automated requests" submitted with AI assistance, requiring responses within five business days and allowing cost recovery for public bodies, enhancing transparency and accountability.
Collection: Legislation
Status date: Jan. 9, 2025
Status: Introduced
Primary sponsor: Daniel Didech
(sole sponsor)
Last action: Assigned to Executive Committee (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text outlines legislation related to the processing of automated requests that may involve the utilization of artificial intelligence (AI) in their drafting or submission. This shows a recognition of the impact AI has on public service operations, particularly in how public bodies handle requests for information. The explicit reference to 'artificial intelligence' indicates significant relevance to the Social Impact category, as it deals with accountability and new considerations in the context of AI's interaction with public transparency laws. In terms of Data Governance, the legislation appears to set procedures for how data (in this case, records) is managed concerning requests that may not be traditionally human-generated, which implies an indirect yet notable relevance. System Integrity is relevant due to the mention of mandated procedures for automated requests ensuring responses within specific timelines, thereby reinforcing security and accountability in data management. Robustness is less relevant as this legislation does not directly address benchmarks or auditing of AI systems, focusing instead on the procedural side of record requests that may involve AI. Overall, the categories of Social Impact and Data Governance show the most significant relevance due to their focus on societal implications and the management of public records in light of automated actions from AI. System Integrity receives a moderate score due to procedural considerations, though the bill does not address the security of AI systems in depth.
Sector:
Government Agencies and Public Services (see reasoning)
This text primarily concerns the legislation related to the handling of information requests by public bodies, which directly implicates Government Agencies and Public Services. Since the automated requests will involve submissions to government entities for public records, the relevance to the Government Agencies and Public Services sector is solid. While aspects like accountability resonate with Politics and Elections, this text does not directly address electoral processes, making its relevance to that sector slight. Therefore, the Government Agencies and Public Services sector is rated the highest due to the nature of the legislation affecting public body operations. Other sectors, such as Judicial System, Healthcare, or Private Enterprises, Labor, and Employment, do not seem to be directly affected by this legislation, showing minimal to no relevance.
Keywords (occurrence): artificial intelligence (2) automated (20) show keywords in context
Description: Making appropriations for the fiscal year 2025 for the maintenance of the departments, boards, commissions, institutions, and certain activities of the commonwealth, for interest, sinking fund, and serial bond requirements, and for certain permanent improvements
Summary: The bill appropriates funds for Massachusetts' fiscal year 2025 to maintain government operations, fulfill debt obligations, and support improvements, promoting equal opportunity in state employment practices.
Collection: Legislation
Status date: April 26, 2024
Status: Engrossed
Last action: Accompanied H4800 (July 19, 2024)
The text concerns appropriations for various governmental departments and institutions in the Commonwealth of Massachusetts for the fiscal year 2025. The document primarily outlines financial allocations without specifically focusing on the implications or applications of AI technologies or legislative matters concerning AI. There are references to nondiscrimination and equal opportunity policies, but these are more about equitable resource allocation rather than any direct impact on AI systems or their governance. Consequently, none of the categories are strongly relevant, as they require a direct connection to AI-related legislation.
Sector: None (see reasoning)
The text does not address any specific sector that would connect to AI applications or implications. It outlines financial allocations across various sectors like education, healthcare, and judiciary, but without a mention of AI systems or their regulation. Therefore, no connections to any of the predefined sectors can be established.
Keywords (occurrence): artificial intelligence (2) automated (2) synthetic media (5) show keywords in context
Description: An act to add Part 5.6 (commencing with Section 1520) to Division 2 of the Labor Code, relating to employment.
Summary: Senate Bill No. 7 mandates transparency and accountability for automated decision systems (ADS) in employment, requiring employers to notify workers about ADS usage, appeal processes, and data access rights. It aims to protect workers' rights and mitigate potential biases in AI-driven employment decisions.
Collection: Legislation
Status date: Dec. 2, 2024
Status: Introduced
Primary sponsor: Jerry McNerney
(3 total sponsors)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on RLS. (March 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This bill focuses on the governance of automated decision systems (ADS) specifically in the context of employment, which is a clear and direct application of artificial intelligence (AI). Given its emphasis on notification, transparency, accountability, and the rights of workers in relation to AI systems utilized in making employment decisions, the relevance to Social Impact is very high. Moreover, the employer's obligations around the management and oversight of ADS include strong elements of data governance relating to how data is handled, collected, and maintained. System Integrity is also significantly relevant as the bill ensures human oversight in decision-making processes when using ADS. The bill touches on performance expectations for AI applications in employment decisions, but this aspect is less emphasized than the others. For all these reasons, the scores reflect strong relevance in Social Impact and Data Governance, with notable relevance in System Integrity, and less for Robustness, which is not specifically addressed.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation primarily targets the employment sector, addressing how AI and automated decision systems affect workers' rights and employment decisions. It calls for notifications about these systems, access to data used in decisions, and a framework for appeals, all tailored to workers and their employment situations. While elements of government oversight and compliance mechanisms are present, they still fall under employment regulation rather than direct governance of public services or other sectors. Therefore, appropriate scores reflect strong relevance for Private Enterprises, Labor, and Employment, and moderate relevance for Government Agencies and Public Services due to oversight roles. The other sectors such as Politics, Judicial System, Healthcare, Academic, International Standards, or Nonprofits are less relevant to this specific text.
Keywords (occurrence): artificial intelligence (5) machine learning (1) automated (7) show keywords in context
Description: Relating to the use of an automated employment decision tool by a state agency to assess a job applicant's fitness for a position.
Summary: The bill regulates the use of automated employment decision tools by Texas state agencies, requiring applicant notification, transparency about assessment criteria, and measures to prevent discrimination.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Jose Menendez
(sole sponsor)
Last action: Filed (March 13, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly mentions the use of an automated employment decision tool that leverages algorithms and artificial intelligence. It discusses the definitions related to AI systems and algorithms, the disclosure requirements for applicants, and the emphasis on preventing discrimination through the mitigation of biases in the AI tools. This aligns closely with 'Social Impact,' as it pertains to how such tools may affect applicants and addresses concerns around bias and fairness in hiring processes, reflecting societal implications. 'Data Governance' is relevant as it involves the management of data in these employment decision tools, although less directly than with social impact. 'System Integrity' is relevant since the legislation addresses the deployment of AI tools with a focus on accountability and transparency. Finally, 'Robustness' is touched upon since the bill mentions measures to ensure that tools are free from bias, though this is not the primary focus. Overall, the text is highly significant with respect to the social impact on job applicants and to considerations of data governance, but is not as strong in the integrity and robustness aspects.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily pertains to the 'Private Enterprises, Labor, and Employment' sector as it relates to the use of an automated employment decision tool to assess job applicants and addresses the role of AI in employment practices. While there are broad implications for 'Government Agencies and Public Services' since it involves a state agency utilizing these tools, and there could be slight overlaps with issues in 'Judicial System' concerning discrimination complaints, the core of the text focuses on employment contexts. 'Academic and Research Institutions,' 'International Cooperation and Standards,' and 'Nonprofits and NGOs' are not directly relevant here. There is no significant component about politics or other sectors mentioned. Therefore, the primary, pertinent sector is clearly 'Private Enterprises, Labor, and Employment' with secondary relevance to 'Government Agencies and Public Services.'
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (9) algorithm (1) show keywords in context
Description: A BILL to be entitled an Act to amend Chapter 8 of Title 13 of the Official Code of Georgia Annotated, relating to illegal and void contracts generally, so as to prohibit certain agreements involving parallel pricing coordination as unenforceable contracts in general restraint of trade with respect to residential rental properties; to provide for a civil penalty; to provide for educational materials informing residents in this state of the provisions of this Act; to provide for statutory cons...
Summary: Senate Bill 318 prohibits agreements involving parallel pricing coordination among landlords for residential properties, deeming such contracts unenforceable. It imposes civil penalties and mandates educational outreach about these provisions.
Collection: Legislation
Status date: March 3, 2025
Status: Introduced
Primary sponsor: Nikki Merritt
(10 total sponsors)
Last action: Senate Hopper (March 3, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions the use of 'artificial intelligence' and 'machine learning' techniques in the context of analyzing data for rental price coordination. This indicates a significant relevance to AI, especially in the framework of algorithms that could influence economic behaviors and pricing within the real estate market. However, the implications for societal impact and potential harms related to fairness or discrimination are more indirect, reducing its relevance to certain categories like Social Impact. Data Governance is moderately relevant, as the bill does not address data management or privacy directly but does involve data collection. System Integrity and Robustness are less relevant as the focus here is primarily on trade practices and not on systems' security or performance benchmarks.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is related to the real estate sector and its regulations, primarily focusing on trade practices and agreements surrounding rental properties. Therefore, it has a specific relevance to Private Enterprises, Labor, and Employment regarding how AI tools might be deployed in pricing strategies within this sector. However, there is no direct reference to government agencies, the healthcare sector, or others, which diminishes the relevance to other areas. The text does address the coordination functions performed by landlords, linking it specifically to the real estate environment, which is critically tied to economic practices rather than broader public service or judicial topics.
Keywords (occurrence): artificial intelligence (1) machine learning (1) show keywords in context
Description: An act to add Section 12100.1 to the Public Contract Code, relating to public contracts.
Summary: The bill mandates the California Department of Technology to develop procurement standards for automated decision systems (ADS), focusing on risk assessment, equity, and incident monitoring before state agencies can procure or contract for ADS.
Collection: Legislation
Status date: Aug. 31, 2024
Status: Enrolled
Primary sponsor: Steve Padilla
(3 total sponsors)
Last action: Enrolled and presented to the Governor at 3 p.m. (Sept. 11, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text revolves around the procurement and risk management standards for automated decision systems (ADS) or automated decision tools (ADT) used by state agencies in California. It explicitly mentions the development of regulations concerning artificial intelligence (AI) and details about the risk management for AI systems in public contracts. This directly taps into issues relating to social impacts of AI systems on various critical sectors, making it very relevant to the category of Social Impact. Furthermore, there are clear processes described for data governance through requirements for risk assessment, security controls, and compliance with privacy laws, thus making it similarly relevant for Data Governance. The inclusion of clauses mandating risk assessments and monitoring for system integrity outlines the need for control over these AI systems, which positions this text strongly in the System Integrity category. The legislation also discusses ensuring that AI systems are effective and compliant with established standards, thus making it pertinent to the Robustness category as well. All these reasons lead us to conclude that the text warrants high relevance across all categories mentioned.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)
The text refers to the implications of automated decision systems in various social domains, including access to employment, education, and even housing. This indicates a substantial relevance to sectors such as Government Agencies and Public Services, where AI may be utilized in delivering public services and managing government operations. The clear delineation of AI roles in areas such as health care, criminal justice, and higher education also suggests relevance to those sectors, particularly as the implications of AI in these areas can be critical. However, the text does not explicitly touch on all sectors, leaving Healthcare, Private Enterprises, and International Cooperation as less relevant. Judicial System relevance is also moderate but based on its focus on how decisions derived from AI affect legal outcomes and due process. Overall, it’s fair to assign high relevance to Government Agencies and Public Services, and moderately high to Judicial System and Healthcare due to the broader implications.
Keywords (occurrence): artificial intelligence (5) machine learning (1) automated (5) show keywords in context
Description: An act to add Section 8592.51 to the Government Code, relating to state government.
Summary: Senate Bill No. 833 mandates human oversight for artificial intelligence systems managing California's critical infrastructure, requiring real-time monitoring, approval of actions, and annual assessments of AI systems' safety and compliance.
Collection: Legislation
Status date: Feb. 21, 2025
Status: Introduced
Primary sponsor: Jerry McNerney
(sole sponsor)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on RLS. (March 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly pertains to AI through its discussion of artificial intelligence and automated decision systems, particularly in the context of critical infrastructure and the necessity of human oversight. This focus directly relates to the impact of AI on society and governance, which aligns with the Social Impact category. The legislation stresses the importance of safety protocols, training, oversight, and risk management, highlighting societal concerns over AI's role in critical services. In terms of oversight and security, this legislation invokes elements related to System Integrity due to its emphasis on human oversight and assessment processes in AI systems. Additionally, data governance is relevant as the bill discusses compliance and risk evaluation, which inherently tie into data accuracy and management within AI systems, though this is less pronounced. However, the Robustness category is weaker as the legislation does not explicitly cover performance benchmarking or auditing compliance beyond the oversight mechanisms. Overall, the text has a strong emphasis on oversight and social considerations accompanied by civil safety protections. The focus on human oversight and risk management suggests it most closely aligns with Social Impact and System Integrity.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
This text outlines legislation relevant primarily to government agencies and public services by mandating procedures for the deployment of AI within state-operated critical infrastructure. Its discussion of human oversight and the role of state agencies in evaluating AI systems indicates a clear connection to Government Agencies and Public Services. There is also a minor connection to the Judicial System, considering the implications of AI governance on legal compliance measures, although this is secondary to its primary focus. There are non-direct mentions related to other sectors, but without the explicit context or focus needed for direct categorization. Given its specific aim at public infrastructure regulation and oversight mechanisms, the text is less relevant to other sectors such as Healthcare or Private Enterprises. Overall, the primary sector relevance lies with Government Agencies, while the text touches upon themes that could impact the Judicial System.
Keywords (occurrence): artificial intelligence (16) machine learning (1) automated (6) show keywords in context
Description: A BILL for an Act to create and enact a new section to chapter 15-11 and a new chapter to title 54 of the North Dakota Century Code, relating to the state information technology research center, advanced technology review committee, compute credits grant program, and advanced technology grant fund.
Summary: The bill establishes the North Dakota State Information Technology Research Center, creates an Advanced Technology Review Committee, and introduces grant programs to support research and development in advanced technologies like AI and cybersecurity.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Engrossed
Primary sponsor: Josh Christy
(10 total sponsors)
Last action: Received from House (Feb. 25, 2025)
Societal Impact
Data Robustness (see reasoning)
The text relates to the establishment of a state information technology research center focusing on advancements in various advanced technologies, including AI and machine learning. The development of a compute credits grant program also highlights the focus on funding initiatives that support advanced technology solutions, which explicitly includes AI applications. The legislation does not directly tackle issues like bias, accountability, or other societal impacts (that would fit under Social Impact), nor does it focus on data governance or the security of AI systems. However, it deals broadly with advancing the capabilities and oversight of new technologies, linking to the economic development aspects concerning AI integration and innovation within state services, thus having relevance across various categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
This bill predominantly addresses the function of state institutions in enhancing research, development, and application of advanced technologies, particularly within the context of the state’s information technology operations. It does not delve directly into political campaign uses of AI or specific legal implications in the judicial system. However, it engages government agencies and public services through the referencing of state research centers, making it relevant to that sector. There is a moderate aspect of private sector engagement through grant provisions and considerations for startups, giving it slight relevance in that area as well.
Keywords (occurrence): artificial intelligence (1) machine learning (3) show keywords in context
Description: Lower Healthcare Costs
Summary: The bill aims to reduce healthcare costs and increase price transparency in North Carolina, requiring providers to disclose pricing information to help consumers make informed choices and promote competition.
Collection: Legislation
Status date: March 27, 2025
Status: Engrossed
Primary sponsor: Jim Burgin
(22 total sponsors)
Last action: Engrossed (March 27, 2025)
The text is primarily focused on lowering healthcare costs and increasing price transparency within the healthcare system in North Carolina. While it discusses transparency and efficiency in healthcare delivery, it does not explicitly reference Artificial Intelligence (AI) or related technologies like algorithms, automated decision systems, or data processing techniques related to AI. Thus, the relevance to the categories of Social Impact, Data Governance, System Integrity, and Robustness is minimal. The discussion on healthcare transparency does suggest potential indirect applications of data analysis and management but does not align strongly with the specified AI categories.
Sector:
Healthcare (see reasoning)
The text is most relevant to the Healthcare sector, as it directly addresses issues pertaining to healthcare costs, price transparency, and regulatory measures for health service facilities. However, it does not mention AI specifically in relation to healthcare applications or technologies that could be categorized under the legislation affecting healthcare. Given the lack of AI-related content, it receives only moderate relevance in the healthcare context and does not extend to the other sectors outlined.
Keywords (occurrence): artificial intelligence (1) algorithm (1) show keywords in context
Description: Schools; subject matter standards; computer science courses; curriculum; rules; effective date; emergency.
Summary: House Bill 1304 mandates the inclusion of a computer science course in public school curriculums, ensures such courses count towards graduation requirements, and requires the State Department of Education to establish related rules.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Dick Lowe
(2 total sponsors)
Last action: Referred to Common Education (Feb. 4, 2025)
Societal Impact (see reasoning)
The text focuses on establishing standards for computer science courses within school curriculums. Since it explicitly refers to computer science, which is closely related to AI, there is relevant content about the impact of technology education in shaping future competencies in AI and its applications. However, the text does not directly address ethical or social implications of AI, data governance, or systemic integrity in AI development, which diminishes its relevance to those categories.
Sector:
Academic and Research Institutions (see reasoning)
The text primarily discusses educational standards and curricula related to computer science. While it supports the education sector's adaptation to emerging technologies, it does not directly pertain to how AI is regulated within the education framework or its implications on other sectors. Thus, while it touches on technology's role in education, its relevance to the specific sectors considered is minimal.
Keywords (occurrence): artificial intelligence (1) show keywords in context
Description: Requiring the National Center for School Mental Health at the University of Maryland School of Medicine, in consultation with the State Department of Education, to develop and publish a student technology and social media resource guide by the 2027-2028 school year; requiring the Governor to include an appropriation of $100,000 for fiscal year 2027 and $125,000 for fiscal years 2028 and 2029 in the annual budget bill; and requiring the Center to report on the expenditure of funds on or before...
Summary: House Bill 1316 mandates the creation of a youth-centric technology and social media resource guide for public school students, aimed at promoting safe and informed technology usage.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Sarah Wolek
(4 total sponsors)
Last action: Second Reading Passed with Amendments (March 15, 2025)
Societal Impact (see reasoning)
The text relates to technology and social media in an educational context, with a specific mention of 'Artificial Intelligence products'. However, it primarily focuses on resource development for students regarding technology use rather than on broader societal AI implications, data governance, system integrity, or benchmarking. Therefore, while AI is acknowledged, its relevance is limited to the context of mental health and education rather than extensive impacts or governance structures. Consequently, the categories of Social Impact and Data Governance appear relevant, but the others are not significant based on the text’s emphasis on education and mental health rather than on those broader factors. On multiple passes, the emphasis on technology usage in an educational setting rather than AI governance led to mid-level significance.
Sector:
Government Agencies and Public Services (see reasoning)
The text makes a pertinent mention of AI products within an educational resource guide, indicating a potential relevance to the education sector, especially as it seeks to educate students on safe technology usage. However, the focus is more on student resources and mental health rather than legislative actions that govern or regulate AI specifically. Overall, references to AI are limited to specific applications without a broad application across multiple areas of governance or legislation. Hence, the scoring reflects limited relevance in the context of sectors. Additionally, it indicates a marginal connection to educational guidelines rather than a robust policy framework for AI's integration.
Keywords (occurrence): algorithm (1)
Description: Relating to the disclosure and use of artificial intelligence.
Summary: The bill establishes regulations for transparency in the use of artificial intelligence, requiring users to provide explanations for AI decisions, maintain best practices, and avoid bias, effective September 1, 2025.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Salman Bhojani
(sole sponsor)
Last action: Filed (March 14, 2025)
Societal Impact
System Integrity (see reasoning)
This text pertains directly to artificial intelligence, specifically with a focus on transparency, accountability, and implications for society in the context of AI usage. Elements such as detecting AI usage, explaining AI-based decisions, and preventing bias and discrimination within AI systems show a strong emphasis on the social impact of AI technologies. Additionally, this legislation addresses best practices and standards, indicating a degree of concern for system integrity. However, it does not delve deeply into data governance or performance benchmarks, focusing more on the transparency and ethical usage of AI.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The legislation outlines the use of AI in various sectors including business, social media, and political advertising. This crosses into several relevant sectors: the regulation of AI in advertising (which ties into Politics and Elections), the provision of goods and services through AI (which relates to Government Agencies and Public Services), and concerns over bias impacting individuals, which can connect to the Judicial System and to healthcare implications. The clarity of AI's use and the related accountability to users are also crucial for Private Enterprises, Labor, and Employment to ensure ethical practices. Overall, it covers a broad array of impacts across multiple sectors, though not deeply rooted in any single area.
Keywords (occurrence): artificial intelligence (10) show keywords in context
Description: Crimes and punishments; sexual obscenity; making certain acts unlawful; effective date.
Summary: This bill criminalizes the nonconsensual dissemination of private sexual images and artificially generated sexual depictions in Oklahoma, establishing penalties for offenders and defining relevant terms. It aims to protect individuals from privacy violations.
Collection: Legislation
Status date: March 26, 2025
Status: Engrossed
Primary sponsor: Toni Hasenbeck
(3 total sponsors)
Last action: First Reading (March 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text deals with the regulation of artificial intelligence in relation to obscenity and nonconsensual dissemination of private sexual images, specifically focusing on the implications of generative artificial intelligence. This legislation addresses the societal impacts of using AI-generated content in harmful ways, which ties directly into the Social Impact category. There are references to responsible use and labeling of AI, implying a need for accountability and ethical considerations regarding outputs, aligning with the Data Governance category as well. Additionally, the obligations imposed on dissemination processes introduce an aspect of System Integrity, ensuring that there are mechanisms for oversight and compliance in AI-generated content. However, there is limited emphasis on performance benchmarks or robustness of AI systems, which makes the Robustness category less relevant.
Sector:
Judicial system (see reasoning)
The text primarily addresses the implications of AI in the context of obscenity and creation of synthetic content, particularly within the realm of personal rights and privacy concerns. It discusses the legal frameworks surrounding the dissemination of both real and AI-generated sexual depictions. This aligns directly with the Judicial System sector, as it introduces legal definitions and consequences related to the use of technology in crimes against individuals. Conversely, while the text likely has implications for the healthcare and public services sectors regarding the wellbeing of individuals, it does not directly address their specific regulatory frameworks, leading to lower relevance scores for those sectors. Overall, the core concerns appear most relevant to the Judicial System.
Keywords (occurrence): artificial intelligence (3) automated (1) show keywords in context
Description: An act relating to the use of synthetic media in elections
Summary: This bill requires the disclosure of deceptive synthetic media related to elections within 90 days of voting, aiming to protect electoral integrity and inform voters about manipulated content.
Collection: Legislation
Status date: March 20, 2025
Status: Engrossed
Primary sponsor: Ruth Hardy
(7 total sponsors)
Last action: Read 3rd time & passed (March 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily discusses the use of synthetic media in the electoral process, with a specific emphasis on the disclosure of deceptive and fraudulent uses of such media. This clearly pertains to the social impact of AI, as it addresses potential misinformation and the integrity of elections, which are significant societal issues. Data governance is relevant as the bill discusses the management of information integrity in synthetic media, emphasizing accountability and transparency in communications that can mislead voters. System integrity is involved due to the need for regulatory measures that ensure transparent disclosure concerning the use of AI-generated content in political communications, while robustness is less applicable as the text deals primarily with disclosure rather than performance metrics or benchmarks. Overall, the relevance of social impact is highest, followed closely by data governance and system integrity, while robustness is less emphasized.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text explicitly concerns the regulation of synthetic media within an electoral context, making it directly relevant to the Politics and Elections sector. It also relates to Government Agencies and Public Services, as the enforcement of this legislation would fall under state or federal authorities that oversee elections, ensuring compliance with these regulations. Other sectors such as Healthcare, Judicial System, and Private Enterprises, Labor, and Employment, don't apply in this context, as they do not deal with synthetic media in elections. The relevance of Politics and Elections is thus very high, followed by a moderate association with Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (1) synthetic media (16) show keywords in context
Description: AN ACT relating to health; requiring a public school to provide certain information relating to mental health to pupils; prohibiting certain uses of artificial intelligence in public schools; requiring that a pupil be allowed credit or promotion to the next higher grade despite absences from school in certain circumstances; deeming certain absences from school to be approved absences; imposing certain restrictions relating to the marketing and programming of artificial intelligence systems; p...
Summary: Assembly Bill 406 mandates public schools to provide mental health information and conducts regular assemblies on the subject, restricts AI use in counseling, and allows credit for students with mental health-related absences.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Jovan Jackson
(2 total sponsors)
Last action: Read first time. Referred to Committee on Education. To printer. (March 11, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text addresses specific prohibitions on the use of artificial intelligence in public schools, particularly in the context of mental health support from school counselors, psychologists, and social workers. This indicates a direct impact of AI on social structures and individual experiences within educational settings, which ties into the social impact category by highlighting concerns related to mental health and the autonomy of educational professionals in dealing with such issues. Additionally, it imposes restrictions on how AI can be marketed and programmed, suggesting a regulatory approach to AI's role and influence, further supporting its relevance to the social implications of AI technology. Therefore, the Social Impact category should be scored highly due to its explicit discussion on mental health and AI. The Data Governance category could be considered relevant due to the implications of managing sensitive mental health information in the context of AI usage, but it's not the text's primary focus. The System Integrity category is of moderate relevance as it discusses oversight of AI functions within the educational context but lacks extensive regulatory detail. The Robustness category is less relevant as the text does not address performance benchmarks or compliance frameworks for AI systems. Overall, the text predominantly emphasizes social impact, with moderate relevance to system integrity and slight relevance to data governance.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The text has a strong connection to the educational sector as it discusses the use of AI in public schools and the implications for mental health services provided to students. It clearly specifies limitations on AI's application within educational institutions, which ties closely to regulations affecting school operations. The relevance to politics and elections is low, as there is no mention of AI's role in political campaigns or electoral processes. The Government Agencies and Public Services category holds moderate importance as the bill shapes AI regulation within public education systems. The remaining sectors, such as the Judicial System, Healthcare, and others, do not find direct relevance in the context given, especially as the legislation refers primarily to public school settings. In summary, the strongest linkage is with the Academic and Research Institutions sector due to the bill's focus on school environments, with moderate relevance for Government Agencies and Public Services, and low relevance for all other sectors.
Keywords (occurrence): artificial intelligence (27) machine learning (3) show keywords in context
Description: Criminalizing and creating a private right of action for the facilitation, encouragement, offer, solicitation, or recommendation of certain acts or actions through a responsive generative communication to a child.
Summary: The bill establishes criminal liability and a private right of action against those who use generative AI communications to encourage children to engage in harmful or illegal activities.
Collection: Legislation
Status date: Jan. 23, 2025
Status: Introduced
Primary sponsor: Sharon Carson
(6 total sponsors)
Last action: Ought to Pass with Amendment #2025-0745s, Motion Adopted, Voice Vote; OT3rdg; 03/20/2025; Senate Journal 8 (March 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the role of AI technologies, particularly in the context of generative communication with children. It highlights the responsibility and potential liability of AI service providers (like chatbots or large language models) in cases where their systems may facilitate harmful interactions. This establishes a strong relevance to Social Impact, as it pertains to the protection of vulnerable groups (children) from AI-driven misconduct. Data Governance is also important, as it involves secure management of the data transmitted and collected via these AI systems, but it's not the primary focus. System Integrity is relevant since it hints at the need for secure and responsible AI operation to prevent harm, while Robustness is less relevant as the focus is not on performance benchmarks but rather legal accountability and ethics. Overall, the text's emphasis on protecting children aligns strongest with preventing harm caused by AI systems and the accountability of those who develop and operate them.
Sector:
Government Agencies and Public Services (see reasoning)
The bill primarily concerns the use and regulation of AI in communications with children, overlapping significantly with the Government Agencies and Public Services sector. It addresses how state services must handle AI interactions responsibly. However, it does not delve deeply into legislative actions directly affecting politics or elections; the judicial system is indirectly referenced but is not a central focus, and healthcare applications of AI are not addressed here. These aspects limit the scores for those sectors. Overall, the focus on generative AI in relation to legal actions and child safety implies a minimal but noticeable connection to public service regulation without direct involvement of the mentioned sectors beyond that.
Keywords (occurrence): artificial intelligence (6) large language model (2) show keywords in context