4828 results:


Description: Amends the Courses of Study Article of the School Code. In provisions concerning bullying and cyber-bullying, provides that bullying includes posting or distributing sexually explicit images. Provides that, beginning with the 2026-2027 school year, the term "cyber-bullying" includes the posting or distribution of a digital replica by electronic means. Defines "artificial intelligence", "digital replica", and "generative artificial intelligence". Effective July 1, 2026.
Summary: The bill amends the Illinois School Code to enhance bullying prevention efforts, specifically addressing cyber-bullying and requiring schools to develop comprehensive policies for reporting and addressing incidents.
Collection: Legislation
Status date: April 9, 2025
Status: Engrossed
Primary sponsor: Janet Yang Rohr (14 total sponsors)
Last action: Referred to Assignments (April 10, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text incorporates artificial intelligence terminology into the definition of cyber-bullying. The definitions of 'artificial intelligence', 'digital replica', and 'generative artificial intelligence', and their role in generating harmful content, make the Social Impact category relevant, since the bill addresses potential harms caused by AI in the form of cyber-bullying. Data Governance is also pertinent, as the bill touches on the responsible management of data associated with electronic communications and bullying. System Integrity receives a moderate score because the mention of AI only indirectly ties into the security and ethical implications of digital content. Robustness ranks lower: the bill focuses on definitions of bullying and on prevention policy rather than on performance or benchmarks, making that association less direct.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text is highly relevant to the Academic and Research Institutions sector since it concerns educational settings and policies around bullying, cyber-bullying, and the implications of AI in those contexts. It also relates to the Government Agencies and Public Services sector, emphasizing government oversight through the State Board of Education and the requirement for schools to file anti-bullying policies. The Private Enterprises, Labor, and Employment sector is less relevant, as the text does not address employment directly, but it touches upon issues that affect students in schools. There is no significant connection to other sectors such as Healthcare, Politics, Judicial, or International Cooperation, making the scores there low.


Keywords (occurrence): artificial intelligence (2)

Description: The Children First Act
Summary: The Children First Act prioritizes child welfare in North Carolina by expanding affordable child care access, establishing health and safety protections, and addressing workforce challenges in child care.
Collection: Legislation
Status date: March 25, 2025
Status: Introduced
Primary sponsor: Lindsey Prather (34 total sponsors)
Last action: Ref To Com On Rules, Calendar, and Operations of the House (March 26, 2025)

Category:
Societal Impact (see reasoning)

The Children First Act sets out measures aimed at enhancing child welfare, particularly in child care, health, safety, and digital protections. Although it addresses digital exploitation and algorithmic manipulation by social media platforms, these concerns are contextual to broader worries about children's interactions with technology rather than the focus of AI-specific legislation. The bill therefore touches only lightly on the social impact of AI, through concerns about children's safety in digital environments and how algorithmic strategies affect their well-being, which earns it a higher score in the Social Impact category than in the others; data governance, system integrity, and robustness are not explicitly addressed in the legislative text. The AI-related concerns center on algorithmic exploitation of children's vulnerabilities, raising only slight data governance and system integrity considerations that do not fully align with the more defined standards of those categories.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation is focused primarily on child welfare and safety, addressing access to child care and protections against digital exploitation. The references to how algorithms engage with children and the implications thereof suggest a relevance to the Government Agencies and Public Services sector, as it involves state responses and initiatives for safeguarding minors. However, there are limited connections to specific guidelines or legislative aspects concerning political campaigns, healthcare, the judiciary, private enterprise, research, or international standards. The legislation is very relevant to children and does have implications in public policy discussions but does not focus on explicit regulation by sector beyond the scope of public health initiatives. Therefore, while significant, the overall associations with the sectors are moderate, particularly in the context of governmental scenarios.


Keywords (occurrence): artificial intelligence (2) algorithm (2)

Description: Establishes the artificial intelligence training data transparency act requiring developers of generative artificial intelligence models or services to post on the developer's website information regarding the data used by the developer to train the generative artificial intelligence model or service, including a high-level summary of the datasets used in the development of such system or service.
Summary: The Artificial Intelligence Training Data Transparency Act mandates developers of generative AI models to disclose data sources and usage on their websites, enhancing transparency and accountability in AI development.
Collection: Legislation
Status date: March 27, 2025
Status: Introduced
Primary sponsor: Andrew Gounardes (sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (March 27, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly addresses the transparency requirements for data used in training generative AI models, making it highly relevant to the Data Governance category. The legislation focuses on the secure and accurate management of data within AI systems, as it outlines the necessity for developers to disclose various aspects of the datasets used in AI training, including ownership, types, and whether personal information is included. The Social Impact category also scores highly due to the act's implications for consumer protection and accountability in the use of AI, especially since the disclosure of data sources can potentially mitigate biases and unfair practices in AI outputs. The System Integrity and Robustness categories are less relevant; while the act promotes some aspect of transparency and data handling which reflects on integrity, it does not address secure coding practices or benchmarks for AI performance directly.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

This legislation pertains primarily to government oversight over AI technologies and their applications, making it relevant to the Government Agencies and Public Services sector as it mandates accountability from developers concerning AI data usage. However, it intersects with the Academic and Research Institutions sector as well, as it may affect how AI research entities manage their datasets. Other sectors like Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor and Employment, International Cooperation and Standards, Nonprofits and NGOs are less relevant because the act does not specifically address issues like political campaigning, judicial assessments, healthcare applications, or international regulations directly.


Keywords (occurrence): artificial intelligence (20) automated (1)

Description: To establish the Task Force on Artificial Intelligence in the Financial Services Sector to report to Congress on issues related to artificial intelligence in the financial services sector, and for other purposes.
Summary: The Preventing Deep Fake Scams Act establishes a Task Force on Artificial Intelligence in the Financial Services Sector to address AI-related issues and enhance consumer protections against fraud and identity theft.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Brittany Pettersen (7 total sponsors)
Last action: Referred to the House Committee on Financial Services. (Feb. 27, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text of the Preventing Deep Fake Scams Act explicitly discusses artificial intelligence, particularly in the context of its implications and applications in the financial services sector. The mention of deep fakes specifically relates to potential scams and fraud, which directly connects to social impact as it addresses consumer protection and the psychological effect of scams on individuals. Additionally, the act emphasizes the importance of establishing standards for AI use in financial services, which aligns with concerns regarding data governance, as the safety and accuracy of customer data is paramount to prevent financial crimes. The establishment of the Task Force also speaks to the necessity of maintaining system integrity by overseeing the implementation of AI solutions in a regulated manner. Finally, there is an indirect link to robustness through the recommendations for ensuring AI systems are performant and compliant within financial sectors, although this is a less direct connection compared to other categories.


Sector:
Government Agencies and Public Services (see reasoning)

The Preventing Deep Fake Scams Act pertains significantly to the financial sector, where AI is being leveraged for various services, including fraud detection and customer service innovations such as voice banking. The legislation is focused on addressing the particular challenges that arise from the use of AI in finance, such as deep fake scams. There is a clear emphasis on legislation affecting consumer protection within the financial services context, showcasing how AI technology integrates into financial operations and raises unique threats that necessitate regulatory oversight. Consequently, this text directly addresses the Government Agencies and Public Services sector, as it outlines the role of a task force made up of government officials from financial oversight entities. While there is some relevance to other sectors like Private Enterprises due to the mention of third-party vendors providing AI services, the emphasis is predominantly on government-led initiatives and consumer protection related to financial services.


Keywords (occurrence): artificial intelligence (11) machine learning (1) deepfake (1)

Description: Requiring each unit of State government to conduct certain inventories and assessments by December 1, 2024, and annually thereafter; prohibiting the Department of Information Technology from making certain information publicly available under certain circumstances; prohibiting a unit of State government from deploying or using a system that employs artificial intelligence under certain circumstances; etc.
Summary: The Artificial Intelligence Governance Act of 2024 mandates Maryland state agencies to conduct annual inventories and impact assessments of AI systems, aiming to ensure ethical and responsible AI use in government functions.
Collection: Legislation
Status date: April 4, 2024
Status: Engrossed
Primary sponsor: Jazz Lewis (24 total sponsors)
Last action: Referred Rules (April 5, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The Artificial Intelligence Governance Act of 2024 is highly relevant to the category of Social Impact as it directly addresses the implications and ethical concerns related to the use of AI systems in state government. It emphasizes responsible and trustworthy AI use, which highlights societal accountability and potential impacts on civil rights and liberties (found in the definitions of high-risk AI). Moreover, it discusses the need for assessments to guard against AI-driven discrimination, a direct concern about the social implications of AI technology. This alignment with principles of fairness and equity underscores its significant relevance to Social Impact. Data Governance is also very relevant as the Act mandates regular data inventories and impact assessments of AI systems used by state government. It ensures that data necessary for AI operation is collected accurately and responsibly, thus addressing potential pitfalls of data mismanagement and bias in AI. The reference to compliance with regulations, collection, and sharing of data further solidifies this category's relevance. System Integrity is relevant as the legislation sets forth requirements for human oversight and the monitoring of AI systems to ensure their safe and effective operation. It outlines the necessity of policies and procedures for the use of AI, discussing the integrity and the transparency of those implementations in state government. Robustness is somewhat relevant due to its focus on establishing performance evaluations and audits for AI systems, ensuring compliance with new benchmarks and standards. However, it is less pronounced compared to the other three categories, making it marginally relevant in the context of this legislation.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation is closely tied to the sector of Government Agencies and Public Services since it explicitly involves requirements for state government units to conduct assessments and inventories regarding their AI systems. The intent is to enhance the operational efficiency of public services through proper governance of AI technologies, ensuring safe deployment in state functions. It is less relevant to sectors like Politics and Elections or Healthcare because there is no direct mention of AI's role in electoral processes or healthcare applications. The emphasis is firmly within public administration and governance contexts, highlighting the relevance of AI regulation in governmental operations.


Keywords (occurrence): artificial intelligence (49) machine learning (1) automated (5)

Description: An act to amend Sections 3273.65, 3273.66, 3273.67, and 3345.1 of the Civil Code, relating to social media platforms.
Summary: Assembly Bill 1137 mandates social media platforms in California to improve reporting mechanisms for child sexual abuse material, enhance user accessibility, and establish penalties for noncompliance. The goal is to protect minors and ensure accountability in reporting such material.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Maggy Krell (2 total sponsors)
Last action: From committee chair, with author's amendments: Amend, and re-refer to Com. on P. & C.P. Read second time and amended. (April 10, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text amends Civil Code provisions on social media platforms' responsibilities concerning child sexual abuse material. 'Artificial intelligence' appears in the context of data transparency and regulatory compliance, which aligns with Social Impact concerns given the potential consequences of AI for vulnerable populations, such as minors depicted in harmful content. System Integrity is implicated by the requirement for human intervention in report assessments, which points to a need for reliability in AI systems that handle sensitive content. The text does not, however, call for performance benchmarks or robustness guidelines for AI systems, which minimizes its relevance to Robustness. Data Governance is relevant because the bill establishes liability for data handling and reporting mechanisms, emphasizing accountability in data management practices related to AI use. Overall, the text earns moderately high scores in Social Impact and Data Governance, with System Integrity also relevant due to the necessity of human oversight.


Sector:
Government Agencies and Public Services (see reasoning)

The text primarily relates to the Government Agencies and Public Services sector, as it concerns legislative measures that state government imposes on social media platforms. It addresses the platforms' legal obligations to ensure the safety of minors and to protect against the facilitation of child sexual abuse, reflecting the government's role in protecting citizens through regulation. There is no direct relevance to Politics and Elections, the Judicial System, or Healthcare, as those sectors are not explicitly mentioned or implicated in the text's provisions. Private Enterprises encompasses social media companies, but the context of the bill aligns more with public service responsibilities than with corporate governance, making Government Agencies and Public Services the most relevant sector for this text.


Keywords (occurrence): artificial intelligence (1) algorithm (1)

Description: An Act providing for parental consent for virtual mental health services provided by a school entity.
Summary: The bill mandates parental consent for virtual mental health services provided by school entities in Pennsylvania, ensuring that students under 18 receive appropriate approval before accessing these services.
Collection: Legislation
Status date: April 11, 2025
Status: Introduced
Primary sponsor: Wayne Langerholc (13 total sponsors)
Last action: Referred to EDUCATION (April 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text references several aspects of artificial intelligence, particularly in the context of virtual mental health services. Specifically, the act defines artificial intelligence and includes it as a component of behavioral health support. This directly relates to the potential impacts of AI on society as mental health services may be influenced or augmented by AI-driven tools, touching upon concerns such as accountability and the psychological effects on individuals. Given the mention of AI's role in behavioral health, there are clear implications for both social impact (pertaining to mental health and the ethical use of AI) and system integrity (related to the oversight and safety of AI-assisted services). The legislation demonstrates the necessity for oversight and establishes a parental consent framework, indicating relevant governance of AI applications. However, there are no indications of new benchmarking or auditing processes specifically for AI performance, which would affect the robustness category.


Sector:
Healthcare
Academic and Research Institutions (see reasoning)

The text primarily relates to the education sector as it discusses virtual mental health services provided in a school context, necessitating parental consent. However, it also touches on aspects relevant to healthcare due to the mental health support provided, which is interconnected with both education and healthcare sectors. The inclusion of AI in these services could represent intersectional relevance. Yet, the main focus here is on schools and mental health services for students, which does not deeply engage with other sectors like politics, government agencies, or the judiciary.


Keywords (occurrence): artificial intelligence (1) automated (1)

Description: Requiring each unit of State government to conduct certain inventories and a certain assessment on or before certain dates; prohibiting the Department of Information Technology from making certain information publicly available under certain circumstances; requiring the Department, in consultation with a certain subcabinet, to adopt certain policies and procedures concerning the development, procurement, deployment, use, and assessment of systems that employ artificial intelligence by units o...
Summary: The Artificial Intelligence Governance Act of 2024 mandates Maryland state agencies to inventory AI systems, conduct impact assessments, and adopt governing policies for responsible AI use, ensuring accountability and ethical practices in implementation.
Collection: Legislation
Status date: May 9, 2024
Status: Passed
Primary sponsor: Katie Hester (19 total sponsors)
Last action: Approved by the Governor - Chapter 496 (May 9, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The AI Governance Act of 2024 addresses multiple facets of artificial intelligence use within state government. For Social Impact, the act emphasizes the responsible, ethical, and beneficial use of AI technologies, including considerations of algorithmic discrimination and civil rights, making it very relevant as it highlights the societal implications of AI. In terms of Data Governance, the act mandates annual data inventories specifically for data used in AI systems to ensure secure management of information related to these technologies, thus scoring highly. System Integrity is also highly relevant, as the act requires transparency, oversight, and regulation of AI system implementation and monitoring, directly relating to the security of these systems. Lastly, while Robustness is relevant due to the emphasis on compliance and procedural frameworks for AI systems, it has a weaker linkage because the act does not focus explicitly on performance benchmarks, and so it cannot be rated as highly as the other categories.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

This legislation pertains primarily to Government Agencies and Public Services, as it outlines the responsibilities of state governmental units regarding the use and assessment of AI technologies. There is less direct relevance to sectors like Politics and Elections, Judicial System, Healthcare, etc., though the implications of AI use in overall governance and public service delivery are significant. Therefore, while it touches on various sectors indirectly, the core focus remains on enhancing governmental processes, making it very relevant to the Government Agencies and Public Services sector. Other sectors scored lower due to a lack of direct focus in the text.


Keywords (occurrence): artificial intelligence (46) machine learning (1) automated (6)

Description: Adopt the Age-Appropriate Online Design Code Act
Summary: The Age-Appropriate Online Design Code Act establishes protections for minors using online services by regulating data collection, advertising practices, and user interface designs, ensuring user safety and promoting responsible digital interaction.
Collection: Legislation
Status date: April 9, 2025
Status: Enrolled
Primary sponsor: Carolyn Bosn (sole sponsor)
Last action: Enrollment and Review ST16 recorded (April 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The Age-Appropriate Online Design Code Act explicitly addresses the impact of AI on minors, particularly in the context of online services. It establishes protections against compulsive usage, psychological harm, and discrimination—all of which can be exacerbated by AI-driven recommendation systems. As such, the legislation pertains closely to issues of fairness and transparency in AI outputs, thereby aligning strongly with the Social Impact category. Similarly, given the focus on personal data management within AI systems, including data minimization and opt-in frameworks for minors, this text is also relevant to Data Governance. System Integrity receives a moderate rating due to mentions of ensuring user control over AI applications and protections afforded to minors, while Robustness has minimal direct relevance to the text.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)

This piece of legislation is highly relevant to sectors such as Government Agencies and Public Services, as it establishes rules for online services that are likely to be accessed by minors, impacting public safety and well-being. The focus on user protections, especially for vulnerable populations like children, signals considerable relevance to healthcare in terms of psychological well-being. However, it does not directly address sectors like Politics and Elections or the Judicial System, which is why those categories have lower scores. Its implications for online services and consumer protections place it squarely within Private Enterprises, while the need for independent auditing may lightly connect to Academic and Research Institutions. Overall, it reflects a strong relevance to sectoral themes concerning societal impacts of AI.


Keywords (occurrence): automated (3) recommendation system (1)

Description: Ai Legislative Task Force
Summary: The bill establishes a Joint Legislative Task Force in Alaska to assess artificial intelligence's impact, evaluate its applications, address ethical concerns, and recommend policies for its responsible use.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: George Rauscher (2 total sponsors)
Last action: REFERRED TO FINANCE (April 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily discusses the establishment of a task force focused on artificial intelligence (AI) and its implications for various sectors and for legislative oversight. The relevance of the categories is evaluated as follows: for Social Impact, the text explicitly mentions ethical considerations and the potential societal impacts of AI, making it extremely relevant. Data Governance is also very relevant, as the text addresses concerns about data privacy and security related to AI. System Integrity is moderately relevant given the focus on oversight and regulation, but the text lacks direct mention of security measures. Robustness is less relevant, since the document does not significantly address performance benchmarks or auditing processes for AI systems, though some points about the responsible use of AI are implied.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation outlines the establishment of a task force concerning AI in various sectors, making it relevant across multiple sectors. For Politics and Elections, it is slightly relevant as it pertains to legislative activities regarding AI. Government Agencies and Public Services is highly relevant due to the focus on AI's applications in state government operations and public services. The healthcare sector is slightly relevant as healthcare is mentioned as a sector where AI could be integrated, but it doesn't delve deeply into healthcare-specific challenges or regulations. Other sectors like Judicial System, Private Enterprises, Labor, Academic and Research Institutions, International Cooperation and Standards, and Nonprofits and NGOs are not explicitly addressed in this text, leading to low relevance scores. Overall, the highest relevancy is noted in Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (10) machine learning (1)

Description: As enacted, defines and adds "voice" as a protected personal right; adds commercial availability of a sound recording or audiovisual work in which the individual's name, voice, likeness, or image is readily identifiable to considerations for determining whether non-use has occurred; makes other related changes. - Amends TCA Title 39, Chapter 14, Part 1 and Title 47.
Summary: The bill amends Tennessee law to enhance protection of individuals’ rights regarding their name, likeness, and voice, ensuring unauthorized commercial use requires consent, effective July 1, 2024.
Collection: Legislation
Status date: March 26, 2024
Status: Passed
Primary sponsor: William Lamberth (44 total sponsors)
Last action: Effective date(s) 07/01/2024 (March 26, 2024)

Category:
System Integrity (see reasoning)

This legislation primarily focuses on the protection of personal rights, particularly individuals' names, voices, and likenesses. Because it mentions algorithms, software, and technology that can produce identifiable outputs (photographs, voices, likenesses), it may touch on data management and automated decisions. However, the emphasis remains on personal rights rather than on broader social impacts directly linked to AI technologies or their governance. There is therefore some relevance to System Integrity due to the mention of algorithms, but the direct implications are limited. The text makes little reference to environmental impact, fairness in AI outputs, or misinformation and public trust, which yields a lower Social Impact score while still acknowledging some relevance to potential bias or misuse. Other categories have no explicit connection; notably, Data Governance would require clear mandates on data handling, which this text does not provide.


Sector: None (see reasoning)

The bill primarily addresses personal rights protection and does not inherently apply to the governance of AI in sectors like politics, public services, or healthcare. The references to technology and algorithms concern the use of personal identifiers rather than systemic applications of AI within these sectors. Thus, although there is a tangential connection, particularly to Government Agencies and Public Services regarding regulated use of technology, the bill does not fit squarely into any assigned sector category.


Keywords (occurrence): algorithm (2)

Description: A BILL to be entitled an Act to amend Part 2 of Article 6 of Chapter 2 of Title 20 of the Official Code of Georgia Annotated, relating to competencies and core curriculum under the "Quality Basic Education Act," so as to provide that, beginning in the 2031-2032 school year, a computer science course shall be a high school graduation requirement; to provide for certain computer science courses to be substituted for units of credit graduation requirements in certain other subject areas; to prov...
Summary: The "Quality Basic Education Act" mandates that starting in the 2031-2032 school year, a computer science course becomes a high school graduation requirement in Georgia, addressing critical education needs.
Collection: Legislation
Status date: Feb. 24, 2025
Status: Introduced
Primary sponsor: Clint Dixon (6 total sponsors)
Last action: Senate Hopper (Feb. 24, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text outlines a legislative bill aimed at incorporating computer science education, including artificial intelligence concepts, into high school curricula. This directly relates to the Social Impact category as it seeks to enhance educational competencies and addresses the skills needed for the future workforce, potentially affecting the societal structure and preparing students for modern technological environments. The Data Governance category has moderate relevance as the bill implicitly suggests careful management of educational content, but does not explicitly address data privacy or accuracy issues. System Integrity is relevant as it mentions standards for education, though in a limited sense, while Robustness has lesser relevance since the focus is on curriculum rather than benchmarks for AI system performance.


Sector:
Academic and Research Institutions (see reasoning)

The bill pertains most closely to the Academic and Research Institutions sector, as it deals with the education system and introduces computer science as a graduation requirement, thus directly affecting curriculum design. It is less relevant to sectors such as Government Agencies and Public Services since it is focused more on educational institutions rather than public service delivery or regulatory frameworks. The Healthcare, Private Enterprises, Labor, and Employment sectors are not directly affected as the bill does not deal with healthcare provisions or labor markets explicitly. Similarly, sectors like Nonprofits and NGOs or the Judicial System do not find a direct application in this legislative text, leading to scores of 2 or lower for those categories.


Keywords (occurrence): artificial intelligence (1) algorithm (1)

Description: To amend sections 3.15, 9.03, 9.07, 9.239, 9.24, 9.27, 9.28, 9.312, 9.331, 9.334, 9.821, 101.352, 101.82, 101.83, 101.84, 102.02, 103.41, 103.414, 103.71, 103.76, 103.77, 103.78, 107.032, 107.033, 107.12, 109.02, 109.73, 109.77, 113.05, 113.13, 113.40, 113.51, 113.53, 113.78, 120.06, 120.08, 121.02, 121.03, 121.085, 121.22, 121.35, 121.36, 121.37, 121.93, 121.931, 122.09, 122.14, 122.175, 122.1710, 122.4041, 122.41, 122.42, 122.47, 122.49, 122.53, 122.571, 122.59, 122.631, 122.632, 122.633, 1...
Summary: The bill establishes operating appropriations for Ohio's fiscal years 2026-27, modifies various sections of the Revised Code, and sets conditions for state program operations.
Collection: Legislation
Status date: April 9, 2025
Status: Engrossed
Primary sponsor: Brian Stewart (19 total sponsors)
Last action: Passed (April 9, 2025)

Category: None (see reasoning)

Although the keyword scan picks up scattered references to artificial intelligence and automated systems, the text does not substantively engage with AI; it largely comprises legislative amendments and appropriations relating to state operations. As such, it lacks relevance to discussions of AI's impact on society, data governance practices, system integrity, or the robustness of AI technologies.


Sector: None (see reasoning)

Similarly, the text does not indicate any sector-specific applications or legislation addressing the use of AI in areas such as politics, public services, healthcare, or any of the other defined sectors. As it primarily focuses on appropriations and administrative amendments, it does not provide insights or directives that pertain to AI use in public, legal, or private sector contexts.


Keywords (occurrence): artificial intelligence (9) automated (46)

Description: Creating the Agency for State Systems and Enterprise Technology (ASSET); requiring that the Division of Elections comprehensive risk assessment comply with the risk assessment methodology developed by ASSET; requiring agencies and the judicial branch to include a cumulative inventory and a certain status report of specified projects with their legislative budget requests; revising the powers, duties, and functions of the Department of Management Services, through the Florida Digital Service; ...
Summary: The bill establishes the Agency for State Systems and Enterprise Technology (ASSET) in Florida, outlining its governance, roles, and responsibilities in managing state information technology and cybersecurity initiatives.
Collection: Legislation
Status date: March 24, 2025
Status: Introduced
Primary sponsor: Appropriations (sole sponsor)
Last action: Placed on Calendar, on 2nd reading (March 24, 2025)

Category:
Data Governance
System Integrity (see reasoning)

The text primarily discusses the establishment of the Agency for State Systems and Enterprise Technology (ASSET) and outlines its powers, duties, and responsibilities, particularly regarding state information technology. The focus is on governance, risk assessment, and technology management. AI is mentioned only in passing; the legislation establishes overarching technology frameworks that may inherently encompass AI systems, especially given ASSET's role in overseeing information technology initiatives. Because AI-specific applications are not detailed in the text, relevance to the categories remains limited.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The text is centered on the establishment and governance of the Agency for State Systems and Enterprise Technology. There are implications for various sectors, notably Government Agencies and Public Services, as it outlines responsibilities relevant to state IT governance and operations. The focus on risk assessments and compliance with standards also connects with cybersecurity aspects pertinent to the judicial system, yet the specificity of AI applications in these contexts isn't clearly elaborated. Thus, strong connections can be made mostly with the Government Agencies and Public Services sector, with moderate ties to others.


Keywords (occurrence): artificial intelligence (6) automated (1)

Description: Relating to electronic health record requirements; authorizing a civil penalty.
Summary: The bill establishes requirements for the storage and management of electronic health records in Texas, ensuring patient data security, accuracy, and access, while imposing penalties for violations.
Collection: Legislation
Status date: April 7, 2025
Status: Engrossed
Primary sponsor: Lois Kolkhorst (sole sponsor)
Last action: Received from the Senate (April 8, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily deals with the requirements for electronic health records (EHR), with a notable focus on how artificial intelligence (AI) is used in these records. Specifically, section 183.005 outlines responsibilities for health care practitioners using AI for diagnostic recommendations, mandating that they verify AI-provided information before it is recorded in EHRs, which addresses accountability and accuracy. This relates directly to the Social Impact category regarding consumer protection and potential health disparities stemming from AI use. It also sits within Data Governance, as it incorporates elements of data management in AI applications in healthcare. Moreover, there are references to maintaining the integrity of information and ensuring access controls under certain statutory frameworks, which connects to System Integrity. However, the text does not delve into robustness standards or certification practices related to AI benchmarks, aligning less with the Robustness category.


Sector:
Healthcare (see reasoning)

The text specifically addresses the use of AI in healthcare settings, emphasizing electronic health records and AI responsibilities for healthcare practitioners. It is targeted towards rules and guidelines affecting medical facilities and practitioners in terms of their handling of health records that include AI-generated recommendations, which establishes a link to the Healthcare sector. The references to governmental entities controlling access to these records also add relevance, but it does not explicitly target other sectors such as Politics and Elections or the Judicial System, which might directly relate to AI regulation outside healthcare. Other sectors like Private Enterprises or Academia are touched upon only marginally if at all. Therefore, the greatest applicability remains within Healthcare.


Keywords (occurrence): artificial intelligence (3) algorithm (1)

Description: Creating an artificial intelligence grant program.
Summary: The bill establishes the Spark Act grant program in Washington to fund innovative artificial intelligence projects, promoting economic development, job creation, and addressing statewide challenges such as public safety and healthcare.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Michael Keaton (5 total sponsors)
Last action: Referred to Rules 2 Review. (Feb. 28, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the creation of an artificial intelligence grant program focused on economic development through innovative AI applications. Its discussion of assessing AI-related risks, ethical uses of AI, and the prioritization of small businesses signals concern for the societal impacts that may emerge as AI becomes more integrated into industries, so the Social Impact category is very relevant. The text also discusses grant support related to using data to drive innovation, which aligns with some aspects of Data Governance, although the emphasis is on economic development rather than data management. System Integrity is relevant because of the discussion of ethical AI use, risk evaluation, and transparency requirements for the technologies under development. Robustness is less directly emphasized, though the overarching goal of building strong, effective AI systems touches on the need for benchmarks and standards, making it somewhat relevant but less so than the other categories.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The text primarily pertains to economic development through the establishment of the artificial intelligence grant program, which directly supports innovation and job creation. Its provisions for soliciting input from various stakeholders signal a comprehensive approach to fostering technological advancement and engagement across government and private enterprise, creating a strong connection to Government Agencies and Public Services and to Private Enterprises, Labor, and Employment. Healthcare applications are mentioned, but the text does not specifically address healthcare legislation or policy, making its relevance to that sector less direct. The involvement of many stakeholders and considerations, including civil rights and transparency, also gives it some general relevance to Nonprofits and NGOs beyond the sectors of primary focus.


Keywords (occurrence): artificial intelligence (26) machine learning (5) foundation model (1)

Description: Use of tenant screening software that uses nonpublic competitor data to set rent prohibition
Summary: The bill prohibits landlords from using tenant screening software that relies on nonpublic competitor data for setting rents and bans biased screening algorithms against protected classes. It aims to promote fair housing practices.
Collection: Legislation
Status date: March 3, 2025
Status: Introduced
Primary sponsor: Erin Maye Quade (5 total sponsors)
Last action: Referred to Judiciary and Public Safety (March 3, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The bill addresses tenant screening software that uses algorithms and artificial intelligence, with particular attention to bias against protected classes. It specifically prohibits such software where it has a disparate impact on vulnerable populations, indicating a clear connection to the social implications of AI. The mention of algorithmic devices and the definitions of artificial intelligence tie directly into concerns about bias, fairness, and discrimination stemming from AI applications, making Social Impact the most pertinent category. Aspects of data governance and system integrity arise from the nature of the data used and the need for oversight, but these are less central to the bill's main provisions than its focus on societal consequences.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The bill primarily pertains to the housing sector, focusing on landlord practices and tenant screening. Therefore, it is most relevant to Private Enterprises, Labor, and Employment, given its implications for landlords and their use of AI and algorithms in rental decisions. There is a moderate connection to Government Agencies and Public Services, as the enforcement of tenant rights and anti-discrimination measures may involve public agencies. However, it does not directly relate to political processes or healthcare, reducing relevance in those sectors significantly.


Keywords (occurrence): algorithm (2)

Description: Schools; media literacy and cybersecurity to be taught in sixth, seventh, or eighth grades; State Department of Education to adopt curriculum standards; effective date.
Summary: The bill mandates that public schools in Oklahoma teach media literacy and cybersecurity to students in sixth to eighth grades, establishing curriculum standards and instructional guidelines by November 1, 2025.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Trish Ranson (sole sponsor)
Last action: Authored by Representative Ranson (Feb. 3, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text addresses the implementation of media literacy and cybersecurity education in Oklahoma schools. Its AI relevance derives primarily from the specific requirement that students learn to identify deepfake images, videos, audio, and artificial intelligence as part of the curriculum, suggesting a focus on understanding and combating misinformation potentially generated or propagated by AI technologies. Social Impact relates to the societal implications of misinformation and the ethical use of AI tools in media. Data Governance touches on protecting personal information and navigating digital environments. System Integrity aligns with the emphasis on critically evaluating media content, which bears on the need for reliable information sources. Robustness is less relevant, as the text raises no direct implications regarding benchmarks or the performance of AI systems.


Sector:
Academic and Research Institutions (see reasoning)

The sectors potentially affected by this legislation relate largely to academia, given its focus on educational standards and curriculum development for students. Relevance to Government Agencies and Public Services is minimal, despite the State Department of Education's involvement. Other sectors, such as Healthcare or the Judicial System, have no connection to the content of this text. The scoring therefore reflects a significant focus on Academic and Research Institutions, with only marginal consideration of the others.


Keywords (occurrence): deepfake (1)

Description: STATE AFFAIRS AND GOVERNMENT -- ARTIFICIAL INTELLIGENCE ACCOUNTABILITY ACT - Requires DOA provide inventory of all state agencies using artificial intelligence (AI); establishes a 13 member permanent commission to monitor the use of AI in state government and makes recommendations for state government policy and other decisions.
Summary: The bill establishes the Artificial Intelligence Accountability Act, mandating an inventory of AI usage by state agencies and creating a commission to monitor AI's impact and recommend policies.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: John Lombardi (6 total sponsors)
Last action: Introduced, referred to House Innovation, Internet, & Technology (Jan. 22, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text encompasses legislation aimed at monitoring and regulating the use of artificial intelligence (AI) within state government. It highlights accountability by requiring an inventory of AI systems used across state agencies and establishes a commission to assess their impacts, ensuring they do not lead to unlawful discrimination or other societal issues. This aligns strongly with themes of social impact due to the focus on fairness, discrimination, and societal effects of AI systems. Data governance is also relevant given the text's emphasis on data usage, security, and assessment procedures around AI systems. System integrity is significant due to mandates for transparency and assessment of AI systems and their processes. Robustness is also a consideration, albeit to a lesser extent, as it implies the establishment of benchmarks and policies for continuous assessment of AI performance, which is indirectly referenced but is critical for ongoing compliance and improvement.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

This legislation is particularly relevant to the 'Government Agencies and Public Services' sector, as it establishes a framework for how artificial intelligence is used within state government, overseeing its implementation, effects, and compliance with ethical guidelines. It is also somewhat relevant to the 'Academic and Research Institutions' sector, considering the commission includes experts from academic backgrounds to inform best practices and guidelines. Other sectors like Politics and Elections, Healthcare, and Private Enterprises do not appear as directly relevant based on the text provided, making this legislation primarily focused on governmental use of AI.


Keywords (occurrence): artificial intelligence (28) machine learning (2) neural network (1)

Description: Student data; creating the Oklahoma Education and Workforce Statewide Longitudinal Data System. Effective date. Emergency.
Summary: This bill establishes the Oklahoma Education and Workforce Efficiency Data System to securely manage and analyze education and workforce data, enhancing decision-making and accountability while protecting taxpayer interests and personal privacy.
Collection: Legislation
Status date: March 26, 2025
Status: Engrossed
Primary sponsor: Ally Seifried (2 total sponsors)
Last action: First Reading (March 26, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

In this text, several key aspects related to AI are present, particularly in the mention of 'advanced analytics capabilities including, but not limited to, artificial intelligence, machine learning, forecasting, and data mining.' This indicates a direct involvement with AI systems in the data management process. The legislation outlines the setup of a data system aimed at leveraging AI technologies for improving education and workforce outcomes, reflecting a clear intent to consider social implications and governance around AI usage. Additionally, the focus on privacy and security enhances the relevance to both system integrity and data governance, particularly as it governs data access and management within the new system.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The legislation is highly relevant to the 'Government Agencies and Public Services' sector as it establishes a statewide data system that will be used by various governmental agencies involved in education and workforce development. Moreover, it pertains to the 'Academic and Research Institutions' sector, as the data-sharing agreements pave the way for researchers and educational stakeholders to utilize the system for analysis and improvement of educational outcomes. The structure for oversight and governance also indicates significant engagement with these sectors, focusing on optimizing public service through data integration and analysis.


Keywords (occurrence): artificial intelligence (1) machine learning (1)