5015 results:
Description: To direct the Secretary of Agriculture and the Director of the National Science Foundation to carry out cross-cutting and collaborative research and development activities focused on the joint advancement of Department of Agriculture and National Science Foundation mission requirements and priorities, and for other purposes.
Summary: The NSF and USDA Interagency Research Act mandates collaboration between the Department of Agriculture and the National Science Foundation to enhance research and development on agriculture and related technologies.
Collection: Legislation
Status date: June 4, 2025
Status: Introduced
Primary sponsor: James Baird
(2 total sponsors)
Last action: Referred to the Committee on Science, Space, and Technology, and in addition to the Committee on Agriculture, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (June 4, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text discusses collaboration between the Secretary of Agriculture and the Director of the National Science Foundation to advance agricultural and scientific objectives, with explicit mention of emerging technology areas such as artificial intelligence, machine learning, and automation. Because these areas affect sectors from agriculture to education, the bill is relevant to all of the defined categories. Its attention to societal effects and to fair practices in these technologies ties it strongly to 'Social Impact.' The mention of secure data sharing and the need for methodological quality speaks to 'Data Governance.' The emphasis on interagency collaboration and the technical details of research channels supports 'System Integrity.' Lastly, the focus on new benchmarks and research methods reflects 'Robustness.' Overall, the bill touches all four categories through its focus on advancing technology with careful governance and research integrity.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)
The act explicitly aims to enhance research activities not only within the USDA but also through collaboration with the NSF and educational entities, highlighting the role of AI and automation in public services and research. The implications extend to agricultural practices, workforce development, and the broader economy, making the bill relevant to Government Agencies and Public Services and to Academic and Research Institutions. The collaborative efforts could also influence politics and, indirectly, labor markets as new technologies are introduced to improve efficiency and effectiveness across sectors. Despite the clear focus on agriculture, the integration of emerging technologies like AI and the collaboration with educational institutions give the bill relevance across several sectors and expand the scope of government research capabilities. The text does not directly address the judicial system, healthcare, or nonprofits, but could have implications in those areas through its broader impacts.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: An act to add Part 5.6 (commencing with Section 1520) to Division 2 of the Labor Code, relating to employment.
Summary: Senate Bill No. 7 mandates California employers to disclose the use of automated decision systems (ADS) in employment decisions, ensuring workers' rights to access data, appeal decisions, and prevents discrimination.
Collection: Legislation
Status date: June 2, 2025
Status: Engrossed
Primary sponsor: Jerry McNerney
(3 total sponsors)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on L. & E. (June 19, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This bill focuses on the governance of automated decision systems (ADS) specifically in the context of employment, a clear and direct application of artificial intelligence (AI). Given its emphasis on notification, transparency, accountability, and workers' rights in relation to AI systems used in employment decisions, the relevance to Social Impact is very high. Moreover, employers' obligations around the management and oversight of ADS include strong elements of data governance relating to how data is handled, collected, and maintained. System Integrity is also significantly relevant because the bill requires human oversight of decision-making when ADS are used. The development of performance benchmarks for AI in employment decisions is touched on but receives less emphasis than the other aspects. For all these reasons, the scores reflect strong relevance for Social Impact and Data Governance, notable relevance for System Integrity, and lower relevance for Robustness, which is not specifically addressed.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation primarily targets the employment sector, addressing how AI and automated decision systems affect workers' rights and employment decisions. It calls for notifications about these systems, access to data used in decisions, and a framework for appeals, all tailored to workers and their employment situations. While elements of government oversight and compliance mechanisms are present, they still fall under employment regulation rather than direct governance of public services or other sectors. The scores therefore reflect strong relevance for Private Enterprises, Labor, and Employment, and moderate relevance for Government Agencies and Public Services due to oversight roles. Other sectors, such as Politics, the Judicial System, Healthcare, Academic and Research Institutions, International Standards, and Nonprofits, are less relevant to this specific text.
Keywords (occurrence): artificial intelligence (2) machine learning (1) automated (8)
Description: An Act regulating autonomous vehicles; and providing for an effective date.
Summary: House Bill No. 217 regulates autonomous vehicles in Alaska, establishing requirements for registration, liability in accidents, and the necessity of a human operator for commercial transport until full automation is safe and verified.
Collection: Legislation
Status date: May 2, 2025
Status: Introduced
Primary sponsor: Transportation
(sole sponsor)
Last action: REFERRED TO TRANSPORTATION (May 2, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses various aspects of autonomous vehicles, which involve AI technologies such as automated driving systems. This relates to Social Impact, as there are implications for public safety, liability, and a regulatory framework that balances innovation with accountability. Data Governance is relevant given the need for secure data management in autonomous vehicles, especially regarding their operational data. System Integrity comes into play through mandates ensuring safe operation and oversight, while Robustness is somewhat less central but still relevant in terms of performance standards for AI driving systems. Overall, the legislation is particularly strong in Social Impact and System Integrity, where safety and accountability measures are clearly articulated.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
This legislation is primarily focused on Government Agencies and Public Services, given that it regulates autonomous vehicles, which are used in public transportation contexts. Additionally, it touches upon aspects of the Private Enterprises, Labor, and Employment sector as it affects businesses involved in autonomous vehicle technology. The text does not explicitly relate to Politics and Elections, Healthcare, Judicial systems, Academic Institutions, International standards, Nonprofits, or emerging sectors, leading to lower scores in those areas.
Keywords (occurrence): automated (6) autonomous vehicle (4)
Description: Relates to the use of automated lending decision-making tools by banks for the purposes of making lending decisions; allows loan applicants to consent to or opt out of such use.
Summary: The bill regulates the use of automated lending decision-making tools by banks, requiring consent from applicants and annual impact assessments to ensure fairness, transparency, and the prevention of discrimination in lending practices.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Linda Rosenthal
(2 total sponsors)
Last action: reported referred to rules (June 5, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislative text is highly relevant to the Social Impact category because it explicitly addresses the use of automated decision tools (including machine learning and AI) in banking, particularly in lending decisions, which can significantly affect individuals' lives and financial security. The requirement to conduct disparate impact analyses, evaluating effects on protected classes to ensure fairness and prevent discrimination, connects directly with societal concerns about equity and accountability in AI systems. The requirement that banks notify applicants about the use of these tools also ties into consumer protection, enhancing transparency and individuals' ability to give informed consent, which further supports relevance to Social Impact. Data Governance is also relevant, as the text mandates informing applicants about data collection practices and allows for the correction of inaccuracies, aligning with the principles of protecting individuals' data rights and ensuring accurate data management. System Integrity is moderately relevant since there are implications for the robustness of the decision-making processes carried out by automated tools, but the primary focus is not on security measures or oversight of the tools themselves. Robustness is less relevant because the bill does not directly address performance benchmarks or compliance standards, focusing instead on ensuring fair outcomes from the systems used.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily applies to the sector of Private Enterprises, Labor, and Employment since it addresses how automated decision-making tools affect lending practices in banks, which are private enterprises. The legislation aims to regulate how these tools are used within the business context of banking, specifically affecting consumers' ability to secure loans and how banks operate. There is secondary relevance to Government Agencies and Public Services due to the attorney general's role in overseeing compliance, but the main thrust remains private enterprise regulation. Other sectors, such as Politics and Elections, the Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified, are not directly relevant, as they concern applications broader than or different from those this text addresses.
Keywords (occurrence): artificial intelligence (2) automated (29)
Summary: The bill summarizes various congressional committee meetings and hearings focused on topics like American Indian affairs, civil works programs, national security, and veterans' issues, aiming to gather public testimony and inform policy decisions.
Collection: Congressional Record
Status date: Feb. 25, 2025
Status: Issued
Source: Congress
Societal Impact
System Integrity (see reasoning)
In this text, the relevant portion pertaining to AI is found in the mention of the Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet. This indicates a recognition of the role that AI plays within discussions of judicial matters, though the specifics of the discussion are not provided here. However, the mere mention of 'Artificial Intelligence' implies there may be regulatory considerations that could fit under several of the categories presented. Given that the text discusses involvement in oversight and the implications of AI technologies in legal contexts, it warrants consideration across the categories identified.
Sector:
Judicial system (see reasoning)
The text includes a hearing specifically focused on the intersection of AI and the judiciary, highlighting the regulatory context of AI within legal processes and intellectual property. However, there are no specific applications or guidelines directly mentioned regarding how AI impacts elections, healthcare, or other sectors, thus reducing its relevance to those areas and limiting the score to moderate levels for the relevant sectors identified. Therefore, while there is a clear implication of AI in the legislative environment of the judicial sector, it lacks specific details for more in-depth categorization.
Keywords (occurrence): artificial intelligence (1)
Description: An act to add Section 1339.76 to the Health and Safety Code, relating to health care services.
Summary: Senate Bill No. 503 mandates healthcare facilities to disclose AI-generated communications and establishes an advisory board to develop standardized testing for AI tools to mitigate bias and ensure fairness in patient care.
Collection: Legislation
Status date: May 29, 2025
Status: Engrossed
Primary sponsor: Akilah Weber Pierson
(12 total sponsors)
Last action: July 1 hearing postponed by committee. (June 24, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text primarily addresses the use of artificial intelligence within healthcare settings, particularly the responsibilities of developers and deployers of AI tools for bias mitigation and regulatory compliance. Because the legislation deals directly with how AI affects healthcare delivery and patient outcomes, it is highly relevant to the Social Impact category, given its focus on discrimination, ethical considerations, and patient safety. The Data Governance category is also very relevant, as the bill addresses standards for testing AI for bias and the accurate management of patient data in the context of AI usage. System Integrity and Robustness are moderately relevant; while there are mentions of testing and oversight, the emphasis is on AI bias and ethical considerations rather than on security and performance benchmarks. Thus, although security and performance are touched upon, they do not dominate the concerns outlined in the text.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text explicitly addresses the implications of AI in the healthcare sector, detailing regulations and practices for using AI in clinical settings, particularly focusing on patient care decision support tools. The legislation mandates specific actions related to AI in health care services, from testing for biased impacts to addressing protected characteristics. Consequently, it falls squarely within the Healthcare sector. While there may be tangential relevance to Government Agencies and Public Services due to the involvement of government entities in overseeing the implementation, it does not directly address broader public service regulations, earning a lower score in that regard. Other sectors like Politics and Elections, Judicial System, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not relevant as the legislation does not pertain to these areas.
Keywords (occurrence): artificial intelligence (5) automated (1)
Description: Enacts the "New York artificial intelligence consumer protection act", in relation to preventing the use of artificial intelligence algorithms to discriminate against protected classes.
Summary: The "New York Artificial Intelligence Consumer Protection Act" aims to prevent algorithmic discrimination against protected classes by regulating the deployment of high-risk AI systems and requiring transparency and risk management practices from developers and deployers.
Collection: Legislation
Status date: Jan. 14, 2025
Status: Introduced
Primary sponsor: Kristen Gonzalez
(sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (Jan. 14, 2025)
Description: Synthetic media; penalty. Expands the applicability of provisions related to defamation, slander, and libel to include synthetic media, defined in the bill. The bill makes it a Class 1 misdemeanor for any person to use any synthetic media for the purpose of committing any criminal offense involving fraud, constituting a separate and distinct offense with punishment separate and apart from any punishment received for the commission of the primary criminal offense. The bill also authorizes the ...
Summary: The bill introduces penalties for using synthetic media to commit fraud or other criminal offenses in Virginia, allowing for civil actions and establishing a work group to study enforcement related to such technology.
Collection: Legislation
Status date: Feb. 7, 2024
Status: Engrossed
Primary sponsor: Michelle Maldonado
(5 total sponsors)
Last action: Continued to 2025 in Courts of Justice (11-Y 2-N) (Feb. 19, 2024)
Societal Impact
Data Governance (see reasoning)
The text explicitly mentions synthetic media, generative artificial intelligence, and penalties for their misuse in committing fraud. This is highly relevant to Social Impact, as it addresses the potential ramifications of synthetic media for personal rights and fraud, reflecting societal issues like misinformation and defamation. It also touches on accountability measures for the production and use of AI technologies with direct societal effects. Data Governance is somewhat relevant since the definition of synthetic media intersects with data accuracy and potential restrictions on data usage, although the text does not delve deeply into those governance aspects. System Integrity receives a lower relevance score, as the text deals less with the security or operational integrity of AI systems than with legal definitions and implications. Robustness is only somewhat pertinent, as the text does not focus on performance benchmarking or auditing of AI systems. Thus, Social Impact takes precedence, given the societal consequences outlined in the text.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily concerns the use of synthetic media in the context of legal actions, specifically pertaining to defamation and fraud. This ties closely to the Judicial System, which deals with legal implications and case management regarding crimes committed using AI technologies. Furthermore, the text touches on findings and recommendations which could guide future legislative actions, suggesting relevance to Government Agencies and Public Services as they may be involved in the enforcement of such regulations and legal standards. The focus on synthetic media's misuse in fraudulent circumstances may also implicate Private Enterprises, Labor, and Employment indirectly, particularly regarding employment practices influenced by such technologies. Academic and Research Institutions may have a slight relevance due to the potential study of AI impacts raised in the text, but it is not direct. Overall, the strongest categories here are Judicial System and Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (7) synthetic media (7) foundation model (1)
Description: As enacted, enacts the "Modernization of Towing, Immobilization, and Oversight Normalization (MOTION) Act." - Amends TCA Title 4; Title 5; Title 6; Title 7; Title 39; Title 47; Title 48; Title 55; Title 56; Title 62; Title 66 and Title 67.
Summary: This bill amends multiple sections of Tennessee law to modernize regulations on towing and parking practices, establishing the "MOTION Act" to enhance oversight and protect consumers against unfair towing and booting practices.
Collection: Legislation
Status date: May 31, 2024
Status: Passed
Primary sponsor: Jake McCalmon
(22 total sponsors)
Last action: Comp. became Pub. Ch. 1017 (May 31, 2024)
The text primarily addresses revisions to parking regulations in Tennessee and does not explicitly mention AI, algorithms, or any related technology associated with the categories of Social Impact, Data Governance, System Integrity, or Robustness. The single mention of an 'automatic license plate reader' describes a tool that utilizes an algorithm but does not engage with any concepts directly related to AI ethics or governance as outlined in the categories. Overall, the core content of the act focuses on parking enforcement rather than the implications of AI technology.
Sector: None (see reasoning)
The text does not address specific sectors such as Politics and Elections, Government Agencies and Public Services, or any others that involve the use or regulation of AI technology. Instead, it proposes amendments relevant to parking enforcement and vehicle management, which do not inherently involve AI applications in any sector. The mention of an 'automatic license plate reader' does not align with the broader discussions typically associated with the defined sectors.
Keywords (occurrence): automated (1)
Description: Establishes the "AI Non-Sentience and Responsibility Act"
Summary: The "AI Non-Sentience and Responsibility Act" declares AI systems as non-sentient, clarifying liability for harm caused by AI. It assigns responsibility to developers, manufacturers, and owners, ensuring accountability and safety measures in AI deployment.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Introduced
Primary sponsor: Phil Amato
(sole sponsor)
Last action: Read Second Time (H) (Feb. 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The 'A.I. Non-Sentience and Responsibility Act' explicitly addresses AI systems, outlining definitions and responsibilities for developers, manufacturers, and owners. The provisions make clear that AI systems are not considered legal entities or sentient beings, which shapes accountability for actions taken by these systems. Thus, it is highly relevant to the Social Impact category, as it addresses the societal implications for accountability and public safety that arise from the deployment of AI systems. Data Governance is also relevant, as the act outlines responsibilities relating to the management and oversight of AI outputs that can affect human welfare. System Integrity is moderately relevant, as the act focuses on maintaining control and oversight of AI systems but does not delve into broader security measures. Robustness does not appear to be directly relevant, since the text does not discuss performance benchmarks or auditing requirements for AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill is primarily focused on defining the legal treatment and responsibilities related to AI, which relates closely to the realm of Government Agencies and Public Services, as state laws will directly govern the use of AI in these settings. The legislation has implications for accountability in various sectors, including Private Enterprises, Labor, and Employment, as well as general implications across all sectors, but it does not specifically mention regulation in areas like Healthcare or the Judicial System. Therefore, a rating of 4 is assigned to Government Agencies and Public Services, with a score of 3 for Private Enterprises, Labor, and Employment. The rest of the sectors are scored lower as they do not fit the primary focus of the bill.
Keywords (occurrence): artificial intelligence (1)
Description: Relating to the unlawful production or distribution of sexually explicit media using deep fake technology.
Summary: H.B. No. 449 makes it illegal to produce or distribute sexually explicit deep fake media without consent, addressing concerns over misuse of technology for non-consensual exploitation.
Collection: Legislation
Status date: May 29, 2025
Status: Enrolled
Primary sponsor: Mary Gonzalez
(9 total sponsors)
Last action: Sent to the Governor (May 31, 2025)
Description: Prohibit distributing deepfakes under the Nebraska Political Accountability and Disclosure Act
Summary: The bill prohibits the distribution of deceptive deepfakes targeting political candidates 90 days before elections, providing exceptions for disclosures and certain media types, while allowing candidates to seek legal relief.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: John Cavanaugh
(sole sponsor)
Last action: Referred to Government, Military and Veterans Affairs Committee (Jan. 24, 2025)
Societal Impact (see reasoning)
The text primarily addresses the distribution of deepfakes and synthetic media in the electoral process, focusing on preventing misinformation and protecting political candidates' reputations. It directly uses terms like 'deepfake' and 'synthetic media,' demonstrating the impact of AI on society, particularly in political contexts. This indicates significant relevance to the Social Impact category, as the bill addresses misinformation's effects on public trust and electoral integrity. It does not discuss data governance, system integrity, or robustness directly, so those categories receive lower scores.
Sector:
Politics and Elections (see reasoning)
The text is highly relevant to the Politics and Elections sector, given its focus on the regulation of deepfakes in political campaigns and electoral processes. It highlights legal considerations specific to election-related misinformation, which places it squarely within this sector. Although it may touch on aspects relevant to other sectors, they are not explicitly addressed or central to the text's purpose, leading to low scores for those sectors.
Keywords (occurrence): artificial intelligence (1) deepfake (7) synthetic media (4)
Description: Allowing bargaining over matters related to the use of artificial intelligence.
Summary: The bill allows collective bargaining regarding the adoption and modification of artificial intelligence technologies affecting employee wages or performance evaluations at Washington's higher education institutions. It aims to protect employee interests in an evolving technological landscape.
Collection: Legislation
Status date: March 8, 2025
Status: Engrossed
Primary sponsor: Lisa Parshley
(47 total sponsors)
Last action: First reading, referred to Labor & Commerce. (March 11, 2025)
Societal Impact (see reasoning)
The provided text focuses on legislation governing the use of artificial intelligence in the context of collective bargaining agreements. Its references to 'artificial intelligence' and related technologies serve the legislation's aim of ensuring that the adoption and modification of AI technologies are subject to collective bargaining when they affect employee wages, hours, or working conditions. The categories can therefore be evaluated based on AI's implications for social impact, data governance, system integrity, and robustness in the context of labor relations. Overall, while the act mentions technology, it aligns more with social implications than with data governance, system integrity, or robustness. Its attention to the societal aspects of AI usage, fairness, and employee rights makes it highly relevant to Social Impact.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text explicitly addresses the role of artificial intelligence in collective bargaining, which is primarily relevant to the workforce, labor relations, and government interactions with employees. Decisions about adopting AI that affect employee conditions tie directly to the labor market. Broader areas such as healthcare and other specific sectors are not directly discussed and therefore score lower. The text does suggest some engagement with public services, since government agencies oversee employment matters, which gives it slight relevance there. Ultimately, the legislation's core focus is the intersection of AI and labor relations.
Keywords (occurrence): artificial intelligence (10) machine learning (1)
Description: To amend the Energy Independence and Security Act of 2007 to direct research, development, demonstration, and commercial application activities in support of supercritical geothermal and closed-loop geothermal systems in various supercritical conditions, and for other purposes.
Summary: The Supercritical Geothermal Research and Development Act aims to enhance research, development, and commercialization of supercritical and closed-loop geothermal systems to improve geothermal energy utilization in various conditions.
Collection: Legislation
Status date: June 7, 2024
Status: Introduced
Primary sponsor: Frank Lucas
(2 total sponsors)
Last action: Subcommittee Hearings Held (July 23, 2024)
Data Governance
System Integrity (see reasoning)
The text does include references to AI through terms such as 'machine learning algorithms,' showing a connection to the use of AI in optimizing and enhancing geothermal research and applications. However, the primary focus of the legislation is geothermal energy rather than AI-related social impacts or regulatory frameworks. The mention of machine learning suggests some relevance to the Data Governance and System Integrity categories, but AI is not the primary thrust of the bill. Because the bill does not engage with broader AI governance or societal concerns, Social Impact and Robustness score lower.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The focus of this legislation is primarily geothermal energy research and development, with only tangential mentions of AI. The references to AI appear in the context of enhancing geothermal technology rather than addressing particular sectors in depth. While there is a slight connection to the Government Agencies and Public Services sector because of the bill's regulatory nature, sectors like Healthcare or Private Enterprises are not relevantly affected. The AI applications described are limited to the energy sector, and their broader impact does not clearly extend to other sectors.
Keywords (occurrence): machine learning (1)
Description: Enacts into law major components of legislation necessary to implement the state public protection and general government budget for the 2024-2025 state fiscal year; establishes the crime of assault on a retail worker (Part A); establishes the crime of fostering the sale of stolen goods as a class A misdemeanor (Part B); adds to the list of specified offenses that constitutes a hate crime (Part C); authorizes the governor to close correctional facilities upon notice to the legislature (Part D...
Summary: The bill implements components of New York's 2024-2025 budget, introducing new crimes like assault on retail workers and fostering the sale of stolen goods, while enhancing safety measures.
Collection: Legislation
Status date: Jan. 17, 2024
Status: Introduced
Primary sponsor: Budget
(sole sponsor)
Last action: SUBSTITUTED BY A8805C (April 18, 2024)
The provided text primarily focuses on implementing state legislation aimed at public protection and adjustments to the penal code. The text does not engage with AI systems, their impacts on society, or legislation directly related to AI governance or integrity. As such, it appears to be completely unrelated to AI-specific issues, making it irrelevant for all categories concerning AI.
Sector: None (see reasoning)
The text outlines various legal amendments and public protection measures but lacks any discussion or reference to AI-related use cases or regulations within specific sectors. This absence of AI content likewise renders the text non-relevant to the identified sectors, leading to a score of 1 across all sectors.
Keywords (occurrence): automated (2)
Description: Concerning benefits to facilitate data center development while supporting electric grid infrastructure, and, in connection therewith, creating the "Colorado Data Center Development and Grid Modernization Act".
Summary: The Data Center Development & Grid Modernization Act establishes a program in Colorado to incentivize data center development and grid modernization through tax and utility benefits, promoting economic growth and clean energy initiatives.
Collection: Legislation
Status date: April 4, 2025
Status: Introduced
Primary sponsor: Nick Hinrichsen
(6 total sponsors)
Last action: Introduced In Senate - Assigned to Transportation & Energy (April 4, 2025)
Description: Use of tenant screening software that uses nonpublic competitor data to set rent prohibited, and use of software that is biased against protected classes prohibited.
Summary: This bill prohibits the use of tenant screening software that relies on nonpublic competitor data for setting rent and bans algorithms biased against protected classes, aiming to enhance fair housing practices in Minnesota.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Michael Howard
(sole sponsor)
Last action: Introduction and first reading, referred to Housing Finance and Policy (Feb. 19, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This bill addresses two significant issues of AI usage: the prohibition of tenant screening software that employs nonpublic competitor data to set rents, and the restriction on algorithms or AI used for background screening that may have a biased impact on protected classes. Consequently, it is extremely relevant to Social Impact due to its emphasis on preventing discrimination and bias within AI systems, thereby safeguarding social equity. The Data Governance category is also highly relevant, as the bill concerns the use of public and nonpublic data in algorithms and the implications of bias in data used for algorithmic decision-making. The relevance to System Integrity is moderate since the bill touches on accountability and responsible AI usage in a specific context. Robustness receives a lower relevance score because the bill does not focus on benchmarking or auditing for performance but rather directly protects vulnerable classes against biased AI decisions.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily affects the housing sector, given its focus on tenant screening algorithms and the implications for renters. It intersects with Government Agencies and Public Services through the potential enforcement of regulations governing the housing market. However, it does not directly address the political process or electoral integrity, nor does it focus on AI's role in the judicial system or healthcare. As a result, the most relevant sector is Private Enterprises, Labor, and Employment, since the bill governs landlord practices in the rental market. The use of AI in tenant screening also connects to academic discussions of algorithmic bias. Overall, the strongest connections are with Private Enterprises and Government Agencies.
Keywords (occurrence): algorithm (2)
Description: AN ACT relating to insurance; imposing requirements governing prior authorization for medical or dental care; prohibiting an insurer from requiring prior authorization for covered emergency services or denying coverage for covered, medically necessary emergency services; requiring an insurer to publish certain information relating to requests for prior authorization on the Internet; requiring an insurer and the Commissioner of Insurance to compile certain reports; and providing other matters ...
Summary: Assembly Bill 290 revises prior authorization requirements for medical and dental care under health insurance, enhances transparency, and ensures timely approvals, particularly for emergency services and Medicaid patients.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Introduced
Primary sponsor: Duy Nguyen
(17 total sponsors)
Last action: To printer. (April 21, 2025)
Description: Creates the Artificial Intelligence Safety and Security Protocol Act. Provides that a developer shall produce, implement, follow, and conspicuously publish a safety and security protocol that includes specified information. Provides that, no less than every 90 days, a developer shall produce and conspicuously publish a risk assessment report that includes specified information. Provides that, at least once every calendar year, a developer shall retain a reputable third-party auditor to produc...
Summary: The Artificial Intelligence Safety and Security Protocol Act mandates developers to establish, publish, and regularly assess safety protocols for AI systems, aiming to mitigate risks and enhance public safety.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Daniel Didech
(sole sponsor)
Last action: Referred to Rules Committee (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses various aspects of artificial intelligence, particularly safety and security protocols for developers. This direct focus on AI systems and their management indicates high relevance across almost all categories. The emphasis on risk assessment, third-party audits, and safety protocols carries significant implications for societal impact, data governance, system integrity, and robustness. Each section discusses critical risks that AI could pose and the need for human oversight and transparent protocols, aligning well with the themes of the categories. As such, all categories are expected to receive a high relevance score based on the content of the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The proposed legislation targets AI developers directly and outlines compliance protocols that apply to the AI sector as a whole. It does not focus on specific sectors like healthcare, politics, or nonprofits, but broadly applies to any sector where AI is used, especially in public safety and security contexts. Because it combines elements of governance, data management, and public accountability, a neutral score is appropriate: the law acts as a general framework applicable to any AI-related situation rather than a measure confined to one sector. Hence, scores for specific sectors are lower than those for the overarching categories.
Keywords (occurrence): artificial intelligence (8) foundation model (9)
Description: An act to add Article 6.65 (commencing with Section 792) to Chapter 1 of Part 2 of Division 1 of the Insurance Code, relating to insurance.
Summary: The Insurance Consumer Privacy Protection Act of 2025 establishes stricter standards for how insurance companies handle consumers' personal information, enhancing privacy protections and ensuring transparency, consent, and accountability in data use.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Monique Limon
(sole sponsor)
Last action: Read second time and amended. Ordered to second reading. (May 23, 2025)