4827 results:
Description: An act to add Chapter 22.6 (commencing with Section 22601) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: Senate Bill No. 243 establishes regulations for companion chatbots, requiring operators to prevent manipulative engagement, address user mental health concerns, and undergo regular audits to ensure user safety and compliance.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Josh Becker
(3 total sponsors)
Last action: From committee with author's amendments. Read second time and amended. Re-referred to Com. on JUD. (March 28, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the implications of chatbot use among minors and promotes operator accountability for their AI systems. It highlights the societal impact of AI through its accountability measures and the mental health concerns associated with chatbot interactions with minors. This is significant because it deals directly with the potential psychological and material harm caused by AI systems, warranting a high score in the Social Impact category. On Data Governance, the bill mandates that operators report data related to suicidality in minors to the State Department of Health Care Services, which addresses the management of sensitive data and ensures data transparency, making that category relevant as well. System Integrity is moderately relevant due to the audit requirement ensuring compliance, but the bill does not comprehensively address broader issues of security or control in AI systems. Robustness is less relevant, as there are no specific mentions of performance standards or benchmarks for AI systems. Overall, the text primarily revolves around social considerations and data governance related to AI.
Sector:
Healthcare
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation significantly impacts children and minors, creating specific regulations for the use of AI technology, particularly chatbots, in environments where minors are users. It mandates considerations for their mental health and safety, indicating a strong legislative focus on minors. Although the bill does not fall squarely under typical sectors such as Government Agencies, Healthcare, or Education, it does regulate AI in interactions with minors, so it fits best under Hybrid, Emerging, and Unclassified. The text's focus on operators' responsibilities for preventing harm and providing transparency about AI technology adds to its relevance. Because each established sector is somewhat disconnected from the details of this legislation, those sectors receive lower scores, and the bill leans toward the unclassified category.
Keywords (occurrence): artificial intelligence (4) chatbot (27)
Description: Relating to the use of artificial intelligence-based algorithms by health benefit plan issuers, utilization review agents, health care providers, and physicians.
Summary: The bill regulates the use of artificial intelligence algorithms in healthcare by prohibiting discrimination and requiring transparency, oversight, and consumer education on AI use by health benefit plan issuers and providers.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Donna Campbell
(sole sponsor)
Last action: Filed (Feb. 19, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text primarily concerns the use of artificial intelligence in healthcare, specifying regulations for artificial intelligence-based algorithms in health benefit plans and their application in decision-making processes. It directly addresses potential bias and discrimination in AI use, which links strongly to the societal impacts of AI and supports the Social Impact category. Provisions requiring algorithm submission to a department for certification and oversight tie into Data Governance, focusing on secure and responsible data practices in AI applications. As for System Integrity and Robustness, the text implies a need for oversight and compliance but does not specify clear mechanisms for security, transparency, or performance benchmarks. Because it does raise accountability and standards, it leans toward System Integrity without strongly fulfilling the Robustness criteria.
Sector:
Healthcare (see reasoning)
The text relates closely to healthcare institutions, particularly how they utilize AI in managing health benefit plans and decision-making processes. The focus on artificial intelligence-based algorithms used by healthcare providers underscores its direct relevance to the Healthcare sector. While the text could have broader implications across different sectors, its principal aim pertains to legislation that influences how healthcare services operate with AI. The references to benefits and discrimination indicate it will significantly impact the Healthcare sector. There is limited reference to other sectors which leads to lower relevance in those areas.
Keywords (occurrence): artificial intelligence (15) algorithm (5)
Description: AN ACT relating to social media platforms; requiring each social media platform to establish a system to verify the age of prospective users of the platform in this State; prohibiting a social media platform from allowing certain minors in this State to use the social media platform; requiring a social media platform to obtain the affirmative consent of a parent or legal guardian before authorizing certain minors in this State to use the social media platform; requiring a social media platfor...
Summary: The bill establishes age verification and parental consent requirements for minors using social media platforms in Nevada, aiming to enhance youth online safety by regulating access and data usage.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Commerce and Labor
(sole sponsor)
Last action: Read first time. To committee. (Feb. 3, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The AI-related portions of the text primarily address how social media platforms must treat minors' personal information and prohibit its use in 'algorithmic recommendation systems.' This directly raises concerns about bias and the social impact of automated systems on vulnerable populations, particularly minors. The bill also requires social media platforms to implement technology-based age verification systems, hinting at the accountability and transparency of AI systems as a matter of social responsibility. The prohibition on using personal information in algorithmic systems speaks to efforts against AI-driven discrimination and misinformation in youth-centric environments. These considerations strongly align with the Social Impact category, so a high score is warranted. Data Governance also receives significant attention, as the legislation explicitly restricts the handling of minors' personal information and seeks compliance with privacy practices; however, it lacks a comprehensive focus on issues like data accuracy or bias, leading to a moderate score. System Integrity is only indirectly relevant, given the text's narrow focus on minors and social media platforms. Robustness is relevant only in passing: the text makes recommendations but does not explore performance benchmarks or auditing practices for AI systems. The scores therefore reflect the bill's specific impact on social concerns and some governance aspects, with weaker alignment elsewhere.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation specifically addresses the implications of AI and technology in managing social media platforms used by minors, and the keyword mentions underscore its intersection with youth safety and concerns about how AI-driven systems can undermine trusted spaces for younger users. Regarding Politics and Elections, the text does not directly affect electoral processes or political campaigns, though it does influence public dialogue around minors and technology. Government Agencies and Public Services are modestly implicated, since the legislation directs the Department of Health and Human Services to establish regulations for best practices. The Judicial System is not directly addressed in terms of legal precedent with AI, though the civil enforcement provisions may introduce legal contexts. There is little to no connection to Healthcare, Private Enterprises, or Academic Institutions, leading to low scores for those sectors; International Cooperation and Standards is not applicable, nor is there an evident link to Nonprofits. 'Hybrid, Emerging, and Unclassified' fits only if social media is treated as a unique emerging technology context, which would not justify high relevance. Overall, the text carries strong implications for government approaches to AI in social media and youth safeguarding, leading to favorable scores for those sectors.
Keywords (occurrence): automated (1) recommendation system (3)
Description: Relating to the unlawful production or distribution of certain sexually explicit media and to the removal of certain intimate visual depictions published on online platforms without the consent of the person depicted; increasing criminal penalties.
Summary: The Exploitation Protection Act criminalizes non-consensual production or distribution of explicit media, mandates removal processes on platforms for unauthorized depictions, and increases penalties for violations.
Collection: Legislation
Status date: March 4, 2025
Status: Introduced
Primary sponsor: Richard Raymond
(sole sponsor)
Last action: Filed (March 4, 2025)
Societal Impact (see reasoning)
The text primarily focuses on the unauthorized production and distribution of intimate visual depictions, including the use of deepfake technology, which falls under the scope of AI. Given this context, the relevance of the four categories can be assessed as follows. In terms of Social Impact, the bill addresses the potential harm caused by misused AI technologies (like deepfakes), which can affect individuals’ privacy and consent. Hence, it scores a 5. For Data Governance, the bill does not directly address data collection and management concerns; it focuses more on the prohibition of misuse and does not specifically deal with data accuracy or privacy regulations, leading to a score of 2. Regarding System Integrity, while it does touch upon safeguarding individuals from misuse of AI through legal means, it lacks in addressing broader principles of AI system security and transparency, resulting in a score of 2. Lastly, for Robustness, there is little mention of performance benchmarks or compliance requirements for AI systems, leading to a score of 1.
Sector:
Government Agencies and Public Services
Judicial system
Nonprofits and NGOs (see reasoning)
The text specifically addresses the unlawful production and distribution of deepfake media, with implications in relation to individuals’ rights and online platforms. Therefore, it is most relevant to the Nonprofits and NGOs sector as these organizations could be stakeholders when addressing misuse and advocacy around AI-generated content. Furthermore, given the public safety concerns surrounding intimate visual depictions, a score of 4 was assigned for Government Agencies and Public Services as they might need to enforce or regulate the legislation. There is also implicit relevance to the Judicial System for addressing legal ramifications of deepfake technology; hence, it receives a score of 3. Other sectors like Healthcare, Politics and Elections, Private Enterprises, Labor, and Employment, Academic and Research Institutions have minimal direct relevance and were scored lower.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (1) deepfake (7)
Description: Relating to the deceptive trade practice of failure to disclose information regarding the use of artificial intelligence system or algorithmic pricing systems for setting of price.
Summary: This bill establishes that failing to disclose the use of artificial intelligence or algorithmic pricing in setting prices is a deceptive trade practice in Texas. It aims to enhance transparency and consumer protection.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Royce West
(sole sponsor)
Last action: Filed (March 13, 2025)
Societal Impact
Data Governance (see reasoning)
The text is primarily focused on the deceptive practices associated with the lack of disclosure related to artificial intelligence systems and algorithmic pricing. This falls under the 'Social Impact' category as it aims to protect consumers from misinformation regarding AI applications that could affect their purchasing decisions. Additionally, it addresses issues such as consumer protection and potentially the biases that may arise in automated pricing systems. The relevance to 'Data Governance' is present but less explicit, as it tangentially relates to the management and transparency of AI systems but does not focus on data security or accuracy. 'System Integrity' and 'Robustness' are less relevant as the legislation does not address specific security, oversight, or performance benchmarks of AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text's emphasis on disclosing AI usage in pricing directly relates to 'Private Enterprises, Labor, and Employment,' as it affects how businesses handle consumer transactions and pricing strategies. It is also relevant to 'Government Agencies and Public Services,' since regulations of this nature typically stem from governmental oversight intended to protect consumers. Implications for the 'Judicial System' are minor: enforcement could involve legal interpretations of deceptive practices, but this is an indirect association. Other sectors such as 'Politics and Elections,' 'Healthcare,' and 'Academic and Research Institutions' have no direct connection to this text. The text does not fit 'Nonprofits and NGOs' or 'International Cooperation and Standards,' as it focuses on local business practices.
Keywords (occurrence): artificial intelligence (4) machine learning (1)
Description: No Use Of Ai For Rent Manipulation
Summary: The bill prohibits the use of artificial intelligence for rent manipulation, preventing property owners from coordinating price changes and ensuring fair competition in the rental market in New Mexico.
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Andrea Romero
(2 total sponsors)
Last action: HJC: Reported by Committee with Do Not Pass but with a Do Pass Recommendation on Committee Substitution, placed on temporary calendar (Feb. 18, 2025)
Societal Impact
Data Governance (see reasoning)
This bill explicitly addresses the manipulation of rent pricing using artificial intelligence, which has significant implications for social fairness and market dynamics. The legislation aims to mitigate potential harm caused by AI in rental markets, directly relating to the manipulation of consumer prices and competitive fairness. The prohibition of AI in this context highlights concerns about accountability for the outcomes produced by AI-driven pricing. Thus, it has a high relevance to the Social Impact category, particularly regarding fairness and consumer protection. Data Governance is also relevant as the bill touches on the management and usage of data by highlighting prohibited practices around AI coordination functions. System Integrity and Robustness are less relevant since the bill does not primarily focus on security, transparency, benchmarks, or compliance aspects of AI systems.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The bill is most relevant to the rental housing market, affecting both renters seeking rental agreements and rental property owners. While it touches on governance issues, such as coordination and competitive practices among rental owners using AI, it does not address AI use in broad public services or employment contexts the way other sectors would. The specifics of AI-driven price manipulation do not extend neatly into sectors such as Healthcare or the Judicial System. The evaluation therefore focuses on the Private Enterprises, Labor, and Employment sector, since the manipulation has direct implications for rental business practices and their governance.
Keywords (occurrence): artificial intelligence (1) algorithm (1)
Description: Prohibiting a person from operating a motor vehicle while using a wireless communications device in a handheld manner; providing an exception; requiring that sustained use of a wireless communications device by a person operating a motor vehicle be conducted through a hands-free accessory until such use is terminated; revising penalty provisions relating to the use of wireless communications devices in a handheld manner in certain circumstances; requiring persons cited for specified infractio...
Summary: The bill amends Florida traffic laws to prohibit handheld use of wireless devices while driving, enhances penalties for violations, and aims to improve road safety and reduce accidents.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Rules
(6 total sponsors)
Last action: Pending reference review -under Rule 4.7(2) - (Committee Substitute) (April 1, 2025)
The text primarily addresses traffic offenses related to the use of wireless communications devices while operating a motor vehicle, with a specific focus on prohibiting handheld use. No explicit references to AI technologies or their implications are present, which indicates low relevance to the AI-focused categories. The legislation does not delve into concepts like fairness, accountability, data governance, transparency, or the integrity of systems utilizing AI. Therefore, the text does not fit well into the categories of Social Impact, Data Governance, System Integrity, or Robustness.
Sector: None (see reasoning)
The text is focused on traffic law and the safe use of wireless communications devices in vehicles, rather than discussing the application of AI technologies within any specific sector like politics, government, judicial systems, healthcare, or businesses. It does briefly mention autonomous vehicles, but this is only in the context of using hands-free devices while driving, lacking a broader engagement with AI technologies. Thus, the relevance to the sectors provided is minimal to none.
Keywords (occurrence): automated (1) autonomous vehicle (2)
Description: Relating to the use of artificial intelligence to score certain portions of assessment instruments administered to public school students.
Summary: The bill restricts the use of artificial intelligence for scoring constructed responses on public school assessments, allowing it only under specific conditions to ensure fairness and reliability.
Collection: Legislation
Status date: March 14, 2025
Status: Introduced
Primary sponsor: Gina Hinojosa
(sole sponsor)
Last action: Filed (March 14, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly discusses the role of artificial intelligence in scoring assessment instruments in public schools. It elaborates on the prohibition and conditional allowance of AI methods, detailing how these methods must ensure validity and reliability, particularly addressing issues of bias and accountability in AI scoring systems. This strongly indicates a focus on both the societal impact of AI on education (Social Impact category) and the governance of data integrity and accuracy in AI applications (Data Governance category). The content does not delve significantly into system integrity or robustness, as it mainly addresses the application of AI in scoring assessments rather than the broader security or performance standards that those categories would usually encompass.
Sector:
Academic and Research Institutions (see reasoning)
The legislation specifically targets the use of artificial intelligence within the educational sector, especially in scoring K-12 public school assessments. This connects closely with the Academic and Research Institutions sector, as it deals with educational assessment methods and AI's implications in that context. While it touches on the broader societal impacts of AI in education, it does not directly relate to other sectors like Healthcare, Politics and Elections, or the Judicial System. The mention of educationally disadvantaged and special education students indicates a focus on fairness within the education system, reinforcing its relevance to Academic and Research Institutions.
Keywords (occurrence): artificial intelligence (4) automated (3) algorithm (1)
Description: Relating to the creation of the artificial intelligence advisory council and the establishment of the artificial intelligence learning laboratory.
Summary: The bill establishes an Artificial Intelligence Advisory Council and an Artificial Intelligence Learning Laboratory in Texas to evaluate and monitor AI systems used by state agencies, ensuring ethical implementation and minimizing risks.
Collection: Legislation
Status date: March 5, 2025
Status: Introduced
Primary sponsor: Brian Harrison
(sole sponsor)
Last action: Filed (March 5, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text clearly addresses the creation of an artificial intelligence advisory council and establishment of an AI learning laboratory, focusing on the management and study of AI systems. The relevance to 'Social Impact' is strong due to provisions aimed at assessing the impact of AI on constitutional rights and public trust. 'Data Governance' is relevant through mandates on reporting and inventorying automated decision systems, addressing their use, and the potential for bias or privacy concerns. 'System Integrity' is also relevant given the emphasis on oversight and recommendations for ethical AI use within the state government. 'Robustness', however, is less directly applicable as the text does not focus on performance benchmarks or auditing regulations.
Sector:
Government Agencies and Public Services
Judicial system
Academic and Research Institutions (see reasoning)
The sector classifications align well with the text. 'Government Agencies and Public Services' is highly relevant, since the legislation directly establishes oversight and advisory bodies for AI use within state government operations. 'Judicial System' is somewhat relevant given the noted impact of AI systems on legal rights, but the bill does not specifically address judicial applications. Other sectors such as 'Private Enterprises, Labor, and Employment' and 'Healthcare' are less relevant, as the text does not address those domains directly. 'Politics and Elections' could be marginally relevant given the establishment of a council, but the bill does not focus on political processes.
Keywords (occurrence): artificial intelligence (31) machine learning (1) automated (31) algorithm (3)
Description: Create new sections of KRS Chapter 158 to define terms; allow the use of camera monitoring systems on school buses operated by a school district, and allow the enforcement of a civil penalty for stop arm camera violations recorded by a camera monitoring system; set the amount of the civil penalty; provide that the revenue generated from a civil penalty shall be retained by the school district; allow a law enforcement agency to charge a fee of $25 from every civil penalty enforced by the law e...
Summary: The bill enhances school bus safety by allowing districts to install camera monitoring systems to record stop-arm violations, promoting enforcement and accountability, while ensuring privacy for recorded images.
Collection: Legislation
Status date: Jan. 7, 2025
Status: Introduced
Primary sponsor: Greg Elkins
(3 total sponsors)
Last action: 3rd reading, passed 37-0 with Committee Substitute (1) (March 5, 2025)
The text primarily focuses on school bus safety and the use of camera monitoring systems on school buses. While it does involve technology in terms of using cameras for enforcement, it does not explicitly mention or relate to broader AI concepts such as artificial intelligence or automated decision making. The relevance of the categories is assessed as follows: Social Impact scores low, as it does not directly address societal effects beyond safety; Data Governance is also low, as it doesn’t delve into data management concerns specific to AI; System Integrity is slightly relevant due to the technology involved but not significantly focused on systems security or oversight; Robustness has no relevance since performance benchmarks for AI are not addressed.
Sector:
Government Agencies and Public Services (see reasoning)
The text discusses the enforcement of school bus regulations via monitoring systems and outlines penalties for infractions. While the topics presented may involve government operations, they are not specifically related to broader AI applications within governmental contexts. Politics and Elections, Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Standards, Nonprofits, and Hybrid sectors are not applicable, as the legislation is narrowly focused on transportation and school safety without extending into these broader areas.
Keywords (occurrence): automated (3) autonomous vehicle (9)
Description: Requires surplus lines insurers to comply with valued policy law; requires insurers' decisions to deny claims to be reviewed, approved, & signed off by qualified human professionals; prohibits artificial intelligence, machine learning algorithms, & automated systems from serving as basis for denying claims; requires insurers to maintain certain records of human review process for denied claims; requires insurers to include certain information in denial communications to claimants; authorizes ...
Summary: HB 1555 mandates human review of insurance claim denials, prohibits automated decision-making, and enhances insurer accountability, aiming to ensure fairness and transparency in the claims process.
Collection: Legislation
Status date: Feb. 28, 2025
Status: Introduced
Primary sponsor: Hillary Cassel
(sole sponsor)
Last action: Filed (Feb. 28, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly prohibits the use of artificial intelligence (AI), machine learning algorithms, and automated systems in the process of denying insurance claims. This highlights the concern over the social impact of AI in making significant decisions that affect individuals financially and emotionally. There are mandates for human review and accountability that tie directly to social implications, particularly in protecting consumers from potential biases or errors that automated systems might introduce. As such, the text aligns well with the Social Impact category, as it emphasizes fairness and accountability in decision-making that relates to claim denials. Data governance is somewhat relevant due to the record-keeping requirements for human review, though less so compared to the social impact directly addressed in the text. System integrity is relevant due to the enforced human oversight in decision-making processes which ensures integrity and trust in the claims process, while Robustness is less pertinent as the focus here is not on performance benchmarks but rather on decision-making pathways. Thus, the relevance scores reflect a clear alignment with the social implications and necessary human governance over automated decisions.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text pertains specifically to the insurance sector, setting regulations for how insurance companies should manage claim decisions, which inherently involves the use of AI in those processes. The mention of human oversight directly affects how insurers operate and interact with clients, making it relevant to Private Enterprises and Labor, as it discusses accountability and labor practices within the industry. Government Agencies and Public Services are also relevant as the legislation includes provisions for oversight by the Office of Insurance Regulation, displaying the governance aspect of the insurance sector. However, sectors like Healthcare, Academic and Research Institutions, and others do not find direct relevance in this context. Accordingly, scores reflect the strong relevance of the text to private enterprises and the regulatory framework governing the insurance sector.
Keywords (occurrence): artificial intelligence (1) machine learning (3) automated (4) algorithm (2)
Description: To counter the malign influence and theft perpetuated by the People's Republic of China and the Chinese Communist Party.
Summary: The Countering Communist China Act aims to address the threats posed by the Chinese Communist Party, particularly focusing on economic coercion, intellectual property theft, and support for authoritarianism, while enhancing U.S. national security.
Collection: Legislation
Status date: Feb. 29, 2024
Status: Introduced
Primary sponsor: Kevin Hern
(50 total sponsors)
Last action: Referred to the Committee on Foreign Affairs, and in addition to the Committees on Financial Services, Ways and Means, Rules, the Judiciary, Oversight and Accountability, Energy and Commerce, Intelligence (Permanent Select), Agriculture, Science, Space, and Technology, Natural Resources, Education and the Workforce, Armed Services, Transportation and Infrastructure, and Veterans' Affairs, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (Feb. 29, 2024)
The text primarily addresses issues of national security and economic relations concerning the People's Republic of China, without mentioning AI or related technologies explicitly. While it's possible that AI technologies could be implicated in discussions about trade and economic threats, the text does not delve into the specific impacts or regulations related to AI systems. Therefore, the relevance of Social Impact, Data Governance, System Integrity, and Robustness to this specific text appears to be quite low.
Sector: None (see reasoning)
The text heavily focuses on national security and economic issues related to China and does not connect to specific sectors like Politics and Elections, Healthcare, or others concerning AI applications. It lacks any mention of AI's impacts or usages in the listed sectors, meaning that they do not pertain to the text. Hence, all sectors are rated as not relevant.
Keywords (occurrence): artificial intelligence (2) machine learning (1) algorithm (1)
Description: Schools; requiring students beginning certain school year to complete a computer science unit to graduate with standard diploma. Effective date. Emergency.
Summary: This bill mandates that students graduating with a standard diploma starting in the 2024-2025 school year must complete a computer science unit, enhancing STEM education.
Collection: Legislation
Status date: March 26, 2025
Status: Engrossed
Primary sponsor: Brenda Stanley
(2 total sponsors)
Last action: First Reading (March 26, 2025)
Description: To provide for Federal civilian agency laboratory development for testing and certification of artificial intelligence for civilian agency use, and for other purposes.
Summary: The bill establishes federal laboratories for testing and certifying artificial intelligence systems intended for civilian agency use, prioritizing privacy, accountability, and legal protections while ensuring ethical application of AI technologies.
Collection: Legislation
Status date: July 15, 2024
Status: Introduced
Primary sponsor: Sheila Jackson Lee
(sole sponsor)
Last action: Referred to the Committee on Homeland Security, and in addition to the Committee on Oversight and Accountability, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (July 15, 2024)
System Integrity
Data Robustness (see reasoning)
The text explicitly mentions 'artificial intelligence' in the context of Federal civilian agency use, indicating direct relevance to AI. Because the legislation focuses on testing and certification, it implies a concern for the integrity and performance of AI systems used by government agencies. However, without further detail, the extent of its relevance to the broader impacts, governance, and robustness of AI systems cannot be thoroughly assessed. The title suggests both social and governance considerations, with the primary focus on System Integrity and, potentially, Robustness.
Sector:
Government Agencies and Public Services (see reasoning)
The text pertains to Federal civilian agencies, indicating that it is closely related to the use and regulation of AI by government entities. The mention of laboratory development implies a focus on ensuring that AI meets certain standards and regulations necessary for public services. However, without specific applications or implications mentioned, the relevance to the broader governmental and public service sector remains limited, making it moderately relevant.
Keywords (occurrence): artificial intelligence (15) automated (2)
Description: An act to add Chapter 22.6 (commencing with Section 22602) to Division 8 of the Business and Professions Code, and to add Sections 11547.6 and 11547.6.1 to the Government Code, relating to artificial intelligence.
Summary: The "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" establishes safety protocols for AI model developers, including compliance auditing and incident reporting, aimed at mitigating risks associated with advanced AI technologies.
Collection: Legislation
Status date: Aug. 29, 2024
Status: Enrolled
Primary sponsor: Scott Wiener
(4 total sponsors)
Last action: Enrolled and presented to the Governor at 3 p.m. (Sept. 9, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text primarily discusses regulations and requirements for the development, safety, and management of artificial intelligence models, specifically 'covered models.' This relates clearly to all four categories: 1) Social Impact addresses the potential risks and harms AI systems can pose to safety and the regulations created around AI-driven innovation; 2) Data Governance highlights the importance of accurate data and compliance for AI models; 3) System Integrity relates to implementing safety protocols and the ability to shut down models; and 4) Robustness emphasizes compliance benchmarks and independent auditing of AI models. Given the strong focus on safety, compliance, and the societal implications of AI's use and development, all four categories are relevant.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses how AI is regulated in various contexts, such as government operations and compliance measures for AI models, suggesting a wide-ranging impact across several sectors. Specifically, 1) Politics and Elections is pertinent as regulations might influence electoral technologies; 2) Government Agencies and Public Services is relevant since the legislation pertains directly to government operations; 3) Judicial System could relate to legal frameworks governing AI use; 4) Healthcare may also be relevant where AI health technologies are employed, and the risk of harm mitigation is crucial; 5) Private Enterprises, Labor, and Employment ties to how businesses manage compliance for AI technologies; 6) Academic and Research Institutions are included due to provisions promoting equitable access for universities; 7) International Cooperation and Standards is relevant when discussing compliance across jurisdictions; 8) Nonprofits and NGOs, where applicable, can be influenced by these AI governance frameworks; while 9) Hybrid, Emerging, and Unclassified captures potential intersections of AI with new domains. However, the primary emphasis in the text is on government regulation and safety protocols rather than specific impacts on individual sectors. Therefore, while all categories could be considered, the strongest relevance is primarily found in the Government Agencies and Public Services sector.
Keywords (occurrence): artificial intelligence (39)
Description: Prohibiting health care providers and carriers from using artificial intelligence if the artificial intelligence has been designed only to reduce costs for a health care provider or carrier at the expense of reducing the quality of patient care, delaying care, or denying coverage for patient care; requiring health care providers and carriers that use artificial intelligence for health care decisions annually to post certain key data about the decisions on the health care provider's or carrier...
Summary: House Bill 1240 prohibits health care providers and insurance carriers from using artificial intelligence that prioritizes cost-cutting over patient care quality, requiring transparency and annual audits for AI use in decision-making.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: C.T. Wilson
(sole sponsor)
Last action: Hearing 3/06 at 1:00 p.m. (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text specifically addresses the use of artificial intelligence in healthcare decision-making, focusing on prohibiting AI systems that prioritize cost reduction over quality of care. It mandates transparency and accountability regarding AI use within healthcare settings, including annual posting of decision-related data and third-party audits. Consequently, it significantly relates to the categories of Social Impact, Data Governance, and System Integrity as it discusses the societal implications of AI in healthcare, handling of data associated with AI decisions, and mandates for oversight in AI system processes.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The legislation is centered around healthcare, with a focus on artificial intelligence's role in healthcare providers and insurers. It aims to establish guidelines for AI's ethical implementation, evaluation standards, and transparency in healthcare decision-making. Therefore, it is extremely relevant to Healthcare, while it also touches on aspects of Government Agencies due to regulatory considerations. However, the relevance to other sectors appears limited, hence the lower scores for those other categories.
Keywords (occurrence): artificial intelligence (3) automated (2)
Description: Relating to the use of certain automated systems in, and certain adverse determinations made in connection with, the health benefit claims process.
Summary: The bill restricts the use of automated decision systems in making adverse health benefit determinations, ensuring transparency and allowing audits while permitting their use for administrative tasks like fraud detection.
Collection: Legislation
Status date: March 26, 2025
Status: Engrossed
Primary sponsor: Charles Schwertner
(4 total sponsors)
Last action: Reported engrossed (March 26, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily addresses the use of artificial intelligence in the context of healthcare, specifically in utilization review for health benefit plans. It emphasizes the potential impact of AI on decision-making processes in healthcare services, which relates to the Social Impact category. The legislation imposes restrictions on the use of AI algorithms to prevent reliance solely on them for determinations regarding medical necessity, hence holding developers accountable for the AI’s role in healthcare decisions, aligning with social accountability. Regarding Data Governance, there are aspects related to the audit and inspection of AI usage that imply a need for data accuracy and secure management, though this isn’t the primary focus of the text. System Integrity is relevant due to the mention of oversight and potential audits of AI utilization, ensuring the reliability and transparency of AI systems used in healthcare. Robustness isn’t addressed here as there is no mention of benchmarks or performance standards for AI systems specified. Overall, the legislation is highly relevant to Social Impact and moderately relevant to Data Governance and System Integrity.
Sector:
Healthcare (see reasoning)
The text explicitly pertains to AI in healthcare settings, focusing on utilization review processes, which means it aligns very closely with the Healthcare sector. It outlines how AI can and cannot be used in making healthcare service decisions, thereby regulating its application in this domain. The regulation of AI in health benefit plans is crucial for the safe and effective use of technology in a sensitive sector like healthcare. Other sectors such as Politics and Elections or International Cooperation and Standards are not relevant to the content of the text. Overall, the text is extremely relevant to the Healthcare sector.
Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (6) algorithm (4)
Description: Provides for the continuous revision of the Code of Civil Procedure
Summary: House Bill No. 178 aims to continuously revise Louisiana's Code of Civil Procedure by amending existing articles, enacting new provisions, and clarifying procedures related to civil litigation, including issues like child custody, attorney conduct, and service of documents.
Collection: Legislation
Status date: March 31, 2025
Status: Introduced
Primary sponsor: Mike Johnson
(sole sponsor)
Last action: Under the rules, provisionally referred to the Committee on Civil Law and Procedure. (March 31, 2025)
Description: To Prohibit Deceptive And Fraudulent Deepfakes In Election Communications.
Summary: The bill aims to prohibit deceptive and fraudulent deepfakes in election communications, establishing civil penalties and a cause of action against violators to protect candidates' reputations and inform voters.
Collection: Legislation
Status date: Nov. 20, 2024
Status: Introduced
Primary sponsor: Andrew Collins
(sole sponsor)
Last action: Filed (Nov. 20, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This legislation explicitly addresses the creation and distribution of deceptive and fraudulent deepfakes, particularly in the context of election communications, highlighting the social impact of such technologies on electoral integrity and misinformation. The proposed law aims to manage the psychological and material harm caused by AI-generated content that misrepresents candidates, engaging directly with AI's effects on society, trust in democratic processes, and potential harm to reputations. It also proposes civil penalties, establishing accountability for developers and users of these technologies and further solidifying its relevance to Social Impact, which warrants an extremely high score. The legislation's mentions of synthetic media and generative adversarial networks are closely related to AI, bridging directly to concerns about fairness, bias, and the societal effects of algorithmically driven content creation; its focus on preventing misinformation aligns closely with the category's broader description.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The legislation primarily targets the use of deceptive and fraudulent deepfakes in elections and political communications, making it directly relevant to the Politics and Elections sector through its explicit focus on safeguarding electoral integrity against AI-generated misinformation. The requirement for clear disclosures and the imposition of civil penalties indicate engagement with governance of electoral processes, shaping how elections may be conducted and regulated where AI technologies are involved. While enforcement carries some implications for Government Agencies and Public Services, the primary focus remains on electoral integrity. Politics and Elections therefore receives a high score, while Government Agencies and Public Services receives only moderate relevance.
Keywords (occurrence): artificial intelligence (1) deepfake (11) synthetic media (10)
Description: Relating to artificial intelligence mental health services.
Summary: The bill establishes regulations for artificial intelligence mental health services in Texas, requiring approval for AI applications and licensed professionals to ensure ethical and safe service provision.
Collection: Legislation
Status date: Nov. 13, 2024
Status: Introduced
Primary sponsor: Nate Schatzline
(sole sponsor)
Last action: Filed (Nov. 13, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is focused on the provision of mental health services using artificial intelligence (AI) and outlines regulations for the approval, provision, and oversight of such services. Given the explicit mention of AI in the context of mental health services, this text strongly intersects with the Social Impact, as it addresses how AI affects individuals' access to mental health care, introduces ethical considerations, and mandates informed consent to mitigate psychological harm. The Data Governance category is also relevant as it implicates the management of records and obligations to ensure privacy and compliance with professional standards. System Integrity is pertinent because it emphasizes the need for professional oversight and the integrity of AI applications involved in mental health services. Lastly, Robustness is slightly relevant since the text mentions the verification of AI applications for competency and safety, although it primarily focuses on service provision rather than benchmarking AI performance.
Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation specifically revolves around the use of artificial intelligence in mental health services, which directly connects it to the healthcare sector. It outlines how AI can be employed by licensed professionals, the necessary approvals for AI applications, and ethical standards related to patient care and service provision. Thus, the Healthcare sector is rated highly relevant. The implications for the roles of mental health professionals and potential regulatory mechanisms also connect it to private enterprises but to a lesser extent. No other specific sectors apply to the content of the text, leading to lower relevance scores for the remaining sectors.
Keywords (occurrence): artificial intelligence (22) machine learning (1)