4932 results:
Description: An act relating to restricting electronic monitoring of employees and the use of employment-related automated decision systems
Summary: The bill proposes to limit electronic monitoring of employees and the use of automated decision systems in employment practices, ensuring transparency, privacy, and compliance with labor laws.
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Monique Priestley
(9 total sponsors)
Last action: Read first time and referred to the Committee on General and Housing (Feb. 19, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation directly addresses the use and implications of automated decision systems and electronic monitoring in employment contexts, making it highly relevant to the Societal Impact and System Integrity categories. The focus on automated decision systems highlights potential societal implications, questions of fairness and bias, and the protection of employees in automated workplace environments. Additionally, the requirements for impact assessments and notices relate closely to data governance and system integrity principles, ensuring transparency and accountability for AI usage in employment settings. The bill seeks to establish safeguards that protect employee rights and privacy with regard to automated systems, specifically addressing accountability, the handling of sensitive data, and fairness in decision-making processes.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
This text is particularly relevant to several sectors due to its implications for labor practices and employee rights in the context of AI technologies. It explicitly deals with the regulation of automated decision systems in employment contexts, setting standards for transparency and fairness in decision-making. This significantly affects the Private Enterprises, Labor, and Employment sector, as well as the Government Agencies and Public Services sector, because it sets a precedent for government regulations that may influence various public and private organizations. Implications may also extend to the Judicial System, as cases may arise challenging the legality of automated decision aids in hiring or employment evaluations. While its direct relevance to sectors like Healthcare and Academic Institutions is less clear, the principles discussed could apply in other contexts where AI-driven decisions are made.
Keywords (occurrence): artificial intelligence (1) automated (42) algorithm (2)
Description: As introduced, defines "human being," "life," and "natural person" for statutory construction purposes; excludes from the definition of "person," "life," and "natural person" artificial intelligence, a computer algorithm, a software program, computer hardware, or any type of machine. - Amends TCA Title 1.
Summary: The bill amends Tennessee law to clarify definitions of "person," "human being," and "life," explicitly excluding artificial intelligence and machines from these definitions, while recognizing unborn humans.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Michele Reneau
(sole sponsor)
Last action: Assigned to s/c Civil Justice Subcommittee (Feb. 10, 2025)
Societal Impact (see reasoning)
This bill explicitly addresses the definition of 'person' and related terms, specifically excluding artificial intelligence and other machine-related entities. As such, its relevance to the Social Impact category is moderate, as it pertains to how AI is perceived in relation to personhood and legal status, which shapes the societal implications of AI's existence and usage. It is less relevant to Data Governance, System Integrity, and Robustness, as the legislation primarily focuses on definitions and exclusions rather than on governance, integrity, or performance benchmarks of AI systems.
Sector: None (see reasoning)
The bill directly addresses the legal definitions concerning AI, thus impacting the understanding of AI in a legislative context. However, it does not delve into specific areas such as politics, government operations, or healthcare, so its relevance to these sectors is minimal. It does not focus on employment implications or international standards either. Therefore, the legislative intent’s relevance is mostly confined to the legal and philosophical implications of personhood, allowing a moderate score in the relevant sectors.
Keywords (occurrence): artificial intelligence (3) algorithm (3)
Description: Making improvements to transparency and accountability in the prior authorization determination process.
Summary: This bill aims to enhance transparency and accountability in the prior authorization process for healthcare services, ensuring timely decisions and proper oversight in using artificial intelligence for medical coverage determinations in Washington State.
Collection: Legislation
Status date: Jan. 24, 2025
Status: Introduced
Primary sponsor: Alicia Rule
(5 total sponsors)
Last action: Referred to Appropriations. (Feb. 21, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text discusses the use of artificial intelligence in the prior authorization determination process for healthcare coverage. This relates directly to the Societal Impact category, as it addresses accountability and aims to ensure AI does not make inappropriate healthcare determinations that could affect patient health. It also aligns with Data Governance, as it touches on the need for fair, non-discriminatory AI usage based on individual medical history rather than biased group data sets. The text further emphasizes oversight and transparency in AI systems, which fits System Integrity. Finally, it calls for compliance with standards and regular reviews of AI tools to ensure their robustness and effectiveness, so it also pertains to Robustness. Overall, the integration of AI into healthcare decisions, together with accountability safeguards and guidelines, makes the text relevant across multiple categories.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text primarily focuses on the healthcare sector, specifically on how AI technologies influence healthcare coverage decisions through prior authorization processes. The legislation outlines the standards for health insurance and care providers concerning how prior authorizations are to be handled with respect to individual patient data and AI's role in decision making. Thus, it is highly relevant to the 'Healthcare' sector. Some elements of accountability and integrity are also relevant to the 'Government Agencies and Public Services' sector since the legislation regulates the actions of organizations in delivering health services. However, the primary focus remains on the healthcare implications of AI.
Keywords (occurrence): artificial intelligence (38) machine learning (6) automated (3) foundation model (3)
Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Summary: The TAKE IT DOWN Act mandates platforms to remove nonconsensual intimate visual depictions, establishes penalties for violations, and outlines a process for individuals to report such content.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Maria Salazar
(10 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (Jan. 22, 2025)
Societal Impact
Data Governance (see reasoning)
The text centers on the regulation of nonconsensual intimate visual depictions, particularly those that involve digital forgery or deepfakes created through AI technologies. This clearly ties into the Social Impact category, as it addresses psychological and reputational harm caused by nonconsensual uses of AI-generated imagery. It also encompasses accountability for technologies that could enable exploitation, aligning with existing concerns around fairness and bias. There are elements that touch on data governance as well, particularly in how identity and consent are managed and safeguarded within AI systems, though the primary focus remains on individual and societal implications. The System Integrity and Robustness categories are less relevant here, as the text does not lay out specific safeguards, compliance measures, or performance benchmarks for AI itself; rather, it focuses on the negative societal impacts stemming from misuse of such technologies.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The legislation's focus on the regulation of digital forgeries created by AI expands into the political discourse surrounding technology's role in public safety and individual rights, thus moderately connecting to Politics and Elections. It has strong relevance to the category of Government Agencies and Public Services, considering that government oversight and enforcement via the Federal Trade Commission is elaborated in the enactment and enforcement sections, indicating a direct impact on public service mechanics. The regulation doesn’t specifically address the Judicial System but aligns with broader legal implications. The healthcare sector, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and the Hybrid, Emerging, and Unclassified categories do not relate closely to the text, rendering them significantly less relevant. Overall, it prominently intersects with social, governmental, and legal frameworks.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: Artificial intelligence; prohibiting distribution of certain media and requiring certain disclosures. Effective date.
Summary: The bill prohibits the distribution of synthetic media, particularly deepfakes, targeting political candidates within 90 days of an election, requiring disclosures about manipulation to protect electoral integrity.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Bill Coleman
(3 total sponsors)
Last action: Coauthored by Representative Newton (principal House author) (March 5, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly deals with Artificial Intelligence (AI) by defining it in the context of regulations on synthetic media and deepfakes. It addresses concerns related to the social impact of using AI to create misleading content, particularly in political contexts, thereby falling squarely into the Social Impact category. The necessity for disclosures regarding AI-generated content highlights regulatory attempts to mitigate harm and promote transparency in AI's societal applications. While aspects of data management and system integrity are touched upon, as there are requirements for disclosures and penalties for misuse, the focus remains primarily on the social implications of AI-generated media. As a result, the relevance of this text in the context of Social Impact is very high. The Data Governance category is moderately relevant due to implicit concerns about data handling related to the creation of deepfakes, but it is not the primary focus. System Integrity is slightly relevant as it discusses monitoring and accountability of AI usage in media, but this is not the focal point of the legislation. Robustness does not apply as there are no benchmarks or compliance issues emphasized in the text.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
This legislation specifically concerns the use of AI in political campaigning through the lens of deepfakes, which has immediate implications for Politics and Elections. The references to media distribution regulations indicate a significant focus on how AI impacts electoral processes and candidate representation, warranting an extremely high relevance score for this sector. There are also implications for Government Agencies and Public Services since enforcement measures and disclosures may involve state monitoring; however, this is secondary and does not receive as high a score. The text does not address AI in contexts relevant to the Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified sectors. Thus, the primary relevance is to Politics and Elections, followed by a consideration for the operations of Government Agencies in enforcing these regulations.
Keywords (occurrence): artificial intelligence (3) deepfake (6) synthetic media (2)
Description: An Act To Create New Section 75-99-1, Mississippi Code Of 1972, To Establish A Short Title For The Mississippians' Right To Name, Likeness And Voice Act; To Create New Section 75-99-3, Mississippi Code Of 1972, To Define Terms; To Create New Section 75-99-5, Mississippi Code Of 1972, To Provide That Every Individual Has A Property Right In Their Own Name, Likeness And Voice; To Create New Section 75-99-7, Mississippi Code Of 1972, To Provide Certain Liability For Persons Or Entities Who Infri...
Summary: The Mississippians' Right to Name, Likeness, and Voice Act establishes individuals' rights over their name, likeness, and voice, regulating unauthorized commercial use and defining liabilities for infringement.
Collection: Legislation
Status date: Feb. 13, 2025
Status: Engrossed
Primary sponsor: Bradford Blackmon
(4 total sponsors)
Last action: Referred To Universities and Colleges; Judiciary A (Feb. 18, 2025)
Societal Impact
Data Governance (see reasoning)
The text notably addresses the implications of AI and digital technology in the context of individual rights to name, likeness, and voice. It specifies the use of terms such as 'artificial intelligence', 'machine learning', and 'algorithm' in defining 'digital technology' and 'personalized cloning service', which indicates a legislative interest in the social implications of AI technologies, particularly regarding individual rights and consent. The relevance to Social Impact arises here since the act seeks to establish liability and protections concerning the unauthorized use of personal likenesses, inadvertently tying into the broader discourse about AI-generated digital content and its potential implications for identity and representation. Data Governance is moderately relevant as the act defines digital depictions and could concern the responsible management of data related to AI technologies producing likenesses or voice replicas. It is less relevant to System Integrity and Robustness, since the legislation does not focus primarily on the security, performance benchmarks, or integrity of AI but rather on the rights and control individuals have over their personal representations and the implications of their unauthorized use. Therefore, Social Impact scores high, Data Governance moderately, while System Integrity and Robustness score lower due to their peripheral connection to the text's main themes.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
This legislation has direct implications for multiple sectors, particularly in how AI influences individual freedoms and rights to personal identity. In the realms of Private Enterprises, Labor, and Employment, there could be significant impacts on how businesses utilize AI to create digital representations for commercial use, leading to economic implications and considerations around intellectual property. The Healthcare sector might have a slight relevance if AI's role in producing voice replicas is considered in medical contexts; however, there is no specific mention of healthcare applications within the document. The legislation may also be of interest to Academic and Research Institutions as they navigate ethical considerations with AI technologies, especially as they apply studies about likeness and identity in digital environments. International Cooperation and Standards may come into play, particularly if there are cross-border consequences with AI-generated content. Therefore, while some areas like Politics and Elections and Government Agencies and Public Services do not strongly apply, there's notable relevance for sectors concerned with commercial implications and educational contexts around AI.
Keywords (occurrence): artificial intelligence (1) machine learning (1) algorithm (2)
Description: HEALTH AND SAFETY -- THE RHODE ISLAND CLEAN AIR PRESERVATION ACT - Establishes the Rhode Island Clean Air Preservation Act that establishes a regulatory process to prohibit polluting atmospheric experimentation.
Summary: The Rhode Island Clean Air Preservation Act aims to regulate and prohibit atmospheric experimentation that releases pollutants, ensuring public health and safety from harmful environmental interventions like geoengineering and cloud seeding.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Elaine Morgan
(2 total sponsors)
Last action: Introduced, referred to Senate Environment and Agriculture (Feb. 26, 2025)
Societal Impact (see reasoning)
The text pertains to the regulation of atmospheric experiments, including those involving artificial intelligence (AI). It outlines legislative intent, definitions, and regulations that address how AI systems could be involved in atmospheric activities. Since the bill explicitly mentions AI in contexts pertaining to public health and environmental safety, it relates directly to the governance of technologies that can impact societal elements. This makes it relevant to the Social Impact category particularly when considering the potential harms from AI-influenced atmospheric interventions. The Data Governance category is less relevant, as the text does not focus on data management or accuracy related to AI systems or datasets. System Integrity and Robustness have limited relevance since the text focuses more on legislative regulation of harmful atmospheric activities rather than on integrity and robustness of AI systems themselves.
Sector:
Government Agencies and Public Services (see reasoning)
The use of AI as mentioned in the bill pertains primarily to environmental and health concerns, making it somewhat relevant to several sectors without strongly implicating any single one. The potential involvement of AI in atmospheric activities connects it to government agencies and public services through its discussion of regulatory oversight, but the legislative focus on atmospheric experimentation makes categorization difficult. The text does not address AI in politics or elections or in the judicial system, nor does it explicitly reference healthcare applications or employment issues. Beyond a moderate connection to Government Agencies and Public Services, owing to its reinforcement of regulatory frameworks affecting citizens, it does not fit neatly into any sector.
Keywords (occurrence): artificial intelligence (3) machine learning (3)
Description: Prohibit conduct involving computer-generated child pornography
Summary: The bill prohibits the creation, possession, and distribution of computer-generated child pornography in Nebraska, establishes enhanced penalties, and aims to strengthen the Child Pornography Prevention Act.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Brian Hardin
(sole sponsor)
Last action: Notice of hearing for February 06, 2025 (Jan. 28, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses computer-generated child pornography and the implications of its production, possession, and distribution using artificial intelligence. The inclusion of 'computer-generated' alongside 'artificial intelligence' directly aligns with AI-related legislation addressing potential harms and ethical concerns. Its relevance therefore spans Social Impact, Data Governance (concerning data inputs and outputs), System Integrity (regarding the control and security of generated content), and Robustness (regarding legislative measures ensuring compliance and oversight of the AI technologies used to create such material).
Sector:
Government Agencies and Public Services
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation pertains to the topic of child pornography, primarily relevant in the contexts of protecting minors, law enforcement, and public safety. While it touches upon the production mechanism (AI-generated content), it primarily aligns with the regulations that could be pertinent to the Judicial System (handling of cases involving such content) and Government Agencies and Public Services (in terms of enforcement and potential involvement of social services). It is less relevant to other sectors like Healthcare or Private Enterprises, and thus, receives lower scores in those areas.
Keywords (occurrence): artificial intelligence (1)
Description: Including any photograph, film, video picture, digital or computer-generated image or picture that has been created, altered or modified by artificial intelligence or any digital means in the definition of a visual depiction for certain criminal offenses.
Summary: House Bill No. 2183 updates Kansas law to include AI-generated images in definitions relating to child sexual exploitation, unlawful transmission, and breach of privacy, enhancing protections against such depictions.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Judiciary
(sole sponsor)
Last action: House Final Action - Passed as amended; Yea: 119 Nay: 3 (Feb. 20, 2025)
Societal Impact
Data Governance (see reasoning)
The text specifically addresses the implications of artificial intelligence in the context of visual depictions, particularly concerning criminal offenses related to sexual exploitation. It discusses modifying definitions of crimes to include images altered or generated by AI and establishes nuanced legal ramifications for such depictions. This indicates a direct concern about the social impact of AI as it relates to crime, exploitation, and the protection of minors, thus linking strongly to the category of Social Impact. The inclusion of measures regarding privacy and ethics within the context of AI further aligns it with discussions on Data Governance, addressing how data (in this case, visual depictions) is managed and used in these new legal considerations. However, while the text touches upon issues related to system integrity and robustness in terms of preventing harm and outlining legal structures, it does not delve into technical specifics about system validation or integrity benchmarks, resulting in lower relevance scores for those categories.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The legislation's focus on criminal offenses that involve AI-generated content, specifically concerning the potential harm to minors, positions it strongly within the realm of government regulation as it pertains to public safety. It reflects societal concerns and legal responses to the risks posed by emerging technologies, which is indicative of the Government Agencies and Public Services sector. Given the nature of the offenses and the involvement of minors, there is a moderate connection to the Judicial System sector as well, though it focuses more on enforcement than on the legal framework surrounding AI application. It does not fit squarely into other sectors like Healthcare or Private Enterprises, as it is not addressing those specific domains. Thus, the scores reflect this concentrated focus.
Keywords (occurrence): artificial intelligence (5) automated (1)
Description: An act relating to an age-appropriate design code
Summary: The bill establishes an Age-Appropriate Design Code to protect minors online by prohibiting privacy-invasive features in services likely accessed by children, ensuring their personal data is safeguarded.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Monique Priestley
(53 total sponsors)
Last action: Read first time and referred to the Committee on Commerce and Economic Development (Feb. 12, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text includes a definition for 'algorithmic recommendation system' and mentions processing and analyzing personal data, especially concerning minors. This indicates a significant relevance to data collection and processing related to AI technologies. Additionally, the design code aims to protect minors from abusive or intrusive AI features in online services, impacting its categorization under social impact.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text relates to policies that specifically involve children's online experiences and data usage, indicating its application across several sectors. However, the primary focus is on consumer protection, privacy, and data governance, with less emphasis on specific industries like healthcare or judicial systems. Hence, it receives varying relevance scores based on its potential impacts on government services and private enterprises dealing with personal data.
Keywords (occurrence): automated (1) recommendation system (3) algorithm (1)
Description: An Act; Relating to: state finances and appropriations, constituting the executive budget act of the 2025 legislature. (FE)
Summary: The bill outlines the executive budget for Wisconsin for the 2025-2026 fiscal biennium, detailing appropriations and significant changes to state funding, notably in agriculture and broadband services.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Finance
(sole sponsor)
Last action: Read first time and referred to joint survey committee on Retirement Systems (Feb. 18, 2025)
The text primarily revolves around state finances and appropriations with no explicit mention or implication of AI-related elements. Terms relating to AI such as artificial intelligence, algorithms, or machine learning do not appear, and the discussions focus solely on budgetary topics, appropriations, and state administration. Thus, none of the categories directly connects with the contents of the text, leading to low relevance scores across the board.
Sector: None (see reasoning)
Similar to the category assessment, the text does not touch upon or encompass the application of AI within any of the sectors. The focus is strictly on budgeting and state finance, with no references to political campaigns, healthcare, judicial use of AI, or other stated sectors that relate to the predefined sectors. Hence, each sector is rated lowest, reflecting the absence of relevant content.
Keywords (occurrence): artificial intelligence (21) automated (12)
Description: Revised for 1st Substitute: Allowing bargaining over matters related to certain uses of artificial intelligence.Original: Allowing bargaining over matters related to the use of artificial intelligence.
Summary: This bill allows collective bargaining over the adoption and modification of artificial intelligence (AI) technologies in higher education, specifically if these changes impact employee wages or performance evaluations.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Jessica Bateman
(13 total sponsors)
Last action: Referred to Ways & Means. (Feb. 21, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text directly addresses bargaining rights related to the use of artificial intelligence within institutions of higher education. This connection to collective bargaining, employee rights, and regulatory frameworks around AI usage suggests strong relevance to Social Impact, as it implies concerns for employee welfare, substantial impacts on jobs, and the management of AI's influence on labor conditions. The text also bears on the governance of AI systems in workplaces, affecting labor and employee evaluations, so it moderately relates to Data Governance. The discussion of bargaining and decision-making rights around AI technology indicates some relevance to System Integrity as well, given the emphasis on management and operational rights during the adoption of technology. However, there is minimal discussion of AI's performance standards, safety, or compliance, resulting in a lower score for Robustness. Overall, Social Impact is the most pertinent category, followed by Data Governance and System Integrity, while Robustness is less relevant.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text is primarily focused on regulations concerning labor rights within educational institutions, specifically regarding how AI technologies can influence negotiations around employee conditions. Given this focus, it is relevant to the Private Enterprises, Labor, and Employment sector as it discusses collective bargaining and the influence of AI on employment conditions. There is a tangential impact on Government Agencies and Public Services as it touches on higher education, which is a public sector and often involves government oversight. However, other sectors like Healthcare, Politics and Elections, or Nonprofits and NGOs do not appear pertinent as there is no explicit reference to AI applications or regulation in those areas. Therefore, the primary scoring emphasizes relevance to Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (1)
Description: Regulating the manner in which a developer or deployer of artificial intelligence must protect consumers from certain risks; requiring a developer that offers to sell a certain artificial intelligence system to provide certain information and make certain disclosures; requiring a deployer to implement a certain risk management policy and take certain precautions to protect consumers from certain risks; requiring a deployer to complete an impact assessment and make certain disclosures; etc.
Summary: House Bill 1331 establishes consumer protections for artificial intelligence systems, requiring developers and deployers to disclose risks, implement risk management policies, and conduct impact assessments to mitigate algorithmic discrimination.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Lily Qi
(sole sponsor)
Last action: Hearing 3/04 at 1:00 p.m. (Feb. 7, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text focuses on consumer protection in the context of artificial intelligence, addressing risks posed by AI systems. It emphasizes the necessity for developers to disclose information and perform impact assessments to mitigate algorithmic discrimination. This scrutiny of AI's effects on consumers is highly relevant under the Social Impact category. The provisions for data handling by developers and requirements for impact assessments directly relate to Data Governance. The need for standards of care in deploying AI systems aligns with System Integrity. Furthermore, since it also introduces frameworks for better evaluation and compliance, it fits the Robustness category as well. Therefore, each category reflects varying degrees of relevance to the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The legislation applies broadly to AI in consumer protection but may be particularly significant in government settings. It regulates how AI systems function in commerce and the resulting implications for consumers, making it relevant to Government Agencies and Public Services. It touches on Private Enterprises, Labor, and Employment through its commercial nature and consumer interactions, though it is not focused on employment specifically. While multiple sectors could apply, the primary focus remains consumer protection.
Keywords (occurrence): artificial intelligence (7) automated (1) algorithm (1)
Description: Establishes rate transparency requirements for insurance companies operating in the State. Establishes the Office of Insurance Consumer Affairs within the Insurance Division of the Department of Commerce and Consumer affairs to provide oversight, information, and consumer advocacy. Appropriates funds. Effective 7/1/2050. (SD1)
Summary: The bill establishes insurance rate transparency requirements in Hawaii, creating an Office of Insurance Consumer Affairs for oversight and advocacy, aiming to boost consumer confidence and protect policyholders from unfair practices.
Collection: Legislation
Status date: Jan. 23, 2025
Status: Introduced
Primary sponsor: Angus McKelvey
(sole sponsor)
Last action: Report adopted; Passed Second Reading, as amended (SD 1) and referred to WAM. (Feb. 14, 2025)
Societal Impact
The text primarily concerns insurance rate transparency and consumer protection within the insurance industry. It establishes an Office of Insurance Consumer Affairs and sets out public advocacy, oversight, and guidelines for how insurance companies must disclose the factors used to calculate rates. While the mention of 'machine learning algorithms' pertains to AI, the legislation does not address broader AI impacts, data governance, system integrity, or robustness; its focus is consumer advocacy and insurance practices rather than the concerns central to the Data Governance, System Integrity, or Data Robustness categories. The reference to machine learning algorithms does, however, indicate some relevance to AI issues, particularly how algorithms affect consumer rates. Relevance therefore rests mainly on the Societal Impact category.
Sector:
Government Agencies and Public Services
The text directly addresses the insurance sector, focusing on how insurance companies calculate rates and the regulatory framework governing this process. While it mentions the use of algorithms, the broader implications regarding the sectors remain limited. The legislation aims at fostering transparency and accountability within the insurance environment, thus maintaining a focus on consumer advocacy and rights. Other sectors are not influenced or mentioned significantly within this legislation. Scores reflect the relevance of the text to each defined sector.
Keywords (occurrence): machine learning (1)
Description: For legislation to require consumer notification for software or computer program that simulates human conversation or chatter through text or voice interactions. Consumer Protection and Professional Licensure.
Summary: The bill mandates that consumers be notified when interacting with chatbot systems to prevent deception, ensuring clarity that they are communicating with a computerized entity rather than a human.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Barry Finegold
(sole sponsor)
Last action: House concurred (Feb. 27, 2025)
Societal Impact
This legislative text explicitly addresses AI through its focus on chatbot systems, which are a form of artificial intelligence technology designed to simulate conversations. Therefore, the text is highly relevant to the 'Social Impact' category, as it touches upon consumer protection and issues of deception that may arise from interactions between individuals and AI systems. It seeks to mitigate potential harm and improve transparency in consumer interactions, indicating a significant social impact. For the 'Data Governance' category, while there are considerations around data usage involved in chatbot interactions, the text does not directly address data collection or management. The relevance to 'System Integrity' is also minimal since the bill does not explicitly mention transparency or security measures, focusing instead on consumer notification. 'Robustness' is not applicable as there are no mentions of performance standards or benchmarks for the chatbot systems. Overall, the text’s primary aim is to protect consumers from misleading AI interactions, fitting firmly within the social impact framework.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text is particularly relevant to the 'Private Enterprises, Labor, and Employment' sector since it governs how businesses must operate when using chatbot technology to interact with consumers, ensuring transparency in communication practices. It may also have implications for the 'Government Agencies and Public Services' sector, as government entities could also be employing chatbot systems to communicate with citizens and must adhere to similar rules. Although this legislation touches on consumer interactions with AI, it does not cover political processes or judicial applications, making those sectors less relevant. Overall, the focus is mainly on consumer interactions in a business context, showcasing how AI systems like chatbots must be managed appropriately in relation to stakeholders.
Keywords (occurrence): automated (1) chatbot (3)
Description: Digital Content Authenticity and Transparency Act established; civil penalty. Requires a developer of an artificial intelligence system or service to apply provenance data to synthetic digital content that is generated by such developer's generative artificial intelligence system or service and requires a developer to make a provenance application tool and a provenance reader available to the public. The bill requires a controller of an online service, product, or feature to retain any availa...
Summary: The Digital Content Authenticity and Transparency Act establishes requirements for generative AI developers to disclose provenance data about synthetic digital content, imposing civil penalties for violations to ensure transparency.
Collection: Legislation
Status date: Jan. 16, 2025
Status: Introduced
Primary sponsor: Adam Ebbin
(sole sponsor)
Last action: Referred to Committee on General Laws and Technology (Jan. 16, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The text clearly addresses legislation related to artificial intelligence, particularly requirements for developers of generative AI systems to apply provenance data to synthetic content. This directly relates to accountability and consumer protection, aligning with the Social Impact category: the bill's transparency measures for AI-generated content address potential harms from misinformation and erosion of public trust. The management of data, particularly provenance data, ties into Data Governance, and the emphasis on accuracy and transparency also falls under System Integrity. Finally, the criteria for evaluating AI systems and the emphasis on compliance align with Data Robustness. The legislation touches all four categories, with the strongest focus on Social Impact, Data Governance, and System Integrity, which concern the ethical implications and data management of AI outputs.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The legislation significantly impacts the use of AI in digital content creation and transparency, which is particularly relevant to Private Enterprises, Labor, and Employment as it affects how businesses, especially those creating or managing AI-generated content, will operate. The requirements laid out for developers make it clear that it will affect commercial practices and potentially the labor market related to content creation and management. This legislation might not directly address healthcare, the judicial system, or nonprofit sectors, making those categories less relevant. It may have indirect implications for Government Agencies and Public Services given the call for public availability of tools and data, but these are less pronounced than in the private enterprise context. Overall, the strongest sector involvement appears to be in Private Enterprises, given the commercial focus of the legislation.
Keywords (occurrence): artificial intelligence (20) machine learning (2) foundation model (2)
Description: Rental price fixing; algorithmic pricing
Summary: HB 2847 prohibits algorithmic price fixing for rental rates in Arizona, establishing a presumption of antitrust violations when nonpublic competitor data is used, and mandates enforcement by the attorney general.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Oscar De Los Santos
(sole sponsor)
Last action: House read second time (Feb. 13, 2025)
Societal Impact
Data Governance
System Integrity
The text directly addresses algorithmic pricing in the context of rental rates and introduces regulations to prevent anti-competitive behavior through the use of AI and machine learning techniques. The mention of 'algorithmic pricing' and the inclusion of terms such as 'machine learning' and 'artificial intelligence' in the definitions clearly indicate a relevance to the Social Impact category, particularly concerning fairness and potential discrimination in pricing practices influenced by AI. Data Governance is also highly relevant as it discusses nonpublic competitor data and concerns over data usage in determining rental rates. System Integrity is moderately relevant due to the enforcement mechanisms outlined, ensuring that algorithmic processes are monitored to prevent abuse. Robustness is less relevant here as the focus is more on enforcing existing regulations rather than developing performance benchmarks. Overall, the text presents a strong link to Social Impact and Data Governance while maintaining moderate relevance to System Integrity.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The legislation primarily impacts the housing market, touching on issues relevant to Private Enterprises through the regulation of algorithmic pricing in rental markets. It also has implications for Government Agencies and Public Services, as the Attorney General is tasked with enforcement. The legislation does not directly pertain to other sectors like Healthcare, Judicial System, or Academic Institutions, thus showing limited relevance for those areas. Overall, the text is strongly relevant to Private Enterprises and Government Agencies.
Keywords (occurrence): artificial intelligence (1) algorithm (2)
Description: A bill to amend title 18, United States Code, to prohibit United States persons from advancing artificial intelligence capabilities within the People's Republic of China, and for other purposes.
Summary: The "Decoupling America's Artificial Intelligence Capabilities from China Act of 2025" prohibits U.S. persons from advancing AI technology in China, aiming to limit collaboration and technology transfer to safeguard national security.
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Josh Hawley
(sole sponsor)
Last action: Read twice and referred to the Committee on the Judiciary. (Jan. 29, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The text of the Decoupling America's Artificial Intelligence Capabilities from China Act of 2025 revolves around the prohibition of advancing AI technologies in China. It addresses AI research, development, technology transfer, and the regulation of AI-related intellectual property. The text is therefore highly relevant to all four categories, covering the social impact of AI, data governance, system integrity, and robustness as they relate to national security, ethical considerations, and the integrity of AI systems. Each category reflects an aspect of the legislation's aim to regulate and control AI development in a global context, and the direct references to AI technologies support high relevance scores across these categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified
The sectors primarily influenced by this legislation include government operations due to its direct implications on U.S. foreign policy and national security concerning AI technology. The bill does not directly mention healthcare, political campaigns, or judicial systems, limiting its relevance to those sectors. However, the government sector is pivotal, as it establishes regulations that will likely affect various government functions and public services, particularly in connection with AI research and development. The text's primary focus on international relations and technology regulation fits squarely into the domain of government agencies and public services, hence the higher relevance score.
Keywords (occurrence): artificial intelligence (25) automated (2)
Description: High-risk artificial intelligence; development, deployment, and use; civil penalties. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill has a delayed effective date of July 1, 2026.
Summary: The bill establishes regulations regarding the development and deployment of high-risk artificial intelligence systems in Virginia, focusing on preventing algorithmic discrimination and imposing civil penalties for non-compliance.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Engrossed
Primary sponsor: Michelle Maldonado
(24 total sponsors)
Last action: Passed by for the day (Feb. 18, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
This text establishes requirements and standards for the development, deployment, and use of high-risk artificial intelligence systems, emphasizing accountability for algorithmic discrimination, consumer protection, and operational standards. The references to 'high-risk artificial intelligence systems' and 'algorithmic discrimination' highlight the potential social impact and regulatory measures required to prevent discrimination and protect individuals. Furthermore, it outlines safety and responsibility frameworks for developers and deployers of AI, making it highly relevant to all categories specified. The need for documentation, risk management plans, and standards compliance directly impacts social welfare, data governance, system integrity, and the robustness of AI systems.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified
The text mentions developers and deployers of high-risk AI, which could include various sectors like healthcare, public services, or private enterprises but does not specifically restrict itself to any single sector. The focus on algorithmic discrimination and consumer rights suggests relevance to various sectors, especially those directly interfacing with consumers (like healthcare and public services) and risk management in business environments. However, since the language is broad and does not focus exclusively on any one sector, the scores reflect general applicability rather than direct regulation within specific sectors.
Keywords (occurrence): artificial intelligence (138) machine learning (2) automated (1) algorithm (1) autonomous vehicle (1)
Description: Requires advertisements to disclose the use of a synthetic performer; imposes a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.
Summary: The bill mandates advertisements to disclose the use of synthetic performers created by artificial intelligence. It imposes penalties for violations, aiming to enhance transparency in advertising.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Michael Gianaris
(sole sponsor)
Last action: REFERRED TO CONSUMER PROTECTION (Jan. 8, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness
The text explicitly addresses the use of generative artificial intelligence in advertising through synthetic performers, and its relevance spans several categories. For Social Impact, the legislation directly concerns how synthetic performers shape public perception and trust, addressing potential psychological and material harm. For Data Governance, the disclosure requirements around synthetic performers touch on data management policies as they relate to transparency. System Integrity is relevant because the transparency demanded of AI use in advertisements promotes responsible deployment. Data Robustness applies because the legislation implies a need for standards governing how AI-generated content is used in commercial settings to maintain consumer trust. Overall, these elements support high relevance across multiple categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text primarily addresses the implications of AI within advertising, which fits best within the framework of private enterprises. It ensures fair practices in commercial advertising that uses AI, directly shaping how businesses interact with AI technologies, and it protects consumers by making them aware of AI's role in the advertisements they encounter. The legislation does not strongly align with sectors such as Politics and Elections or the Judicial System, but there are connections to Government Agencies and Public Services through regulatory oversight of advertising standards. Overall, Private Enterprises, Labor, and Employment receives the highest score as the sector most directly affected by this legislation.
Keywords (occurrence): artificial intelligence (2) machine learning (1) algorithm (1)