4942 results:
Description: Creates the Protect Health Data Privacy Act. Provides that a regulated entity shall disclose and maintain a health data privacy policy that clearly and conspicuously discloses specified information. Sets forth provisions concerning health data privacy policies. Provides that a regulated entity shall not collect, share, or store health data, except in specified circumstances. Provides that it is unlawful for any person to sell or offer to sell health data concerning a consumer without first ob...
Summary: The Protect Health Data Privacy Act establishes stringent rules for the collection, sharing, and sale of health data in Illinois, ensuring consumer consent, clear policies, and protective measures against unauthorized use.
Collection: Legislation
Status date: Feb. 2, 2024
Status: Introduced
Primary sponsor: Celina Villanueva
(sole sponsor)
Last action: Referred to Assignments (Feb. 2, 2024)
Societal Impact
Data Governance (see reasoning)
The text discusses health data privacy and the explicit requirement for consumer consent regarding the collection and processing of health data, which touches on various AI-related considerations. For example, it mentions algorithms and machine learning in relation to processing health data, highlighting concerns about fairness and transparency that align with social impact. However, it primarily focuses on privacy and data rights, which emphasizes data governance. Therefore, while there are connections to social aspects regarding consumer rights and potential biases in AI-driven data processing, the primary focus remains on the correct use and governance of health data.
Sector:
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text is highly relevant to the Healthcare sector as it directly addresses health data rights, privacy policies, and the mandated consent process specifically within healthcare contexts. It outlines the responsibilities of regulated entities handling health data, thereby connecting directly with legislative measures aimed at protecting healthcare data. Although there are implications for other sectors related to the treatment of personal data, the primary focus is undoubtedly healthcare.
Keywords (occurrence): machine learning (1)
Description: Hospitals; emergency departments; licensed physicians. Requires any hospital with an emergency department to have at least one licensed physician on duty and physically present at all times. Current law requires such hospitals to have a licensed physician on call, though not necessarily physically present on the premises, at all times. The bill has a delayed effective date of July 1, 2025 and is identical to
Summary: The bill mandates that Virginia hospitals' emergency departments have at least one licensed physician on duty at all times to ensure patient safety and care quality.
Collection: Legislation
Status date: April 4, 2024
Status: Passed
Primary sponsor: Stella Pekarsky
(sole sponsor)
Last action: Governor: Acts of Assembly Chapter text (CHAP0505) (April 4, 2024)
This text focuses primarily on establishing regulations for hospitals and emergency departments, with a significant emphasis on ensuring the physical presence of licensed physicians. It does not explicitly mention issues related to AI, data management, system security, or performance benchmarks associated with AI systems. As such, the categories of Social Impact, Data Governance, System Integrity, and Robustness do not have a relevant connection to the text. Therefore, the scores reflect the lack of relevance to AI-related issues in these categories.
Sector:
Healthcare (see reasoning)
The text primarily pertains to the Healthcare sector, as it discusses regulations concerning hospitals and emergency services and details the requirements for physicians on duty. Despite the absence of explicit AI references, such regulations could indirectly touch on the use of AI in healthcare given further context. However, AI's role is not specified in this legislation, leading to a lower relevancy score: while the text discusses healthcare regulations, it does not directly engage with AI applications within that sector. Consequently, the other sectors, such as Politics, Government, and academia, have no connection to this text and receive the lowest scores.
Keywords (occurrence): artificial intelligence (2)
Summary: The bill honors Senator Laphonza R. Butler's notable contributions during her brief tenure, highlighting her advocacy for women's rights, mental health, and civic participation, alongside her historic significance as a trailblazer in the Senate.
Collection: Congressional Record
Status date: Dec. 3, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text is primarily a farewell speech for Senator Laphonza R. Butler and does not delve deeply into specific AI legislation or regulations. However, it makes a brief mention of Senator Butler's efforts to bring the workforce up to speed in the age of artificial intelligence, which can be classified under all four categories as AI's impact on society, data management, system integrity, and robustness are implied in her initiatives. Still, the general tone is more of a tribute than a detailed discussion on AI matters. Thus, overall relevance to all four categories is relatively low, with some connection to Social Impact due to the workforce aspect.
Sector: None (see reasoning)
The text focuses on Senator Butler's accomplishments and contributions while in office; it does not specifically address AI in the context of politics or any other sector. While there is a mention of efforts related to the workforce concerning AI, there are no specific legislative details that connect this directly to the sectors outlined. Therefore, scores reflect minimal relevance to most sectors, with a slight score given to Government Agencies and Public Services due to the mention of her championing various causes.
Keywords (occurrence): artificial intelligence (1)
Description: Creates Health Care Innovation Council within DOH for specified purpose; requires council to submit annual reports to Governor & Legislature; requires department to administer revolving loan program for applicants seeking to implement certain health care innovations in this state; authorizes department to contract with third party to administer program, including loan servicing, & manage revolving loan fund.
Summary: The bill establishes the Health Care Innovation Council in Florida, aiming to enhance healthcare delivery through innovative solutions and a revolving loan program for health care improvements.
Collection: Legislation
Status date: Jan. 8, 2024
Status: Introduced
Primary sponsor: Health Care Appropriations Subcommittee
(4 total sponsors)
Last action: Laid on Table, refer to SB 7018 (Feb. 21, 2024)
Societal Impact
Data Governance (see reasoning)
The bill creates the Health Care Innovation Council aimed at improving health care through innovative technologies, including possibly AI. Although the text does not explicitly mention AI, it references the integration of technologies and seeks to harness innovations. These innovations could include AI applications, thus the Social Impact category is relevant as it may pertain to the implications of AI in health care delivery, addressing efficiency, cost reduction, and quality of care. Data Governance is also relevant since the text emphasizes the need for standards and best practices around health data which are crucial for any AI implementations in healthcare. System Integrity might be less relevant since the focus is more on innovation than on security or transparency issues specific to AI. Robustness is not directly applicable as the legislation does not discuss performance benchmarks for AI systems or regulatory compliance. Overall, the Social Impact and Data Governance categories are most relevant to AI-related portions of the text.
Sector:
Healthcare (see reasoning)
The text relates to the Healthcare sector by establishing a council aimed at advancing health care innovations, potentially including those driven by AI. It addresses health care delivery models, improved patient outcomes, and the use of technology to improve services, all of which indicate a focus on the Healthcare sector. While it examines efficiencies in the healthcare workforce and service delivery, it stops short of evaluating other sectors such as politics or public services, so its relevance is mainly confined to Healthcare.
Keywords (occurrence): artificial intelligence (1)
Description: Amend The South Carolina Code Of Laws By Adding Section 39-5-190 So As To Provide That Every Individual Has A Property Right In The Use Of That Individual's Name, Photograph, Voice, Or Likeness In Any Medium In Any Manner And To Provide Penalties.
Summary: The bill establishes that individuals possess property rights over their name, image, voice, and likeness, providing legal protections and penalties for unauthorized commercial use.
Collection: Legislation
Status date: April 10, 2024
Status: Introduced
Primary sponsor: Brandon Guffey
(8 total sponsors)
Last action: Referred to Committee on Judiciary (April 10, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text of the Ensuring Likeness, Voice, and Image Security Act strongly relates to Social Impact, particularly concerning the protection of individuals' rights over their likenesses in the context of AI-generated content and deepfake technologies. This legislation addresses misuse and unauthorized usage, which aligns with accountability and consumer protections. Data Governance is also relevant due to the mention of algorithms and technology that generate likenesses, signaling a need for secure and ethical data management. System Integrity is relevant due to the mention of safeguards against unauthorized use of likenesses, emphasizing the control over AI-driven processes. Robustness is less applicable as this act does not set benchmarks for AI performance but focuses more on rights protection and legal frameworks.
Sector:
Judicial system
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The legislation has significant implications for multiple sectors. It affects the Judicial System by providing grounds for civil action against unauthorized use of likenesses, which could lead to legal disputes. It could also have implications for Private Enterprises, Labor, and Employment, particularly in industries where likeness and voice are monetized, such as entertainment. The text does not mention specific governmental applications of AI or its role in healthcare, academic institutions, or international settings, which makes those sectors irrelevant here. Its implications for AI raise concerns about ethical standards and consumer rights, linking it to the broader governance framework and potentially affecting Nonprofits and NGOs that advocate for individual rights.
Keywords (occurrence): algorithm (1)
Description: Imposes liability for misleading, incorrect, contradictory or harmful information to a user by a chatbot that results in financial loss or other demonstrable harm.
Summary: The bill holds chatbot proprietors liable for misleading or harmful information that leads to user financial loss, enforcing accountability and requiring clear user notifications about chatbot interactions.
Collection: Legislation
Status date: May 14, 2024
Status: Introduced
Primary sponsor: Kristen Gonzalez
(sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (May 14, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text clearly addresses the role of chatbots, which are identified as forms of artificial intelligence (AI) systems. The focus on liability for misinformation provided by chatbots falls under the category of Social Impact as it speaks to the ramifications of AI on users and the accountability of AI systems for potential harm. The need for transparency and consumer protection further emphasizes the societal consequences of AI interactions. Data Governance is relevant as the chatbot's responsibility includes providing accurate information and addressing inaccuracies, which aligns with the management of data accuracy. System Integrity is also moderately relevant since the legislation imposes obligations on chatbot operators to ensure their systems provide accurate information and maintain user trust. Robustness may be less relevant, but the emphasis on compliance with policies and user safeguards suggests a concern with standards for AI performance.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text pertains mainly to the use of chatbots within the Private Enterprises, Labor, and Employment sector, as it specifically addresses the liability of businesses and organizations that use chatbots to interact with customers. The regulation relates directly to how these businesses manage interactions with users and the accuracy of the information provided. While Government Agencies and Public Services could be seen as somewhat relevant, since government entities may themselves act as chatbot proprietors, the main thrust of the text aligns with private enterprises due to the imposition of liability on chatbot operators. No direct relevance to sectors like Politics and Elections, Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified is established in the text.
Keywords (occurrence): artificial intelligence (1) chatbot (12)
Description: Revising conditions under which a person commits the offense of driving under the influence or boating under the influence, respectively; providing that the disposition of an administrative proceeding relating to a specified fine does not affect certain criminal action; adding specified grounds for issuance of a search warrant; revising probation guidelines for felonies in which certain substances are contributing factors, etc.
Summary: The bill modifies Florida laws regarding operating vehicles and vessels under the influence, defining terms, revising penalties, and establishing conditions for arrest and affirmative defenses, enhancing accountability for impaired driving.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Lori Berman
(2 total sponsors)
Last action: Died in Criminal Justice (March 8, 2024)
The text primarily addresses legal definitions and offenses related to driving and boating under the influence of alcohol and other impairing substances. It does not contain any explicit references or implications concerning AI technologies, their impact, or their governance. Hence, the legislation does not fall into any of the specified categories of Social Impact, Data Governance, System Integrity, or Robustness, scoring low in relevance to each. Instead, the focus remains entirely on human behavior concerning substance use and legal proceedings surrounding that behavior.
Sector: None (see reasoning)
Similarly, the text is concerned with legal terminology, penalties, and procedures related to DUI and BUI offenses. It does not engage with AI's role, regulation, or implications within any of the specified sectors (Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, Hybrid, Emerging, and Unclassified). There are no mentions or relevance to how AI may influence these sectors, leading to a score of 1 across all sector categories.
Keywords (occurrence): automated (2) autonomous vehicle (7)
Description: As introduced, requires a person to include a disclosure on certain content generated by artificial intelligence that the content was generated using artificial intelligence; makes it an unfair or deceptive act or practice under the Tennessee Consumer Protection Act of 1977 to distribute certain content generated using artificial intelligence without the required disclosure. - Amends TCA Title 47.
Summary: This bill amends Tennessee law to require disclosure for AI-generated content, ensuring transparency about its origins, and allows for legal action against those who fail to comply.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Bill Powers
(sole sponsor)
Last action: Assigned to General Subcommittee of Senate Commerce and Labor Committee (March 13, 2024)
Societal Impact
Data Governance (see reasoning)
The bill explicitly addresses the use of artificial intelligence, particularly focusing on the requirement for disclosures regarding AI-generated content. This connects strongly with issues of social impact, as it seeks to protect consumers from potential deception and misinformation generated by AI systems. It also touches on data governance as it mandates the disclosure of AI-generated content to provide transparency, which is a key aspect of ensuring that individuals are informed about the digital content they consume. These considerations directly relate to the implications of AI in society and the management of data that is manipulated or generated through AI frameworks. Given the nature of the content and its potential impacts, all categories are relevant to varying degrees.
Sector: None (see reasoning)
The bill primarily addresses consumer protection in the context of AI-generated content, which relates most closely to public welfare and consumer rights. It does not specifically address the use of AI in political systems, government services, judicial applications, healthcare, employment, research institutions, or international cooperation, thus lowering the relevance of the other sectors. The bill’s focus on AI disclosures means it is primarily applicable to general consumers rather than specific sectors; this also explains why certain sectors received lower scores. Politics and elections, government agencies and public services, judicial system, healthcare, and the other sectors show minimal to no relevance. Therefore, only a couple of sectors are deemed relevant based on the implications discussed.
Keywords (occurrence): artificial intelligence (5) automated (1)
Description: COMMERCIAL LAW -- GENERAL REGULATORY PROVISIONS -- AUTOMATED DECISION TOOLS - Requires companies that develop or deploy high-risk AI systems to conduct impact assessments and adopt risk management programs, would apply to both developers and deployers of AI systems with different obligations based on their role in AI ecosystem.
Summary: The bill mandates companies developing or deploying high-risk AI systems to conduct impact assessments and implement risk management programs, ensuring accountability and reducing potential harm from AI decisions.
Collection: Legislation
Status date: March 22, 2024
Status: Introduced
Primary sponsor: Louis Dipalma
(6 total sponsors)
Last action: Committee recommended measure be held for further study (April 11, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation primarily addresses the development and deployment of high-risk AI systems, specifically focusing on required impact assessments and risk management programs. It discusses the implications of artificial intelligence (AI) in making consequential decisions that significantly affect individuals, illustrating the potential for bias in AI outputs and the need for transparency in AI deployment. Therefore, it has clear relevance to Social Impact due to its focus on accountability and the ethical implications of AI usage. Data Governance is also relevant as the law mandates accurate documentation and management of data used in AI systems to prevent bias and protect privacy. System Integrity is pertinent due to the consideration of risk management and oversight in AI processes. Robustness is relevant, but to a lesser extent, as it mentions general requirements for assessing the performance of AI systems without delving into specific benchmarks. Overall, the text's provisions primarily address social implications arising from AI deployment, with robust requirements for both developers and deployers of AI.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is highly relevant to various sectors that involve the application of AI technology. Specifically, it directly pertains to the Government Agencies and Public Services sector as it outlines regulatory requirements for developers and deployers of AI, which likely includes public service applications. It is also relevant to Private Enterprises, Labor, and Employment as it determines how businesses use AI systems to make significant decisions impacting individuals' lives, including employment and financial decisions. While it has implications for the Judicial System regarding potential bias in AI outcomes affecting legal aspects, its focus does not specifically target judicial processes. Healthcare may be considered due to the impact on AI systems used in healthcare decision-making, but it is not a primary focus. The legislation does not directly address political implications or nonprofit usage of AI, falling short in relevance to these sectors. Overall, the strongest connections are with Government Agencies and Public Services and Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (4)
Description: Amends the Criminal Code of 2012. Provides that certain forms of false personation may be accomplished by artificial intelligence. Defines "artificial intelligence".
Summary: The bill amends Illinois' Criminal Code to include provisions that allow for false personation via artificial intelligence, defining AI and expanding existing fraud laws.
Collection: Legislation
Status date: Jan. 19, 2024
Status: Introduced
Primary sponsor: Meg Loughran Cappel
(sole sponsor)
Last action: Rule 3-9(a) / Re-referred to Assignments (March 15, 2024)
Societal Impact
System Integrity (see reasoning)
The text discusses amendments to the Criminal Code specifically related to false personation facilitated by artificial intelligence (AI). The relevance to each category is as follows:
- **Social Impact:** The legislation clearly addresses potential harms and impacts on society due to AI's ability to facilitate false personation. This has implications for trust in social transactions and public interactions, leading to significant societal issues. Thus, it is deemed extremely relevant.
- **Data Governance:** While the document defines AI, it does not specifically address issues around data collection or management, which are central to the data governance category. This category is only slightly relevant, mainly due to the potential underlying requirements for accurate data in AI systems, which are not explicitly mentioned in this text.
- **System Integrity:** The legislation implies a need for control over AI in preventing misuse (false personation). However, it does not extend to broader governance or technical standards relevant to system integrity, resulting in a moderately relevant assessment.
- **Robustness:** There are no mentions of benchmarks, auditing, or assessments of AI performance in this text; therefore, this category is considered not relevant.
Sector:
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation primarily addresses the use of AI in a criminal context, focusing on implications for false personation. Each sector's relevance is evaluated as follows:
- **Politics and Elections:** The text does not address the regulatory impacts of AI in political contexts, making it not relevant.
- **Government Agencies and Public Services:** Though it pertains to criminal law, it does not explicitly address the use of AI in governmental service delivery. Therefore, it is considered slightly relevant.
- **Judicial System:** The text pertains to criminal law and could have implications for how AI is treated within the judicial framework; however, it does not outline specific regulations for AI in judicial contexts, hence moderately relevant.
- **Healthcare:** There are no direct implications concerning healthcare applications of AI, resulting in no relevance here.
- **Private Enterprises, Labor, and Employment:** While it concerns false personation, this does not touch on employment practices directly related to AI, making this sector not relevant.
- **Academic and Research Institutions:** The text does not touch on educational or research implications, so relevance is not applicable.
- **International Cooperation and Standards:** There are no discussions about international standards or cooperation regarding AI in this text, leading to no relevance.
- **Nonprofits and NGOs:** The text does not address AI within the nonprofit sector, resulting in no relevance.
- **Hybrid, Emerging, and Unclassified:** Given that the legislation is specific to criminal implications related to AI and does not fit neatly into other categories, this sector receives a moderate relevance score.
Keywords (occurrence): artificial intelligence (3) automated (1)
Description: To require the Election Assistance Commission to develop voluntary guidelines for the administration of elections that address the use and risks of artificial intelligence technologies, and for other purposes.
Summary: The "Preparing Election Administrators for AI Act" mandates the Election Assistance Commission to create voluntary guidelines addressing the use and risks of artificial intelligence in election administration.
Collection: Legislation
Status date: May 10, 2024
Status: Introduced
Primary sponsor: Chrissy Houlahan
(9 total sponsors)
Last action: Referred to the House Committee on House Administration. (May 10, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses the use and risks of artificial intelligence technologies in the context of election administration. It discusses the need for voluntary guidelines on how AI can affect elections, including the risks and benefits associated with its use, highlighting issues such as cybersecurity, accuracy of election information, and the potential for disinformation. Given the connection to social issues, the security and transparency of AI systems in elections, and designing governance measures around AI use, the categories reflect varying degrees of relevance. The 'Social Impact' category is highly relevant due to attention to biases and misinformation, while 'Data Governance', 'System Integrity', and 'Robustness' may relate indirectly but are less emphasized in the context of the text's focus on election administration.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text centers on the implications of AI technologies specifically within the electoral process, highlighting guidelines for election officials. This context strongly aligns with the 'Politics and Elections' sector as it addresses the intersection of AI and democratic processes. While 'Government Agencies and Public Services' is also relevant due to the involvement of the Election Assistance Commission, it is secondary to the more focused implications for politics and elections. Other sectors such as 'Judicial System', 'Healthcare', and 'Private Enterprises' do not apply here, as they do not intersect with the content of this text.
Keywords (occurrence): artificial intelligence (7)
Description: To require digital social companies to adopt terms of service that meet certain minimum requirements.
Summary: The Digital Social Platform Transparency Act mandates digital social companies to implement clear terms of service, enhancing user awareness and reporting practices, with penalties for non-compliance.
Collection: Legislation
Status date: July 24, 2024
Status: Introduced
Primary sponsor: Katie Porter
(sole sponsor)
Last action: Referred to the House Committee on Energy and Commerce. (July 24, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Digital Social Platform Transparency Act addresses AI primarily through content moderation mechanisms that involve automated systems. This includes the usage of 'artificial intelligence software' for flagging content and determining actions on flagged items. The consideration of these automated systems points to a significant engagement with issues of accountability, transparency, and bias, impacting societal implications. As such, while primarily focused on transparency and accountability, the act’s provisions around AI usage in content moderation make it relevant to multiple categories, particularly Social Impact and System Integrity, as they both pertain to the usage of AI in public spheres and the implications this has for society. Data Governance is relevant due to the need for accurate reporting on flagged content, which indirectly involves data management governed by AI systems. Robustness is less relevant here since the focus isn’t primarily on performance benchmarks or compliance standards, but rather on operational transparency and accountability of digital social platforms.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The act directly pertains to sectors such as Politics and Elections due to its implications for content moderation and misinformation, particularly in relation to political discourse. It also engages the Government Agencies and Public Services sector by requiring reporting to the Attorney General regarding practices and misconduct, ensuring accountability for digital social platforms. Although it does not primarily address the Judicial System or Healthcare, its potential effects on public trust and on how users interact with digital media connect it to broader implications in those sectors. Thus, while primarily focused on digital social platforms, its effects on political discourse and public services establish its relevance.
Keywords (occurrence): artificial intelligence (2) automated (1)
Description: Establishes criminal penalties for production or dissemination of deceptive audio or visual media, commonly known as "deepfakes."
Summary: The bill establishes criminal penalties for creating or sharing deceptive audio or visual media, known as "deepfakes," aimed at preventing their misuse in committing crimes or harming individuals.
Collection: Legislation
Status date: Feb. 8, 2024
Status: Introduced
Primary sponsor: Paul Moriarty
(5 total sponsors)
Last action: Substituted by A3540 (ACS/2R) (Jan. 30, 2025)
Societal Impact
System Integrity (see reasoning)
The text primarily addresses the issue of deceptive audio and visual media, or 'deepfakes,' and establishes penalties for their production and dissemination. This has significant implications for Social Impact as it pertains to psychological harm caused by misleading media. The legislation also has relevance to System Integrity by ensuring accountability in the use of AI technologies involved in creating deepfakes, thereby promoting transparency in such operations. Data Governance is somewhat relevant, but the focus is more on the governance of media than data management itself. Robustness has limited direct relevance as this text does not focus on performance benchmarks or compliance auditing related to AI systems.
Sector:
Politics and Elections
Government Agencies and Public Services
Nonprofits and NGOs (see reasoning)
This legislation is particularly relevant to Nonprofits and NGOs that may engage with or advocate against the misuse of deepfake technology, potentially affecting their operations. Additionally, the implications of deepfake media tangentially touch upon the realms of Politics and Elections, as the spread of manipulated media can influence public perception and electoral outcomes. However, the primary focus is not explicitly on political regulation, thus not receiving a high relevance score for that sector. The Government Agencies and Public Services sector may also intersect as public institutions may be required to address the harms of deepfakes. Nevertheless, the core of the legislation speaks more to the societal impact of technology rather than specific sector regulations or applications.
Keywords (occurrence): artificial intelligence (2)
Description: Establishes the Stop Addictive Feeds Exploitation (SAFE) For Kids Act prohibiting the provision of addictive feeds to minors by addictive social media platforms; establishes remedies and penalties.
Summary: The SAFE For Kids Act prohibits addictive social media feeds for minors without parental consent, aiming to protect children's mental health and regulate social media companies' practices. Penalties are established for violations.
Collection: Legislation
Status date: June 20, 2024
Status: Passed
Primary sponsor: Andrew Gounardes
(38 total sponsors)
Last action: SIGNED CHAP.120 (June 20, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text addresses the negative impacts of addictive feeds created by social media platforms, particularly on minors, which closely aligns with concerns about the social impact of AI. The use of 'machine learning algorithms' indicates the reliance on AI technologies for personalizing these feeds, which can contribute to mental health issues among youth. The act aims to regulate these phenomena by prohibiting the provision of addictive feeds to minors, highlighting its relevance to accountability and consumer protection which further underscores its social implications.
Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation specifically targets the effects of AI-driven algorithms in social media, impacting minors' health and online safety. While it does not directly address AI in government services, healthcare, or the judicial system, it does touch on regulation of private enterprises (social media platforms). Thus, its broad implications for youth welfare and the regulation of technology in these contexts make it marginally relevant here. However, it is primarily focused on social media rather than healthcare or legal applications.
Keywords (occurrence): machine learning (1) automated (1)
Description: Establishes certain requirements for social media websites concerning content moderation practices; establishes cause of action against social media websites for violation of content moderation practices.
Summary: The bill mandates social media websites in New Jersey to adhere to specific content moderation standards, prohibits arbitrary user bans, and allows users to seek damages for violations.
Collection: Legislation
Status date: Feb. 22, 2024
Status: Introduced
Primary sponsor: Paul Kanitra
(sole sponsor)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Feb. 22, 2024)
Societal Impact
Data Governance (see reasoning)
The text addresses the requirements for social media websites specifically concerning content moderation practices and the role of algorithms in censoring or tagging content. The references to 'Algorithm' and their categorization in various subsections indicate a significant focus on how algorithms decide user interactions and content visibility, which is central to the discussions of social impact through moderation and censorship in the public discourse. Additionally, there are mentions of interactions that may lead to harm, such as banning and censoring users. Hence, the relevance is explicitly tied to social impact. The data governance aspects are also notable, especially regarding transparency and fairness in how algorithms are applied, but this is secondary to the larger social impact issues discussed. System integrity is less relevant as the text does not focus on the security of algorithms, although some transparency requirements touch upon aspects of integrity. Robustness is not explicitly addressed. Therefore, final scores reflect these considerations.
Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily concerns social media platforms, which aligns directly with the Private Enterprises, Labor, and Employment sector because of its implications for how user content is managed within a corporate structure. There is also a significant dimension related to Politics and Elections, particularly due to the mentions of candidates for public office and the oversight of online interactions related to political campaigns. However, given that much of the content focuses specifically on algorithmic behavior in the context of content moderation, a higher score in that sector is well justified. While government agencies' roles in oversight are hinted at, this aspect does not dominate the text. Therefore, the final scoring captures these primary sector correlations.
Keywords (occurrence): algorithm (4)
Description: An act to add Title 15.2 (commencing with Section 3110) to Part 4 of Division 3 of the Civil Code, relating to artificial intelligence.
Summary: Assembly Bill 2013 mandates developers of generative AI systems to disclose training data information on their websites by January 1, 2026, ensuring transparency and accountability for AI development.
Collection: Legislation
Status date: Sept. 28, 2024
Status: Passed
Primary sponsor: Jacqui Irwin
(sole sponsor)
Last action: Chaptered by Secretary of State - Chapter 817, Statutes of 2024. (Sept. 28, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text specifically discusses the regulation of artificial intelligence, particularly in relation to training data transparency. It addresses requirements for AI developers to disclose information about the datasets used for AI training, which relates directly to concerns about the social impact of AI and the implications of AI training data on bias, accountability, and consumer protections. Furthermore, it highlights transparency requirements in AI systems, connecting it strongly with Data Governance as it deals with data management and rectifying inaccuracies. The bill also places importance on AI systems' purpose and integrity, linking it with System Integrity as it mandates developers to provide substantial documentation. Given these considerations, the text is relevant to all four categories, but the emphasis on training data indicates a particularly strong connection to Data Governance and Social Impact.
Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation addresses the use of artificial intelligence in a clear and direct manner through the framework it establishes for data transparency among developers. However, the text does not specifically cater to any single sector like politics, healthcare, or the judicial system, but rather provides a broad regulatory framework applicable across sectors. Thus, it implies an impact on multiple sectors but does not directly address any specific sector, making it less relevant in that particular context. It does relate to Government Agencies and Public Services since it mandates transparency that could affect state agencies employing AI. The content thus encourages cross-sectoral implications but remains loosely connected to specific sectors.
Keywords (occurrence): artificial intelligence (26) automated (1)
Description: Creates a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation; repeals such commission.
Summary: The bill proposes the creation of a temporary New York state commission to study regulations for artificial intelligence, robotics, and automation, reporting findings by December 2025 before its repeal.
Collection: Legislation
Status date: March 20, 2024
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to science and technology (March 20, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text outlines the establishment of a temporary commission focused on investigating the regulation of artificial intelligence, robotics, and automation. The commission’s responsibilities will include examining current laws, potential liabilities, employment impacts, information confidentiality, weapon usage restrictions, and public sector applications. Therefore, it is closely aligned with issues that relate to social impact as the commission will assess how AI and related technologies affect employment and societal structures. Furthermore, the text implicates system integrity as it aims to ensure proper accountability and oversight of AI technologies' use and regulation. Given the nature of the commission’s work, data governance also comes into play, especially regarding the handling of confidential information. Robustness is less relevant here as the focus is not primarily on performance benchmarks or compliance, but instead on regulatory frameworks and ethical considerations. Thus, categories of Social Impact, Data Governance, and System Integrity are particularly relevant.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text speaks primarily to a public sector initiative as it creates a commission that will investigate how government regulation can handle AI and related technologies. This involves public considerations, governmental oversight, and societal impacts—categorizing it primarily under Government Agencies and Public Services. It touches on potential impacts on employment and legal aspects, suggesting some relevance to the Private Enterprises, Labor, and Employment sector but not strongly enough to elevate it beyond moderate relevance. The mention of confidentiality and regulations related to AI processing could link to aspects of the Judicial System or Healthcare, but those are not the focus of this text. Therefore, the most fitting category here is Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (9)
Description: Expressing the sense of the House of Representatives with respect to the use of artificial intelligence in the financial services and housing industries.
Summary: This resolution expresses the House's stance on overseeing artificial intelligence use in financial services and housing, emphasizing regulation, consumer data privacy, workforce impact, and maintaining U.S. leadership in AI development.
Collection: Legislation
Status date: Nov. 26, 2024
Status: Introduced
Primary sponsor: Patrick McHenry
(2 total sponsors)
Last action: Referred to the House Committee on Financial Services. (Nov. 26, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the use of artificial intelligence in financial services and housing, touching on various potential impacts and risks it may pose. It addresses issues of bias and discrimination in decision-making, regulatory gaps, and the need for oversight, making it particularly relevant to the Social Impact category. Furthermore, it highlights the importance of data privacy in relation to AI systems, which is a significant focus of the Data Governance category. Discussions on compliance and oversight may relate to System Integrity, while the overall focus on adoption, performance measures, and scrutiny could connect to Robustness. However, since these elements are more exploratory rather than strictly performance-related, the relevance to Robustness is less pronounced. Therefore, the scores reflect the emphasis on social and data governance implications while maintaining a supportive mention for system integrity.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text relates directly to the financial services and housing industries, highlighting the role of AI within these sectors. It mentions applications like underwriting and market surveillance, which are directly tied to the operational aspects of financial services. Therefore, the relevance to Government Agencies and Public Services could be seen in how these sectors may work under or with governmental regulation. However, specific references to governmental roles are not extensively described outside of legislative oversight. Topics touching on economic implications via AI involvement in labor could marginally connect to the Private Enterprises, Labor, and Employment sector. Yet, the focus remains tightly bound to financial services and housing, limiting broader sector applicability. The text does not delve into politics, law, healthcare, education, or international relations, which further constrains its classification into sectors beyond the primary focus. Thus, scoring reflects a strong alignment with the financial context without significant intersections with other sectors.
Keywords (occurrence): artificial intelligence (3) automated (1)
Description: A bill to amend the Workforce Innovation and Opportunity Act to establish a grant program for a workforce data quality initiative, and for other purposes.
Summary: The Workforce Data Enhancement Act aims to establish a grant program within the Workforce Innovation and Opportunity Act, enhancing workforce data quality for better decision-making and improving education and labor market integration.
Collection: Legislation
Status date: Nov. 21, 2024
Status: Introduced
Primary sponsor: John Hickenlooper
(2 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (Nov. 21, 2024)
Societal Impact
Data Governance (see reasoning)
The text of the 'Workforce Data Enhancement Act' primarily discusses the establishment of a grant program aimed at improving workforce data quality initiatives. AI is explicitly mentioned regarding its expanding role in the workplace, which indicates a consideration for the implications of AI technologies in labor markets and workforce dynamics. However, the bill is fundamentally focused on data quality and workforce development rather than the broader societal implications, governance frameworks, system integrity, or robustness of AI technologies specifically. Therefore, while related to workforce improvement influenced by AI, its alignment with deeper regulatory concerns around AI impacts on society, data management practices, system integrity, or performance benchmarks is limited. It combines elements relevant to both the impact of AI on jobs and the methods of handling workforce data but does not deeply engage with each category's broader concerns.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The act includes provisions that address how emerging technologies, particularly AI and machine learning, are influencing labor market outcomes and the skills required by workers. It discusses the integration of data from education and workforce development systems, as well as the establishment and improvement of data collection systems that could affect how government agencies implement AI. However, it does not focus heavily on specific sectors such as healthcare or judicial systems, though its implications for labor markets and public services are significant. Overall, it aligns well with the importance of accurate workforce data influenced by AI development trends, which indicates a moderate impact on workforce-related legislation and public services.
Keywords (occurrence): artificial intelligence (1) machine learning (2)
Summary: The bill H.R. 10263 asserts Congress's authority to regulate commerce related to artificial intelligence, affirming its single focus on this subject for legislative clarity.
Collection: Congressional Record
Status date: Nov. 26, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text explicitly mentions 'Artificial Intelligence' as the single subject of the legislation. This indicates that the legislation is directly focused on AI, suggesting significant relevance to all categories related to AI, particularly Social Impact. Since there are no references to data governance, system integrity, or robustness, these categories will score lower to reflect that lack of relevance.
Sector:
Government Agencies and Public Services (see reasoning)
The text does not explicitly mention applications of AI in specific sectors such as politics and elections, government services, healthcare, etc., but it introduces the subject of Artificial Intelligence, which could potentially impact any of these sectors. However, without specific references to any sector usage, all sectors will score low, with the highest score going to Government Agencies and Public Services due to the legislative context.
Keywords (occurrence): artificial intelligence (1)