5055 results:


Description: As introduced, requires a person to include a disclosure on certain content generated by artificial intelligence that the content was generated using artificial intelligence; makes it an unfair or deceptive act or practice under the Tennessee Consumer Protection Act of 1977 to distribute certain content generated using artificial intelligence without the required disclosure. - Amends TCA Title 47.
Summary: This bill amends Tennessee law to require disclosure for AI-generated content, ensuring transparency about its origins, and allows for legal action against those who fail to comply.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Bill Powers (sole sponsor)
Last action: Assigned to General Subcommittee of Senate Commerce and Labor Committee (March 13, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The bill explicitly addresses the use of artificial intelligence, particularly the requirement to disclose AI-generated content. This connects strongly with societal impact, as it seeks to protect consumers from potential deception and misinformation produced by AI systems. It also touches on data governance, since mandating disclosure of AI-generated content provides transparency and ensures that individuals are informed about the digital content they consume. These considerations directly relate to the implications of AI in society and the management of data generated or manipulated by AI systems. Given the nature of the content and its potential impacts, both categories are relevant to varying degrees.


Sector: None (see reasoning)

The bill primarily addresses consumer protection in the context of AI-generated content, which relates most closely to public welfare and consumer rights. It does not specifically address the use of AI in political systems, government services, judicial applications, healthcare, employment, research institutions, or international cooperation, lowering the relevance of those sectors. The bill's focus on AI disclosures means it applies to general consumers rather than any specific sector, which explains the lower sector scores. Politics and elections, government agencies and public services, the judicial system, healthcare, and the remaining sectors show minimal to no relevance, so no specific sector is assigned.


Keywords (occurrence): artificial intelligence (5) automated (1)
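The occurrence counts listed with each entry suggest a simple keyword-frequency pass over each bill's text. A minimal sketch of how such counts might be produced is below; the keyword list and the counting approach are illustrative assumptions, not the tracker's actual pipeline.

```python
import re

# Hypothetical vocabulary; the tracker's real keyword list is not published here.
KEYWORDS = ["artificial intelligence", "machine learning", "automated", "algorithm"]

def keyword_occurrences(text: str) -> dict[str, int]:
    """Count case-insensitive, whole-phrase occurrences of each keyword."""
    counts = {}
    for kw in KEYWORDS:
        # \b anchors keep "automated" from matching inside e.g. "semiautomated".
        pattern = re.compile(r"\b" + re.escape(kw) + r"\b", re.IGNORECASE)
        n = len(pattern.findall(text))
        if n:
            counts[kw] = n
    return counts

sample = ("This bill requires a disclosure on content generated using "
          "artificial intelligence. Artificial intelligence and automated "
          "systems are defined in Section 2.")
print(keyword_occurrences(sample))
# → {'artificial intelligence': 2, 'automated': 1}
```

Word-boundary anchoring matters here: a naive substring count would also match keywords embedded inside longer words, inflating the occurrence figures shown above.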

Description: COMMERCIAL LAW -- GENERAL REGULATORY PROVISIONS -- AUTOMATED DECISION TOOLS - Requires companies that develop or deploy high-risk AI systems to conduct impact assessments and adopt risk management programs, would apply to both developers and deployers of AI systems with different obligations based on their role in AI ecosystem.
Summary: The bill mandates companies developing or deploying high-risk AI systems to conduct impact assessments and implement risk management programs, ensuring accountability and reducing potential harm from AI decisions.
Collection: Legislation
Status date: March 22, 2024
Status: Introduced
Primary sponsor: Louis Dipalma (6 total sponsors)
Last action: Committee recommended measure be held for further study (April 11, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This legislation primarily addresses the development and deployment of high-risk AI systems, specifically focusing on required impact assessments and risk management programs. It discusses the implications of artificial intelligence (AI) in making consequential decisions that significantly affect individuals, illustrating the potential for bias in AI outputs and the need for transparency in AI deployment. Therefore, it has clear relevance to Social Impact due to its focus on accountability and the ethical implications of AI usage. Data Governance is also relevant as the law mandates accurate documentation and management of data used in AI systems to prevent bias and protect privacy. System Integrity is pertinent due to the consideration of risk management and oversight in AI processes. Robustness is relevant, but to a lesser extent, as it mentions general requirements for assessing the performance of AI systems without delving into specific benchmarks. Overall, the text's provisions primarily address social implications arising from AI deployment, with robust requirements for both developers and deployers of AI.


Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The legislation is highly relevant to various sectors that involve the application of AI technology. Specifically, it directly pertains to the Government Agencies and Public Services sector as it outlines regulatory requirements for developers and deployers of AI, which likely includes public service applications. It is also relevant to Private Enterprises, Labor, and Employment as it determines how businesses use AI systems to make significant decisions impacting individuals' lives, including employment and financial decisions. While it has implications for the Judicial System regarding potential bias in AI outcomes affecting legal aspects, its focus does not specifically target judicial processes. Healthcare may be considered due to the impact on AI systems used in healthcare decision-making, but it is not a primary focus. The legislation does not directly address political implications or nonprofit usage of AI, falling short in relevance to these sectors. Overall, the strongest connections are with Government Agencies and Public Services and Private Enterprises, Labor, and Employment.


Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (4)

Description: Amends the Criminal Code of 2012. Provides that certain forms of false personation may be accomplished by artificial intelligence. Defines "artificial intelligence".
Summary: The bill amends Illinois' Criminal Code to include provisions that allow for false personation via artificial intelligence, defining AI and expanding existing fraud laws.
Collection: Legislation
Status date: Jan. 19, 2024
Status: Introduced
Primary sponsor: Meg Loughran Cappel (sole sponsor)
Last action: Rule 3-9(a) / Re-referred to Assignments (March 15, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

The text discusses amendments to the Criminal Code specifically related to false personation facilitated by artificial intelligence (AI). The relevance to each category is as follows:

- **Social Impact:** The legislation clearly addresses potential harms and impacts on society due to AI's ability to facilitate false personation. This has implications for trust in social transactions and public interactions, leading to significant societal issues, so it is deemed extremely relevant.
- **Data Governance:** While the document defines AI, it does not specifically address issues around data collection or management, which is central to this category. It is slightly relevant mainly due to the potential underlying requirements for accurate data in AI systems, which are not explicitly mentioned in this text.
- **System Integrity:** The legislation implies a need for control over AI to prevent misuse (false personation). However, it does not extend to broader governance or technical standards relevant to system integrity, resulting in a moderately relevant assessment.
- **Robustness:** There are no mentions of benchmarks, auditing, or assessments of AI performance in this text, so this category is considered not relevant.


Sector:
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)

The legislation primarily addresses the use of AI in a criminal context, focusing on implications for false personation. Each sector's relevance is evaluated as follows:

- **Politics and Elections:** The text does not address the regulatory impacts of AI in political contexts, making it not relevant.
- **Government Agencies and Public Services:** Though it pertains to criminal law, it does not explicitly address the use of AI in governmental service delivery, so it is considered slightly relevant.
- **Judicial System:** The text pertains to criminal law and could have implications for how AI is treated within the judicial framework; however, it does not outline specific regulations for AI in judicial contexts, so it is moderately relevant.
- **Healthcare:** There are no direct implications concerning healthcare applications of AI, resulting in no relevance.
- **Private Enterprises, Labor, and Employment:** While it concerns false personation, it does not touch on employment practices directly related to AI, making this sector not relevant.
- **Academic and Research Institutions:** The text does not touch on educational or research implications, so relevance is not applicable.
- **International Cooperation and Standards:** There are no discussions about international standards or cooperation regarding AI in this text, leading to no relevance.
- **Nonprofits and NGOs:** The text does not address AI within the nonprofit sector, resulting in no relevance.
- **Hybrid, Emerging, and Unclassified:** Given that the legislation is specific to criminal implications of AI and does not fit neatly into other sectors, this sector receives a moderate relevance score.


Keywords (occurrence): artificial intelligence (3) automated (1)

Description: To require the Election Assistance Commission to develop voluntary guidelines for the administration of elections that address the use and risks of artificial intelligence technologies, and for other purposes.
Summary: The "Preparing Election Administrators for AI Act" mandates the Election Assistance Commission to create voluntary guidelines addressing the use and risks of artificial intelligence in election administration.
Collection: Legislation
Status date: May 10, 2024
Status: Introduced
Primary sponsor: Chrissy Houlahan (9 total sponsors)
Last action: Referred to the House Committee on House Administration. (May 10, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly addresses the use and risks of artificial intelligence technologies in the context of election administration. It discusses the need for voluntary guidelines on how AI can affect elections, including the risks and benefits associated with its use, highlighting issues such as cybersecurity, accuracy of election information, and the potential for disinformation. Given the connection to social issues, the security and transparency of AI systems in elections, and designing governance measures around AI use, the categories reflect varying degrees of relevance. The 'Social Impact' category is highly relevant due to attention to biases and misinformation, while 'Data Governance', 'System Integrity', and 'Robustness' may relate indirectly but are less emphasized in the context of the text's focus on election administration.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text centers on the implications of AI technologies specifically within the electoral process, highlighting guidelines for election officials. This context strongly aligns with the 'Politics and Elections' sector as it addresses the intersection of AI and democratic processes. While 'Government Agencies and Public Services' is also relevant due to the involvement of the Election Assistance Commission, it is secondary to the more focused implications for politics and elections. Other sectors such as 'Judicial System', 'Healthcare', and 'Private Enterprises' do not apply here, as they do not intersect with the content of this text.


Keywords (occurrence): artificial intelligence (7)

Description: To require digital social companies to adopt terms of service that meet certain minimum requirements.
Summary: The Digital Social Platform Transparency Act mandates digital social companies to implement clear terms of service, enhancing user awareness and reporting practices, with penalties for non-compliance.
Collection: Legislation
Status date: July 24, 2024
Status: Introduced
Primary sponsor: Katie Porter (sole sponsor)
Last action: Referred to the House Committee on Energy and Commerce. (July 24, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The Digital Social Platform Transparency Act addresses AI primarily through content moderation mechanisms that involve automated systems. This includes the usage of 'artificial intelligence software' for flagging content and determining actions on flagged items. The consideration of these automated systems points to a significant engagement with issues of accountability, transparency, and bias, impacting societal implications. As such, while primarily focused on transparency and accountability, the act’s provisions around AI usage in content moderation make it relevant to multiple categories, particularly Social Impact and System Integrity, as they both pertain to the usage of AI in public spheres and the implications this has for society. Data Governance is relevant due to the need for accurate reporting on flagged content, which indirectly involves data management governed by AI systems. Robustness is less relevant here since the focus isn’t primarily on performance benchmarks or compliance standards, but rather on operational transparency and accountability of digital social platforms.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The act directly pertains to sectors such as Politics and Elections due to its implications for content moderation and misinformation, particularly related to political discourse. It also engages the Government Agencies and Public Services sector by requiring reporting to the Attorney General regarding practices and misconduct, ensuring accountability for digital social platforms. Although it does not primarily address the Judicial System or Healthcare, its potential effects on public trust and interaction with digital media in influence and transparency connect it to broader implications in these sectors. Thus, while primarily focused on digital social platforms, its effects on political discourse and public services establish its relevance.


Keywords (occurrence): artificial intelligence (2) automated (1)

Description: Establishes criminal penalties for production or dissemination of deceptive audio or visual media, commonly known as "deepfakes."
Summary: The bill establishes criminal penalties for creating or sharing deceptive audio or visual media, known as "deepfakes," aimed at preventing their misuse in committing crimes or harming individuals.
Collection: Legislation
Status date: Feb. 8, 2024
Status: Introduced
Primary sponsor: Paul Moriarty (5 total sponsors)
Last action: Substituted by A3540 (ACS/2R) (Jan. 30, 2025)

Category:
Societal Impact
System Integrity (see reasoning)

The text primarily addresses the issue of deceptive audio and visual media, or 'deepfakes,' and establishes penalties for their production and dissemination. This has significant implications for Social Impact as it pertains to psychological harm caused by misleading media. The legislation also has relevance to System Integrity by ensuring accountability in the use of AI technologies involved in creating deepfakes, thereby promoting transparency in such operations. Data Governance is somewhat relevant, but the focus is more on the governance of media than data management itself. Robustness has limited direct relevance as this text does not focus on performance benchmarks or compliance auditing related to AI systems.


Sector:
Politics and Elections
Government Agencies and Public Services
Nonprofits and NGOs (see reasoning)

This legislation is particularly relevant to Nonprofits and NGOs that may engage with or advocate against the misuse of deepfake technology, potentially affecting their operations. Additionally, the implications of deepfake media tangentially touch upon the realms of Politics and Elections, as the spread of manipulated media can influence public perception and electoral outcomes. However, the primary focus is not explicitly on political regulation, thus not receiving a high relevance score for that sector. The Government Agencies and Public Services sector may also intersect as public institutions may be required to address the harms of deepfakes. Nevertheless, the core of the legislation speaks more to the societal impact of technology rather than specific sector regulations or applications.


Keywords (occurrence): artificial intelligence (2)

Description: Establishes the Stop Addictive Feeds Exploitation (SAFE) For Kids Act prohibiting the provision of addictive feeds to minors by addictive social media platforms; establishes remedies and penalties.
Summary: The SAFE For Kids Act prohibits addictive social media feeds for minors without parental consent, aiming to protect children's mental health and regulate social media companies' practices. Penalties are established for violations.
Collection: Legislation
Status date: June 20, 2024
Status: Passed
Primary sponsor: Andrew Gounardes (38 total sponsors)
Last action: SIGNED CHAP.120 (June 20, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text addresses the negative impacts of addictive feeds created by social media platforms, particularly on minors, which closely aligns with concerns about the social impact of AI. The use of 'machine learning algorithms' indicates the reliance on AI technologies for personalizing these feeds, which can contribute to mental health issues among youth. The act aims to regulate these phenomena by prohibiting the provision of addictive feeds to minors, highlighting its relevance to accountability and consumer protection which further underscores its social implications.


Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The legislation specifically targets the effects of AI-driven algorithms in social media, impacting minors' mental health and online safety. Its concern with children's mental health gives it a connection to Healthcare, and it regulates private enterprises (social media platforms), making both sectors marginally relevant. It does not directly address AI in government services or the judicial system, and it remains primarily focused on social media rather than clinical or legal applications.


Keywords (occurrence): machine learning (1) automated (1)

Description: Establishes certain requirements for social media websites concerning content moderation practices; establishes cause of action against social media websites for violation of content moderation practices.
Summary: The bill mandates social media websites in New Jersey to adhere to specific content moderation standards, prohibits arbitrary user bans, and allows users to seek damages for violations.
Collection: Legislation
Status date: Feb. 22, 2024
Status: Introduced
Primary sponsor: Paul Kanitra (sole sponsor)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Feb. 22, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text addresses the requirements for social media websites specifically concerning content moderation practices and the role of algorithms in censoring or tagging content. The references to 'Algorithm' and their categorization in various subsections indicate a significant focus on how algorithms decide user interactions and content visibility, which is central to the discussions of social impact through moderation and censorship in the public discourse. Additionally, there are mentions of interactions that may lead to harm, such as banning and censoring users. Hence, the relevance is explicitly tied to social impact. The data governance aspects are also notable, especially regarding transparency and fairness in how algorithms are applied, but this is secondary to the larger social impact issues discussed. System integrity is less relevant as the text does not focus on the security of algorithms, although some transparency requirements touch upon aspects of integrity. Robustness is not explicitly addressed. Therefore, final scores reflect these considerations.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily concerns social media platforms, which aligns with the Private Enterprises, Labor, and Employment sector through its implications for the management of user content within a corporate structure. There is also a significant dimension related to Politics and Elections, particularly given the mentions of candidates for public office and the oversight of online interactions related to political campaigns. Because much of the content focuses specifically on algorithmic behavior in content moderation, the enterprise sector is well justified in scoring higher. While government agencies' roles in oversight are hinted at, this aspect does not dominate the text. The final scoring captures these primary sector correlations.


Keywords (occurrence): algorithm (4)

Description: An act to add Title 15.2 (commencing with Section 3110) to Part 4 of Division 3 of the Civil Code, relating to artificial intelligence.
Summary: Assembly Bill 2013 mandates developers of generative AI systems to disclose training data information on their websites by January 1, 2026, ensuring transparency and accountability for AI development.
Collection: Legislation
Status date: Sept. 28, 2024
Status: Passed
Primary sponsor: Jacqui Irwin (sole sponsor)
Last action: Chaptered by Secretary of State - Chapter 817, Statutes of 2024. (Sept. 28, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text specifically discusses the regulation of artificial intelligence, particularly in relation to training data transparency. It addresses requirements for AI developers to disclose information about the datasets used for AI training, which relates directly to concerns about the social impact of AI and the implications of AI training data on bias, accountability, and consumer protections. Furthermore, it highlights transparency requirements in AI systems, connecting it strongly with Data Governance as it deals with data management and rectifying inaccuracies. The bill also places importance on AI systems' purpose and integrity, linking it with System Integrity as it mandates developers to provide substantial documentation. Given these considerations, the text is relevant to all four categories, but the emphasis on training data indicates a particularly strong connection to Data Governance and Social Impact.


Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified (see reasoning)

The legislation addresses the use of artificial intelligence in a clear and direct manner through the framework it establishes for data transparency among developers. However, the text does not specifically cater to any single sector like politics, healthcare, or the judicial system, but rather provides a broad regulatory framework applicable across sectors. Thus, it implies an impact on multiple sectors but does not directly address any specific sector, making it less relevant in that particular context. It does relate to Government Agencies and Public Services since it mandates transparency that could affect state agencies employing AI. The content thus encourages cross-sectoral implications but remains loosely connected to specific sectors.


Keywords (occurrence): artificial intelligence (26) automated (1)

Description: Creates a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation; repeals such commission.
Summary: The bill proposes the creation of a temporary New York state commission to study regulations for artificial intelligence, robotics, and automation, reporting findings by December 2025 before its repeal.
Collection: Legislation
Status date: March 20, 2024
Status: Introduced
Primary sponsor: Clyde Vanel (sole sponsor)
Last action: referred to science and technology (March 20, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text outlines the establishment of a temporary commission focused on investigating the regulation of artificial intelligence, robotics, and automation. The commission’s responsibilities will include examining current laws, potential liabilities, employment impacts, information confidentiality, weapon usage restrictions, and public sector applications. Therefore, it is closely aligned with issues that relate to social impact as the commission will assess how AI and related technologies affect employment and societal structures. Furthermore, the text implicates system integrity as it aims to ensure proper accountability and oversight of AI technologies' use and regulation. Given the nature of the commission’s work, data governance also comes into play, especially regarding the handling of confidential information. Robustness is less relevant here as the focus is not primarily on performance benchmarks or compliance, but instead on regulatory frameworks and ethical considerations. Thus, categories of Social Impact, Data Governance, and System Integrity are particularly relevant.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The text speaks primarily to a public sector initiative as it creates a commission that will investigate how government regulation can handle AI and related technologies. This involves public considerations, governmental oversight, and societal impacts—categorizing it primarily under Government Agencies and Public Services. It touches on potential impacts on employment and legal aspects, suggesting some relevance to the Private Enterprises, Labor, and Employment sector but not strongly enough to elevate it beyond moderate relevance. The mention of confidentiality and regulations related to AI processing could link to aspects of the Judicial System or Healthcare, but those are not the focus of this text. Therefore, the most fitting category here is Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (9)

Description: Expressing the sense of the House of Representatives with respect to the use of artificial intelligence in the financial services and housing industries.
Summary: This resolution expresses the House's stance on overseeing artificial intelligence use in financial services and housing, emphasizing regulation, consumer data privacy, workforce impact, and maintaining U.S. leadership in AI development.
Collection: Legislation
Status date: Nov. 26, 2024
Status: Introduced
Primary sponsor: Patrick McHenry (2 total sponsors)
Last action: Referred to the House Committee on Financial Services. (Nov. 26, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly discusses the use of artificial intelligence in financial services and housing, touching on various potential impacts and risks it may pose. It addresses issues of bias and discrimination in decision-making, regulatory gaps, and the need for oversight, making it particularly relevant to the Social Impact category. Furthermore, it highlights the importance of data privacy in relation to AI systems, which is a significant focus of the Data Governance category. Discussions on compliance and oversight may relate to System Integrity, while the overall focus on adoption, performance measures, and scrutiny could connect to Robustness. However, since these elements are more exploratory rather than strictly performance-related, the relevance to Robustness is less pronounced. Therefore, the scores reflect the emphasis on social and data governance implications while maintaining a supportive mention for system integrity.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text relates directly to the financial services and housing industries, highlighting the role of AI within these sectors. It mentions applications like underwriting and market surveillance, which are directly tied to the operational aspects of financial services. Therefore, the relevance to Government Agencies and Public Services could be seen in how these sectors may work under or with governmental regulation. However, specific references to governmental roles are not extensively described outside of legislative oversight. Topics touching on economic implications via AI involvement in labor could marginally connect to the Private Enterprises, Labor, and Employment sector. Yet, the focus remains tightly bound to financial services and housing, limiting broader sector applicability. The text does not delve into politics, law, healthcare, education, or international relations, which further constrains its classification into sectors beyond the primary focus. Thus, scoring reflects a strong alignment with the financial context without significant intersections with other sectors.


Keywords (occurrence): artificial intelligence (3) automated (1)

Description: A bill to amend the Workforce Innovation and Opportunity Act to establish a grant program for a workforce data quality initiative, and for other purposes.
Summary: The Workforce Data Enhancement Act aims to establish a grant program within the Workforce Innovation and Opportunity Act, enhancing workforce data quality for better decision-making and improving education and labor market integration.
Collection: Legislation
Status date: Nov. 21, 2024
Status: Introduced
Primary sponsor: John Hickenlooper (2 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (Nov. 21, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text of the 'Workforce Data Enhancement Act' primarily discusses the establishment of a grant program aimed at improving workforce data quality initiatives. AI is explicitly mentioned regarding its expanding role in the workplace, which indicates a consideration for the implications of AI technologies in labor markets and workforce dynamics. However, the bill is fundamentally focused on data quality and workforce development rather than the broader societal implications, governance frameworks, system integrity, or robustness of AI technologies specifically. Therefore, while related to workforce improvement influenced by AI, its alignment with deeper regulatory concerns around AI impacts on society, data management practices, system integrity, or performance benchmarks is limited. It combines elements relevant to both the impact of AI on jobs and the methods of handling workforce data but does not deeply engage with each category's broader concerns.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The act includes provisions that address how emerging technologies, particularly AI and machine learning, are influencing labor market outcomes and the skills required by workers. It discusses the integration of data from education and workforce development systems, as well as the establishment and improvement of systems for data collection that could affect how government agencies implement AI. However, it does not focus heavily on specific sectors such as healthcare or the judicial system, though its implications for labor markets and public services are significant. Overall, it aligns well with the importance of accurate workforce data influenced by AI development trends, indicating a moderate impact on workforce-related legislation and public services.


Keywords (occurrence): artificial intelligence (1) machine learning (2)

Summary: The bill H.R. 10263 asserts Congress's authority to regulate commerce related to artificial intelligence, affirming its single focus on this subject for legislative clarity.
Collection: Congressional Record
Status date: Nov. 26, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text explicitly names 'Artificial Intelligence' as the single subject of the legislation, indicating that the legislation is directly focused on AI, with its relevance concentrated in Social Impact. Since there are no references to data governance, system integrity, or robustness, those categories score lower to reflect that lack of relevance.


Sector:
Government Agencies and Public Services (see reasoning)

The text does not explicitly mention applications of AI in specific sectors such as politics and elections, government services, healthcare, etc., but it introduces the subject of Artificial Intelligence, which could potentially impact any of these sectors. However, without specific references to any sector usage, all sectors will score low, with the highest score going to Government Agencies and Public Services due to the legislative context.


Keywords (occurrence): artificial intelligence (1)

Description: Reinserts the provisions of the engrossed bill with the following changes. Provides that a provision in an agreement between an individual and any other person for the performance of personal or professional services is contrary to public policy and is deemed unenforceable if the provision does not include a reasonably specific description of the intended uses of the digital replica (rather than the provision does not clearly define and detail all of the proposed uses of the digital replica)....
Summary: The Digital Voice and Likeness Protection Act establishes legal protections against the unauthorized creation and use of a person's digital replicas, ensuring individuals maintain rights over their likeness and voice in contracts.
Collection: Legislation
Status date: Aug. 9, 2024
Status: Passed
Primary sponsor: Jennifer Gong-Gershowitz (19 total sponsors)
Last action: Public Act 103-0830 (Aug. 9, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The legislation primarily focuses on the protection of individuals' rights concerning their digital likenesses and voices when generated using AI systems. It emphasizes the use of artificial intelligence in the creation of digital replicas, specifically addressing issues of consent and contractual enforceability related to these AI-generated representations. Therefore, it is significantly relevant to all categories, given its implications for social rights, data governance, system integrity, and the robustness of AI standards in managing digital likeness exploitation.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The act has strong implications for various sectors, particularly concerning the rights of individuals in the digital landscape. It pertains to private enterprises, where agreements regarding AI usage of likeness are common. It also impacts the academic sector due to the relevance of AI use cases in research on digital identity and representation. Lastly, the act may touch upon government oversight roles and nonprofit interests involved with digital rights advocacy. Nevertheless, it does not directly address AI use in healthcare, politics, or the judicial system.


Keywords (occurrence): artificial intelligence (5) automated (1) algorithm (1)

Description: As introduced, enacts the "Protecting Tennessee Schools and Events Act"; subject to appropriations, requires the department of education to contract for the provision of walk-through metal detectors to LEAs. - Amends TCA Title 12 and Title 49.
Summary: This bill enhances school safety in Tennessee by requiring local education agencies to be provided with walk-through metal detectors for schools, addressing rising violence through strategic deployment and training.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Rush Bricken (sole sponsor)
Last action: Taken off notice for cal in s/c Finance, Ways, and Means Subcommittee of Finance, Ways, and Means Committee (April 17, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses the provision and specifications of walk-through metal detectors in schools and highlights the integration of AI technologies in enhancing security measures. While the primary focus is on physical security, AI plays a role in improving the effectiveness of these security systems, which has implications for social standards, data handling, and system control. The AI integration in identifying threats implies new accountability and safety metrics, placing it within the Social Impact category, especially as it relates to school safety and security enhancements. Additionally, aspects of data governance are touched upon given the mention of data collection and privacy laws, and system integrity due to the focus on secure operational protocols, transparency in data handling, and potential biases in automated systems. Robustness is relevant as it pertains to the adaptation of AI systems to evolving threats and ensuring compliance with existing regulations.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text is primarily focused on school safety and security, which relates directly to the Government Agencies and Public Services sector through education law and local education agencies. The integration of AI within security systems is highlighted, indicating significant relevance to the role of technology in public safety measures. Its references to other sectors such as healthcare or nonprofits are less direct, keeping them from high relevance, though moderate connections could be made. Overall, the legislation is most relevant to education-focused services within the Government Agencies and Public Services sector, along with potential implications for Private Enterprises that supply the technology.


Keywords (occurrence): artificial intelligence (1) automated (2) algorithm (1)

Description: To prohibit, or require disclosure of, the surveillance, monitoring, and collection of certain worker data by employers, and for other purposes.
Summary: The "Stop Spying Bosses Act" aims to prohibit or mandate disclosure of employer surveillance and data collection on workers to protect their privacy rights.
Collection: Legislation
Status date: March 15, 2024
Status: Introduced
Primary sponsor: Christopher Deluzio (2 total sponsors)
Last action: Referred to the Committee on Education and the Workforce, and in addition to the Committees on Oversight and Accountability, and House Administration, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (March 15, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses automated decision systems as part of its provisions, which directly connects it to the category of Social Impact. The automated decision system's output refers to outcomes and decisions that impact employees, which could lead to fair or unfair treatment, hence the relevance to social aspects of AI. Other parts of the text focus on the monitoring and collection of worker data by employers, which could also encompass AI technologies in making those decisions, thereby relating it to ethical considerations in AI usage. The implications on equity and worker privacy further contextualize its social impact relevance. Data Governance is highly relevant due to the act's emphasis on the secure handling of worker data, creating measures for privacy protection and transparency in data collection by automated systems. It directly addresses bias and misuse concerns, which are essential parts of data governance. System Integrity is also applicable here since there are implications of security measures governing how automated systems are designed and managed, with a particular focus on the interaction between those systems and individual rights. Robustness does not apply as this legislation does not focus on performance benchmarks or compliance standards for AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text squarely addresses the Private Enterprises, Labor, and Employment sector. It directly concerns employer practices regarding data collection and monitoring of employees, with a clear impact on labor relations and workplace privacy. The legislation aims to protect employee data from potentially invasive employer practices that rely on automated decision-making systems. The Government Agencies and Public Services sector is also touched upon, as the text details regulatory frameworks that may involve public bodies overseeing enforcement against privacy violations, indicating a governmental role in these monitoring practices. However, the text lacks direct references to the other sectors outlined: there is no mention of judicial frameworks, healthcare, political campaigning, or academic environments, nor does it address nonprofits specifically. Thus, while it is central to labor and potentially government oversight, it does not significantly touch on the broader implications across other sectors.


Keywords (occurrence): artificial intelligence (2) automated (6)

Description: As introduced, requires political advertisements that are created in whole or in part by artificial intelligence to include certain disclaimers; requires materially deceptive media disseminated for purposes of a political campaign to include certain disclaimers; establishes criminal penalties and the right to injunctive relief for violations. - Amends TCA Title 2, Chapter 19, Part 1.
Summary: This bill amends Tennessee law to require transparency in political advertisements, mandating disclosures for content generated by artificial intelligence and prohibiting the distribution of materially deceptive media during election campaigns.
Collection: Legislation
Status date: Jan. 25, 2024
Status: Introduced
Primary sponsor: Jeff Yarbro (sole sponsor)
Last action: Passed on Second Consideration, refer to Senate State and Local Government Committee (Jan. 31, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses the implications of artificial intelligence in political advertising, particularly regarding the use of AI-generated content and the requirement for disclaimers associated with such advertisements. It also outlines regulations to prevent the spread of materially deceptive media using AI, which directly intersects with social impact issues such as misinformation and trust in public discourse. The law establishes responsibilities for disclosure, aiming to protect both candidates and the electorate from the harms of misleading political advertisements. Given these points, Social Impact is highly relevant. Data Governance is only slightly relevant: while there are elements of data management associated with ensuring compliance in political advertisements, the focus is on transparency in AI usage rather than data collection and management itself. System Integrity is moderately relevant, since the disclosure requirements touch upon the transparency of AI use in political campaigns but do not delve deeply into control and oversight of AI systems. Robustness is not directly applicable, as the text does not reference performance metrics or benchmarks for AI systems. Overall, the bill heavily emphasizes social implications, particularly in the context of political integrity and public trust.


Sector:
Politics and Elections
Judicial system (see reasoning)

The legislation pertains specifically to the regulation of AI in the context of political campaigns, as it mandates disclosures for political advertisements created with AI and sets penalties for violations that could deceive voters. This clearly aligns with the sector 'Politics and Elections'. The relevance to Government Agencies and Public Services is less direct, as it primarily concerns campaign practices rather than government operations. The Judicial System has a moderate connection since the law includes provisions for legal action against deceptive practices in political media. However, it does not significantly address the use of AI within the judicial context itself. Other sectors such as Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not relevant as they do not directly relate to the bill's core focus on political advertisements.


Keywords (occurrence): artificial intelligence (6)

Description: Requiring that certain articles or broadcasts be removed from the Internet within a specified period to limit damages for defamation; providing persons in certain positions relating to newspapers with immunity for defamation if such persons exercise due care to prevent publication or utterance of such a statement; providing venue for damages for a defamation or privacy tort based on material broadcast over radio or television; providing a rebuttable presumption that a publisher of a false sta...
Summary: The bill amends Florida's defamation laws, requiring online defamatory content removal to limit damages, protects media from liability with due diligence, and establishes guidelines for AI-related false statements.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Judiciary (2 total sponsors)
Last action: Died in Fiscal Policy (March 8, 2024)

Category:
Societal Impact (see reasoning)

The text primarily deals with legal aspects of defamation and the use of artificial intelligence (AI) in creating or editing media that may lead to false implications about individuals. The mention of 'artificial intelligence' in this act indicates direct relevance to the categories defined. The legislation establishes liability for those who use AI to create misleading content, thereby impacting individuals and society. The bill also discusses the ethical implications of AI use in media publication, which connects to Social Impact. However, it does not focus strongly on data governance, system integrity, or robustness, as those aspects relate more to how data is managed or how AI systems perform than to the legal context presented here.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text intersects with multiple sectors, notably Politics and Elections, due to its implications for media and public information, extending to Government Agencies and Public Services since it delineates ways to manage defamation in broadcasting and online contexts. It also touches upon Private Enterprises, Labor, and Employment, as it relates to how businesses produce media content and may be held accountable for AI-generated misinformation. However, it doesn't strongly relate to Judicial System, Healthcare, Academic Institutions, International Cooperation, Nonprofits, or sectors classified as Hybrid, Emerging, or Unclassified, thus showing a more focused impact on media law and public discourse.


Keywords (occurrence): artificial intelligence (3) machine learning (1)

Description: Regulates use of artificial intelligence enabled video interview in hiring process.
Summary: The bill regulates the use of AI in video interviews by requiring employer transparency, applicant consent, data privacy, and demographic reporting to address potential racial biases in hiring.
Collection: Legislation
Status date: Feb. 27, 2024
Status: Introduced
Primary sponsor: Victoria Flynn (sole sponsor)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Feb. 27, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This legislation is highly relevant to the Social Impact category as it directly addresses issues related to the impact of AI on individuals participating in the hiring process. It includes regulations around consent for AI evaluations, transparency in how AI analyzes applicant videos, and collection of demographic data to monitor for potential racial bias. These elements show a clear intent to mitigate the risk of discrimination, thus having a significant societal impact. The Data Governance category is also relevant due to the requirements to delete applicant videos and report demographic information, which highlight data management and privacy concerns within AI hiring practices. System Integrity is relevant as it includes mandates for human oversight and accountability in the hiring AI systems. However, Robustness is less relevant since the text does not touch on performance benchmarks or compliance auditing for AI systems, which are central to that category. Overall, the focus on the ethical implications, transparency, and demographic reporting signifies strong alignment with social frameworks, making it very relevant to both Social Impact and Data Governance.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation is particularly relevant to the Private Enterprises, Labor, and Employment sector, as it governs the use of AI technology in the hiring process and affects employers and applicants alike. It ensures fair practices in recruitment and addresses potential biases in hiring, indicating a direct impact on how companies implement technology in the workplace. There is also relevance to Government Agencies and Public Services, since the legislation mandates reporting to a state department and requires oversight of AI use, which implicates regulatory frameworks surrounding the use of AI in government entities. The Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors are not directly relevant here, as the text specifically targets employment practices and does not encompass those sectors' regulatory frameworks or applications.


Keywords (occurrence): artificial intelligence (16)

Description: To amend chapter 35 of title 44, United States Code, to establish Federal AI system governance requirements, and for other purposes.
Summary: The Federal A.I. Governance and Transparency Act of 2024 establishes governance requirements for federal artificial intelligence systems, ensuring compliance with laws, promoting fairness, and enhancing accountability and transparency in AI use.
Collection: Legislation
Status date: March 5, 2024
Status: Introduced
Primary sponsor: James Comer (8 total sponsors)
Last action: Placed on the Union Calendar, Calendar No. 740. (Dec. 18, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The Federal AI Governance and Transparency Act is directly focused on establishing governance for AI systems within the federal government. It explicitly addresses social impacts, such as civil rights, civil liberties, and fairness, by ensuring that AI applications do not unfairly harm or benefit certain groups. It also outlines requirements for transparency and accountability, which directly relate to the Social Impact category. The emphasis on responsible management, oversight, and adherence to laws reflects aspects covered by the System Integrity category while also tying into Data Governance under the data protection and privacy measures described. There is some relevance to robustness as well, due to the mention of testing AI systems against defined benchmarks and performance standards. However, the primary focus remains on governance and accountability in the context of social impact, data governance, and system integrity related to AI implementation and utilization.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The legislation is highly relevant to the Government Agencies and Public Services sector as it specifically pertains to the use and governance of AI within federal agencies, outlining their responsibilities and procedures in utilizing AI. There is a moderate relevance to the Judicial System because of implications that AI governance affects legal rights and individual determinations, such as appeals processes. Additionally, this legislation may touch upon the implications for Private Enterprises, Labor, and Employment due to its governance impact on contracts and procurement processes. However, it does not primarily address issues specifically related to sectors like Healthcare, Academic Institutions, or others listed.


Keywords (occurrence): artificial intelligence (51) machine learning (1)