

Description: Reinserts the provisions of the engrossed bill with the following changes. Provides that a provision in an agreement between an individual and any other person for the performance of personal or professional services is contrary to public policy and is deemed unenforceable if the provision does not include a reasonably specific description of the intended uses of the digital replica (rather than the provision does not clearly define and detail all of the proposed uses of the digital replica)....
Summary: The Digital Voice and Likeness Protection Act establishes legal protections against the unauthorized creation and use of a person's digital replicas, ensuring individuals maintain rights over their likeness and voice in contracts.
Collection: Legislation
Status date: Aug. 9, 2024
Status: Passed
Primary sponsor: Jennifer Gong-Gershowitz (19 total sponsors)
Last action: Public Act 103-0830 (Aug. 9, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The legislation primarily focuses on protecting individuals' rights over their digital likenesses and voices when these are generated using AI systems. It emphasizes the use of artificial intelligence in the creation of digital replicas, specifically addressing issues of consent and contractual enforceability for these AI-generated representations. It is therefore significantly relevant to all categories, given its implications for social rights, data governance, system integrity, and the robustness of AI standards governing the exploitation of digital likenesses.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The act has strong implications for various sectors, particularly concerning individuals' rights in the digital landscape. It pertains to private enterprises, where agreements governing the use of AI-generated likenesses are common. It also affects the academic sector, given the relevance of AI use cases to research on digital identity and representation. Lastly, the act may touch on government oversight roles and on nonprofit interests in digital rights advocacy. It does not, however, directly address AI use in healthcare, politics, or the judicial system.


Keywords (occurrence): artificial intelligence (5) automated (1) algorithm (1)
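The occurrence counts shown in these keyword lines could be produced by simple case-insensitive phrase matching over the bill text. The sketch below is illustrative only; the keyword list and the word-boundary matching rule are assumptions, not the tracker's documented method.

```python
import re

# Hypothetical keyword list for illustration; the tracker's actual list is not shown here.
KEYWORDS = ["artificial intelligence", "automated", "algorithm",
            "autonomous vehicle", "synthetic media", "machine learning"]

def keyword_occurrences(bill_text: str) -> dict:
    """Count case-insensitive, whole-phrase occurrences of each keyword."""
    counts = {}
    for kw in KEYWORDS:
        # Word boundaries keep "automated" from matching inside longer words.
        pattern = re.compile(r"\b" + re.escape(kw) + r"\b", re.IGNORECASE)
        counts[kw] = len(pattern.findall(bill_text))
    return {kw: n for kw, n in counts.items() if n > 0}

# Example on a short fragment:
sample = ("The Artificial Intelligence task force shall review each algorithm "
          "used in automated systems that rely on artificial intelligence.")
print(keyword_occurrences(sample))
# {'artificial intelligence': 2, 'automated': 1, 'algorithm': 1}
```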

Description: As introduced, enacts the "Protecting Tennessee Schools and Events Act"; subject to appropriations, requires the department of education to contract for the provision of walk-through metal detectors to LEAs. - Amends TCA Title 12 and Title 49.
Summary: This bill enhances school safety in Tennessee by requiring local education agencies to be provided with walk-through metal detectors for schools, addressing rising violence through strategic deployment and training.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Rush Bricken (sole sponsor)
Last action: Taken off notice for cal in s/c Finance, Ways, and Means Subcommittee of Finance, Ways, and Means Committee (April 17, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses the provision and specifications of walk-through metal detectors in schools and highlights the integration of AI technologies into security measures. While the primary focus is on physical security, AI plays a role in improving the effectiveness of these security systems, with implications for social standards, data handling, and system control. The integration of AI in threat identification implies new accountability and safety metrics, placing the bill within the Social Impact category, especially as it relates to school safety and security enhancements. Aspects of data governance are also touched upon given the mention of data collection and privacy laws, and system integrity is implicated by the focus on secure operational protocols, transparency in data handling, and potential biases in automated systems. Robustness is relevant insofar as AI systems must adapt to evolving threats and comply with existing regulations.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text focuses primarily on school safety and security, which relates directly to the Government Agencies and Public Services sector through education law and local education agencies. The integration of AI within security systems is highlighted, indicating significant relevance to the role of technology in public safety measures. While the bill mentions safety and operational procedures, its references to sectors such as healthcare or nonprofits are too indirect to make it highly relevant to them, though moderate connections could be drawn. Overall, the legislation is most relevant to education-focused services within the Government Agencies and Public Services sector, with potential implications for the Private Enterprises that supply the technology.


Keywords (occurrence): artificial intelligence (1) automated (2) algorithm (1)

Description: To prohibit, or require disclosure of, the surveillance, monitoring, and collection of certain worker data by employers, and for other purposes.
Summary: The "Stop Spying Bosses Act" aims to prohibit or mandate disclosure of employer surveillance and data collection on workers to protect their privacy rights.
Collection: Legislation
Status date: March 15, 2024
Status: Introduced
Primary sponsor: Christopher Deluzio (2 total sponsors)
Last action: Referred to the Committee on Education and the Workforce, and in addition to the Committees on Oversight and Accountability, and House Administration, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (March 15, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text addresses automated decision systems in its provisions, which connects it directly to the Social Impact category. The outputs of such systems are decisions that affect employees and could result in fair or unfair treatment, hence the relevance to the social aspects of AI. Other parts of the text focus on employers' monitoring and collection of worker data, practices that may rely on AI technologies to make those decisions, tying the bill to ethical considerations in AI usage. Its implications for equity and worker privacy further support the Social Impact relevance. Data Governance is highly relevant given the act's emphasis on the secure handling of worker data and its measures for privacy protection and transparency in data collection by automated systems; it directly addresses bias and misuse concerns, which are central to data governance. System Integrity also applies, since the act implies security measures governing how automated systems are designed and managed, with particular attention to the interaction between those systems and individual rights. Robustness does not apply, as the legislation does not address performance benchmarks or compliance standards for AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text importantly navigates the sector of Private Enterprises, Labor, and Employment. It directly addresses employer practices regarding data collection and monitoring of employees, having a clear impact on labor relations and workplace privacy. The legislation is aimed at protecting employee data from potentially invasive employer practices that rely upon automated decision-making systems. The Government Agencies and Public Services sector is also touched upon as it details regulatory frameworks that may involve public bodies overseeing enforcement against privacy violations, thus indicating a governmental role in these monitoring practices. However, the text lacks direct references to the other sectors outlined; there's no mention of judicial frameworks, healthcare, political campaigning, or academic environments, nor does it address nonprofits specifically. Thus, while it is central to labor and potentially government oversight, it doesn't significantly touch on the broader implications across other sectors.


Keywords (occurrence): artificial intelligence (2) automated (6)

Description: As introduced, requires political advertisements that are created in whole or in part by artificial intelligence to include certain disclaimers; requires materially deceptive media disseminated for purposes of a political campaign to include certain disclaimers; establishes criminal penalties and the right to injunctive relief for violations. - Amends TCA Title 2, Chapter 19, Part 1.
Summary: This bill amends Tennessee law to require transparency in political advertisements, mandating disclosures for content generated by artificial intelligence and prohibiting the distribution of materially deceptive media during election campaigns.
Collection: Legislation
Status date: Jan. 25, 2024
Status: Introduced
Primary sponsor: Jeff Yarbro (sole sponsor)
Last action: Passed on Second Consideration, refer to Senate State and Local Government Committee (Jan. 31, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses the implications of artificial intelligence in political advertising, particularly the use of AI-generated content and the disclaimers required for such advertisements. It also outlines regulations to prevent the spread of materially deceptive media created with AI, which intersects directly with social impact issues such as misinformation and trust in public discourse. The law establishes disclosure responsibilities intended to protect both candidates and the electorate from the harms of misleading political advertisements, so Social Impact is highly relevant. Data Governance is only slightly relevant: while compliance with the advertisement requirements involves some data management, the focus is on transparency in AI usage rather than data collection and management itself. System Integrity is moderately relevant, since the disclosure requirements touch on the transparency of AI use in political campaigns but do not delve deeply into control and oversight of AI systems. Robustness is not directly applicable, as the text does not reference performance metrics or benchmarks for AI systems. Overall, the bill heavily emphasizes social implications, particularly political integrity and public trust.


Sector:
Politics and Elections
Judicial system (see reasoning)

The legislation pertains specifically to the regulation of AI in the context of political campaigns, as it mandates disclosures for political advertisements created with AI and sets penalties for violations that could deceive voters. This clearly aligns with the sector 'Politics and Elections'. The relevance to Government Agencies and Public Services is less direct, as it primarily concerns campaign practices rather than government operations. The Judicial System has a moderate connection since the law includes provisions for legal action against deceptive practices in political media. However, it does not significantly address the use of AI within the judicial context itself. Other sectors such as Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not relevant as they do not directly relate to the bill's core focus on political advertisements.


Keywords (occurrence): artificial intelligence (6)

Description: Requiring that certain articles or broadcasts be removed from the Internet within a specified period to limit damages for defamation; providing persons in certain positions relating to newspapers with immunity for defamation if such persons exercise due care to prevent publication or utterance of such a statement; providing venue for damages for a defamation or privacy tort based on material broadcast over radio or television; providing a rebuttable presumption that a publisher of a false sta...
Summary: The bill amends Florida's defamation laws, requiring online defamatory content removal to limit damages, protects media from liability with due diligence, and establishes guidelines for AI-related false statements.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Judiciary (2 total sponsors)
Last action: Died in Fiscal Policy (March 8, 2024)

Category:
Societal Impact (see reasoning)

The text primarily deals with the legal aspects of defamation and the use of artificial intelligence (AI) to create or edit media that may convey false implications about individuals. The explicit mention of artificial intelligence in the act indicates direct relevance to the defined categories. The legislation establishes liability for those who use AI to create misleading content, thereby affecting individuals and society, and it discusses the ethical implications of AI use in media publication, which connects to Social Impact. However, it does not focus strongly on data governance, system integrity, or robustness, as those aspects concern how data is managed or how AI systems perform rather than the legal context presented here.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text intersects with multiple sectors, notably Politics and Elections, due to its implications for media and public information, extending to Government Agencies and Public Services since it delineates ways to manage defamation in broadcasting and online contexts. It also touches upon Private Enterprises, Labor, and Employment, as it relates to how businesses produce media content and may be held accountable for AI-generated misinformation. However, it doesn't strongly relate to Judicial System, Healthcare, Academic Institutions, International Cooperation, Nonprofits, or sectors classified as Hybrid, Emerging, or Unclassified, thus showing a more focused impact on media law and public discourse.


Keywords (occurrence): artificial intelligence (3) machine learning (1)

Description: Provide for the use of autonomous vehicles
Summary: The bill allows the use of autonomous vehicles on Montana's public highways, establishing safety regulations, defining vehicle levels, and granting rulemaking authority to the Department of Transportation.
Collection: Legislation
Status date: Dec. 27, 2024
Status: Introduced
Primary sponsor: Denley Loge (sole sponsor)
Last action: (S) Tabled in Committee (S) Energy, Technology & Federal Relations (Jan. 23, 2025)

Category:
Societal Impact
System Integrity
Data Robustness (see reasoning)

The text primarily discusses the use of autonomous vehicles, which relates directly to artificial intelligence because these vehicles rely on AI technologies in their automation systems. The definitions provided for automated driving systems and the various levels of vehicle autonomy establish a legislative framework for AI implementation in transportation. The focus on safety, consistency with state law, and stakeholder consultation further underscores the significance of the AI component in this bill. While the text does not extensively address social implications or data governance, it is rooted in compliance with operational safety regulations and system integrity for AI-driven vehicles, which supports its relevance to the System Integrity category.


Sector:
Government Agencies and Public Services (see reasoning)

The bill is highly relevant to Government Agencies and Public Services, as it outlines the Department of Transportation's role in rulemaking and oversight of autonomous vehicle use. It also touches on safety measures and public consultation, showing how the deployment of autonomous vehicles can affect public services. Although it does not mention politics and elections or have a significant connection to healthcare, its transportation focus supports categorization under Government Agencies and Public Services rather than marginal associations with other sectors.


Keywords (occurrence): automated (23) autonomous vehicle (9)

Description: Regulates use of artificial intelligence enabled video interview in hiring process.
Summary: The bill regulates the use of AI in video interviews by requiring employer transparency, applicant consent, data privacy, and demographic reporting to address potential racial biases in hiring.
Collection: Legislation
Status date: Feb. 27, 2024
Status: Introduced
Primary sponsor: Victoria Flynn (sole sponsor)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Feb. 27, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This legislation is highly relevant to the Social Impact category as it directly addresses issues related to the impact of AI on individuals participating in the hiring process. It includes regulations around consent for AI evaluations, transparency in how AI analyzes applicant videos, and collection of demographic data to monitor for potential racial bias. These elements show a clear intent to mitigate the risk of discrimination, thus having a significant societal impact. The Data Governance category is also relevant due to the requirements to delete applicant videos and report demographic information, which highlight data management and privacy concerns within AI hiring practices. System Integrity is relevant as it includes mandates for human oversight and accountability in the hiring AI systems. However, Robustness is less relevant since the text does not touch on performance benchmarks or compliance auditing for AI systems, which are central to that category. Overall, the focus on the ethical implications, transparency, and demographic reporting signifies strong alignment with social frameworks, making it very relevant to both Social Impact and Data Governance.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation is particularly relevant to the Private Enterprises, Labor, and Employment sector, as it governs the use of AI technology in the hiring process—affecting employers and applicants alike. It ensures fair practices in recruitment and addresses potential biases in hiring, indicating a direct impact on how companies implement technology in the workplace. There is also relevance to Government Agencies and Public Services since the legislation mandates reporting to a state department and requires oversight of AI use, which implicates regulatory frameworks surrounding the use of AI in government entities. The Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not find direct relevance in this text as it specifically targets employment practices and does not encompass their specific regulatory frameworks or applications.


Keywords (occurrence): artificial intelligence (16)

Description: To amend chapter 35 of title 44, United States Code, to establish Federal AI system governance requirements, and for other purposes.
Summary: The Federal A.I. Governance and Transparency Act of 2024 establishes governance requirements for federal artificial intelligence systems, ensuring compliance with laws, promoting fairness, and enhancing accountability and transparency in AI use.
Collection: Legislation
Status date: March 5, 2024
Status: Introduced
Primary sponsor: James Comer (8 total sponsors)
Last action: Placed on the Union Calendar, Calendar No. 740. (Dec. 18, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The Federal AI Governance and Transparency Act is directly focused on establishing governance for AI systems within the federal government. It explicitly addresses social impacts, such as civil rights, civil liberties, and fairness, by ensuring that AI applications do not unfairly harm or benefit certain groups. It also outlines requirements for transparency and accountability, which directly relate to the Social Impact category. The emphasis on responsible management, oversight, and adherence to laws reflects aspects covered by the System Integrity category while also tying into Data Governance under the data protection and privacy measures described. There is some relevance to robustness as well, due to the mention of testing AI systems against defined benchmarks and performance standards. However, the primary focus remains on governance and accountability in the context of social impact, data governance, and system integrity related to AI implementation and utilization.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The legislation is highly relevant to the Government Agencies and Public Services sector, as it specifically pertains to the use and governance of AI within federal agencies and outlines their responsibilities and procedures for utilizing AI. It is moderately relevant to the Judicial System because AI governance affects legal rights and individual determinations, such as appeals processes. The legislation may also touch on Private Enterprises, Labor, and Employment through its impact on contracts and procurement processes. However, it does not primarily address sectors such as Healthcare or Academic Institutions.


Keywords (occurrence): artificial intelligence (51) machine learning (1)

Description: Repeals provisions, relating to application of Florida Motor Vehicle No-Fault Law; revises requirement for proof of security on motor vehicle; revises motor vehicle insurance coverages that applicant must show to register vehicles with DHSMV; revises garage liability insurance requirements for motor vehicle dealer license applicants; revises minimum liability coverage requirements for motor vehicle owners or operators; revises legal liability of uninsured motorist coverage insurer; revises re...
Summary: HB 653 repeals Florida's No-Fault Law and revises motor vehicle insurance requirements, including coverage types, proof of liability, and security standards, to enhance clarity and compliance.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Daniel Alvarez (4 total sponsors)
Last action: Died in Insurance & Banking Subcommittee (March 8, 2024)

Category: None (see reasoning)

This text pertains primarily to motor vehicle insurance and regulatory aspects without direct reference to Artificial Intelligence (AI) technologies or implications. There are no mentions of terms associated with AI such as algorithms, machine learning, or automated decision-making. Thus, the categories of Social Impact, Data Governance, System Integrity, and Robustness are not applicable, as they require direct discussions or implications related to AI technologies which are absent in this legislative text.


Sector: None (see reasoning)

The text primarily revolves around motor vehicle regulations and insurance provisions. It does not delve into the intersection of AI with any sectors, neither does it provide insights into AI usage within Politics and Elections, Government Agencies, Healthcare, or any other specified sectors. Terms or activities associated with AI applications are completely lacking, rendering all sector categories irrelevant.


Keywords (occurrence): automated (2) autonomous vehicle (1)

Description: A bill to create an administrative subpoena process to assist copyright owners in determining which of their copyrighted works have been used in the training of artificial intelligence models.
Summary: The TRAIN Act establishes an administrative subpoena process for copyright owners to identify copyrighted works used in training artificial intelligence models, ensuring transparency and accountability in AI development.
Collection: Legislation
Status date: Nov. 21, 2024
Status: Introduced
Primary sponsor: Peter Welch (sole sponsor)
Last action: Read twice and referred to the Committee on the Judiciary. (Nov. 21, 2024)

Category:
Data Governance
System Integrity (see reasoning)

The text specifically addresses the issue of copyright and the use of copyrighted works in training artificial intelligence models. This has implications for data governance because it involves maintaining accurate records of what data has been used in AI training, which ties into intellectual property concerns and the management of data used in machine learning systems. The administrative subpoena process described offers a means for copyright owners to enforce their rights, highlighting the significance of transparency and accountability in AI development. While the text does touch on aspects of system integrity through transparency in the use of copyrighted materials, its primary focus on copyright and data collection makes it less relevant to robustness and social impact as defined in the categories. Overall, the primary focus on determining the use of copyrighted works in AI training aligns strongly with data governance.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text does not specifically address AI applications in any of the provided sectors comprehensively. However, the mention of administrative subpoenas related to AI models implies a connection to private enterprises, particularly regarding how businesses develop and deploy generative AI models while respecting copyright laws. The text does not explicitly address politics, government agencies, the judicial system, healthcare, academic contexts, international cooperation, or nonprofits, making those categories less relevant. Therefore, the legislation is best categorized within private enterprises and is only slightly relevant to government activities regarding copyright enforcement.


Keywords (occurrence): artificial intelligence (10)

Summary: The bill includes various resolutions and public bills focusing on education, child safety, manufacturing, and financial regulation. Its purpose is to address funding eligibility, improve safety standards, and study the impact of artificial intelligence.
Collection: Congressional Record
Status date: Nov. 26, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text mentions various bills, specifically focusing on artificial intelligence (AI) in some instances, particularly legislation aimed at studying the benefits of AI and the standardization of AI systems descriptions in the financial sector. This indicates a notable concern regarding the implications and applications of AI in specific fields, which is directly relevant to the categories. The inclusion of these topics suggests an awareness and consideration of AI's societal and operational impacts, data governance issues related to AI systems, and integrity in how systems are structured and deployed.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text includes a resolution regarding AI in financial services and housing, indicating a significant focus on the integration of AI into governmental and commercial processes. Moreover, the specific mention of studies and reports related to AI in these areas suggests a clear alignment with sectors directly influenced by AI technology. The presence of financial-related bills and their connection to AI also supports the relevance to legislative matters concerning politics and public services, although the focus does not extensively cover sectors like healthcare or education.


Keywords (occurrence): artificial intelligence (3)

Description: ELECTIONS -- DECEPTIVE AND FRAUDULENT SYNTHETIC MEDIA IN ELECTION COMMUNICATIONS - Prohibits synthetic media within ninety (90) days of an election.
Summary: The bill prohibits the distribution of deceptive synthetic media related to candidates within 90 days of an election, aiming to prevent misinformation and manipulation in election communications.
Collection: Legislation
Status date: Feb. 12, 2024
Status: Introduced
Primary sponsor: Louis Dipalma (10 total sponsors)
Last action: Committee recommended measure be held for further study (March 7, 2024)

Category:
Societal Impact (see reasoning)

The text explicitly addresses the use of synthetic media in election communications, targeting the potential for deception and fraud associated with AI technologies such as generative adversarial networks. This falls directly under the Social Impact category because of its focus on the societal implications of AI use in political contexts, particularly the risk of misinformation and its effect on public trust in electoral processes. The prohibition and disclosure requirements also highlight the need for ethical considerations in the deployment of AI in media contexts, and the focus on how synthetic media can mislead the public and compromise democratic processes underscores the legislation's critical relevance to societal impacts; Social Impact is therefore scored '5'. For Data Governance, while the accuracy of information is mentioned, the bill does not delve into data management practices or biases, leading to a score of '2'. System Integrity is less applicable because the bill does not specifically address security or governance safeguards for AI and synthetic media deployments, resulting in a score of '1'. Robustness scores '1', as the bill does not address benchmarks or performance metrics for AI systems. Overall, the clear emphasis on the implications of AI-generated content in elections drives its strong relevance to Social Impact.
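A minimal sketch of the 1-to-5 scoring rubric described above, assuming a hypothetical inclusion threshold of 4 for listing a category on an entry (the threshold is an illustrative assumption, not a documented rule of this tracker):

```python
# Relevance scores taken from the reasoning paragraph above; the inclusion
# threshold is a hypothetical value chosen for illustration only.
scores = {
    "Societal Impact": 5,
    "Data Governance": 2,
    "System Integrity": 1,
    "Data Robustness": 1,
}

INCLUSION_THRESHOLD = 4  # assumed cutoff for listing a category on the entry

included = [category for category, score in scores.items() if score >= INCLUSION_THRESHOLD]
print(included)  # ['Societal Impact'] -- matches the single category listed for this entry
```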


Sector:
Politics and Elections (see reasoning)

The text relates directly to the Politics and Elections sector by regulating the use of synthetic media in electoral contexts. The legislation addresses the potential manipulation of media to deceive voters and the legal ramifications for those who produce such material. This clear focus on safeguarding electoral integrity through AI regulation makes it highly relevant to Politics and Elections, which therefore receives a score of '5'. Other sectors, such as Government Agencies and Public Services or Nonprofits and NGOs, relate only tangentially and are not the primary focus, leading to lower scores ('1' for each, given their limited relevance to the content). The absence of any mention of healthcare, the judicial system, or international standards further confirms that the primary concern is political, underscoring the focused score of '5' for Politics and Elections.


Keywords (occurrence): artificial intelligence (1) synthetic media (19)

Description: Amend The South Carolina Code Of Laws By Adding Section 39-5-190 So As To Provide That Every Individual Has A Property Right In The Use Of That Individual's Name, Photograph, Voice, Or Likeness In Any Medium In Any Manner And To Provide Penalties.
Summary: The bill establishes individuals' property rights over their names, photographs, voices, and likenesses, mandating consent for commercial use and imposing penalties for unauthorized usage.
Collection: Legislation
Status date: April 9, 2024
Status: Introduced
Primary sponsor: Patricia Henegan (4 total sponsors)
Last action: Referred to Committee on Judiciary (April 9, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text pertains to the legal rights surrounding an individual's name, photograph, voice, or likeness, particularly focusing on the unauthorized use of these attributes. In relation to AI, this has significant implications as AI technology can simulate or replicate voices and likenesses (deepfake technology), which raises issues of consent, personal rights, and the commercialization of one’s identity. The specific mention of algorithms, software, or technology used for this purpose makes it extremely relevant for Social Impact, as it directly addresses privacy and individual rights, highlighting concerns of misuse of AI. It also touches upon Data Governance as it involves management of data related to personal attributes, but to a lesser extent. System Integrity and Robustness are not significantly addressed here as the focus is not on the security and performance of AI systems. Therefore, Social Impact is highly relevant due to the ethical and personal rights issues associated with AI use, while Data Governance is somewhat relevant due to oversight implications. System Integrity and Robustness receive low scores due to the absence of focus on system functionalities.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The legislation heavily relates to individual rights over personal likeness and information. This affects individuals who might be targeted by AI technologies (like deepfakes or voice simulations). However, it does not specifically address sectors such as Politics and Elections or Healthcare, where AI's impact may be more direct and apparent. The main sector relevance is found within the context of Private Enterprises, Labor, and Employment, as companies could be liable if they misuse an individual’s likeness for commercial purposes. Government Agencies and Public Services may also have an indirect connection concerning how governmental bodies handle complaints or issues arising from unauthorized uses of likeness. The legislation doesn't explicitly address the use of AI in other sectors like Academics, International Standards, or NGOs, thus leading to low relevance scores across these categories. Therefore, Private Enterprises receives a moderate score due to the potential for commercial exploitation issues uniquely tied to AI, while other sectors remain less impacted.


Keywords (occurrence): algorithm (1)

Description: As introduced, expands the offense of unlawful exposure to include the distribution, with the intent to cause emotional distress, of an image of the intimate parts of another identifiable person or an image of an identifiable person engaged in sexually explicit conduct and the image was created or modified by means of a computer software program, artificial intelligence application, or other digital editing tools; specifies that for the purposes of sexual exploitation of children offenses, th...
Summary: This bill amends Tennessee law to clarify definitions regarding unlawful images, specifically addressing privacy considerations and the use of artificial intelligence in creating or modifying images.
Collection: Legislation
Status date: Jan. 25, 2024
Status: Introduced
Primary sponsor: Jeff Yarbro (sole sponsor)
Last action: Passed on Second Consideration, refer to Senate Judiciary Committee (Jan. 31, 2024)

Category:
Societal Impact (see reasoning)

The text contains explicit mentions of artificial intelligence applications and their role in modifying images, which is critical in the context of unlawful exposure and potential misuse of AI-generated content. Given the focus on issues such as emotional distress and identifying individuals within digital content, it connects strongly with legislation regarding the social implications of AI. The AI components mentioned hold strong relevance to considerations of fairness, bias, and misinformation, particularly as they relate to societal impact. The keywords present in the text warrant an analysis across all four categories, but particularly highlight social impacts due to concerns over emotional and psychological harm.


Sector:
Judicial system (see reasoning)

The text primarily concerns the creation and distribution of potentially malicious digital content facilitated by AI technologies, particularly the emotional and psychological harm involved. It does not deal with broader applications of AI in sectors such as politics, government, or healthcare, but instead focuses on issues specific to unlawful image generation and dissemination, which align more with social welfare than with any particular sector. It therefore does not score highly on sector relevance, but its implications for social concerns justify a closer alignment with social impact.


Keywords (occurrence): artificial intelligence (2)

Description: Requires foreign-adversary-owned entities operating social media platforms to publicly disclose specified information in certain manner; requires foreign-adversary-owned entities operating social media platforms to implement user verification system for certain entities; requires enforcement by DLA.
Summary: The "Transparency in Social Media Act" mandates foreign-adversary-owned social media platforms in Florida to disclose algorithm details, implement user verification, and face penalties for non-compliance, enhancing transparency and security.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Regulatory Reform & Economic Development Subcommittee (9 total sponsors)
Last action: Died in Fiscal Policy (March 8, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly addresses the impact of algorithms used by social media platforms, which is crucial for understanding the broader societal implications of AI in communication and information sharing. It mentions the significance of transparency in these algorithms to safeguard democratic values and user privacy, thus tying it directly to the Social Impact category. The discussion around the need for user verification and accountability measures also intersects with social issues such as misinformation, public trust, and the influence on public discourse. However, the legislation does not delve deeply into specifics about data collection or management, which might relate to Data Governance, or broader implications for performance benchmarks that would correlate with Robustness. Thus, the focus remains more aligned with social impacts than technical frameworks or data integrity measures.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text has a strong relevance to Politics and Elections due to its emphasis on social media's role in shaping public discourse and the regulations surrounding political advertising. It also touches upon Government Agencies and Public Services in terms of enforcement by the Department of Legal Affairs, indicating government involvement in monitoring and ensuring compliance. However, the relevance to sectors like Healthcare, Judicial System, Private Enterprises, and others is minimal, as the primary focus is on social media regulation. The mention of foreign influence and accountability may touch upon broad themes related to governance but doesn't specifically slot into other defined sectors.


Keywords (occurrence): algorithm (1)

Summary: The bill serves as a tribute to Governor Phil Murphy and First Lady Tammy Murphy, acknowledging their impactful leadership and contributions to New Jersey's progress and community well-being over the past seven years.
Collection: Congressional Record
Status date: Nov. 21, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text discusses the leadership of Phil and Tammy Murphy in New Jersey, mentioning their initiatives in various sectors such as innovation and generative artificial intelligence (AI). However, the emphasis is primarily on their achievements in social policies, economic development, and environmental leadership rather than specific impacts or implications of AI technologies. Therefore, while AI is mentioned, the relevance to the categories is limited.


Sector:
Hybrid, Emerging, and Unclassified (see reasoning)

The text does not specifically address the use of AI in particular sectors such as politics, government, healthcare, etc. It mentions generative AI in passing, but there are no legislative or regulatory discussions related to AI application or impact in these sectors. Thus, the relevance remains minimal throughout.


Keywords (occurrence): artificial intelligence (1)

Description: ELECTIONS -- DECEPTIVE AND FRAUDULENT SYNTHETIC MEDIA IN ELECTION COMMUNICATIONS - Prohibits synthetic media within ninety (90) days of an election.
Summary: The bill prohibits the distribution of deceptive synthetic media—manipulated images, audio, or video—within 90 days of an election. Its aim is to safeguard electoral integrity by preventing misinformation.
Collection: Legislation
Status date: May 2, 2024
Status: Engrossed
Primary sponsor: Jacquelyn Baginski (4 total sponsors)
Last action: Referred to Senate Judiciary (May 13, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

This text is primarily focused on AI-generated synthetic media and its implications in election communications. The mention of 'artificial intelligence' and 'synthetic media' indicates a direct relevance to the impact AI can have on society, specifically in political contexts. The legislation aims to address the potential harmful effects of misleading synthetic media on electoral processes, which aligns closely with the theme of social impact. Data governance is less relevant as this text primarily focuses on usage and prevention rather than the governance of data itself. System integrity may apply due to the transparency and manipulation concerns surrounding synthetic media, but it's not the primary focus. Robustness is not particularly relevant here since the legislation does not specify performance benchmarks or compliance requirements for AI systems.


Sector:
Politics and Elections
Judicial system (see reasoning)

The legislation is highly relevant to the sector of Politics and Elections as it specifically addresses the use of AI in election communications, aiming to curb deceptive practices that may distort public perception during electoral periods. While there may be implications for Government Agencies and Public Services in the enforcement of these regulations, the primary focus remains within the electoral context, making it less relevant overall in that regard. The Judicial System could see implications if lawsuits arise from the legislation, but this is secondary to the act's main purpose. Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not directly connect to the text, yielding lower relevance scores.


Keywords (occurrence): artificial intelligence (2) synthetic media (19)

Summary: The bill encompasses various proposals presented in Congress addressing issues like election security, healthcare coverage, environmental policy, and education funding, aiming to enhance governance and public welfare.
Collection: Congressional Record
Status date: Nov. 21, 2024
Status: Issued
Source: Congress

Category: None (see reasoning)

The text primarily announces the introduction of various bills and joint resolutions in Congress, with only one mention related to 'the training of artificial intelligence models', which pertains to copyright concerns in S. 5387. This suggests potential legal issues surrounding AI development but does not provide enough detail to categorize the legislation with confidence. Therefore, while AI is mentioned, the overall relevance to the four categories is limited. Social Impact may relate slightly to the effects of AI on society through the copyright lens, while System Integrity has weak ties through the regulatory aspects mentioned. Data Governance might also be somewhat relevant, since the bill touches on copyright in relation to AI, but the text does not explicitly address data management itself. The Robustness category does not appear to align with the text. In conclusion, the mentions of AI do not support any major thematic discussion relevant to the categories.


Sector: None (see reasoning)

The text consists primarily of proposals for various legislative actions and does not directly discuss AI in sectors such as Politics and Elections, Government Agencies, Healthcare, or the Judicial System. The only sector relevance appears in the potential impact on government agencies from the mention of a bill concerning AI and copyright; this is minimal and does not carry significant weight across the other sectors either. Most of the bills focus on health, finance, and national security issues without a substantial AI element, resulting in low relevance scores across the board. Thus, while the discussions are legislative in nature, they do not specifically touch on the designated sectors of AI use and regulation.


Keywords (occurrence): artificial intelligence (1)

Summary: The bill aims to address biases in artificial intelligence algorithms, ensuring they don’t perpetuate discrimination in key areas like employment, banking, and healthcare, particularly impacting marginalized communities.
Collection: Congressional Record
Status date: Nov. 21, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance (see reasoning)

The text discusses the pervasive impact of AI on marginalized communities, particularly regarding bias and discrimination resulting from AI algorithms. It highlights the risk of algorithms perpetuating social inequalities and proposes comprehensive legislation aimed at regulating AI to protect civil rights. This focus on the societal implications of AI technology and the need to address issues such as bias and accountability strongly aligns with the Social Impact category. The discussion around the legislation's aims further emphasizes the need for data governance to ensure fairness and accountability in AI systems, but the social implications are more pronounced. Thus, the reasoning heavily supports a higher score in Social Impact, with moderate relevance to Data Governance for its mention of algorithm bias. System Integrity and Robustness are not directly addressed, leading to lower scores in these areas.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)

The text predominantly addresses the societal consequences of AI, particularly in marginalized communities, making it most relevant to the Politics and Elections sector as it discusses legislative efforts to address these issues through proposed laws. Similarly, it touches on Government Agencies and Public Services since successful regulation of AI systems relates to governmental oversight and application to public service sectors. However, it does not explicitly delve into political campaigning, judicial processes, healthcare, or academic institutions, leading to lower scores for those sectors and moderate scores for Politics and Elections and Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (2) algorithm (5)

Description: Urges Congress and President to enact "Do Not Disturb Act."
Summary: The bill urges Congress and the President to enact the "Do Not Disturb Act" to enhance protections against spam and scam calls, particularly for vulnerable populations.
Collection: Legislation
Status date: May 10, 2024
Status: Introduced
Primary sponsor: Annette Quijano (sole sponsor)
Last action: Introduced, Referred to Assembly Consumer Affairs Committee (May 10, 2024)

Category:
Societal Impact (see reasoning)

The 'Do Not Disturb Act' aims to enhance consumer protection against spam and scam calls, particularly those that use AI technologies to carry out fraudulent activities. The act is relevant to the 'Social Impact' category because it addresses potential harms caused by AI in the form of scams and the need for regulation to safeguard citizens, including vulnerable groups such as senior citizens. However, while it relates to AI misuse, it does not address broader societal impacts beyond consumer protection. For 'Data Governance', although the act provides certain protections around data transactions tied to these calls, it does not extensively address the data management or protection measures central to that category. The act seeks to ensure the integrity and security of communication systems rather than the integrity of AI systems themselves, making 'System Integrity' and 'Robustness' less relevant. The bill mentions AI in the context of scams but predominantly focuses on consumer protections and regulations for unwanted calls, so it falls short of the comprehensive approach required for 'Robustness'.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text does not specifically address the use of AI within political campaigns, nor does it focus significantly on governance or services by state or federal agencies. While it could touch on aspects of 'Government Agencies and Public Services' through the act's impact on consumer rights and protections within communication frameworks, it is not expressly focused on public service delivery systems. The mention of scams could theoretically relate to the 'Judicial System', since it implies a need for legal action against these practices, but the bill lacks direct engagement with judicial processes. The bill is primarily concerned with consumer protections in telecommunications services, making it most relevant to the 'Private Enterprises, Labor, and Employment' sector. The 'Hybrid, Emerging, and Unclassified' sector could capture the nuances of AI and consumer rights in communications, but that is not the bill's primary focus.


Keywords (occurrence): artificial intelligence (1)