5015 results:
Summary: The bill involves various executive communications from multiple U.S. government departments, transmitting regulations and reports to Congress for review, mainly relating to agriculture, housing, energy, and environmental policies.
Collection: Congressional Record
Status date: Dec. 3, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text appears to involve various executive communications, primarily focused on regulations and final rule submissions across multiple departments. The one relevant section explicitly mentions 'Advancing the Responsible Acquisition of Artificial Intelligence in Government'. This suggests a direct connection to AI in government operations, but the remaining text does not relate to the societal impacts, data governance, system integrity, or robustness associated with AI technologies. The presence of this term justifies consideration under the Social Impact category, mainly because of its implications for responsible AI acquisition policies within government, signaling potential impacts on governance and ethical considerations. The other categories lack direct references to AI or its operational framework, and their scores reflect that gap.
Sector:
Government Agencies and Public Services (see reasoning)
The text contains references to executive communications and rules pertaining to a wide array of governmental functions, the only notable AI-related element being the mention of AI in government acquisition. The sector that best aligns with this term is 'Government Agencies and Public Services', as it deals directly with how government agencies are integrating and regulating AI for public services. Other sectors do not have significant mentions in the text, resulting in primarily low ratings for those areas. However, the clear reference to AI's engagement with government systems justifies a higher score under this specific sector.
Keywords (occurrence): artificial intelligence (1)
Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Summary: The TAKE IT DOWN Act mandates platforms to remove nonconsensual intimate visual depictions, addressing the exploitation of individuals, especially through deepfakes, and imposing penalties for violations.
Collection: Legislation
Status date: July 10, 2024
Status: Introduced
Primary sponsor: Maria Salazar
(21 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (July 10, 2024)
Societal Impact (see reasoning)
The TAKE IT DOWN Act specifically addresses the misuse of technology related to deepfakes, a form of AI-generated content. It emphasizes the need for accountability on digital platforms to prevent and mitigate the psychological and reputational harm caused by nonconsensual intimate visual depictions. This indicates a significant concern for the social impact of AI and the role it plays in the privacy and security of individuals. It does not deeply involve issues of data governance outside of consent and privacy aspects, nor does it address system integrity or robustness of AI systems directly. Therefore, the most fitting category is 'Social Impact.'
Sector: None (see reasoning)
The legislation pertains primarily to the use of deepfake technology within the context of individual rights and protections rather than any specific sector such as healthcare or government. However, it has implications for the technology sector, especially regarding the regulation of platforms that deal with user-generated content. Importantly, it addresses the impact of AI technologies on personal privacy within public and online spaces but does not engage with broader sector-specific applications. Therefore, it does not strongly fit within the predefined sectors.
Keywords (occurrence): deepfake (4)
Summary: H.R. 9639 asserts Congressional authority under Article I, Section 8 of the Constitution to legislate on a single subject: Artificial Intelligence.
Collection: Congressional Record
Status date: Sept. 17, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text explicitly mentions 'Artificial Intelligence,' making this legislation directly relevant to the category of Social Impact, as it implies a broader discourse on AI's implications on society. Despite this focus on AI, no other categories like Data Governance, System Integrity, or Robustness are mentioned or supported within the sparse text. This limits the relevance to mainly one category regarding social implications as outlined in the legislation.
Sector: None (see reasoning)
The reference to 'Artificial Intelligence' within the legislative context could potentially imply impacts across several sectors; however, the lack of additional context makes it difficult to align it specifically with areas like Politics and Elections or Government Agencies. The text simply does not provide enough substance to assess relevance in sectors beyond a general AI reference. Thus, the scores reflect low relevance overall, with only a slight nod toward the Politics and Elections sector due to the legislative framework.
Keywords (occurrence): artificial intelligence (1)
Description: Concerns social media privacy and data management for children and establishes New Jersey Children's Data Protection Commission.
Summary: This bill mandates social media platforms in New Jersey to enhance privacy and data management for children, requiring risk assessments and establishing the New Jersey Children's Data Protection Commission to oversee these standards.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Herbert Conaway
(2 total sponsors)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Jan. 9, 2024)
Societal Impact
Data Governance (see reasoning)
The text primarily addresses the privacy and data management concerns for children on social media platforms. It includes provisions for conducting Data Protection Impact Assessments, which inherently ties into the concept of assessing risks associated with AI algorithms used in social media, such as profiling and targeted advertising systems. This legislation emphasizes accountability for social media platforms in protecting children while navigating the intersection of AI technology and data handling, which hints at potential social impacts. Therefore, it is closely aligned with the Social Impact and Data Governance categories. Although it does cover security measures, the focus is primarily on data privacy rather than system integrity or performance benchmarks, yielding lower relevance in those areas. Hence, the scores reflect this differentiation.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text discusses regulations concerning social media platforms that children are likely to access, hence it is relevant to sectors involving children and digital interactions. This primarily aligns with Government Agencies and Public Services because it establishes a commission and sets legal requirements for online services likely used by children, which are inherently governmental in nature. The focus on data management and privacy for minors may also be tangentially relevant to the Healthcare sector, particularly concerning children's well-being, but this is less direct. Overall, the legislation reflects significant relevance to Government Agencies and Public Services, while other sectors are relevant to a lesser degree.
Keywords (occurrence): automated (1)
Description: As introduced, requires each department of the executive branch to develop a plan to prevent the malicious and unlawful use of artificial intelligence for the purpose of interfering with the operation of the department, its agencies and divisions, and persons and entities regulated by the respective department; requires each department to report its plan, findings, and recommendations to each member of the general assembly no later than January 1, 2025. - Amends TCA Title 2; Title 4; Title 8;...
Summary: The bill mandates Tennessee executive branch departments to create plans by January 2025 to prevent the malicious use of artificial intelligence, ensuring its responsible application across agencies.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Raumesh Akbari
(sole sponsor)
Last action: Assigned to General Subcommittee of Senate State and Local Government Committee (March 19, 2024)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses the utilization and regulation of artificial intelligence within state departments to prevent malicious and unlawful use. This indicates a strong relevance to the Social Impact category, as it seeks to mitigate risks to individuals and entities that could arise from AI applications. The text does not delve into data management or system integrity concerns but does imply a need for robustness in AI planning and implementation, suggesting that there are overarching performance concerns that may fit into the Robustness category, albeit not directly stated.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The legislation pertains predominantly to state executive agencies and their operation with AI, thus indicating considerable relevance to the Government Agencies and Public Services sector. It doesn't make any specific provisions for political use, judicial application, or private sector implications. This limits its relevance to other sectors while maintaining a moderate importance to the Academic and Research Institutions sector given that findings and recommendations could influence research and educational policies around AI. The remaining sectors receive a lower relevance score as they do not appear directly connected to the content of this Act.
Keywords (occurrence): artificial intelligence (3) machine learning (1) neural network (1)
Description: An Act amending Title 18 (Crimes and Offenses) of the Pennsylvania Consolidated Statutes, in computer offenses, providing for artificial intelligence; and imposing a penalty.
Summary: This bill mandates the use of watermarks on artificial intelligence-generated content to indicate its origin and imposes penalties for non-compliance, aiming to enhance transparency and accountability in digital media.
Collection: Legislation
Status date: Nov. 6, 2024
Status: Introduced
Primary sponsor: Johanny Cepeda-Freytiz
(15 total sponsors)
Last action: Referred to CONSUMER PROTECTION, TECHNOLOGY AND UTILITIES (Nov. 6, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text specifically addresses the use of artificial intelligence in creating content and establishes penalties for failing to watermark AI-generated materials. This directly relates to social impact as it deals with accountability in AI-generated content and potential misinformation. It also implicates data governance due to the requirement for watermarks and the definitions provided for transparency. System integrity is somewhat relevant since it discusses the security of identity and likeness, but it is not the primary focus. Robustness is less relevant here as the main goal is more about legal compliance rather than benchmarking AI performance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily pertains to the impact of AI in the creative industry regarding content generation and the necessity for watermarks, thereby fitting best with the Private Enterprises, Labor, and Employment sector due to its implications for businesses involved in content creation. It also relates to Government Agencies and Public Services because the law is delivered via legislative processes, impacting how public and private entities interact with AI-generated materials. There's less direct relevance to other sectors, such as Healthcare or the Judicial System.
Keywords (occurrence): artificial intelligence (10)
Summary: The bill addresses the youth mental health crisis, emphasizing bipartisan efforts to mitigate the negative impacts of social media on children, including introducing legislation to regulate social media use.
Collection: Congressional Record
Status date: Nov. 19, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text primarily discusses youth mental health and addresses the impact social media has on this issue. It emphasizes the effects of social media algorithms on mental health, which could relate to the discussion of social impact due to the societal consequences of social media use. It also introduces legislation tied to algorithm design and its implications for youth, directly connecting to the accountability of system developers and the psychological harm caused by these systems. However, there is minimal explicit focus on data governance, system integrity, or robustness regarding AI's more technical aspects, despite the mention of algorithms. Therefore, relevance to these categories is assigned lower scores overall.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The text is heavily focused on the issues surrounding youth mental health and social media, which may not specifically relate to any defined sector. It touches upon legislation designed to address mental health concerns amplified by social media usage but does not explicitly mention regulations for specific sectors like healthcare or government efficiency. The discussions primarily reflect a national concern that encompasses multiple sectors but does not fit neatly into one, hence leading to lower scores for sector-specific relevance.
Keywords (occurrence): algorithm (1)
Description: Concerning fabricated intimate or sexually explicit images and depictions.
Summary: Substitute House Bill 1999 establishes penalties for disclosing fabricated intimate or sexually explicit images, particularly involving minors, and amends relevant criminal laws to enhance protections against such depictions.
Collection: Legislation
Status date: March 14, 2024
Status: Passed
Primary sponsor: Tina Orwall
(16 total sponsors)
Last action: Effective date 6/6/2024. (March 14, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text specifically addresses the topic of fabricated intimate images, referencing the use of AI in the digitization process. This involves the creation or alteration of images using artificial intelligence and highlights the legal implications of such actions. Thus, it is directly related to issues concerning AI in a significant way, particularly in terms of societal impacts, data governance, system integrity, and robustness in the context of safeguarding minors from harmful content. Nevertheless, the main focus is on legal accountability rather than technical robustness or integrity standards, so some categories score higher than others based on direct relevance.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text predominantly focuses on legal aspects related to fabricated images, which has implications across various sectors. However, the strongest relevance is to the Judicial System, given the context of legal measures against unauthorized intimate images, particularly those involving minors. Some references to educational and research contexts are noted but are not the primary focus. Therefore, while multiple sectors might touch on themes in the bill, the direct implications and obligations surrounding its content primarily connect to the judicial system.
Keywords (occurrence): artificial intelligence (3) automated (3)

Description: A BILL to be entitled an Act to amend Title 40 of the Official Code of Georgia Annotated, relating to motor vehicles and traffic, so as to provide for the operation of miniature on-road vehicles on certain highways; to provide for standards for registration of such vehicles; to provide for issuance of license plates for miniature on-road vehicles; to provide for an annual licensing fee for such vehicles; to provide for issuance of certificates of title by the Department of Revenue for such ve...
Summary: This bill provides regulations for operating miniature on-road vehicles in Georgia, including registration standards, temporary permit procedures, and local authority restrictions, enhancing the state's vehicular laws.
Collection: Legislation
Status date: Feb. 29, 2024
Status: Engrossed
Primary sponsor: J Collins
(6 total sponsors)
Last action: Senate Read Second Time (March 11, 2024)
System Integrity (see reasoning)
The text primarily deals with the operation and regulation of miniature on-road vehicles, providing guidelines for registration, licensing, and operation within certain legal frameworks. However, it includes a mention of 'automated driving systems' in the context of defining 'minimal risk conditions' where vehicles might operate autonomously under certain failure scenarios. This reference suggests a minimal intersection with AI, particularly in understanding the implications of autonomous driving systems on safety, regulatory standards, and vehicle operation. Most aspects of the proposed legislation are administrative and regulatory rather than directly addressing the profound societal impacts or data governance issues related to AI as a broader concept. Thus, the categories of 'Data Governance' and 'Robustness' may not be fully applicable due to the lack of comprehensive focus or mandates on data management and performance benchmarks specific to AI systems. The category 'System Integrity' may be viewed as relevant considering the emphasis on operational standards and maintenance systems, although the emphasis is limited. Therefore, the relevance scores reflect these considerations.
Sector: None (see reasoning)
The text does not explicitly mention any of the sectors related to AI use or regulation. It primarily discusses motor vehicle regulation rather than AI applications in contexts like politics, government services, healthcare, or others listed. The closest applicable sector could be 'Government Agencies and Public Services', considering the involvement of the Department of Revenue and the electronic permit issuance system, but it is still tangential. Therefore, relevance to the sectors remains limited, with most receiving scores that indicate no meaningful connection to the text.
Keywords (occurrence): automated (1) autonomous vehicle (1)
Description: Creates the Protect Health Data Privacy Act. Provides that a regulated entity shall disclose and maintain a health data privacy policy that clearly and conspicuously discloses specified information. Sets forth provisions concerning health data privacy policies. Provides that a regulated entity shall not collect, share, or store health data, except in specified circumstances. Provides that it is unlawful for any person to sell or offer to sell health data concerning a consumer without first ob...
Summary: The Protect Health Data Privacy Act establishes stringent rules for the collection, sharing, and sale of health data in Illinois, ensuring consumer consent, clear policies, and protective measures against unauthorized use.
Collection: Legislation
Status date: Feb. 2, 2024
Status: Introduced
Primary sponsor: Celina Villanueva
(sole sponsor)
Last action: Referred to Assignments (Feb. 2, 2024)
Societal Impact
Data Governance (see reasoning)
The text discusses health data privacy and the explicit requirement for consumer consent regarding the collection and processing of health data, which touches on various AI-related considerations. For example, it mentions algorithms and machine learning in relation to processing health data, highlighting concerns about fairness and transparency that align with social impact. However, it primarily focuses on privacy and data rights, which emphasizes data governance. Therefore, while there are connections to social aspects regarding consumer rights and potential biases in AI-driven data processing, the primary focus remains on the correct use and governance of health data.
Sector:
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text is highly relevant to the Healthcare sector as it directly addresses health data rights, privacy policies, and the mandated consent process specifically within healthcare contexts. It outlines the responsibilities of regulated entities handling health data, thereby connecting directly with legislative measures aimed at protecting healthcare data. Although there are implications for other sectors related to the treatment of personal data, the primary focus is undoubtedly healthcare.
Keywords (occurrence): machine learning (1)
Description: Hospitals; emergency departments; licensed physicians. Requires any hospital with an emergency department to have at least one licensed physician on duty and physically present at all times. Current law requires such hospitals to have a licensed physician on call, though not necessarily physically present on the premises, at all times. The bill has a delayed effective date of July 1, 2025 and is identical to
Summary: The bill mandates that Virginia hospitals' emergency departments have at least one licensed physician on duty at all times to ensure patient safety and care quality.
Collection: Legislation
Status date: April 4, 2024
Status: Passed
Primary sponsor: Stella Pekarsky
(sole sponsor)
Last action: Governor: Acts of Assembly Chapter text (CHAP0505) (April 4, 2024)
This text focuses primarily on establishing regulations for hospitals and emergency departments, with a significant emphasis on ensuring the physical presence of licensed physicians. It does not explicitly mention issues related to AI, data management, system security, or performance benchmarks associated with AI systems. As such, the categories of Social Impact, Data Governance, System Integrity, and Robustness do not have a relevant connection to the text. Therefore, the scores reflect the lack of relevance to AI-related issues in these categories.
Sector:
Healthcare (see reasoning)
The text primarily pertains to the Healthcare sector as it discusses regulations concerning hospitals and emergency services, specifically detailing the requirements for licensed physicians on duty. Despite the absence of explicit AI references, these regulations could indirectly touch on the use of AI in healthcare, which might become relevant with further context. However, AI's role is not specified in this legislation, leading to a lower relevancy score. Hence, while the text does discuss healthcare regulations, it doesn't directly engage with the nuances of AI applications within that sector. Consequently, the other sectors, such as Politics and Elections, Government Agencies, and Academic and Research Institutions, have no connection with this text and receive the lowest scores.
Keywords (occurrence): artificial intelligence (2)
Summary: The bill honors Senator Laphonza R. Butler's notable contributions during her brief tenure, highlighting her advocacy for women's rights, mental health, and civic participation, alongside her historic significance as a trailblazer in the Senate.
Collection: Congressional Record
Status date: Dec. 3, 2024
Status: Issued
Source: Congress
Societal Impact (see reasoning)
The text is primarily a farewell speech for Senator Laphonza R. Butler and does not delve deeply into specific AI legislation or regulations. However, it makes a brief mention of Senator Butler's efforts to bring the workforce up to speed in the age of artificial intelligence, an effort that touches, at least implicitly, on all four categories: AI's impact on society, data management, system integrity, and robustness. Still, the general tone is more of a tribute than a detailed discussion of AI matters. Thus, overall relevance to all four categories is relatively low, with some connection to Social Impact due to the workforce aspect.
Sector: None (see reasoning)
The text focuses on Senator Butler's accomplishments and contributions while in office; it does not specifically address AI in the context of politics or any other sector. While there is a mention of efforts related to the workforce concerning AI, there are no specific legislative details that connect this directly to the sectors outlined. Therefore, scores reflect minimal relevance to most sectors, with a slight score given to Government Agencies and Public Services due to the mention of her championing various causes.
Keywords (occurrence): artificial intelligence (1)
Description: Creates Health Care Innovation Council within DOH for specified purpose; requires council to submit annual reports to Governor & Legislature; requires department to administer revolving loan program for applicants seeking to implement certain health care innovations in this state; authorizes department to contract with third party to administer program, including loan servicing, & manage revolving loan fund.
Summary: The bill establishes the Health Care Innovation Council in Florida, aiming to enhance healthcare delivery through innovative solutions and a revolving loan program for health care improvements.
Collection: Legislation
Status date: Jan. 8, 2024
Status: Introduced
Primary sponsor: Health Care Appropriations Subcommittee
(4 total sponsors)
Last action: Laid on Table, refer to SB 7018 (Feb. 21, 2024)
Societal Impact
Data Governance (see reasoning)
The bill creates the Health Care Innovation Council aimed at improving health care through innovative technologies, including possibly AI. Although the text does not explicitly mention AI, it references the integration of technologies and seeks to harness innovations. These innovations could include AI applications, thus the Social Impact category is relevant as it may pertain to the implications of AI in health care delivery, addressing efficiency, cost reduction, and quality of care. Data Governance is also relevant since the text emphasizes the need for standards and best practices around health data which are crucial for any AI implementations in healthcare. System Integrity might be less relevant since the focus is more on innovation than on security or transparency issues specific to AI. Robustness is not directly applicable as the legislation does not discuss performance benchmarks for AI systems or regulatory compliance. Overall, the Social Impact and Data Governance categories are most relevant to AI-related portions of the text.
Sector:
Healthcare (see reasoning)
The text relates to the Healthcare sector by establishing a council aimed at advancing health care innovations, including potentially those driven by AI. It addresses health care delivery models, better patient outcomes, and the use of technology to improve services, all of which indicate a focus on the Healthcare sector. While it addresses efficiencies in the healthcare workforce and service delivery, it stops short of evaluating other sectors such as politics or public services, so its relevance is mainly contained to Healthcare.
Keywords (occurrence): artificial intelligence (1)
Description: Amend The South Carolina Code Of Laws By Adding Section 39-5-190 So As To Provide That Every Individual Has A Property Right In The Use Of That Individual's Name, Photograph, Voice, Or Likeness In Any Medium In Any Manner And To Provide Penalties.
Summary: The bill establishes that individuals possess property rights over their name, image, voice, and likeness, providing legal protections and penalties for unauthorized commercial use.
Collection: Legislation
Status date: April 10, 2024
Status: Introduced
Primary sponsor: Brandon Guffey
(8 total sponsors)
Last action: Referred to Committee on Judiciary (April 10, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text of the Ensuring Likeness, Voice, and Image Security Act strongly relates to Social Impact, particularly concerning the protection of individuals' rights over their likenesses in the context of AI-generated content and deepfake technologies. This legislation addresses misuse and unauthorized usage, which aligns with accountability and consumer protections. Data Governance is also relevant due to the mention of algorithms and technology that generate likenesses, signaling a need for secure and ethical data management. System Integrity is relevant due to the mention of safeguards against unauthorized use of likenesses, emphasizing the control over AI-driven processes. Robustness is less applicable as this act does not set benchmarks for AI performance but focuses more on rights protection and legal frameworks.
Sector:
Judicial system
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The legislation has significant implications for multiple sectors. It affects the Judicial System by providing grounds for civil action against unauthorized use of likenesses, which could lead to legal disputes. It could also have implications for Private Enterprises, Labor, and Employment, particularly in industries where likeness and voice are monetized, such as entertainment. The text does not mention specific governmental applications of AI or its role in healthcare, academic institutions, or international settings, which makes those sectors irrelevant here. Its implications for AI raise concerns about ethical standards and consumer rights, linking it to broader governance concerns and possibly affecting Nonprofits and NGOs that advocate for individual rights.
Keywords (occurrence): algorithm (1)
Description: Imposes liability for misleading, incorrect, contradictory or harmful information to a user by a chatbot that results in financial loss or other demonstrable harm.
Summary: The bill holds chatbot proprietors liable for misleading or harmful information that leads to user financial loss, enforcing accountability and requiring clear user notifications about chatbot interactions.
Collection: Legislation
Status date: May 14, 2024
Status: Introduced
Primary sponsor: Kristen Gonzalez
(sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (May 14, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text clearly addresses the role of chatbots, which are identified as forms of artificial intelligence (AI) systems. The focus on liability for misinformation provided by chatbots falls under the category of Social Impact as it speaks to the ramifications of AI on users and the accountability of AI systems for potential harm. The need for transparency and consumer protection further emphasizes the societal consequences of AI interactions. Data Governance is relevant as the chatbot's responsibility includes providing accurate information and addressing inaccuracies, which aligns with the management of data accuracy. System Integrity is also moderately relevant since the legislation imposes obligations on chatbot operators to ensure their systems provide accurate information and maintain user trust. Robustness may be less relevant, but the emphasis on compliance with policies and user safeguards suggests a concern with standards for AI performance.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text pertains mainly to the use of chatbots within the Private Enterprises, Labor, and Employment sector, as it specifically addresses the liability of businesses and organizations that utilize chatbots to interact with customers. The regulation relates directly to how these businesses manage interactions with users and the accuracy of the information provided. While there is a role for Government Agencies and Public Services, which could be seen as relevant due to the potential for government entities as proprietors, the main thrust of the text is more aligned with private enterprises due to the imposition of liability on chatbot operators. No direct relevance to sectors like Politics and Elections, Judicial System, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified is established in the text.
Keywords (occurrence): artificial intelligence (1) chatbot (12)
Description: Revising conditions under which a person commits the offense of driving under the influence or boating under the influence, respectively; providing that the disposition of an administrative proceeding relating to a specified fine does not affect certain criminal action; adding specified grounds for issuance of a search warrant; revising probation guidelines for felonies in which certain substances are contributing factors, etc.
Summary: The bill modifies Florida laws regarding operating vehicles and vessels under the influence, defining terms, revising penalties, and establishing conditions for arrest and affirmative defenses, enhancing accountability for impaired driving.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Lori Berman
(2 total sponsors)
Last action: Died in Criminal Justice (March 8, 2024)
The text primarily addresses legal definitions and offenses related to driving and boating under the influence of alcohol and other impairing substances. It does not contain any explicit references or implications concerning AI technologies, their impact, or their governance. Hence, the legislation does not fall into any of the specified categories of Social Impact, Data Governance, System Integrity, or Robustness, scoring low in relevance to each. Instead, the focus remains entirely on human behavior concerning substance use and legal proceedings surrounding that behavior.
Sector: None (see reasoning)
Similarly, the text is concerned with legal terminology, penalties, and procedures related to DUI and BUI offenses. It does not engage with AI's role, regulation, or implications within any of the specified sectors (Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, Hybrid, Emerging, and Unclassified). There is no mention of, or relevance to, how AI may influence these sectors, leading to a score of 1 across all sector categories.
Keywords (occurrence): automated (2) autonomous vehicle (7)
Description: As introduced, requires a person to include a disclosure on certain content generated by artificial intelligence that the content was generated using artificial intelligence; makes it an unfair or deceptive act or practice under the Tennessee Consumer Protection Act of 1977 to distribute certain content generated using artificial intelligence without the required disclosure. - Amends TCA Title 47.
Summary: This bill amends Tennessee law to require disclosure for AI-generated content, ensuring transparency about its origins, and allows for legal action against those who fail to comply.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Bill Powers
(sole sponsor)
Last action: Assigned to General Subcommittee of Senate Commerce and Labor Committee (March 13, 2024)
Societal Impact
Data Governance (see reasoning)
The bill explicitly addresses the use of artificial intelligence, particularly focusing on the requirement for disclosures regarding AI-generated content. This connects strongly with issues of social impact, as it seeks to protect consumers from potential deception and misinformation generated by AI systems. It also touches on data governance as it mandates the disclosure of AI-generated content to provide transparency, which is a key aspect of ensuring that individuals are informed about the digital content they consume. These considerations directly relate to the implications of AI in society and the management of data that is manipulated or generated through AI frameworks. Given the nature of the content and its potential impacts, all categories are relevant to varying degrees.
Sector: None (see reasoning)
The bill primarily addresses consumer protection in the context of AI-generated content, which relates most closely to public welfare and consumer rights. It does not specifically address the use of AI in political systems, government services, judicial applications, healthcare, employment, research institutions, or international cooperation, thus lowering the relevance of the other sectors. The bill's focus on AI disclosures means it is primarily applicable to general consumers rather than specific sectors; this also explains why certain sectors received lower scores. Politics and Elections, Government Agencies and Public Services, the Judicial System, Healthcare, and the other sectors show minimal to no relevance. Therefore, only a couple of sectors are deemed relevant based on the implications discussed.
Keywords (occurrence): artificial intelligence (5) automated (1)
Description: COMMERCIAL LAW -- GENERAL REGULATORY PROVISIONS -- AUTOMATED DECISION TOOLS - Requires companies that develop or deploy high-risk AI systems to conduct impact assessments and adopt risk management programs, would apply to both developers and deployers of AI systems with different obligations based on their role in AI ecosystem.
Summary: The bill mandates companies developing or deploying high-risk AI systems to conduct impact assessments and implement risk management programs, ensuring accountability and reducing potential harm from AI decisions.
Collection: Legislation
Status date: March 22, 2024
Status: Introduced
Primary sponsor: Louis Dipalma
(6 total sponsors)
Last action: Committee recommended measure be held for further study (April 11, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation primarily addresses the development and deployment of high-risk AI systems, specifically focusing on required impact assessments and risk management programs. It discusses the implications of artificial intelligence (AI) in making consequential decisions that significantly affect individuals, illustrating the potential for bias in AI outputs and the need for transparency in AI deployment. Therefore, it has clear relevance to Social Impact due to its focus on accountability and the ethical implications of AI usage. Data Governance is also relevant as the law mandates accurate documentation and management of data used in AI systems to prevent bias and protect privacy. System Integrity is pertinent due to the consideration of risk management and oversight in AI processes. Robustness is relevant, but to a lesser extent, as it mentions general requirements for assessing the performance of AI systems without delving into specific benchmarks. Overall, the text's provisions primarily address social implications arising from AI deployment, with robust requirements for both developers and deployers of AI.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is highly relevant to various sectors that involve the application of AI technology. Specifically, it directly pertains to the Government Agencies and Public Services sector as it outlines regulatory requirements for developers and deployers of AI, which likely includes public service applications. It is also relevant to Private Enterprises, Labor, and Employment as it determines how businesses use AI systems to make significant decisions impacting individuals' lives, including employment and financial decisions. While it has implications for the Judicial System regarding potential bias in AI outcomes affecting legal aspects, its focus does not specifically target judicial processes. Healthcare may be considered due to the impact on AI systems used in healthcare decision-making, but it is not a primary focus. The legislation does not directly address political implications or nonprofit usage of AI, falling short in relevance to these sectors. Overall, the strongest connections are with Government Agencies and Public Services and Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (4)
Description: Amends the Criminal Code of 2012. Provides that certain forms of false personation may be accomplished by artificial intelligence. Defines "artificial intelligence".
Summary: The bill amends Illinois' Criminal Code to include provisions that allow for false personation via artificial intelligence, defining AI and expanding existing fraud laws.
Collection: Legislation
Status date: Jan. 19, 2024
Status: Introduced
Primary sponsor: Meg Loughran Cappel
(sole sponsor)
Last action: Rule 3-9(a) / Re-referred to Assignments (March 15, 2024)
Societal Impact
System Integrity (see reasoning)
The text discusses amendments to the Criminal Code specifically related to false personation facilitated by artificial intelligence (AI). The relevance to each category is as follows:
- **Social Impact:** The legislation clearly addresses potential harms and impacts on society due to AI's ability to facilitate false personation. This has implications for trust in social transactions and public interactions, leading to significant societal issues. Thus, it is deemed extremely relevant.
- **Data Governance:** While the document defines AI, it does not specifically address issues around data collection or management, which is central to the data governance category. This category is slightly relevant, mainly due to the potential underlying requirements for accurate data in AI systems, which are not explicitly mentioned in this text.
- **System Integrity:** The legislation implies a need for control over AI in preventing misuse (false personation). However, it does not extend to broader governance or technical standards relevant to system integrity, resulting in a moderately relevant assessment.
- **Robustness:** There are no mentions of benchmarks, auditing, or assessments of AI performance in this text. Therefore, this category is considered not relevant.
Sector:
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation primarily addresses the use of AI in a criminal context, focusing on implications for false personation. Each sector's relevance is evaluated as follows:
- **Politics and Elections:** The text does not address the regulatory impacts of AI in political contexts, making it not relevant.
- **Government Agencies and Public Services:** Though it pertains to criminal law, it does not explicitly address the use of AI in governmental service delivery. Therefore, it is considered slightly relevant.
- **Judicial System:** The text pertains to criminal law and could have implications for how AI is treated within the judicial framework; however, it does not outline specific regulations for AI in judicial contexts, hence it is moderately relevant.
- **Healthcare:** There are no direct implications concerning healthcare applications of AI, resulting in no relevance here.
- **Private Enterprises, Labor, and Employment:** While it concerns false personation, this does not touch on employment practices directly related to AI, making this sector not relevant.
- **Academic and Research Institutions:** The text does not touch on educational or research implications, so relevance is not applicable.
- **International Cooperation and Standards:** There are no discussions about international standards or cooperation regarding AI in this text, leading to no relevance.
- **Nonprofits and NGOs:** The text does not address AI within the nonprofit sector, resulting in no relevance.
- **Hybrid, Emerging, and Unclassified:** Given that the legislation is specific to criminal implications related to AI and does not fit neatly into other categories, this sector receives a moderate relevance score.
Keywords (occurrence): artificial intelligence (3) automated (1)
Description: To require the Election Assistance Commission to develop voluntary guidelines for the administration of elections that address the use and risks of artificial intelligence technologies, and for other purposes.
Summary: The "Preparing Election Administrators for AI Act" mandates the Election Assistance Commission to create voluntary guidelines addressing the use and risks of artificial intelligence in election administration.
Collection: Legislation
Status date: May 10, 2024
Status: Introduced
Primary sponsor: Chrissy Houlahan
(9 total sponsors)
Last action: Referred to the House Committee on House Administration. (May 10, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses the use and risks of artificial intelligence technologies in the context of election administration. It discusses the need for voluntary guidelines on how AI can affect elections, including the risks and benefits associated with its use, highlighting issues such as cybersecurity, the accuracy of election information, and the potential for disinformation. Given the connections to social issues, to the security and transparency of AI systems in elections, and to the design of governance measures around AI use, the categories reflect varying degrees of relevance. The 'Social Impact' category is highly relevant due to attention to biases and misinformation, while 'Data Governance', 'System Integrity', and 'Robustness' may relate only indirectly and are less emphasized given the text's focus on election administration.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text centers on the implications of AI technologies specifically within the electoral process, highlighting guidelines for election officials. This context strongly aligns with the 'Politics and Elections' sector as it addresses the intersection of AI and democratic processes. While 'Government Agencies and Public Services' is also relevant due to the involvement of the Election Assistance Commission, it is secondary to the more focused implications for politics and elections. Other sectors such as 'Judicial System', 'Healthcare', and 'Private Enterprises' do not apply here, as they do not intersect with the content of this text.
Keywords (occurrence): artificial intelligence (7)