4944 results:
Description: Establishing the Algorithmic Addiction Fund; providing that the Fund includes all revenue received by the State from a judgment against, or settlement with, technology conglomerates, technology companies, social media conglomerates, or social media companies relating to claims made by the State; requiring the Secretary of Health to develop certain goals, objectives, and indicators relating to algorithm addiction treatment and prevention efforts; requiring the Secretary to establish a certain ...
Summary: The bill establishes the Algorithmic Addiction Fund, aimed at tackling algorithmic addiction through treatment and prevention initiatives, funded by revenue from judgments against, or settlements with, technology companies.
Collection: Legislation
Status date: Jan. 31, 2024
Status: Introduced
Primary sponsor: Katie Hester
(sole sponsor)
Last action: Hearing 2/20 at 1:00 p.m. (Jan. 31, 2024)
Societal Impact (see reasoning)
The text centers on the establishment of an Algorithmic Addiction Fund aimed at addressing the health impacts associated with algorithmic addiction, which relates to the broader social implications of AI technology. The fund intends to provide resources for treatment, prevention, and educational campaigns, positioning it within the context of mental health and societal well-being. The relevance to Social Impact is strong as it discusses the societal consequences of technology use, particularly around mental health issues that could stem from AI interactions. For Data Governance, while the legislation does pertain to funding allocation and some oversight mechanisms, it does not focus specifically on data management or privacy issues. System Integrity and Robustness are also less relevant because the text is primarily about treatment and intervention rather than the technical aspects of AI system performance, security, or standards. Thus, only Social Impact is scored highly for its direct relevance.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text primarily addresses algorithmic addiction in a public health context, making the Healthcare and Government Agencies and Public Services sectors the most pertinent, even though neither is named explicitly. It incorporates discussions of treatment, intervention programs, and cooperation with various stakeholders, indicating a focus on health and community outreach. Consequently, the relevance to Healthcare is moderate due to the focus on issues intersecting with mental health and addiction, while the Government Agencies and Public Services sector is somewhat relevant, given the involvement of the Secretary of Health and state agencies in administering the fund. Other sectors, such as Politics and Elections, Private Enterprises, and the Judicial System, have no significant relevance here, as the text does not explicitly engage with those themes.
Keywords (occurrence): algorithm (1)
Description: An Act amending Title 50 (Mental Health) of the Pennsylvania Consolidated Statutes, providing for protection of minors on social media; and imposing penalties.
Summary: This Pennsylvania bill aims to protect minors on social media by requiring parental consent for account creation, monitoring chats for harmful content, and imposing penalties on platforms that fail to comply.
Collection: Legislation
Status date: May 8, 2024
Status: Engrossed
Primary sponsor: Brian Munroe
(20 total sponsors)
Last action: Referred to COMMUNICATIONS AND TECHNOLOGY (May 28, 2024)
Societal Impact
Data Governance (see reasoning)
The text primarily addresses legislation concerning the protection of minors on social media. It focuses on issues such as the monitoring of chats for flagged content and the requirement for parental consent when minors create social media accounts. These aspects indicate a focus on the social impact of AI systems as they relate to young users, particularly regarding emotional and psychological risk factors associated with social media. Therefore, it is deemed very relevant to the Social Impact category. The Data Governance category is also moderately relevant due to the references to data protection, consent, and the mining of data concerning minors. However, there are fewer elements that connect directly to System Integrity and Robustness, primarily because this act does not explicitly reference the security or performance metrics of AI systems, thus receiving lower or negligible relevance scores in these categories. Overall, it can be concluded that the legislation is aimed at addressing the social implications of AI-driven social media, rather than the operational integrity or robustness of AI systems themselves.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The bill discusses AI's relevance to minors engaging with social media platforms, which is closely related to the Private Enterprises, Labor, and Employment sector, as social media platforms are business enterprises that operate under specific regulations concerning user data and protection. It mentions social media companies explicitly, emphasizing their accountability and the need for regulations to safeguard minors accessing their services. The relevance to Government Agencies and Public Services is also notable, as the legislation affects public welfare, particularly minors' safety online, thus earning a moderately high relevance score. However, sectors such as Politics and Elections, Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified have no strong connection to the text, resulting in low scores.
Keywords (occurrence): automated (5) recommendation system (4)
Summary: The bill involves the submission of various executive communications, including funding opportunities and regulatory updates, from federal agencies to relevant congressional committees for oversight and consideration.
Collection: Congressional Record
Status date: July 5, 2024
Status: Issued
Source: Congress
The text primarily consists of executive communications and does not explicitly discuss any aspects of AI technology. While one of the letters mentions 'Quality Control Standards for Automated Valuation Models', the text does not delve into matters such as AI's societal impact, data governance, system integrity, or performance benchmarks relevant to AI. Therefore, the relevance of all categories is low, since the text does not engage with the broader implications of AI.
Sector: None (see reasoning)
Similar to the category reasoning, the text does not specifically address any sectors related to AI use. While it references the Department of Labor and some regulatory actions, it does not indicate any application or regulation of AI in the specified sectors, leading to consistently low relevance across all sectors.
Keywords (occurrence): automated (1)
Description: An Act to amend 632.85 (title) and 632.85 (3); and to create 632.85 (1) (d) and 632.851 of the statutes; Relating to: prior authorization for coverage of physical therapy, occupational therapy, speech therapy, chiropractic services, and other services under health plans.
Summary: This bill mandates prior authorization guidelines for physical, occupational, and speech therapy, and chiropractic services, aiming to expedite approvals and reduce barriers to care under health plans in Wisconsin.
Collection: Legislation
Status date: April 15, 2024
Status: Other
Primary sponsor: Nancy VanderMeer
(23 total sponsors)
Last action: Failed to pass pursuant to Senate Joint Resolution 1 (April 15, 2024)
System Integrity (see reasoning)
The text primarily revolves around prior authorization for various health services and does not specifically address the broader social implications of AI or any unique impacts that AI systems have on individuals or community welfare. Although algorithms used for managing health coverage are mentioned, the discussion lacks the depth on ethical considerations or biases stemming from AI use in health care that would be needed to place it within Social Impact, so it receives a low relevance score. For Data Governance, while algorithms are mentioned, the legislation does not delve into the accuracy and management of data within AI systems or detailed data governance issues, positioning it at a low relevance score. System Integrity is somewhat relevant due to the emphasis on algorithm transparency and accountability, indicating a need for secure AI methodologies; however, the extent of AI governance mentioned is limited. Robustness is not significantly represented due to the absence of a focus on performance benchmarks or compliance issues pertinent to AI systems. Overall, the AI-related elements primarily highlight algorithm transparency rather than the broader implications or engagements typically associated with AI and its governance.
Sector:
Healthcare (see reasoning)
The document discusses health care services and prior authorizations for physical, occupational, and speech therapies. It explicitly addresses health plans but does not touch on the application of AI in the delivery of these services, so its relevance to Healthcare is moderate, leaning toward low. The legislation does not reference political campaign uses or governmental processes, making it irrelevant to Politics and Elections and Government Agencies and Public Services. No AI implications in judicial contexts are present, so the Judicial System is not applicable. For Private Enterprises, Labor, and Employment, while the bill has implications for providers of care under managed health services, it does not address significant labor practices or corporate governance in AI contexts. There is only minor relevance to Academic and Research Institutions, with no mention of academic or research contributions regarding AI. The bill's health insurance mandate touches only slightly on the idea of international cooperation and does not engage with global AI ethics or standards. Thus, the bill remains largely unlinked to most sector categories, with minimal relevance across them.
Keywords (occurrence): algorithm (2)
Description: All-terrain vehicles; definition
Summary: Senate Bill 1052 in Arizona amends definitions regarding all-terrain vehicles in transportation statutes, clarifying specifications for vehicles designed for recreational nonhighway travel and their operation on public highways.
Collection: Legislation
Status date: March 12, 2024
Status: Engrossed
Primary sponsor: Frank Carroll
(3 total sponsors)
Last action: House third reading FAILED voting: (19-39-2-0) (June 4, 2024)
System Integrity (see reasoning)
The legislation has only a minor focus on AI-related terms within a context primarily concerned with the definitions surrounding various types of vehicles, particularly autonomous and automated driving systems. While it defines 'automated driving systems' and 'autonomous vehicles,' the scope of these definitions is narrow and does not delve into broader implications for technology ethics, bias, societal impact, or cooperative regulations concerning AI's role in the transportation sector. Therefore, its relevance to the broader AI-related categories is limited, with minimal societal or governance implications directly related to AI in this context.
Sector:
Government Agencies and Public Services (see reasoning)
The text mainly concerns the definitions and regulatory framework around vehicles, particularly those with automated driving features. The mention of 'automated driving systems' and 'autonomous vehicles' lends some relevance to the Government Agencies and Public Services sector, since these definitions affect public safety and regulatory oversight. However, there is no extensive discussion of how these technologies are used or regulated by government agencies or in public services beyond basic definitions, leading to a lower relevance score overall.
Keywords (occurrence): automated (8) autonomous vehicle (2)
Description: Prohibits users of algorithmic decision-making from utilizing algorithmic eligibility determinations in a discriminatory manner. Requires users of algorithmic decision-making to send corresponding notices to individuals whose personal information is used. Requires users of algorithmic decision-making to submit annual reports to the Department of the Attorney General. Provides for appropriate means of civil enforcement.
Summary: The bill addresses algorithmic discrimination in Hawaii, prohibiting discriminatory use of algorithmic decision-making in determining access to important life opportunities and requiring disclosures and annual reports from entities that use such systems.
Collection: Legislation
Status date: Jan. 17, 2024
Status: Introduced
Primary sponsor: David Tarnas
(13 total sponsors)
Last action: Referred to HET/JHA, referral sheet 1 (Jan. 24, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses algorithmic discrimination and mentions the use of algorithmic decision-making directly connected to Artificial Intelligence and Machine Learning. This falls under the Social Impact category as it aims to prevent discriminatory practices that affect individuals' access to life opportunities based on algorithmic processes. [...] The Data Governance category is also highly relevant here as the bill discusses personal information, data auditing practices, privacy protections, and mandates transparency in how data is used. System Integrity is relevant as well since the bill mandates human oversight to correct inaccurate determinations and includes auditing provisions to ensure compliance. Lastly, Robustness is relevant due to the emphasis on auditing algorithms and ensuring compliance with ethical guidelines and standards to avoid biases. Overall, all four categories have strong relevance due to the clear connections between the bill’s purposes and protections surrounding algorithmic processes tied to AI.
Sector:
Government Agencies and Public Services
Judicial System
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The legislation mentions the use of algorithms and the discrimination risks they pose, making it very relevant to the Private Enterprises, Labor, and Employment sector, as companies utilizing AI systems must adhere to the new rules to avoid discriminatory practices. The Government Agencies and Public Services and Judicial System sectors are both relevant, as this law could affect how government services implement algorithms and what legal repercussions ensue. Academic and Research Institutions could also be somewhat relevant, as research on fairness in AI and algorithmic decisions may fall under the purview of academic studies, though the connection is less direct than for other sectors. The Healthcare sector has the least relevance here, as the bill does not specifically pertain to healthcare applications. Overall, the most relevant sectors reflect direct impacts on private entities and public agencies.
Keywords (occurrence): artificial intelligence (2) machine learning (2) algorithm (3)
Summary: The bill establishes requirements for the physical protection, notification, storage, transmission, and destruction of Unclassified Controlled Nuclear Information (UCNI) to prevent unauthorized access and ensure compliance with federal regulations.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2024
Status: Issued
Source: Office of the Federal Register
Data Governance
System Integrity (see reasoning)
The text focuses primarily on the handling and protection of sensitive information known as UCNI (Unclassified Controlled Nuclear Information). It details protocols for access, physical protection, transmission methods, and how to process information through automated systems, particularly emphasizing encryption algorithms. However, there is no specific focus on AI technologies or their societal impacts, data governance, system integrity, or robustness in AI systems. Therefore, the relevance to the provided categories is limited.
Sector:
Government Agencies and Public Services (see reasoning)
The provided text pertains to protocols concerning sensitive information management rather than specific applications or impacts of AI across defined sectors. While there is mention of Automated Information Systems (AIS) which indirectly relates to data management and system processes, it does not directly address AI implications in sectors such as healthcare or government services. The text mainly discusses regulatory frameworks applicable to sensitive information, thus limiting its relevance to specific sectors.
Keywords (occurrence): automated (2)
Description: An Act to amend 11.1303 (title); and to create 11.1303 (2m) of the statutes; Relating to: disclosures regarding content generated by artificial intelligence in political advertisements, granting rule-making authority, and providing a penalty. (FE)
Summary: The bill mandates disclosures for political ads featuring AI-generated content, requiring statements indicating AI involvement. It establishes penalties for noncompliance and grants rule-making authority to the Ethics Commission.
Collection: Legislation
Status date: April 15, 2024
Status: Other
Primary sponsor: Romaine Quinn
(31 total sponsors)
Last action: Failed to pass pursuant to Senate Joint Resolution 1 (April 15, 2024)
Societal Impact (see reasoning)
The text focuses primarily on the requirement for disclosures regarding content generated by artificial intelligence in political advertisements. This highlights the legislation's direct implications for society, especially with respect to informing the public about the authenticity of information they encounter, which ties directly into the social impact of AI. The legislation addresses the risk of misinformation and manipulation that can arise from synthetic media in political discourse. The bill also imposes penalties for non-compliance, further emphasizing accountability. Social Impact is therefore rated as very relevant. Data Governance is slightly relevant as it pertains to the use of data in AI, but it is not the central focus of the text. System Integrity is not relevant because the text does not address the security or transparency of AI systems. Robustness is not relevant, since there are no discussions of benchmarks or performance measures for AI systems in the text.
Sector:
Politics and Elections (see reasoning)
The legislation specifically addresses the use of AI in political advertisements, indicating a clear focus on political and electoral processes. It sets out requirements for disclosure of AI-generated content in political communications, directly linking the regulation of AI to political integrity and voter awareness. Politics and Elections is therefore assessed as extremely relevant. The Government Agencies and Public Services sector is slightly relevant because government bodies will likely be involved in enforcing this legislation, but it is not the main focus of the bill. Other sectors, such as the Judicial System, Healthcare, Private Enterprises, Academic Institutions, International Cooperation, Nonprofits, and the Hybrid sectors, have no relevance to this bill, as it is narrowly focused on political advertising.
Keywords (occurrence): artificial intelligence (2) synthetic media (10)
Description: Amends the Illinois Credit Union Act. Provides that a credit union regulated by the Department of Financial and Professional Regulation that is a covered financial institution under the Illinois Community Reinvestment Act shall pay an examination fee to the Department subject to the adopted by the Department. Provides that the aggregate of all credit union examination fees collected by the Department under the Illinois Community Reinvestment Act shall be paid and transferred promptly, accompa...
Summary: The bill amends the Illinois Credit Union Act, updating regulations and fee structures for credit unions, aiming to ensure effective oversight and operational funding for the Department of Financial Institutions.
Collection: Legislation
Status date: May 22, 2024
Status: Enrolled
Primary sponsor: David Koehler
(4 total sponsors)
Last action: Sent to the Governor (June 20, 2024)
The text primarily addresses regulations related to credit unions, specifically concerning examination fees and the structure of governance for credit unions in Illinois. However, it does not make any direct or explicit references to AI, algorithms, machine learning, or any related technologies. Therefore, all categories that encompass AI-related impacts, data management, system integrity, and performance benchmarks are deemed not relevant since the content does not touch upon AI topics, implications, or regulations. The legislation focuses solely on the financial activities and supervision of credit unions without implicating AI in any form.
Sector: None (see reasoning)
Similar to the analysis on the categories, the text focuses on legal and administrative details related to credit unions. There is no mention of AI applications or regulations across various sectors including politics, government, judicial, healthcare, etc. Therefore, it is not relevant to any of the sectors outlined. It only addresses credit unions and their examination fees, lacking references to the impact or control of AI in the specified sectors.
Keywords (occurrence): algorithm (1)
Summary: The bill establishes guidelines for counterintelligence evaluations, including polygraph examinations for Department of Energy employees, to safeguard national security while protecting individual rights.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2024
Status: Issued
Source: Office of the Federal Register
The provided text primarily focuses on the guidelines, requirements, and definitions related to polygraph examinations within the Department of Energy and does not directly address AI systems, their impacts, governance, integrity, or robustness. It discusses aspects of counterintelligence evaluations and the rights of individuals involved, without any indication of AI or related technologies. Therefore, all categories are scored as 1, indicating not relevant.
Sector: None (see reasoning)
The text is oriented towards polygraph protocols and counterintelligence within the DOE and does not address any application or regulation of AI technologies within sectors like politics, healthcare, or public services. Thus, all sectors score a 1 for not relevant.
Keywords (occurrence): automated (1)
Summary: Executive Order 14091 aims to enhance racial equity and support underserved communities in the U.S. by establishing equitable policies, improving government services, and fostering collaboration across federal agencies.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2024
Status: Issued
Source: Office of the Federal Register
Societal Impact
Data Governance (see reasoning)
The text of Executive Order 14091 makes explicit references to the impact of AI technologies, specifically mentioning the need to root out bias in the design and use of new technologies such as artificial intelligence. This indicates a strong focus on ensuring equitable outcomes related to AI systems, making the relevance to the Social Impact category very significant. The text discusses the broader implications of equitable policy-making and service delivery that could be influenced by AI systems, which reinforces its association with Social Impact. The references to equitable data practices suggest some aspects of Data Governance, particularly regarding bias and equitable access to resources and opportunities, but they are less pronounced than the focus on social implications. System Integrity and Robustness do not appear to be directly discussed, although considerations for transparency in AI systems could tangentially fit them. Hence, Social Impact scores the highest for its emphasis on addressing systemic biases potentially exacerbated by AI technologies, followed by some relevance in Data Governance for the mention of data practices related to equity.
Sector:
Government Agencies and Public Services (see reasoning)
The Executive Order primarily addresses issues of racial equity and support for underserved communities through the Federal Government. The text does not specifically target the use and regulation of AI in the context of political campaigns (Politics and Elections) nor does it discuss legal aspects (Judicial System). However, it does offer some insights into how government agencies can leverage AI in ensuring equitable outcomes and service delivery. This indicates a moderate relevance to Government Agencies and Public Services, particularly on how AI can aid in these broader goals. The text's focus on equity and community welfare, while touching on technology, does not align well with sectors like Healthcare, Private Enterprises, and Academic Institutions. The scope of AI's implementation in this executive order does not extend effectively into International Cooperation or NGO contexts, thereby rendering them irrelevant. Overall, the Government Agencies and Public Services sector receives a moderate score due to the text's potential implications for these entities using AI systems to foster progressive policies.
Keywords (occurrence): artificial intelligence (2) automated (2)
Description: A BILL to be entitled an Act to amend Chapter 2 of Title 21 of the Official Code of Georgia Annotated, relating to elections and primaries generally, so as to establish the criminal offense of election interference with a deep fake and solicitation of such; to provide for definitions; to provide for exceptions; to provide for punishment; to provide for the State Election Board to publish results of investigations into such offenses; to provide for legislative findings; to provide for related ...
Summary: Senate Bill 392 establishes the crime of election interference through deep fakes in Georgia, defining related offenses and penalties to protect election integrity amid rising artificial intelligence use.
Collection: Legislation
Status date: Jan. 24, 2024
Status: Introduced
Primary sponsor: John Albers
(12 total sponsors)
Last action: Senate Read and Referred (Jan. 25, 2024)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses legislation related to the use of 'deep fake' technologies, which fall under the broader category of Artificial Intelligence (AI) specifically concerning misinformation in the electoral process. The focus on election interference through deep fakes clearly pertains to societal implications (Social Impact), as it affects the integrity of elections, the public's perception of fairness, and the potential for psychological harm through deception. It also raises concerns regarding the accountability of AI developers in the context of misinformation and potential electoral manipulation, which ties it to social regulation standards. The relevance to Data Governance is less direct, as the primary focus is not on data management or collection but rather the implications of media generated via AI technologies. However, addressing how AI is utilized in the electoral context may touch upon governance aspects indirectly due to the need for responsible use of AI systems. System Integrity may be applicable in ensuring the integrity of election processes against AI-driven threats, while Robustness could relate to the standards of deep fake detection technologies but is not the primary focus of this text.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text relates closely to the Politics and Elections sector due to its direct focus on establishing laws against the use of deep fakes in electoral processes. It incorporates legislative measures aimed at protecting democracy and ensuring fair elections, which is fundamental to this sector. Government Agencies and Public Services has moderate relevance, as the bill implies the involvement of state bodies such as the State Election Board in enforcing these regulations. Other sectors, such as the Judicial System and Healthcare, hold no relevance here, as the text does not pertain to judicial proceedings or medical applications. The legislation also has no direct implications for Private Enterprises, Academic Institutions, or International Cooperation and Standards, as its primary focus is on electoral integrity and misinformation rather than business practices or educational contexts.
Keywords (occurrence): artificial intelligence (2) deepfake (8)
Description: A bill to improve menopause care and mid-life women's health, and for other purposes.
Summary: The Advancing Menopause Care and Mid-Life Women’s Health Act aims to enhance menopause care and mid-life women's health through improved research, public health initiatives, education, and training for healthcare professionals.
Collection: Legislation
Status date: May 2, 2024
Status: Introduced
Primary sponsor: Patty Murray
(17 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (May 2, 2024)
The text does not explicitly address any aspects of AI systems, their societal impacts, or relevant governance issues concerning AI technology. Although there is a vague reference to diagnostic tools that utilize artificial intelligence, this is not sufficient to engage with the broader implications of AI as highlighted in the defined categories. The core focus of the legislation appears to be menopause care and mid-life women's health rather than AI-related concerns, resulting in low relevance across the categories.
Sector: None (see reasoning)
The text does not specifically address the deployment of AI technologies in the sectors listed, aside from a generic mention of AI in the context of diagnostic tools. It primarily concentrates on health policies, research funding, and women's health issues rather than the application of AI or its governance within these sectors. Therefore, the relevance is minimal and scores considerably low.
Keywords (occurrence): artificial intelligence (1)
Description: As introduced, enacts the "Tennessee Artificial Intelligence Advisory Council Act." - Amends TCA Title 4.
Summary: The bill establishes the Tennessee Artificial Intelligence Advisory Council to assess AI's impact on the economy and labor market, develop strategic initiatives, and recommend policy changes for responsible AI use by 2025.
Collection: Legislation
Status date: Jan. 31, 2024
Status: Introduced
Primary sponsor: Dawn White
(sole sponsor)
Last action: Assigned to General Subcommittee of Senate Government Operations Committee (March 13, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses AI through the establishment of the Tennessee Artificial Intelligence Advisory Council, outlining its roles and responsibilities related to the impact, governance, and ethical use of AI systems. Several components aim to understand and guide the economic and social implications of AI, such as labor market impacts and ethical regulation, which resonate strongly with the Social Impact category. The need for a governance framework indicates relevance to Data Governance, as it relates to the responsible implementation of AI within the state. Additionally, the mention of risk analyses and beneficial use cases indicates an emphasis on system considerations tied to System Integrity. However, the text does not specify benchmarks or auditing processes critical for Robustness, though the development of guidelines indirectly suggests a commitment to robust system performance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text emphasizes the use of AI in various sectors through the creation of an advisory council that includes representatives from economic development, labor, education, and technology. This shows direct intent to address how AI should be integrated within the Government Agencies and Public Services sector, as the council aims to improve state operations via AI. Moreover, the consideration of labor market impacts demonstrates significance within the Private Enterprises, Labor, and Employment sector. Input from educational representatives hints at relevance in Academic and Research Institutions, particularly in enhancing educational systems in response to AI changes. However, the implications for Politics and Elections or Nonprofits and NGOs are less pronounced.
Keywords (occurrence): artificial intelligence (18)
Description: Prohibits the distribution of synthetic media messages in advertisements before an election that a person knows or should have known are deceptive and fraudulent deepfakes of a candidate. Effective 7/1/3000. (HD1)
Summary: The bill prohibits the distribution of deceptive and fraudulent deepfake advertisements about candidates within 90 days of an election to combat political misinformation in Hawaii.
Collection: Legislation
Status date: March 5, 2024
Status: Engrossed
Primary sponsor: Trish La Chica
(16 total sponsors)
Last action: Referred to JDC. (March 7, 2024)
Societal Impact (see reasoning)
The text directly addresses the impact of deepfake technology on elections and public trust, particularly focusing on deceptive and fraudulent representations of political candidates. It seeks to regulate the use of synthetic media in political advertisements, which is highly relevant to the Social Impact category due to the potential harm to democratic processes and public perception. The legislation aims to prevent misinformation and its negative societal effects, hence it scores a 5. It also touches on potential judicial processes related to misinformation, including avenues for litigation by persons harmed by deceptive practices; however, this is not significant enough to affect the scores for System Integrity or Robustness. The emphasis on accountability and prevention of harm positions this text primarily within the Social Impact category, hence the relevance of the other categories is minimal. Regarding Data Governance, there is an implied concern about data integrity related to the production of synthetic media, but it is secondary to the main focus on public trust and misinformation, leading to a score of 2. The legislation's focus on preventing fraudulent and deceptive practices lessens its relevance to System Integrity (score of 2) and Robustness (score of 2), as those categories focus on structural and functional aspects of AI systems rather than on their impact. Therefore, Social Impact stands out as the most relevant category for this text.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text clearly relates to the intersection of artificial intelligence with politics, specifically how AI-generated content like deepfakes affects elections and voter perception. Therefore, it is strongly relevant to the Politics and Elections sector, earning a score of 5. While there is some relevance to Government Agencies and Public Services due to the legislative oversight implied through the campaign spending commission's investigative powers, this connection is weaker, meriting a score of 3. The text does not specifically discuss AI applications in the Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, or Nonprofits and NGOs, leading to scores of 1 for those sectors. As the legislation does not fall under emerging or hybrid sector discussions either, the Hybrid, Emerging, and Unclassified category also merits a score of 1. Thus, this text's primary relevance is found within the Politics and Elections sector, with lesser significance for government regulatory oversight.
Keywords (occurrence): deepfake (9) synthetic media (10)
Summary: The bill proposes modernization of the Bureau of Industry and Security's IT systems for enhanced export control, utilizing advanced technologies, while authorizing $100 million in funding over four years.
Collection: Congressional Record
Status date: July 11, 2024
Status: Issued
Source: Congress
Societal Impact
Data Governance
System Integrity (see reasoning)
The text of Senate Amendment 2614 explicitly mentions the incorporation of 'artificial intelligence,' 'machine learning,' and 'modern data sharing interfaces' within the context of the Bureau of Industry and Security's modernization of information technology systems. This relates especially to the collection and management of data (Data Governance), as it discusses enhancing productivity and efficiency, streamlining data processes, and ensuring cyber security. The necessity for transparency and integrity in the Bureau's functioning aligns with concerns in System Integrity. Moreover, the 'effective use of export controls' and the associated impacts signal consideration for broader social impacts such as privacy, security, and economic implications, making it relevant to Social Impact as well. Robustness scores lower as there is no specific mention of benchmarking or compliance metrics for evaluating AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
International Cooperation and Standards (see reasoning)
This amendment primarily pertains to Government Agencies and Public Services as it discusses modernizing the Bureau of Industry and Security's operations. It focuses on the adoption of advanced technologies within a government agency, improving its capabilities to manage export controls and interact with data efficiently. Although it touches on aspects that might concern Private Enterprises due to its implications for industry and commerce, the text's focus remains firmly rooted in government functions and responsibilities. As such, it does not strongly pertain to other sectors like Judiciary or Healthcare.
Keywords (occurrence): artificial intelligence (1)
Description: To amend the Controlled Substances Act to require electronic communication service providers and remote computing services to report to the Attorney General certain controlled substances violations.
Summary: The Cooper Davis and Devin Norring Act mandates electronic communication providers report certain controlled substance violations to the Attorney General, aiming to curb the distribution of illicit drugs and enhance law enforcement efforts.
Collection: Legislation
Status date: July 2, 2024
Status: Introduced
Primary sponsor: Angela Craig
(7 total sponsors)
Last action: Referred to the Committee on Energy and Commerce, and in addition to the Committee on the Judiciary, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (July 2, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text includes specific references to AI-related technology, particularly in the context of how communication service providers can use algorithms and machine learning to identify and report illegal activities related to controlled substances. This raises issues around the algorithms' capabilities and their implications for reporting mechanisms and accountability, which tie directly into the categories defined. Therefore, the categories of Social Impact, Data Governance, and System Integrity are relevant.
Sector:
Government Agencies and Public Services (see reasoning)
The text does not explicitly address any specific sectors, as it centers on legal amendments concerning controlled substances and reporting duties for providers. While there are implications for Government Agencies and Public Services in terms of enforcement, it does not directly discuss the regulation or use of AI in sectors such as Politics or Healthcare. Therefore, all sector scores are low, with the most relevant being Government Agencies and Public Services, given the reporting requirements to the Attorney General.
Keywords (occurrence): machine learning (2) algorithm (2)
Summary: The bill proposes supplemental appropriations for national security, primarily providing funding for military personnel and operations related to the Ukraine conflict and border security efforts combating fentanyl.
Collection: Congressional Record
Status date: Feb. 5, 2024
Status: Issued
Source: Congress
The text does not contain any explicit references to AI-related terms such as 'Artificial Intelligence', 'Algorithm', 'Machine Learning', etc. Instead, it focuses on military appropriations and national security, specifically funding for operations related to Ukraine and other defense initiatives. Given the absence of keywords related to AI and the focus on budget allocations for military personnel and operations, the relevance to all categories is minimal. Therefore, all categories score a 1, indicating that they are not relevant.
Sector: None (see reasoning)
The text primarily addresses congressional appropriations for military and national security issues, with no mention of AI applications or regulations affecting sectors such as politics, government services, healthcare, or others listed. Consequently, all sectors are deemed irrelevant to the content of this text, resulting in a score of 1 for all sectors.
Keywords (occurrence): artificial intelligence (2) machine learning (1)
Summary: The bill mandates annual reports comparing U.S. capabilities in lethal autonomous weapon systems to adversaries, enhancing oversight on military technology development and deployment.
Collection: Congressional Record
Status date: July 10, 2024
Status: Issued
Source: Congress
Societal Impact
System Integrity (see reasoning)
The text explicitly discusses lethal autonomous weapon systems and the implications of AI technologies in military contexts. These references are crucial for assessing the societal impacts of AI, especially regarding warfare and security. Therefore, the relevance to the Social Impact category is strong. The mention of automation in weapon systems also aligns with concerns about accountability and potential harm, further underscoring its importance in this category. For Data Governance, while there are implications regarding data used in AI systems for weaponry, there is less focus on regulations for secure and accurate data management, resulting in a lower relevance score. The text concerns the security and effectiveness of AI systems in defense operations but is less about their transparency and oversight, placing System Integrity at a moderate relevance level. Robustness receives a low score, as the emphasis is primarily on the military capabilities of autonomous weapon systems rather than benchmarks for AI performance. Overall, the Social Impact category is critically relevant due to its civil and ethical implications, while the others are relevant to a lesser extent.
Sector:
Government Agencies and Public Services (see reasoning)
The text directly pertains to military contexts, emphasizing the use of AI in defense systems. As such, the content is relevant to Government Agencies and Public Services due to the intersection with national security. However, the direct impact on political processes or judicial frameworks is minimal, leading to lower scores in those sectors. The Healthcare sector is irrelevant given the military focus, and while certain discussions of AI use in autonomously functioning systems could be seen as emerging or hybrid, the text does not fit neatly into those descriptions. The categorization of the text primarily aligns with Government Agencies and Public Services, recognizing its implications for military operations and national security.
Keywords (occurrence): automated (11)
Description: Regulates the use of deep fakes and artificial intelligence technology in political advertising. (gov sig)
Summary: The bill regulates deep fakes and AI technology in political advertising, mandating disclosures when such technology is used to emulate candidates, ensuring transparency and integrity in electoral communications.
Collection: Legislation
Status date: June 20, 2024
Status: Vetoed
Primary sponsor: Royce Duplessis
(2 total sponsors)
Last action: Vetoed by the Governor. (June 20, 2024)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text clearly addresses the implications and regulations surrounding the use of artificial intelligence and deep fake technology specifically in the context of political advertising. It emphasizes the need for transparency and accountability to maintain the integrity of the electoral process. Because it specifically outlines the requirements for disclosing the use of AI technologies that could manipulate media representations of candidates, it is highly relevant to the 'Social Impact' category, which focuses on AI's societal implications, particularly in terms of misinformation and public trust. It also pertains to 'System Integrity' due to the legislative efforts aimed at ensuring transparency and ethical standards in the electoral process. Moreover, the regulation of AI to prevent fraud and misinformation indicates its importance for 'Robustness'. However, the relevance to 'Data Governance' is minimal, as there are no explicit mentions of data management or collection aspects. Therefore, the scores reflect stronger ties to Social Impact, System Integrity, and Robustness compared to Data Governance.
Sector:
Politics and Elections (see reasoning)
The text primarily addresses the intersection of AI with political processes, specifically the use of deep fakes and AI technologies in political advertising. Hence, it is highly relevant to the 'Politics and Elections' sector. There are no mentions or implications regarding AI in government public services, healthcare, or other sectors, which renders those sectors largely irrelevant. Although technology may have indirect influence on private enterprises, labor, and employment, that is not the focus of this specific legislation, leading to a low relevance score in that sector. Therefore, the most fitting sector is Politics and Elections, with lower relevance assigned to others.
Keywords (occurrence): deepfake (3)