5026 results:


Summary: The bill approves the Thrane & Thrane Sailor 3026D Gold VMS for vessels operating in the Northwestern Hawaiian Islands Marine National Monument, ensuring compliance with tracking and communication requirements.
Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text describes specifications and procedures for a Vessel Monitoring System (VMS), which primarily involves GPS position reporting and communication between vessels and land-based systems. There are references to automation in the context of the operation of the transceiver unit (e.g., automatic GPS position reporting and two-way communications), but it lacks depth in discussing broader social impacts, data governance, system integrity, or robust AI performance benchmarks. Therefore, it does not align closely with the categories provided, but the mention of automated systems gives it a slight relevance to the 'Robustness' category, albeit limited. Overall, the text appears more focused on technical specifications rather than addressing legislative considerations associated with AI systems.


Sector: None (see reasoning)

While the text discusses the use of a technology system (VMS) that could utilize AI for position reporting and communication, it does not specifically address AI or its regulations in a legislative context. There is no mention of how AI may influence sectors like politics, healthcare, or education. Instead, the information is more aligned with maritime operations and reflects the technical nature of the communication systems involved. Thus, no strong relevance can be established for the sectors delineated.


Keywords (occurrence): automated (1)

Description: Creates the Protect Health Data Privacy Act. Provides that a regulated entity shall disclose and maintain a health data privacy policy that clearly and conspicuously discloses specified information. Sets forth provisions concerning health data privacy policies. Provides that a regulated entity shall not collect, share, or store health data, except in specified circumstances. Provides that it is unlawful for any person to sell or offer to sell health data concerning a consumer without first ob...
Summary: The Health Data Privacy Act mandates clear policies on health data collection, sharing, and sales, requiring consumer consent and offering rights for data deletion and security. It aims to protect consumer privacy regarding health information.
Collection: Legislation
Status date: May 16, 2023
Status: Introduced
Primary sponsor: Ann Williams (19 total sponsors)
Last action: Rule 19(a) / Re-referred to Rules Committee (April 19, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text focuses extensively on the implications of health data privacy, emphasizing secure data management, consumer rights, and consent related to health information. This is particularly relevant to Social Impact, as it addresses consumer protections, potential discrimination based on consent decisions, and ensures that AI applications processing health data adhere to ethical standards. The Data Governance category is highly relevant as the Act involves regulations regarding the management of health data privacy, secure processing, and protections against unauthorized sharing. System Integrity is also pertinent given the provisions for secure handling and retention of private health data, although less directly than the other two categories. Robustness is less relevant as the text does not discuss benchmarks or performance metrics for AI systems. Overall, the legislation is primarily aimed at protecting individual rights and ensuring responsible handling of health data, which strongly correlates with Social Impact and Data Governance, with support for System Integrity.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The legislation predominantly concerns the regulation and management of health data privacy, impacting various sectors interacting with health services and consumer data. As such, it is highly relevant to the Healthcare sector, where AI is increasingly involved in processing health-related information. Additionally, impacts may extend to Private Enterprises, Labor, and Employment because organizations dealing with health data must comply with these regulations, affecting labor practices and corporate governance within health-related businesses. While some principles may influence other sectors, such as Government Agencies and Public Services, the primary focus remains within the healthcare sector specifically, hence scoring lower for those additional sectors. Overall, this bill is chiefly relevant to the Healthcare sector, with significant implications for Private Enterprises.


Keywords (occurrence): machine learning (1)

Summary: The bill reviews the Fiscal Year 2024 budget request for the Department of Homeland Security, critiquing its focus on border management rather than security, amid ongoing border challenges.
Collection: Congressional Hearings
Status date: April 19, 2023
Status: Issued
Source: House of Representatives

Category: None (see reasoning)

The text primarily addresses budgetary concerns and policies within the Department of Homeland Security (DHS), focusing on border security and immigration rather than explicitly on AI issues. There is no mention of Artificial Intelligence or relevant terminology related to automation or system integrity within the provided text. Therefore, the relevance to the categories is minimal.


Sector: None (see reasoning)

The text discusses the DHS specifically, addressing policy and budget proposals that impact border security and migration. Although it involves government functions and services, it does not mention or imply any use of AI technologies in these contexts. Hence, this text does not fit any sector concerning AI applications effectively.


Keywords (occurrence): automated (3)

Description: A bill to bolster the AUKUS partnership, and for other purposes.
Summary: The TORPEDO Act of 2023 aims to strengthen the AUKUS partnership by streamlining export regulations and enhancing defense cooperation among the U.S., Australia, and the U.K.
Collection: Legislation
Status date: May 4, 2023
Status: Introduced
Primary sponsor: James Risch (3 total sponsors)
Last action: Read twice and referred to the Committee on Foreign Relations. (May 4, 2023)

Category:
Data Robustness (see reasoning)

The text primarily discusses the AUKUS partnership and the collaboration on advanced defense technologies, specifically mentioning artificial intelligence (AI) as one of the focal points under Pillar Two of the AUKUS initiative. This indicates a direct relevance to national security and the strategic utilization of AI in defense capabilities. However, there is no specific mention of social implications, data governance, system integrity, or robustness in relation to AI deployment in this context. Therefore, the relevancy scores reflect the primary focus on advanced technology partnerships and implications rather than broader social concerns or systemic integrity.


Sector:
Government Agencies and Public Services
International Cooperation and Standards (see reasoning)

The text mentions the role of AI in enhancing defense capabilities, particularly in trilateral cooperation among government agencies within the defense sector of the AUKUS partnership. It shows the application of AI technologies specifically in defense and military contexts rather than in broader sectors like healthcare or private enterprises. However, the mention of advanced technologies indicates moderate relevance to the government sector. The implications for politics and elections or other sectors are not explored in the text.


Keywords (occurrence): artificial intelligence (2)

Summary: The bill establishes data requirements for state departments of transportation (DOTs) to assess pavement condition on National Highway System routes, emphasizing full data collection and prohibiting sampling for reporting metrics.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance (see reasoning)

The text primarily addresses data collection and performance metrics related to pavement conditions, specifically focusing on the National Highway System and its evaluation. Although it involves systematic data governance aspects such as the collection and reporting of performance measures, it does not mention AI directly or relate to the impact of AI on society, data governance details like data accuracy or collection standards, nor provide insights into system integrity or robustness in terms of AI systems and benchmarks. This makes the text more relevant to general data governance in the context of transportation management rather than AI-specific legislation or implications.


Sector:
Government Agencies and Public Services (see reasoning)

The text mainly revolves around data collection, performance metrics, and regulations regarding pavement management, which are more focused on transportation rather than sectors related to the use of AI in government agencies, healthcare, employment, and other specified sectors. While it details processes relevant to state DOTs, it does not directly address the implications of AI in politics, healthcare, or other related fields, making it less relevant to those specific sectors. However, it does consider processes that could potentially interact with governance or public services through data utilization.


Keywords (occurrence): automated (2)

Summary: This bill establishes Good Laboratory Practices (GLP) standards for testing inhalation exposure to motor vehicle emissions to ensure data integrity for health effects assessments under the Clean Air Act.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on Good Laboratory Practices (GLP) standards for inhalation exposure health effects testing and regulations stipulated by the Environmental Protection Agency (EPA). There is no mention or relevance of AI technologies, algorithms, automation, or other related concepts throughout the text. Therefore, the legislation does not fit any of the categories that specifically pertain to AI or its governance. As such, each category receives a score of 1, indicating a complete lack of relevance to AI-related concerns.


Sector: None (see reasoning)

The text outlines regulatory practices for conducting laboratory studies related to health effects from inhalation exposure to motor vehicle emissions. It does not address the use of AI in political campaigns, government services, judicial decisions, healthcare applications, business environments, education, international standards, nonprofit organizations, or any hybrid or emerging sectors. Thus, all sectors score 1, indicating no relevance.


Keywords (occurrence): automated (3)

Summary: The bill establishes regulations for retinal diagnostic software devices, providing guidelines for their classification, validation, and usage in diagnosing retinal diseases while ensuring user safety and performance accuracy.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
Data Robustness (see reasoning)

The text addresses a specific retinal diagnostic software device that incorporates an adaptive algorithm for evaluating ophthalmic images. This directly relates to the category of Robustness due to the emphasis on algorithm development and performance benchmarks. It also fits into Data Governance because it discusses software verification, validation documentation, and the management of clinical performance data, ensuring the data used in the software is accurate and reliable. Social Impact is relevant to some extent, as the text touches on the application of technology to improve health outcomes, indirectly addressing the societal benefit of advancing healthcare through AI-driven diagnostics. System Integrity is less relevant since the focus is more on software performance rather than on security or transparency. Overall, the text primarily emphasizes the technical specifications and requirements for the AI-driven software device in a healthcare context, which aligns it more closely with Robustness and Data Governance.


Sector:
Healthcare (see reasoning)

This legislative text directly pertains to the healthcare sector, as it describes a retinal diagnostic software device intended for use in medical diagnostics. It addresses clinical performance metrics, user training, and the necessary evaluations for the software's effectiveness and safety, all of which are central to healthcare. While the implications of AI in business and regulations may hold some relevance, the primary focus on a healthcare application firmly categorizes this text under Healthcare. Other sectors are less applicable as they involve different contexts unrelated to medical diagnostics or AI applications in healthcare settings.


Keywords (occurrence): algorithm (2)

Summary: The bill updates the Department of Defense's policies for addressing fraud and corruption in procurement activities, ensuring coordinated responses and timely communication regarding investigations and remedies.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on DoD directives regarding the management of fraud and corruption in procurement activities. It does not reference AI or its associated technologies, concepts, or terms. Therefore, the categories assigned for AI-related legislation, such as social impact, data governance, system integrity, or robustness, are not relevant in this context. They all receive a score of 1 as there are no direct engagements with AI that would require consideration under these categories.


Sector: None (see reasoning)

The text discusses regulatory frameworks, procedures, and responsibilities within the Department of Defense concerning the procurement of goods and services, handling of fraudulent activities, and the coordination of investigations. While this may involve operations that touch on government functions, it does not specifically address any sectors related to AI applications. Hence, all sectors relating to AI, such as politics and elections, healthcare, or private enterprises, receive a score of 1, indicating no relevance.


Keywords (occurrence): automated (1)

Summary: The Fair Trade with China Enforcement Act aims to rectify the imbalanced trade relationship with China by restricting investments, prohibiting sensitive technology exports, and imposing tariffs, thereby enhancing U.S. economic security.
Collection: Congressional Record
Status date: July 18, 2023
Status: Issued
Source: Congress

Category:
Societal Impact
System Integrity (see reasoning)

The text primarily discusses economic policies and security concerns related to the People's Republic of China, particularly in relation to the Made in China 2025 initiative, which emphasizes sectors like artificial intelligence. The references to 'artificial intelligence' and its implications for national security and economic competition signify a direct relevance to the category of Social Impact. In terms of Data Governance, while the document implies the need for accurate and secure handling of data related to these technologies, it does not provide specific mandates or regulatory frameworks, making it less relevant. The System Integrity category is touched upon due to the discussion of technology exports and national security; however, clarity on transparency and oversight measures is lacking. The Robustness category addresses performance benchmarks, but there are no explicit mentions of performance standards or auditing related to AI technologies. Therefore, Social Impact receives a higher score due to the emphasis on AI and economic security concerns, while the others are scored lower or irrelevant according to the specifics mentioned in the text.


Sector:
Government Agencies and Public Services
International Cooperation and Standards (see reasoning)

The text touches on the relevance of artificial intelligence in the context of national security, particularly regarding Chinese investment and technological influence in strategic sectors. Given its discussion of AI in the context of economic security and potential threats posed by foreign influence, it is highly relevant to Government Agencies and Public Services, reflecting the application of AI in public policy and security. However, it does not specifically address the use of AI in other sectors such as Healthcare, Private Enterprises, or Academic Institutions, making those categories less relevant. Politics and Elections are also touched upon indirectly through the framing of economic and national security as a public policy concern. Thus, Government Agencies and Public Services scores the highest due to its direct application in the legislative context discussed, while the other sectors score lower.


Keywords (occurrence): artificial intelligence (2)

Summary: The bill classifies various clinical devices, including pipetting and diluting systems, as Class I (general controls), exempting them from premarket notification to facilitate their availability for diagnosis and treatment of disorders.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses medical devices related to clinical use, including pipetting and diluting systems, osmometry, and mass spectrometry, all of which are critical for medical diagnostics and treatment. However, there are no explicit references to AI technologies or their applications, particularly regarding social impacts, data governance, system integrity, or robustness tied to AI. Therefore, it does not directly relate to any of the categories about AI legislation.


Sector:
Healthcare (see reasoning)

The text relates to clinical devices used in healthcare settings, detailing their classifications and intended uses. It does not, however, make any mention of AI technologies that would connect these devices to the broader themes of politics, government services, the judicial system, private enterprises, academic research, or international standards regarding AI usage. Therefore, it remains within the healthcare context without extending to AI-related applications.


Keywords (occurrence): automated (1)

Summary: The bill establishes monitoring methodologies for criteria pollutants at Ambient Air Quality Monitoring Stations, ensuring adherence to standards for accurate air quality assessment and regulatory compliance.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on methodologies for monitoring ambient air quality, detailing procedures and criteria for the use of specific monitoring technologies. While it does mention 'automated analyzers', which implies some level of automation in the monitoring process, it does not delve into the broader impacts, governance, or integrity concerns associated with autonomous systems or AI in general. Therefore, while there is a slight tie to automated processes, it does not engage deeply with AI's role in society, data management, system integrity, or robustness. Thus, the relevance to the defined categories is limited.

- Social Impact: There is minimal discussion on societal impacts of automated systems regarding air quality, therefore it scores a 2.
- Data Governance: The text outlines protocols for monitoring but lacks details on issues like data privacy or bias in data collection or management, leading to a score of 1.
- System Integrity: The procedures mentioned focus more on compliance and method approval than on AI system integrity. Nevertheless, the use of automated systems does hint at some integrity concerns, warranting a score of 2.
- Robustness: There is no detailed discussion about performance benchmarks or compliance measures relevant to AI performance metrics, resulting in a score of 1.


Sector:
Government Agencies and Public Services (see reasoning)

The content of the document is largely technical and specific to environmental monitoring techniques and standards and does not address the intersection of AI technologies with the listed sectors. However, given that the methodologies pertain to regulatory measures around health and safety concerning air quality, there are limited connections to some sectors:

- Politics and Elections: The document does not pertain to political campaign regulation related to AI. It does not discuss automation in terms of electoral processes, which leads to a score of 1.
- Government Agencies and Public Services: The text is relevant to government monitoring practices, albeit not specific to AI, and could be moderately relevant for its automated aspect, resulting in a score of 3.
- Judicial System: There is no mention of legal implications or regulatory compliance issues related to the judicial use of AI or automated systems, so it scores 1.
- Healthcare: There are no health-specific AI applications covered; thus, it receives a score of 1.
- Private Enterprises, Labor, and Employment: The discussion does not delve into employment impacts from automated monitoring systems, yielding a score of 1.
- Academic and Research Institutions: While it might be relevant to researchers engaged in environmental monitoring methodologies, the text does not specifically address the use of AI in research practices, hence a score of 2.
- International Cooperation and Standards: The text lacks significant international implications or standards related to AI monitoring or air quality, meriting a score of 1.
- Nonprofits and NGOs: There is no engagement with nonprofit or NGO roles, so it receives a 1.
- Hybrid, Emerging, and Unclassified: The text does not fit neatly into this category either, but because of the mention of automated processes, it secures a score of 2.
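The selection rule implied by the scoring above (each category or sector gets a 1-5 relevance score, and an entry is labeled "None" unless at least one candidate scores well enough) can be sketched as a simple filter. This is an illustrative sketch only: the function name `assign_categories` and the cutoff of 3 are assumptions for demonstration, not part of the source system.

```python
# Hypothetical sketch of the labeling rule suggested by the reasoning
# above: keep every category/sector whose 1-5 relevance score meets a
# cutoff; if none qualifies, the entry is labeled "None".
# The cutoff value (3) is an assumption chosen for illustration.

def assign_categories(scores: dict[str, int], cutoff: int = 3) -> list[str]:
    """Return the labels whose relevance score meets the cutoff."""
    selected = [name for name, score in scores.items() if score >= cutoff]
    return selected if selected else ["None"]

# Example mirroring the sector scores in the air-quality entry: only
# Government Agencies and Public Services reaches a 3, so it is the
# sole sector returned.
sector_scores = {
    "Politics and Elections": 1,
    "Government Agencies and Public Services": 3,
    "Judicial System": 1,
    "Healthcare": 1,
    "Academic and Research Institutions": 2,
}
print(assign_categories(sector_scores))
# → ['Government Agencies and Public Services']
```

Under this rule, scores of 1-2 across the board produce the "None (see reasoning)" label seen on several entries above.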


Keywords (occurrence): automated (3)

Description: Stormwater Program Revisions
Summary: The Stormwater Program Revisions bill establishes deadlines for decisions on stormwater permit applications, creates automated completeness reviews, and integrates technology to streamline processing within North Carolina's Environmental Quality Department.
Collection: Legislation
Status date: April 6, 2023
Status: Introduced
Primary sponsor: Michael Lee (7 total sponsors)
Last action: Re-ref to Agriculture, Energy, and Environment. If fav, re-ref to Appropriations/Base Budget. If fav, re-ref to Rules and Operations of the Senate (April 13, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text primarily discusses the management and regulation of stormwater permits and includes provisions for technology-assisted permitting initiatives that utilize machine learning and AI. This allows us to draw connections to the categories based on the implications of those technologies in terms of their societal impacts, governance of data related to applications, integrity of the systems used, and performance benchmarks for AI. However, the AI-focused applications are limited in their scope to these operational aspects rather than broader issues, which influences their scores across the categories.


Sector:
Government Agencies and Public Services
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)

The text addresses the implementation of AI in the realm of stormwater management permits, thus having potential implications across multiple sectors. The most relevant sectors include Government Agencies and Public Services because it discusses application processes and regulatory reviews involving state departments. The healthcare sector is not relevant, while political applications of AI are not directly addressed here. Academic institutions are mentioned regarding funding for research into AI applications, indicating moderate relevance. Therefore, scores vary based on the weight of connections to each sector.


Keywords (occurrence): artificial intelligence (1) machine learning (2) automated (4)

Description: Relative to the use of artificial intelligence for personal defense.
Summary: The bill affirms the right to use autonomous artificial intelligence for personal defense under the Second Amendment, establishing guidelines for its application in self-defense situations.
Collection: Legislation
Status date: Dec. 15, 2023
Status: Introduced
Primary sponsor: Matthew Santonastaso (sole sponsor)
Last action: Inexpedient to Legislate: Motion Adopted Voice Vote 02/01/2024 House Journal 3 P. 5 (Feb. 1, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

This text primarily discusses the use of artificial intelligence (AI) in personal defense, highlighting its implications in terms of legal rights under the Second Amendment. The specific provisions regarding autonomous AI making decisions for defense purposes tie closely to societal concerns about the implications of AI in legal and ethical contexts. As such, the Social Impact category is relevant due to the legal and societal ramifications of permitting AI to take life-protective actions. Data Governance is less relevant here, as the text makes few mentions of data management or data use in the AI personal-defense context, focusing instead on legal rights and usage. System Integrity is only slightly relevant due to the need for certain legal standards and action parameters related to the functioning of AI in these situations, but the text does not delve deeply into the security or transparency concerns of AI systems. Robustness is not relevant, as the text does not discuss benchmarks, compliance, or performance standards for AI. This leads to a strong emphasis on societal implications and legal usage, providing a better fit with Social Impact, while the other categories are either minimally or not relevant at all.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The text focuses on the implications of using AI in the context of personal defense, directly relating to legal frameworks around self-defense and the Second Amendment, which fall under various jurisdictions. As it addresses rights to use AI within personal rights frameworks, it’s most closely aligned with the Judicial System category due to the legal considerations and potential impact on court cases involving AI decision-making. While it has indirect implications for Government Agencies and Public Services in terms of law enforcement and public safety, it's less direct. The other sectors such as Healthcare, Private Enterprises, Academic and Research Institutions, and Nonprofits and NGOs hold minimal relevance as they are not brought up at all within the text. Therefore, the strongest sector connection is to Judicial System, with lesser relevance to Government Agencies and Public Services without robust discourse.


Keywords (occurrence): artificial intelligence (23)

Summary: The bill establishes obligations for OASIS users, requiring advance notification to responsible parties when initiating or increasing significant automated queries, to ensure effective communication and compliance in energy sector operations.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text discusses obligations related to automated queries within the context of the Open Access Same-Time Information System (OASIS), but does not delve into broader social impacts, data governance, system integrity, or robustness of AI systems in a way that aligns directly with the established categories. There is mention of 'automated queries,' which pertains to automation but lacks the broader implications or detailed frameworks associated with the specified categories. Consequently, this text appears to have limited relevance to the defined categories in terms of addressing their specific criteria for evaluation.


Sector: None (see reasoning)

The text pertains to regulatory obligations for users of OASIS, focusing on automated queries in the energy sector rather than on the specific application of AI in any related sector. Although it touches upon automation, it does not align closely with the concepts or implications surrounding politics, public services, or any specified sector in the context of AI usage. Therefore, the relevance of the text to these sectors is notably low.


Keywords (occurrence): automated (2)

Summary: This bill outlines application requirements for states seeking federal grants to enhance highway safety, focusing on reducing motor vehicle fatalities through data-driven activities, law enforcement participation, and occupant protection initiatives.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on the requirements for state grants under highway safety regulations, detailing programs aimed at reducing fatalities and injuries related to motor vehicles. Keywords directly related to Artificial Intelligence (AI) or its relevant terms are absent, indicating that AI-specific topics are not addressed in the text. Therefore, the relevance to the categories of Social Impact, Data Governance, System Integrity, and Robustness is negligible. The focus is largely on manual compliance and reporting mechanisms concerning traffic laws rather than any automated decision-making systems or AI technologies.


Sector: None (see reasoning)

The text is centered around highway safety and vehicle-related policies, with no direct reference to AI applications in sectors such as Politics and Elections, Government Agencies, Judicial Systems, Healthcare, etc. Without mentions of AI, its applications, or regulations about its usage in any of these areas, it is clear that the text does not fall into any of the specified sectors. The repeated focus on traffic laws and safety guidelines solidifies the lack of relevance to the identified sectors.


Keywords (occurrence): automated (1)

Summary: The bill classifies various coagulation and hematology instruments, establishing performance standards and exemptions for certain devices in in vitro coagulation studies and hematological assessments.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text focuses on the classification and identification of blood coagulation instruments, which are automated devices. While it does mention automation, there is no explicit discussion of the social impact of AI (e.g., bias, consumer protections), data governance (e.g., data accuracy, privacy concerns), system integrity (e.g., security measures, oversight), or robustness (e.g., performance benchmarks) directly related to AI. The automation mentioned may imply some level of AI application, but without direct engagement with AI's broader implications, the relevance to these categories is limited.


Sector: None (see reasoning)

The text centers around medical devices used for coagulation studies, specifically mentioning their classifications and functionalities. There is no direct reference to AI applications within healthcare, nor does it delve into the regulatory aspects concerning AI in healthcare like diagnostics or patient data management. Therefore, the relevance to the specified sectors is minimal.


Keywords (occurrence): automated (11)

Summary: The bill establishes regulations for exporting vehicles, requiring documentation and inspections to confirm they are not stolen. It outlines penalties for non-compliance and details how to electronically file export information using the Automated Export System (AES).
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text predominantly discusses procedural aspects of vehicle exportation and the use of the Automated Export System (AES) for collecting export information. There are no explicit mentions or implications of artificial intelligence, algorithms, machine learning, or any related concepts within the provided text. Therefore, none of the categories—Social Impact, Data Governance, System Integrity, and Robustness—are particularly relevant. Since the text is largely regulatory and administrative in nature, it fails to address the impact of AI on society or the integrity and governance of AI systems. As such, the scores assigned will reflect the lack of relevant AI-specific content.


Sector: None (see reasoning)

The text does not pertain to any specific sector related to AI application or regulation, including political processes, judicial actions, healthcare, or any industrial sectors. It's strictly focused on export regulations for vehicles and the AES system without any connection to AI or its implications in various sectors. Thus, every sector score is equally low, reflecting the complete lack of relevance.


Keywords (occurrence): automated (1)

Description: To direct the Administrator of the Environmental Protection Agency to establish a consortium relating to exposures to toxic substances and identifying chemicals that are safe to use.
Summary: The SUPERSAFE Act directs the EPA to establish a consortium using supercomputing to identify toxic substances and promote safer chemical alternatives for consumer and industrial use.
Collection: Legislation
Status date: May 18, 2023
Status: Introduced
Primary sponsor: Zoe Lofgren (sole sponsor)
Last action: Referred to the Subcommittee on Environment, Manufacturing, and Critical Materials. (May 19, 2023)

Category:
Societal Impact
Data Governance (see reasoning)

The text of the SUPERSAFE Act explicitly mentions the use of artificial intelligence and machine learning in the context of supercomputing to identify toxic substances and develop safer alternatives. This indicates relevance to the Social Impact category, as the legislation aims to address the safety and health impacts of chemicals on society. It also touches on Data Governance, as it involves the management of data necessary for identifying toxic substances. However, it does not delve deeply into secure data management or the reliability of AI systems, which are crucial aspects of System Integrity and Robustness. While it mentions the use of AI technologies, it does not promote benchmarks or standards for AI performance, leading to lower relevance for Robustness and System Integrity.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The SUPERSAFE Act primarily revolves around environmental safety and the use of AI technologies to address toxic substances, which relates significantly to government agencies tasked with public welfare and safety. While it mentions research institutions and industry, it does not explicitly target sectors such as healthcare or judicial systems, making those categories less relevant. The focus on public safety and regulatory measures positions this text solidly within Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (1) machine learning (2)

Summary: The bill establishes a vetting process for contractors and subcontractors working with USAID, requiring them to submit specific forms and pass background checks to ensure compliance before contract approval.
Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily addresses the vetting process for contractors and subcontractors involved with USAID. It establishes who must undergo vetting, the responsibilities of the vetting official, and the consequences of failing vetting. There is no explicit mention of AI technologies or implications related to AI, nor does it discuss issues such as accountability for AI outputs, consequences of AI for consumer welfare, or the ethical use of AI. While it has implications for the integrity of the partner vetting process, it does not align with broader societal impacts of AI usage, data governance, system integrity, or robustness in terms of AI performance evaluation. Hence, relevance scores for all categories are low.


Sector: None (see reasoning)

The text revolves entirely around a vetting process for contractors associated with USAID. It does not address the role of AI in any sector, from government to healthcare or private enterprise. The procedures described are foundational elements concerning governance and compliance of contracting rather than the influence or regulation of AI technologies. As such, it cannot be placed into any of the defined sectors with relevance. It focuses strictly on vetting rather than how different sectors may apply or regulate AI. Consequently, all sector relevance scores are low.


Keywords (occurrence): automated (1)

Summary: The bill establishes regulations for automated antimicrobial susceptibility systems, categorizing them as Class II devices, and outlines testing protocols and labeling requirements to ensure effective diagnosis and treatment of bacterial infections.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The provided text discusses a fully automated antimicrobial susceptibility system used in clinical diagnostics to assess bacterial susceptibility. The document is specifically about a medical technology rather than the broader implications of AI systems. Each section outlines the performance characteristics and regulatory requirements of devices used for antimicrobial testing; it does not engage with legislation on the impacts of AI practices, data governance, or the security of these systems' algorithms and decision-making. Therefore, the relevance of this text to the Social Impact, Data Governance, System Integrity, and Robustness categories is very low.


Sector:
Healthcare (see reasoning)

This text is highly relevant to the Healthcare sector, since it deals with a medical device designed to detect antimicrobial susceptibility and outlines regulatory requirements and performance metrics for clinical settings. While it touches on automated systems adjacent to AI, it does not discuss AI in a direct regulatory or legislative context. The scoring is therefore skewed toward healthcare relevance rather than AI-specific legislation.


Keywords (occurrence): automated (2)