4764 results:


Summary: This bill aims to establish a national standard for data privacy to address gaps in protection for Americans' personal information, enhancing security and clarity in regulations across various sectors.
Collection: Congressional Hearings
Status date: April 27, 2023
Status: Issued
Source: House of Representatives

Category:
Data Governance (see reasoning)

The text discusses the importance of data privacy and the need for a comprehensive, national standard to protect consumers' personal information. Although there are several references to privacy standards and sector-specific regulations, there is no direct mention of AI technologies or systems that require regulation or oversight. The primary focus is on data privacy legislation in general rather than AI-specific impacts. While some relevance to the categories can be inferred indirectly, because AI systems often operate on personal data, the explicit references needed to strongly associate this document with AI are absent, leading to lower scores across categories.


Sector:
Healthcare (see reasoning)

The text mainly addresses data privacy regulations and their impact on consumers, particularly regarding sectors like healthcare and education. While it touches on issues that could potentially affect various sectors employing AI technologies (data management, privacy), it doesn't specifically discuss the application or regulation of AI in these sectors. Therefore, while it has some relevance to data governance as it concerns managing personal data, it lacks sufficient direct references to AI to garner higher scores across the sectors.


Keywords (occurrence): automated (2) algorithm (1)
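The keyword-occurrence counts shown throughout these results (for example, "automated (2) algorithm (1)" above) look like simple case-insensitive, whole-word tallies over each document's text. The following is a minimal sketch of such a counter; the keyword list, matching rules, and function name are assumptions for illustration, not the tracker's actual implementation.

```python
import re

# Keyword list and matching rules below are illustrative assumptions, not the
# tracker's actual configuration.
KEYWORDS = [
    "artificial intelligence",
    "machine learning",
    "automated",
    "algorithm",
    "foundation model",
]

def keyword_occurrences(text: str) -> dict[str, int]:
    """Count case-insensitive, whole-word occurrences of each keyword in text."""
    lowered = text.lower()
    counts = {}
    for kw in KEYWORDS:
        # Word boundaries keep "automatedly" from counting as "automated".
        hits = re.findall(r"\b" + re.escape(kw) + r"\b", lowered)
        if hits:
            counts[kw] = len(hits)
    return counts

if __name__ == "__main__":
    sample = ("The automated employment decision tool relies on an algorithm; "
              "every automated decision requires a bias audit.")
    found = keyword_occurrences(sample)
    # Prints: automated (2) algorithm (1)
    print(" ".join(f"{kw} ({n})" for kw, n in found.items()))
```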

Summary: The bill allocates funding for the Departments of Labor, Health and Human Services, and Education for fiscal year 2024, emphasizing investments in healthcare, child services, and public health initiatives to enhance American well-being.
Collection: Congressional Hearings
Status date: March 22, 2023
Status: Issued
Source: Senate

Category: None (see reasoning)

The text primarily discusses the appropriations for various departments, with a focus on addressing health and human services, including mental health and behavioral healthcare. While there are mentions of substance use treatment and biomedical research, there are no explicit references to AI-related keywords such as Artificial Intelligence, Machine Learning, or Automated Decision Systems. Thus, while the implications of technology in healthcare could indirectly relate to AI, the document does not directly address AI systems or legislation regarding their social impact, data governance, system integrity, or robustness.


Sector: None (see reasoning)

The text focuses heavily on the budget appropriations for various HHS initiatives without any specific mention of AI applications within any of the sectors described. While healthcare and mental health are significant topics covered, they do not explicitly connect to AI regulations or applications in the healthcare setting, nor do they provide insight into how AI relates to political or public services. Therefore, all sectors receive a score indicating no relevance.


Keywords (occurrence): artificial intelligence (2) automated (1)

Description: A bill to establish the Chief Artificial Intelligence Officers Council, Chief Artificial Intelligence Officers, and Artificial Intelligence Governance Boards, and for other purposes.
Summary: The AI LEAD Act establishes a Chief Artificial Intelligence Officers Council and governance boards to enhance responsible AI deployment, ensuring accountability, coordination, and adherence to democratic values across federal agencies.
Collection: Legislation
Status date: July 13, 2023
Status: Introduced
Primary sponsor: Gary Peters (2 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 495. (Sept. 10, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The AI LEAD Act establishes governance structures for artificial intelligence (AI) within federal agencies, which directly relates to accountability, transparency, and the ethical deployment of AI systems. The emphasis on the responsibilities of Chief Artificial Intelligence Officers and Governance Boards suggests that social implications, data governance concerns, system integrity, and robustness metrics are all central to this legislation, which seeks to ensure that AI adoption is responsible and aligned with democratic values. The act advocates risk management and oversight of AI technologies, clearly supporting its relevance to social impact and system integrity. It also touches on robustness through mandates for compliance with standards and for improving government operations using AI.


Sector:
Government Agencies and Public Services
Academic and Research Institutions
International Cooperation and Standards (see reasoning)

The AI LEAD Act is highly relevant to Government Agencies and Public Services and to the emerging regulatory framework surrounding AI usage in that sector. By creating specific roles for Chief Artificial Intelligence Officers and Governance Boards, it aims to improve how federal agencies deploy AI technologies in a manner that is efficient, accountable, and protective of civil rights. It further addresses the integration of AI practices into governmental decision-making processes and operations, underscoring reliance on public trust and on engagement with both federal and external stakeholders, including industry, academia, and other levels of government.


Keywords (occurrence): artificial intelligence (96)

Description: To direct the Federal Trade Commission to establish standards for making publicly available information about the training data and algorithms used in artificial intelligence foundation models, and for other purposes.
Summary: The AI Foundation Model Transparency Act of 2023 mandates the Federal Trade Commission to create standards for disclosing training data and algorithms of AI models for transparency and consumer protection.
Collection: Legislation
Status date: Dec. 22, 2023
Status: Introduced
Primary sponsor: Don Beyer (5 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (Dec. 22, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The AI Foundation Model Transparency Act of 2023 focuses extensively on AI-related terminology and concepts, particularly concerning foundation models. It emphasizes the need for transparency regarding training data and algorithms used in these models, which directly ties into all four categories. In terms of Social Impact, the legislation aims to prevent misinformation and improve consumer protection, which is crucial for society. Data Governance is relevant as the bill outlines standards for accurate data management in AI systems. System Integrity is highlighted through the call for regulations that dictate transparency and accountability in AI processes. Robustness is also relevant as the standards concern performance benchmarks and ethical considerations around AI technologies. Thus, all categories are significantly connected to the text's subject matter.


Sector:
Government Agencies and Public Services
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)

This legislation can be categorized under multiple sectors. It directly addresses the use and regulation of AI by Government Agencies (the FTC, in this case), signaling important implications for government operations. While it does not explicitly address Politics and Elections, it connects to them indirectly through its emphasis on inferences related to electoral processes. The bill does not strictly fall under Healthcare or Private Enterprises either, as it does not focus primarily on those frameworks. However, because it covers foundation AI models that could impact multiple sectors and places significant responsibility on the federal commission, it aligns well with Government Agencies and Public Services while remaining broadly applicable to the Hybrid, Emerging, and Unclassified sector. Hence, it holds relevance primarily in the government context and only slightly touches on political implications.


Keywords (occurrence): artificial intelligence (5) foundation model (13)

Summary: This bill outlines test methods for measuring sulfur dioxide, nitrogen oxides, sulfuric acid mist, opacity, and carbon monoxide emissions from stationary sources to ensure compliance with environmental standards.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text predominantly focuses on methods for determining emissions of various pollutants, including sulfur dioxide and nitrogen oxides, from stationary sources. There are no explicit references to AI-related aspects or technologies that would relate to the predefined categories of Social Impact, Data Governance, System Integrity, or Robustness. Given the technical and environmental nature of the document, it does not address the aspects of AI's impact on society, data handling, system security, or performance standards. Thus, it is not relevant to the AI categories listed.


Sector: None (see reasoning)

The text discusses air quality testing methodologies under the Environmental Protection Agency guidelines, specifically related to the monitoring of pollutant emissions from stationary sources. No AI applications, regulations, or implications are mentioned in regard to any of the predefined sectors, such as Politics and Elections, Government Agencies, Healthcare, etc. Therefore, it does not connect to any specific sector as defined.


Keywords (occurrence): automated (1)

Description: An Act amending the act of October 27, 1955 (P.L.744, No.222), known as the Pennsylvania Human Relations Act, further providing for definitions; providing for use of automated employment decision tool; and further providing for civil penalties.
Summary: This bill amends the Pennsylvania Human Relations Act to regulate the use of automated employment decision tools, requiring bias audits, individual consent, and introducing civil penalties for non-compliance.
Collection: Legislation
Status date: Sept. 29, 2023
Status: Introduced
Primary sponsor: Ed Neilson (12 total sponsors)
Last action: Referred to LABOR AND INDUSTRY (Sept. 29, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses automated employment decision tools, which directly involve the use of AI technologies such as algorithms and machine learning models. It sets forth definitions for 'automated employment decision tool,' which encompasses various AI methodologies including neural networks and decision trees. Moreover, the requirement for a bias audit of these tools addresses the societal impact of AI in hiring decisions, raising concerns around fairness and discrimination. Consequently, this text is highly relevant to the categories of Social Impact, Data Governance, and System Integrity. In terms of Social Impact, it aims to protect individuals from potential biases in hiring facilitated by AI, highlighting the societal consequences of these technologies. Regarding Data Governance, it emphasizes the importance of accurate, transparent data management practices surrounding automated decision-making. For System Integrity, it enforces accountability in the development and auditing of these tools to ensure their ethical use in employment settings. Robustness is less relevant, as the text does not primarily focus on performance benchmarks but rather on fairness and transparency.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation is particularly relevant to the Private Enterprises, Labor, and Employment sector because it explicitly outlines how AI tools will be used in recruitment and hiring processes. It addresses the implications these tools have on employment practices and ensures that they do not promote discrimination. Additionally, it may involve aspects pertinent to Government Agencies and Public Services, as employment agencies will need to comply with these regulations. However, its primary focus remains within the realm of employment practices in the private sector, making the other sectors less relevant.


Keywords (occurrence): automated (12)

Description: To support research, development, demonstration, and other activities to develop innovative vehicle technologies, and for other purposes.
Summary: The Shifting Forward Vehicle Technologies Research and Development Act aims to foster innovation in vehicle technologies through research, development, and demonstration activities, focusing on sustainability, efficiency, and reduced emissions.
Collection: Legislation
Status date: July 28, 2023
Status: Introduced
Primary sponsor: Haley Stevens (2 total sponsors)
Last action: Referred to the House Committee on Science, Space, and Technology. (July 28, 2023)

Category:
System Integrity
Data Robustness (see reasoning)

The text primarily outlines legislation focused on vehicle technologies with an emphasis on innovation, research, and development in the transportation sector, without explicit references to the societal, ethical, or legal impacts of AI. However, it does mention the use of advanced computing and machine learning for optimization, which could have implications for system integrity and robustness but does not engage with issues like accountability, bias, or effects on public trust that would fall under Social Impact. The focus is more technical and process-oriented rather than socio-ethical, thereby limiting its relevance to that category. On the other hand, it does touch upon measures for security (System Integrity) in vehicle energy storage and interconnections, and it also highlights benchmarks for performance (Robustness); however, these sections are not comprehensive enough to constitute a very high relevance rating. Overall, while the text resonates with certain aspects of AI implications, it is not primarily focused on them.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

This legislation directly interacts with multiple sectors, notably Government Agencies and Public Services due to its involvement with the Department of Energy and its implications for vehicle technology development. There's a clear indication of coordination with federal government activities, hinting at its significant positioning within public sector initiatives. The text does not directly deal with Politics and Elections, Judicial Systems, or Healthcare, which might limit its relevance in those categories. However, the operational nature of vehicle technologies can implicate Private Enterprises since the bill addresses competitive manufacturing and energy-efficient practices that could influence business practices and labor markets.


Keywords (occurrence): machine learning (2) automated (3)

Description: To promote a 21st century artificial intelligence workforce.
Summary: The Jobs of the Future Act of 2023 aims to prepare the American workforce for emerging artificial intelligence technologies, assess their impact on jobs, and promote necessary skills and educational access.
Collection: Legislation
Status date: July 6, 2023
Status: Introduced
Primary sponsor: Darren Soto (5 total sponsors)
Last action: Referred to the Committee on Education and the Workforce, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (July 6, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the impact of artificial intelligence on the workforce, which falls under the social impact category, notably concerning job displacement and preparing workers for an AI-driven economy. It identifies vulnerabilities and opportunities for different demographics in an AI-shaped workforce, which could inform consumer protections and fair job practices. It also discusses the skills and education needed, aligning with data governance insofar as it implies a need to manage data on workforce impacts effectively. Its attention to managing AI responsibly may touch on system integrity, given that the report requires evaluation methods for various industries relying on AI. However, robustness is not explicitly addressed in terms of performance metrics or audit standards for AI systems. Thus, social impact and data governance are directly relevant, system integrity has some indirect connection, and robustness is less relevant.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The legislation pertains mainly to workforce development in the context of AI, primarily targeting education and training policies, indicating strong relevance to academic and research institutions. Given its focus on collaboration with local educational agencies and institutions, it aims to influence the educational structure and workforce readiness. An indirect impact relates to private enterprises due to the anticipated changes in labor practices brought by AI. Although it engages with industry stakeholders, the emphasis doesn't lean specifically toward governmental agencies or the judicial system, making other sectors less relevant. Therefore, sectors such as academic and research institutions and private enterprises have moderate relevance, while the rest score lower.


Keywords (occurrence): artificial intelligence (16)

Summary: The bill establishes release detection methods for underground storage tanks (USTs) to prevent environmental contamination. It outlines requirements for inventory control, manual gauging, testing methods, and alternative detection methods to ensure compliance and protect public health.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
System Integrity (see reasoning)

The text discusses methods for detecting releases from underground storage tanks, focusing on various techniques including manual gauging, automatic tank gauging, and vapor monitoring. While the text does not mention AI specifically, it touches on automated systems such as automatic tank gauging, which implies the use of technology for monitoring. This hints at the broader category of System Integrity, which includes ensuring that automated systems have integrity and can accurately report conditions. However, the specific focus on leakage detection does not align closely with the requirements of other categories, such as Data Governance or Social Impact, which require explicit mentions of accountability, data management, or societal impacts. Overall, the relevance to the categories appears moderate, primarily connected through the element of automation in detection. Therefore, a score of 3 is appropriate for System Integrity; the other categories show only slight relevance, not significant enough to warrant inclusion.


Sector: None (see reasoning)

The text primarily addresses regulatory practices related to environmental safety and monitoring of underground storage tanks. It describes methodologies for ensuring compliance with environmental standards, which corresponds loosely to Government Agencies and Public Services. However, there is no mention of political processes, judicial systems, healthcare applications, or any specific impact on the workforce, thus making it not very relevant to the sectors that explicitly outline these domains. The best fit would be a rating of 2 for Government Agencies and Public Services, as the regulations will likely be implemented by government entities, but the text doesn't delve deeply into its application. All other sectors will receive lower scores as they do not relate to the content of this regulatory framework.


Keywords (occurrence): automated (1)

Summary: The bill addresses the challenges posed by artificial intelligence, emphasizing bipartisan efforts to establish standards, safeguard human rights, and create a national data privacy standard to protect individuals' information.
Collection: Congressional Record
Status date: June 14, 2023
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly addresses significant social implications of AI, particularly regarding its potential misuse by criminals and oppressive states, which aligns with the Social Impact category. It discusses the need for ethical guidelines and standards, also reflecting on the human rights concerns related to AI misuse. There are references to the impact of AI on society, such as extortion and surveillance, indicating a direct relevance to societal effects. Data governance is relevant as the text highlights the need for a national data privacy standard, emphasizing the protection of personal information. However, the focus on data governance is less prominent than the social implications. The System Integrity category is less applicable because the text does not delve into legislation concerning the security and transparency of AI systems specifically, though it touches on the need to adapt existing laws to meet these challenges. The Robustness category does not feature strongly as the text does not discuss benchmarks for AI performance or auditing standards. Therefore, Social Impact and Data Governance receive the highest scores, while the other categories are not as relevant.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
International Cooperation and Standards (see reasoning)

This legislative text primarily focuses on the implications of AI within the realms of society and data protection. The mention of misuse of AI technology by criminals and oppressive governments situates it heavily in discussions pertinent to Politics and Elections, as it indirectly addresses the role of governments and political entities in managing AI technology for public safety and human rights. The discussions regarding American values and standards in AI place it within the purview of Government Agencies and Public Services, as it indicates how government is expected to handle AI regulations. The discussions on data privacy standards also correlate with private enterprises, labor, and employment as they touch on how AI interacts with personal information. However, no clear references to the judicial system, healthcare, academic institutions, or nonprofits are indicated. Therefore, the focus is narrower and primarily relevant to Politics and Elections and Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (5)

Summary: The bill outlines the specific requirements for depositing copies and phonorecords for copyright registration, detailing definitions, submission procedures, and conditions for varying types of works to ensure compliance with U.S. copyright law.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text pertains primarily to the procedures and requirements for depositing copies and phonorecords for copyright registration, and it does not reference or connect to AI, algorithms, machine learning, or any relevant technology that might fall under the selected categories of Social Impact, Data Governance, System Integrity, or Robustness. The content is heavily focused on copyright law and does not address how AI influences society, data management, system security, or performance benchmarks. As such, all categories score very low relevance.


Sector: None (see reasoning)

The text discusses copyright processes and requirements for deposits in the Copyright Office, without any linkage to specific sectors like Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified. Its content does not implicate AI's roles or applications in these areas; thus, all sectors score very low relevance.


Keywords (occurrence): automated (4)

Summary: The bill outlines procedures for the public to request agency records from NASA, ensuring transparency and accessibility under the Freedom of Information Act (FOIA). It offers guidelines on submitting requests and specifies the responsibilities of different NASA centers in maintaining public records.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily outlines the processes and requirements for how to make requests for agency records under the Freedom of Information Act (FOIA) within NASA. However, it lacks any specific references to AI-related technologies or concerns. Consequently, no category directly aligns with the content of the text since it does not address social impacts of AI, data governance concerning AI data, system integrity of AI systems, or the robustness of AI technologies. It does touch on automated record-keeping, but the context does not specifically implicate AI technologies or benchmarks. Therefore, all categories score low in relevance.


Sector: None (see reasoning)

The text does not address any specific use of AI across the mentioned sectors, such as politics, government services, healthcare, or any others listed. The discussion relates to information requests and transparency but does not integrate AI's role in these sectors. Since AI is not mentioned explicitly or in relation to the sectors, all sectors rank low in relevance.


Keywords (occurrence): automated (1)

Summary: The bill establishes import regulations for onions, mandating compliance with grade, size, quality, and maturity standards based on harvest seasons to protect local producers.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text provided pertains strictly to regulations surrounding the importation of agricultural products, specifically onions and tomatoes. There is no mention of Artificial Intelligence, algorithms, data governance, transparency in systems, or any AI-related frameworks that might relate to the defined categories. Hence, all categories are deemed not relevant to the content of the text.


Sector: None (see reasoning)

Similarly, the sector categories address the application of AI across various domains such as politics, healthcare, and international cooperation. The text is focused solely on import regulations for agricultural products, which does not engage with any aspect of AI or related sectors. Consequently, all sector categories score as not relevant.


Keywords (occurrence): automated (1)

Summary: The bill mandates the electronic transfer of government benefits and outlines compliance rules for government agencies, focusing on consumer access to account information and error resolution processes.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses electronic fund transfers for government benefits, focusing on compliance obligations for account-holding institutions and the issuance of access devices. There are no explicit references to AI technologies or their impact on society, data governance, system integrity, or robustness in the context of AI. While there may be some implications of automation due to the electronic nature of fund transfers, the text lacks direct relevance to AI as defined by the keywords provided. Thus, scores in all categories are low as the text does not engage with AI-related topics significantly.


Sector: None (see reasoning)

The text relates specifically to electronic fund transfers of government benefits and outlines regulations and compliance for financial institutions. Although these transfers could involve operational technologies, no specific mention is made of AI, its applications in sectors such as Politics and Elections, Government Agencies, or other specified areas. Therefore, all sector scores are low as the legislation does not engage with AI or its implications in these sectors.


Keywords (occurrence): automated (9)

Summary: The bill establishes guidelines for investigations and adjudications related to Common Access Cards (CAC), enhancing security measures for national security positions while expediting processes for certain personnel, including wounded warriors.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
System Integrity (see reasoning)

The text primarily outlines procedures and policies related to the investigation and adjudication around Common Access Cards (CAC). The relevance to AI categories is minimal. The text does not explicitly discuss artificial intelligence technologies or their implications for society. It focuses instead on security credentials and personnel checks under national security guidelines, slightly touching on automated adjudicative processes. However, this mention of automated processes does not delve into AI implications, thus yielding a low relevance score in all categories, mainly due to the lack of a strong connection to AI technologies or practices in the security and access control context.


Sector:
Government Agencies and Public Services (see reasoning)

The text deals mainly with credentialing and security measures within government contexts, ensuring personnel suitability for national security positions. Its relevance to sectors varies; while some procedures could relate to integrated systems in government agencies, the focus is primarily on qualifications and security clearances rather than the impact of AI on these processes or their outcomes. Thus, the scores reflect a limited engagement with essential aspects of the sectors defined.


Keywords (occurrence): automated (1)

Summary: The bill establishes requirements for safeguarding Controlled Unclassified Information (CUI) within federal agencies, outlining standards for safe handling, dissemination, and protection to mitigate unauthorized disclosure risks.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance (see reasoning)

The text focuses on safeguarding Controlled Unclassified Information (CUI) and providing mechanisms for proper dissemination controls. It primarily addresses standards for securing information within government agencies and the management of information classified as CUI rather than any specifics surrounding AI technologies or their outputs. While AI may play a role in data processing within such systems, there are no explicit references to AI-related technologies or issues. Therefore, the relevance to the categories is limited.
Social Impact (score: 1): The text does not address the effects of AI on society, such as discrimination or consumer protections.
Data Governance (score: 3): While data governance is relevant in terms of safeguarding information, the text centers more on CUI standards than on AI-specific data governance issues.
System Integrity (score: 2): The focus is on safeguarding controlled information rather than broader system integrity issues relevant to AI systems.
Robustness (score: 1): There are no mentions of AI performance benchmarks or compliance, thus rendering this category largely irrelevant.


Sector:
Government Agencies and Public Services (see reasoning)

The text explicitly pertains to safeguarding controls for sensitive information within government frameworks. There are no indications of AI applications or regulatory aspects concerning specific sectors, such as politics or healthcare.
Politics and Elections (score: 1): No mention of AI's role in political processes.
Government Agencies and Public Services (score: 4): The text is strongly relevant as it outlines information security measures applicable to federal agencies.
Judicial System (score: 1): No references to the judicial implications of AI.
Healthcare (score: 1): No discussion of AI's application in healthcare or medical contexts.
Private Enterprises, Labor, and Employment (score: 2): Some tangential relevance exists concerning information management and security within businesses, but it is not the primary focus.
Academic and Research Institutions (score: 1): No relevance to academic or research contexts.
International Cooperation and Standards (score: 2): The text implies some standards but lacks a direct connection to international standards or cooperation.
Nonprofits and NGOs (score: 1): No direct link to AI applications in these organizations.
Hybrid, Emerging, and Unclassified (score: 2): The text does not fit neatly into this category as it primarily focuses on governmental processes.
(A short sketch of how such per-category and per-sector scores might map to the listed labels follows this entry's keyword line.)


Keywords (occurrence): automated (1)
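Several entries quote explicit per-category or per-sector relevance scores in their reasoning (as in the Controlled Unclassified Information entry just above) while listing only some labels, or "None", under Category and Sector. The sketch below shows one way those scores could map to the listed labels; the inclusion threshold of 3 is a guess inferred from the quoted scores (a 3 or 4 is always listed, a 1 or 2 never is), not documented behavior of this database.

```python
# Illustrative assumption: entries that quote per-label relevance scores appear
# to list a Category or Sector only when its score is 3 or higher, and show
# "None" otherwise. The threshold and helper below are inferred guesses.
def listed_labels(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the labels whose score meets the assumed inclusion threshold."""
    kept = [name for name, score in scores.items() if score >= threshold]
    return kept if kept else ["None"]

# Scores quoted in the CUI entry's category reasoning above.
category_scores = {
    "Social Impact": 1,
    "Data Governance": 3,
    "System Integrity": 2,
    "Robustness": 1,
}
print(listed_labels(category_scores))  # -> ['Data Governance']
```

Applying the same helper to the quoted sector scores would return only Government Agencies and Public Services (score 4), matching the Sector line of that entry.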

Summary: The bill mandates NASA to establish a systematic review process for declassifying records, ensuring classified information is reviewed periodically and appropriately, while emphasizing training to prevent over-classification.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on the procedures for the declassification of information by NASA, outlining responsibilities and guidelines for systematic reviews. It largely deals with classification principles, the roles of various officials, and management of classified documents rather than addressing issues directly relevant to AI. The text does not explicitly mention AI or its implications. Therefore, all categories related to social impact, data governance, system integrity, and robustness are not relevant to the declassified information procedures laid out in this text. The absence of any AI-related terminology means that the scores reflect a lack of relevance to AI legislation or its impacts.


Sector: None (see reasoning)

The text does not engage with sectors that incorporate AI use or regulation. Instead, it is focused on the declassification processes within NASA, touching upon responsibilities and guidelines for information management rather than sectors that involve AI such as politics, healthcare, or public services. As a result, there is no relevant sector identified within the provided text. All sectors are scored as not relevant.


Keywords (occurrence): automated (1)

Summary: The bill outlines procedures for financial institutions regarding the lookback period and protected amounts when processing garnishment orders related to accounts holding federal benefit payments. It aims to safeguard certain funds during garnishment.
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text provides examples related to the lookback period and protected amounts in the context of garnishment orders. However, there is no mention of AI or any of the specific terms related to AI technologies. The content focuses on financial institutions and regulations concerning account reviews and garnishment processes without delving into how AI might interact with these processes. Therefore, this text is not relevant to the categories defined.


Sector: None (see reasoning)

The text discusses procedures for garnishment orders and the management of account reviews. It does not mention or imply the use of AI in any sector. The content is strictly related to financial institutions and tax processes without involving matters within any of the listed sectors such as healthcare, judicial system, or public services. Hence, it is not relevant to any of the defined sectors.


Keywords (occurrence): automated (1)

Summary: The bill establishes regulatory requirements for automated external defibrillators (AEDs) and related devices, ensuring they undergo premarket approval to enhance safety and effectiveness in treating cardiac emergencies.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses the regulatory requirements and classifications related to automated external defibrillators (AEDs) and other medical devices. It does mention automation in the context of AEDs which interpret ECG data and automatically deliver shocks. However, the focus is more on regulatory compliance and device classification rather than the broader social implications of AI, data management issues, system integrity concerns, or robustness assessment. The relevance to social impact is therefore minimal, as it does not consider the societal effects of deploying such devices but instead focuses on operational specifications. In terms of data governance, while there is a reference to data interpretation, it does not delve into the management or governance of data related to these devices. Regarding system integrity and robustness, the references to performance standards are not specific to certifications or audits relevant to AI systems, leading to limited relevance across all categories.


Sector:
Healthcare (see reasoning)

The text details regulations concerning the use of automated external defibrillators (AEDs) but does not explicitly discuss their role in politics, judicial processes, public service enhancement, healthcare management, workplace applications, academic institutions, or international cooperation. However, because it pertains to medical devices, it has limited relevance to the healthcare sector where AI applications might be increasingly integrated into emergency medical systems. The mentions of potential data processing point somewhat towards healthcare but do not strongly anchor the text in that space. Thus, it has some relevance to healthcare due to the medical nature of the devices but lacks a broader application to other sectors.


Keywords (occurrence): automated (4)

Summary: The bill proposes fiscal year 2024 appropriations for agriculture, rural development, and related agencies, aiming to support programs, improve public health, and enhance food sovereignty among indigenous communities.
Collection: Congressional Record
Status date: Oct. 16, 2023
Status: Issued
Source: Congress

Category: None (see reasoning)

The text primarily discusses appropriations for various departments under the Department of Agriculture, including significant financial allocations for offices concerning communications, civil rights, information technology, research, and extensions. However, it does not mention artificial intelligence (AI) or related technologies directly. The legislative provisions focus on agriculture, public health, and various departmental operations without explicitly addressing the implications or applications of AI, algorithms, or automated systems. Therefore, relevance to the predefined categories is minimal, without distinct references to AI's social impact, data governance, system integrity, or robustness. Hence, scores reflect this lack of AI context.


Sector: None (see reasoning)

The text pertains largely to the Department of Agriculture's fiscal appropriations, management, and operational frameworks. While it mentions funding for research and education, such as the National Institute of Food and Agriculture and agricultural research services, there are no direct references to AI applications or regulations within the context of politics, public service, the judicial system, healthcare, private enterprises, academia, international standards, or nonprofits. Consequently, its relevance to specific sectors is negligible, leading to low scores across the board.


Keywords (occurrence): automated (3)