4162 results:


Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses the definitions and requirements for mechanized claims processing and information retrieval systems, focusing on automated systems that manage public assistance programs. Key phrases like 'automated application processing and information retrieval system' indicate the use of automated decision-making processes, which falls within the scope of these categories. However, the text does not delve into societal implications, data governance issues, system integrity specifics, or robustness benchmarks in detail, leading to scores that reflect moderate relevance in most categories and lower relevance in others.


Sector:
Government Agencies and Public Services (see reasoning)

The text pertains predominantly to government operations, particularly the administration of public assistance programs. As a result, it is most relevant to the Government Agencies and Public Services sector, though it also touches on protocols that could apply to judicial or regulatory frameworks for maintaining the integrity and reliability of these systems. Specific applications in politics, healthcare, and the other defined sectors are notably absent, resulting in lower scores for those sectors.


Keywords (occurrence): automated (4)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily deals with regulations surrounding downhole well maintenance and the conditions under which gas vented or flared during such maintenance operations is considered royalty-free. While the text mentions automated well control systems, it lacks explicit discussion of AI-related implications, such as accountability, bias, or consumer protection, which would relate to the Social Impact category. Similarly, it does not convey elements connecting to data management or governance, making it less relevant for Data Governance. There is no indication of the oversight mechanisms for system transparency or security that are associated with the System Integrity category. As for Robustness, the text does not reference performance benchmarks or regulatory compliance specific to AI technologies. Therefore, this legislation does not engage with AI concepts or implications sufficiently to warrant a high relevance score in any of the specified categories.


Sector: None (see reasoning)

This text relates specifically to the regulation of gas venting and flaring in the context of well maintenance, which primarily falls under environmental regulations rather than direct applications of AI. It does not address issues pertinent to political campaigns, public services, legal systems, healthcare, employment, academic institutions, or international cooperation regarding AI technologies. Instead, it pertains to operational practices in the oil and gas industry. As there are no significant connections to any of the sectors defined, the relevance across all sectors is minimal.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text does not discuss AI-related legislation, regulations, or technologies. Instead, it outlines hunting and fishing regulations in various wildlife refuges in Montana. Consequently, none of the AI-related categories (Social Impact, Data Governance, System Integrity, and Robustness) are applicable, as there is no mention of AI, algorithms, or related technologies within this document. The legislation does not align with any aspect of AI governance or regulation, and all categories are therefore rated 'Not relevant'.


Sector: None (see reasoning)

The text focuses exclusively on wildlife refuge rules and does not engage with any specific sector related to AI, such as Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified. Thus, all sectors receive a score of 'Not relevant'.


Keywords (occurrence): automated (7)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text provided primarily discusses automated systems used in marine contexts, particularly regarding safety and operational controls. While there is mention of automation, the text lacks specific references to AI technologies or concepts (such as algorithms, machine learning, etc.). Therefore, its relevance to the categories of Social Impact, Data Governance, System Integrity, and Robustness is minimal, as it focuses on safety and operational reliability rather than on AI-related implications or governance. In particular, there is no discussion of societal implications, data management concerns, security measures specific to AI, or benchmarks for AI performance. Thus, scores reflecting this limited relevance are assigned.


Sector: None (see reasoning)

This text deals with safety protocols and system specifications in the context of the Coast Guard and maritime safety rather than AI applications across various sectors. While it touches on automated systems, it does not specifically engage with the impact of AI or its applications in the predefined sectors such as government agencies, healthcare, or judicial matters. The mentions of automation and systems could imply some technical governance, but there are no specific references to AI or its broader societal implications. As such, the scores reflect that lack of direct relevance to the defined sectors.


Keywords (occurrence): automated (5)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily relates to the establishment and operation of automated management information systems for state plans under the Social Security Act. Its requirements for automated systems suggest elements of system integrity, such as the need for security against unauthorized access, as well as aspects of data governance, since the systems involve the collection and management of personal data. However, it does not directly address specific AI technologies such as algorithms or machine learning and is concerned more with automation and data processing requirements. This suggests relevance to some of the categories but not to AI benchmarks or performance guidelines, which would pertain more to robustness.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

This text is relevant to the Government Agencies and Public Services sector, as it outlines requirements for state systems that assist in managing public welfare programs, particularly through automated processing and information retrieval systems. Given that the context is rooted in the delivery and management of aid through state agencies, it is also relevant, though less directly, to the Healthcare sector in terms of the provision of medical assistance programs. The legislation does not address elements related to politics, judicial systems, or nonprofit organizations, making those sectors less relevant.


Keywords (occurrence): automated (3)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses debt reporting and the involvement of credit reporting agencies within the scope of administrative debt collection. It does not mention or imply any relevance to Artificial Intelligence (AI), algorithms, automation, or related technologies that are crucial for associating it with the predefined categories. Therefore, the legislation lacks any substantive connection to the societal impacts, data management, system integrity, or performance needs of AI systems. As such, all categories are rated 1 (not relevant).


Sector: None (see reasoning)

Similar to the category reasoning, the text does not involve any discussion about the application or regulation of AI within sectors such as politics, government services, healthcare, etc. It focuses instead on procedures related to debt collection and reporting to credit agencies, which do not engage any AI-related considerations. Consequently, all sector ratings are also 1 (not relevant).


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily addresses Medicare incentives for services provided in Health Professional Shortage Areas (HPSAs). It focuses on payment structures and definitions related to healthcare services rather than on AI or its implications. Key terms associated with AI, such as 'automation' or 'algorithm', are absent, suggesting minimal relevance to the AI-focused categories. The legislation's focus on payments and healthcare access does not directly tie into the broader implications of AI for social issues, data governance, system integrity, or robustness, leading to low scores across all categories.


Sector:
Healthcare (see reasoning)

The text is relevant to healthcare, as it discusses incentive payments for services in Health Professional Shortage Areas (HPSAs). The content specifically mentions payments and services offered by physicians in these areas, indicating a focus on improving access to healthcare. The discussion of primary care and specialty care aligns it closely with the healthcare sector. However, since there are no explicit references to AI in this context, the score reflects a generic focus on healthcare rather than on AI's role in that sector.


Keywords (occurrence): automated (1)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance (see reasoning)

The provided text does not directly address artificial intelligence or related technologies such as algorithms or automated decision-making. The focus is primarily on verification processes for work participation information, emphasizing documentation and compliance. Though 'automated data processing systems' are mentioned, the reference is generic and does not indicate sophisticated AI methodologies or social-impact implications arising from AI technologies. Consequently, relevance across the categories is limited, given the clear absence of AI-centric detail.


Sector:
Government Agencies and Public Services (see reasoning)

The text does not focus on the use of AI within specific sectors but rather discusses requisite documentation and verification procedures for work participation data, which primarily pertain to government processes. There is a brief mention of automated data processing, which could involve AI systems; however, the text lacks depth in addressing AI applications across the designated sectors, leading to low relevance scores.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on the specifications for document submissions to a regulatory board, with an emphasis on paper specifications, filing processes, and electronic submissions. It does not directly address or pertain to the impact of Artificial Intelligence on society, data governance in AI, system integrity of AI processes, or robustness in AI benchmarks. Therefore, all categories are assessed as not relevant because the regulatory framework contains no mention of AI technologies or their implications.


Sector: None (see reasoning)

The text does not mention any specific sectors relating to politics and elections, government agencies and public services, judicial systems, healthcare, private enterprises, labor, employment, academia, international cooperation, NGOs, or any hybrid or emerging sectors. The discussion is strictly about document specifications with no reference to the application of AI or mention of any relevant sectors. Thus, all sector scores are likewise assigned as not relevant.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

This document discusses federal policy related to the protection of human subjects and sensitive security information. It does not specifically address the impact of AI on society or individuals, nor does it mention issues such as bias, accountability, or misinformation, which are central to the Social Impact category. While it deals with data management related to sensitive information (potentially relevant to AI systems), it lacks a focus on the data collection practices that would qualify for Data Governance. It makes no mention of the security and transparency measures for AI systems outlined in System Integrity. Similarly, there are no references to benchmarks or standards for AI performance that would be relevant to Robustness. Overall, the text pertains only minimally to the categories outlined and is therefore largely irrelevant to all of them.


Sector: None (see reasoning)

The text does not discuss AI's role in politics or elections, nor does it address AI applications in government agencies that aid public services or impact judicial systems. There is no mention of healthcare applications related to AI, nor is there any exploration of AI in the context of private enterprises or employment issues. Academic and research settings are only touched upon in terms of federal policy for human subjects—without any discussion of AI. The text does not reference international standards for AI or how nonprofits and NGOs might regulate AI. Lastly, it does not cover any aspects that fit into hybrid or emerging sectors related to AI. Therefore, no sectors show a relevant connection to the text.


Keywords (occurrence): automated (1)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text outlines functional requirements for the Model Tribal IV-D System, specifically discussing automated data processing and case management within the Tribal IV-D program. It highlights areas including the processing of data, security measures, and the maintenance and reporting of processed information. The categories can be evaluated based on their connection to these aspects of AI and data management. 'Social Impact' is relevant due to the system's implications for community welfare, though the text does not deeply address societal fairness metrics or harms. 'Data Governance' is highly relevant, as the text discusses data management, secure processing, and safeguarding of information, which are critical aspects of AI systems. 'System Integrity' is also relevant given the focus on security and access control over sensitive data processed by the system. 'Robustness' is less applicable since the text does not focus on performance benchmarks or compliance with international standards for AI systems, so it scores lower in relevance. Overall, the text emphasizes aspects that align closely with data governance and system integrity, which therefore score higher.


Sector:
Government Agencies and Public Services (see reasoning)

The text primarily pertains to the operations of a Tribal IV-D system, which is part of public services focused on child support enforcement. As such, 'Government Agencies and Public Services' is directly relevant, particularly regarding the automated and data-driven processes described. The mention of intergovernmental agreements also supports this relevance. Other sectors such as 'Judicial System' or 'Healthcare' do not apply, as they do not align with the context of child support enforcement or the automated data processing framework outlined in this text. Similarly, 'Politics and Elections' is unrelated to the scope of the document. Therefore, the highest relevance is assigned to government services, which scores significantly higher than the other sectors.


Keywords (occurrence): automated (5)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The provided text does not contain explicit references to artificial intelligence or related concepts such as algorithms or automation. The discourse focuses on contractual obligations, proposal submission guidelines, and instructions related to procurement processes. While it mentions 'sensitive automated systems,' there is no further elaboration on AI or its implications in the context of the document. Hence, all categories are rated as having very low relevance, since the text does not discuss AI's social impact, data governance, integrity, or robustness.


Sector: None (see reasoning)

The text primarily outlines procurement and proposal processes for government contracts with no clear references or implications regarding AI's role in various sectors like politics, healthcare, or public services. It lacks any mention of AI applications within these areas, therefore receiving the lowest scores across all sectors.


Keywords (occurrence): automated (1)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text discusses regulations concerning single car air brake tests in the railroad industry. Although it mentions 'automated tracking systems,' the entire focus is on specific procedural and administrative practices, with no direct mention of AI technologies such as machine learning or algorithms, nor of any AI implications. Therefore, the text does not significantly fit within the parameters of Social Impact, Data Governance, System Integrity, or Robustness, as it does not address broader societal implications, data management practices, integrity measures for AI systems, or AI performance benchmarks. It is primarily concerned with safety and procedural compliance rather than the overarching implications of AI.


Sector: None (see reasoning)

The text primarily addresses procedures for managing air brakes in the railroad industry. While it touches upon 'automated tracking systems', it does not specifically discuss AI's role in politics, government services, the judicial system, healthcare, labor and employment, or any academic context. The mention of automated systems could tangentially relate to government agencies and public services in terms of improving regulation tracking, but it lacks a focus on AI applications within those arenas. Thus, none of the sectors directly apply; the mention of tracking systems gives very minimal relevance to Government Agencies and Public Services, but not enough to warrant a higher score.


Keywords (occurrence): automated (4)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily deals with the procedures and guidelines for states to repay federal funds in the context of Medicaid. It does not refer to or involve any AI-related terms or concepts, nor does it address the social impact of AI, data governance concerning AI systems, system integrity in managing AI solutions, or the robustness of AI performance metrics. Therefore, its relevance to the specified categories of AI-related legislation is nonexistent. Additionally, the content centers on administrative and financial processes without consideration of technological impacts, including AI. As such, all categories receive a relevance score of 1, indicating no relevance.


Sector: None (see reasoning)

The text is predominantly focused on Medicaid repayment procedures, without discussing any specific sectors that utilize or regulate AI technologies. There are no indications that the content intersects with political activities, government agencies' AI use, judicial systems, healthcare AI innovations, labor implications from AI, academic developments in AI, international standards for AI, nonprofit use of AI, or any emerging sectors that involve AI applications. Given this focus on administrative and financial aspects of Medicaid, all sector relevance scores are also a 1, indicating no relevance.


Keywords (occurrence): automated (3)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily pertains to the practical use of a third-party telephone service by dispatching and maintaining railroads, focusing on protocols for reporting unsafe conditions at grade crossings. It does not directly address any societal concerns or impacts from AI technologies, nor does it mention specific data governance practices involving AI, the integrity of AI systems, or any benchmarks related to AI performance. The text mainly involves operational procedures rather than any legislative measures related to AI ethics, accountability, transparency, or robustness, leading to scores that reflect minimal relevance to the categories defined.


Sector: None (see reasoning)

The legislation outlined focuses on railroad operations and safety reporting practices rather than on any of the predefined sectors. Specifically, while the text discusses coordination among railroads and the use of telephone services, it does not directly address political uses of AI, governmental use, judicial applications, healthcare, private enterprise impacts, academic regulation, international standards, or the functions of nonprofits related to AI. Thus, all sectors receive a score indicating a lack of direct relevance.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily deals with the restoration priorities for communication services in critical situations, specifically focusing on the management of circuit availability for various levels of priority during national emergencies. It does not mention AI explicitly nor does it address issues related to AI's societal impact, data management within AI systems, the integrity or transparency of AI systems, or the benchmarks for AI performance. Thus, all categories related to AI are not applicable.


Sector: None (see reasoning)

This text pertains to the communication priorities set by the government during emergencies but does not specifically address the use or regulation of AI in any sector. While it refers to government services and industrial participation, it does not discuss AI applications in political processes, public services, healthcare, employment, or any other fields listed. Therefore, it scores low relevance across all sectors.


Keywords (occurrence): automated (1)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance
System Integrity (see reasoning)

The text addresses safety-critical electronic control systems, which may incorporate AI technologies for monitoring and control in trainsets. However, the document largely focuses on hardware and software safety protocols, certifications, and data preservation related to safety systems in trainsets rather than explicitly discussing AI implications. The relevance to social impact, data governance, system integrity, and robustness appears tangential, as the main emphasis is on safety and operational standards for trainsets without a direct link to AI principles or the societal impacts of AI applications. Thus, the text lacks significant relevance to the specified AI categories.


Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified (see reasoning)

The content discusses the use and regulation of safety-critical electronic control systems in trainsets, which could intersect with government operations in public transportation. However, the text does not delve into specific applications or implications of AI technology but rather outlines compliance and procedural requirements for ensuring safety in trainset operations. The primary sectors affected include government agencies involved in transportation safety, though the text lacks specificity about how AI is directly involved in legislative measures related to these topics.


Keywords (occurrence): automated (3)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses emergency notification systems related to safety at highway-rail and pathway grade crossings, focusing on the responsibilities of railroads for maintaining and responding to reports of unsafe conditions. It does mention automated systems and requires that they be capable of responding to reports, but there is no substantial discussion of social impacts from AI, data governance, system integrity, or robustness. The automated answering systems pertain to communication efficiency rather than AI-specific applications. Therefore, relevance scores for this text in the context of the AI categories are low.


Sector: None (see reasoning)

The legislation primarily focuses on the operational responsibilities of railroads concerning emergency reporting and the management of safety conditions at crossings. While there is a mention of automated systems, it does not delve into the specific implications for sectors like Politics and Elections, Government Agencies, or others. The focus is too narrow, with respect to the application of AI, to relate directly to any of the predefined sectors. As such, the relevance scores for each sector are minimal.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance (see reasoning)

The text primarily focuses on the technical requirements and responsibilities of manufacturers, suppliers, and installers of multi-line telephone systems regarding E911 service, specifically emphasizing automated dispatchable location and location accuracy. It does not directly address social impacts, data governance, system integrity, or robustness in terms of AI systems or technologies, leading to relatively low relevance for these categories. However, the use of automated systems for delivering dispatchable location information does suggest some connection to data governance around the accuracy of location data.


Sector: None (see reasoning)

The text pertains to regulations affecting telecommunications, but it does not specifically address the use or regulation of AI in any particular sector such as politics, healthcare, or public services. While it does touch upon systems that may utilize automated features, it lacks explicit reference to AI technologies in the sectors defined. Therefore, the relevance to specific sectors is minimal.


Keywords (occurrence): automated (4)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on regulations regarding the verification of orders for telecommunications services. It emphasizes procedural requirements and consumer protections in the context of telecom carriers and does not explicitly or implicitly reference AI technologies or concepts. Thus, none of the Social Impact, Data Governance, System Integrity, or Robustness categories is relevant here. Since the text deals with the specifics of order verification and includes no references to or implications of AI, it scores a 1 across all categories.


Sector: None (see reasoning)

The text outlines rules and procedures regarding telecommunications services but does not delve into the use or regulation of AI within any specific sectors such as politics, government, healthcare, or any other mentioned sectors. Given its focus solely on telecommunications processes, there is no relevance to Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified. Consequently, all sectors receive a score of 1.


Keywords (occurrence): automated (3)