Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text focuses on procedures for estimating seat belt use rates and adjusting state-submitted data accordingly. It makes no mention of Artificial Intelligence, and its AI-adjacent terminology does not pertain to the operation or oversight of AI systems: the sole use of 'algorithm' refers to a mathematical model for estimating seat belt use, not the deployment or governance of AI. Consequently, the text does not meaningfully connect to any of the categories; the processes it describes are administrative and statistical matters of road safety with no bearing on the broader implications of AI technology.


Sector: None (see reasoning)

The text involves data collection and statistical analysis concerning state-submitted seat belt use rates, but it does not directly address the use or regulation of AI in any of the specified sectors. While it mentions an algorithm, it is a basic calculation and not representative of AI applications or systems in any of the sectors described. Thus, there are no relevant connections to the specified sectors within the text.


Keywords (occurrence): algorithm (3)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses a coronary artery disease risk indicator that utilizes acoustic heart signals and mentions requirements for algorithms, clinical performance testing, and user instructions—all related to the assurance of AI or algorithmic processes. The need for validation and verification of these algorithms points to relevant aspects of robustness and system integrity. However, there is limited focus on the social impact of using AI in healthcare, such as patient privacy or the effects on care outcomes. The text is focused on technical specifications and testing of diagnostic devices rather than wider social implications or governance aspects, thus limiting its relevance to the categories.


Sector:
Healthcare
Academic and Research Institutions (see reasoning)

The text is primarily about medical devices that use algorithms for risk assessment in healthcare applications, such as predicting coronary artery disease. It explicitly addresses the use of AI algorithms, clinical testing, and regulatory compliance within healthcare contexts. This makes it highly relevant to the healthcare sector due to its focus on medical diagnostics, as well as touching upon broader impacts linked with data accuracy and human factors involved. Given this focus, the relevance to healthcare is strong, while its relevance to other sectors is limited.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly discusses AI technologies related to medical imaging, particularly focusing on devices designed to assist radiologists in detecting and analyzing medical images using algorithms and pattern recognition. The references to 'image analysis algorithms' and 'computer-assisted detection' indicate its relevance to the sector of healthcare, particularly in the assessment of AI's performance in critical medical capacities. Given that healthcare applications of AI inevitably intersect with overarching social impacts, data governance, system integrity, and robustness, it is reasonable to assign relevance to those categories. However, the document primarily emphasizes performance standards and classifications in healthcare without venturing deeply into policy implications or societal concerns beyond direct medical contexts. Therefore, while the text implicates important elements of social impact and governance, they are not its primary focus but still hold moderate relevance. The categories of system integrity and robustness gain significance as they relate to the verifiable performance and safety standards critical for medical applications of AI.


Sector:
Healthcare (see reasoning)

The text is centered on medical image analyzers and software intended to assist healthcare professionals. The references to 'computer-assisted detection' and the detailed analysis of algorithms indicate a focus on technologies directly applied in healthcare contexts. Additionally, the text emphasizes performance testing and validation for clinical use, underscoring its essential role in hospital settings. Therefore, this document strongly aligns with the healthcare sector, as it delineates critical standards and regulatory expectations for AI applications in medical imaging, highlighting their operational significance.


Keywords (occurrence): algorithm (5)

Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance
System Integrity (see reasoning)

The text primarily addresses issues related to deposit insurance determination and the operational facets of a covered financial institution. The discussions around automated processes for placing and removing provisional holds can imply a level of algorithmic decision-making, yet there is no substantial focus on AI technologies or their societal impacts. The legislative content does not encompass fairness or bias, consumer protections from AI, transparency and accountability in AI systems, or the psychological impact of automation, indicating a lack of direct relevance to Social Impact. Regarding Data Governance, the language used does touch on automated systems and the requirement for data management, but it doesn't address broader data governance issues like bias or inaccuracies. System Integrity is somewhat relevant due to the need for automated processes and security measures in the algorithms, but it remains indirect. Robustness, which deals with performance benchmarks and compliance auditing, is also marginally applicable but not explicitly detailed. Overall, the legislation does not have a meaningful connection to AI as defined by the stated categories.


Sector: None (see reasoning)

The text pertains to the regulations concerning large banks and FDIC protocols. While it outlines operational processes and requirements for financial institutions, the application of AI or automated systems is not a focal point. The legislation's emphasis on deposit accounts and resolution plans does not directly address sectors such as politics, healthcare, judicial systems, or employment practices. The Government Agencies and Public Services sector could see some relevance due to the nature of federal regulation and public service, but it requires a leap to connect it meaningfully. The text also does not address the unique qualities or legislation surrounding nonprofits, international standards, or emerging sectors. Essentially, the document caters to banking supervision rather than the application or regulation of AI across a broad range of sectors.


Keywords (occurrence): automated (8); algorithm (4)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The provided text primarily focuses on the requirements and specifications for continuous monitoring systems (CPMS) related to flare operations, which doesn't explicitly address the broader implications of AI in society, data governance, system integrity, or robustness. While there are some mentions of algorithms pertaining to data collection and monitoring, they do not extend to broader AI issues such as ethical implications, biases, or performance benchmarks typically seen in AI discussions. Therefore, the relevance to Social Impact, Data Governance, System Integrity, and Robustness categories is minimal.


Sector:
Government Agencies and Public Services (see reasoning)

The text discusses the regulations related to continuous monitoring systems, which could imply a connection to environmental standards in public services but does not specifically address AI applications in politics, the judicial system, healthcare, labor, or any specific sector mentioned. The lack of explicit AI tools or applications limits the relevance to specific sectors. Therefore, the scores reflect very low relevance, mostly towards Government Agencies for regulatory compliance and public service areas.


Keywords (occurrence): algorithm (3)

Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The document primarily discusses testing methods for measuring energy consumption of refrigeration products and does not have any explicit references or implications regarding artificial intelligence (AI) or related technologies. Therefore, it does not fit well within the categories focusing on the social impact, data governance, system integrity, or robustness of AI systems.


Sector: None (see reasoning)

The text does not address AI applications in any specific sectors like politics, government, healthcare, or private enterprises. It solely pertains to energy consumption measurement for refrigeration products, which is outside the scope of AI-related sectors.


Keywords (occurrence): algorithm (6)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text primarily discusses software related to photoplethysmography for over-the-counter use, mentioning performance testing, cyber security management, clinical data requirements, and various assurances around accuracy and functionality. Its relevance to 'Social Impact' is moderate because it involves healthcare implications and user interaction with diagnostic technologies, yet it lacks a strong focus on broader societal concerns such as discrimination, misinformation, or consumer protection. In terms of 'Data Governance,' there is a connection to the management of data input and output, which is critical for AI systems dealing with healthcare data, thus scored moderately relevant. 'System Integrity' is highly relevant as it mentions the importance of cybersecurity, oversight of software functionality, and user error mitigation, all of which are crucial for maintaining integrity in AI systems. Lastly, 'Robustness' applies as the text involves performance testing to ensure the reliability of the algorithm in different conditions, indicating a push towards establishing benchmarks for system performance. Hence, while the text does not explicitly state new standards, it implies necessary assessments for robustness. Overall, 'System Integrity' and 'Robustness' emerge as highly relevant categories, while 'Social Impact' and 'Data Governance' are moderately relevant.


Sector:
Healthcare (see reasoning)

The text is most relevant to the sector of 'Healthcare.' It directly refers to the use of a photoplethysmograph device, which analyzes physiological data for health-related purposes. The focus on performance testing, usability, and safety assessment aligns well with healthcare regulations. Although 'Government Agencies and Public Services' may be relevant regarding FDA involvement in regulation, it is less direct compared to the explicit healthcare application. Other sectors such as 'Judicial System,' 'Private Enterprises,' 'Academic,' etc., do not receive significant mention in the text, affirming that it should primarily be categorized under healthcare. Therefore, it aligns closely with that sector, while other sectors receive low scores due to a lack of direct relevance.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
System Integrity
Data Robustness (see reasoning)

The text primarily discusses the FDA regulation of various oral medical devices, with significant focus on the 'auto-titration device for oral appliances.' This device involves algorithm performance and closed loop algorithm validation, making it relevant to the categories of 'System Integrity' and 'Robustness.' However, it lacks explicit references to social impact, data governance considerations, or any broader discussions of impacts on society that would fit the criteria for 'Social Impact' and 'Data Governance.' Therefore, the most relevant categories are 'System Integrity' and 'Robustness', while the others are not significantly related.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text does not specifically address political, judicial, or non-profit applications of AI and regulatory frameworks relevant to those sectors. However, since the auto-titration device for oral appliances is directly related to healthcare applications of AI, it receives a high score in the 'Healthcare' sector. The devices are part of FDA regulations, fitting under Government Agencies and Public Services but not as strongly as 'Healthcare.' Overall, the two most applicable sectors are 'Healthcare' and 'Government Agencies and Public Services,' while others do not apply.


Keywords (occurrence): algorithm (1)

Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily consists of various model disclosure clauses and forms related to electronic fund transfers, liability issues, and error resolution procedures. There are no direct references or implications about Artificial Intelligence (AI) or related technologies like algorithms, machine learning, or automated decision-making systems. Thus, the categories related to AI impact do not seem pertinent, as they address ongoing discussions about societal implications of AI, data governance within AI systems, system integrity, and robustness of AI algorithms. Overall, while electronic systems might use AI behind the scenes, this text does not engage with AI-related issues explicitly.


Sector: None (see reasoning)

The text focuses on standard procedures and disclosure clauses for financial transactions within federal regulations. It does not pertain to AI applications in political campaigns, government services, judicial systems, healthcare, or business operations. Instead, it discusses electronic fund transfers and associated processes, which are more aligned with financial regulation and consumer protection than with any specific sector dealing with AI's application. Therefore, none of the sectors' definitions directly relate to the contents of the text.


Keywords (occurrence): automated (4)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily discusses a software application for contraception that utilizes an algorithm to analyze patient-specific data, making it highly relevant to several categories of AI legislation. The algorithmic nature of the software directly connects it to issues around system integrity, data governance due to the handling of personal fertility data, and social impact because of its implications for reproductive health and personal privacy. However, the text does not directly address the creation of benchmarks or performance standards typically associated with the robustness category. Therefore, scores vary accordingly.


Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The primary focus of the text is on a software application for contraception, which is inherently linked to healthcare through its use in monitoring fertility and preventing pregnancy. It does not directly address issues specific to politics, government use, the judicial system, or nonprofit sectors. While it has implications for private enterprises, the strong connection to healthcare takes precedence. Thus, the highest relevance is assigned to the healthcare sector, with secondary relevance to private enterprises, labor, and employment.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses on-board diagnostics (OBD) for engines over 14,000 pounds GVWR, which is related to monitoring engine performance and emissions rather than direct applications or implications of AI. Although the content involves monitoring systems similar to how AI could monitor or analyze data, it does not specifically engage with AI technologies or discuss their societal impacts, data governance, system integrity needs, or robustness standards. As the content lacks a direct relevance to the nuances of AI technology and regulations that govern these aspects, the scores for all categories will reflect this lack of relevance.


Sector: None (see reasoning)

The text concerns regulations regarding on-board diagnostics in heavy-duty engines rather than legislation about AI applications across different sectors. The categories provided focus on the role and regulation of AI across various sectors like politics, healthcare, and employment. There is no indication in the text that AI technologies are being used, evaluated, or legislated within the context of these regulations, thus the relevance scores will also be minimal.


Keywords (occurrence): algorithm (3)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses regulations for devices that use software algorithms for the optical measurement of heart and respiratory rates, highlighting the need for validation, verification, and cybersecurity measures. The references to software algorithms for medical measurements indicate relevance to both the Social Impact and Data Governance categories: the potential implications for patient safety and user error lend weight to Social Impact, while the necessity of data management (accuracy and bias) relates to Data Governance.


Sector:
Healthcare (see reasoning)

The text is primarily focused on healthcare regulations given that the devices discussed relate to medical measurements such as pulse and heart rates. The controls and software requirements, including performance testing and human factors assessments, show a direct application in healthcare settings. Although there are no explicit mentions of other sectors, the specific focus on the medical context of these devices solidifies their classification under Healthcare.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on the definitions and acronyms relating to classified information security within the Department of Defense and does not directly address AI-related issues. Therefore, it is not relevant to Social Impact, Data Governance, System Integrity, or Robustness categories. These categories are fundamentally concerned with aspects of AI's societal implications, data handling regulations, transparency in AI systems, and performance benchmarks, none of which are explicitly present in the text. Hence, all categories will score a 1 for 'Not relevant.'


Sector: None (see reasoning)

The text primarily deals with frameworks for safeguarding classified information and definitions related to various government and contractor roles. It does not specifically address how AI intersects with the sectors defined, including Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified. As such, it receives a score of 1 for 'Not relevant' across all sectors.


Keywords (occurrence): automated (1)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance
System Integrity (see reasoning)

The text primarily focuses on metadata requirements for recordkeeping rather than discussing AI systems directly. However, it is relevant to the 'Data Governance' category as it specifies requirements for the collection and management of data, which could involve AI systems that process or digitize records. 'Robustness' could be considered somewhat relevant as well, since ensuring accuracy and consistency in metadata may align with benchmarks for performance, especially as these could relate to any AI systems involved. The sections related to human oversight and validation processes hint at an aspect of 'System Integrity', but this is more about procedural adherence than systemic integrity of AI. There is no clear relevance to 'Social Impact'.


Sector:
Government Agencies and Public Services (see reasoning)

The text covers requirements that relate mainly to data management and digitization processes which could potentially involve AI technologies in a government context. However, there is little direct reference to applications in specific sectors such as healthcare, judiciary, etc. The mention of records management aligns somewhat with 'Government Agencies and Public Services', but it does not directly delve into AI applications in these sectors. Thus, the relevance is moderately acknowledged.


Keywords (occurrence): algorithm (3)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text does not discuss any issues related to social impact, data governance, system integrity, or robustness of AI systems, nor does it employ terms associated with AI technologies like algorithms, machine learning, or automated decision-making. It primarily focuses on the regulations concerning dual trading of security futures products and related market behavior, which are unrelated to the impact of artificial intelligence or the governance of AI technologies.


Sector: None (see reasoning)

The text does not specifically mention or address AI applications within politics, government functions, legal proceedings, healthcare, business environments, academic contexts, international standards, or NGO involvement. It focuses solely on trading regulations and does not revolve around AI technology or its impacts on any of these sectors.


Keywords (occurrence): algorithm (3)

Collection: Code of Federal Regulations
Status date: Oct. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The provided text primarily deals with the regulation and management of Electronic Logging Devices (ELDs) used by motor carriers to track drivers' duty status. There is no mention of artificial intelligence, algorithms, machine learning, or other AI-specific terminology that would categorize this legislation in relation to the social impact of AI, data governance pertaining to AI, system integrity relating to AI, or the robustness of AI systems. Therefore, the relevance of all the categories is minimal.


Sector: None (see reasoning)

The text focuses on the regulatory framework for electronic logging and data management within the transportation sector, particularly concerning drivers and motor carriers. It does not address AI applications, nor does it specify the use of AI in enhancing governmental operations or the involvement of AI in political contexts. Thus, it is not relevant to any of the specified sectors.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The provided text focuses on technical specifications and procedures related to measurement instruments used in emission testing, emphasizing engineering standards and practices. There is no specific mention of Artificial Intelligence or related technologies in the text. Therefore, it's unlikely to impact or relate to the categories of Social Impact, Data Governance, System Integrity, or Robustness as these categories deal with the broader implications and regulatory concerns about AI. As none of the defined AI-related terms are present, I will assign low relevance scores to all categories.


Sector: None (see reasoning)

The text does not discuss applications related to any particular sector, including politics, healthcare, or public services. It is strictly focused on procedures and standards for measurement instruments in the context of emissions, without intersecting with the defined sectors such as Government Agencies, Healthcare, or Private Enterprises. Thus, it is devoid of direct relevance to the sectors listed.


Keywords (occurrence): algorithm (1)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Societal Impact
Data Governance
Data Robustness (see reasoning)

The text addresses a specific retinal diagnostic software device that incorporates an adaptive algorithm for evaluating ophthalmic images. This directly relates to the category of Robustness due to the emphasis on algorithm development and performance benchmarks. It also fits into Data Governance because it discusses software verification, validation documentation, and the management of clinical performance data, ensuring the data used in the software is accurate and reliable. Social Impact is relevant to some extent, as the text touches on the application of technology to improve health outcomes, indirectly addressing the societal benefit of advancing healthcare through AI-driven diagnostics. System Integrity is less relevant since the focus is more on software performance rather than on security or transparency. Overall, the text primarily emphasizes the technical specifications and requirements for the AI-driven software device in a healthcare context, which aligns it more closely with Robustness and Data Governance.


Sector:
Healthcare (see reasoning)

This legislative text directly pertains to the healthcare sector, as it describes a retinal diagnostic software device intended for use in medical diagnostics. It addresses clinical performance metrics, user training, and the necessary evaluations for the software's effectiveness and safety, all of which are central to healthcare. While the implications of AI in business and regulations may hold some relevance, the primary focus on a healthcare application firmly categorizes this text under Healthcare. Other sectors are less applicable as they involve different contexts unrelated to medical diagnostics or AI applications in healthcare settings.


Keywords (occurrence): algorithm (2)

Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text provided does not pertain to AI technology or its applications in any direct manner. It focuses on the structure of data files related to the FDIC's operations, specifically about account identifiers, holds on funds, and data transmission aspects. There are no references to AI-driven technologies, algorithms, or data management practices that would link the text to any of the predefined categories concerning Social Impact, Data Governance, System Integrity, or Robustness. Hence, all categories are scored as not relevant.


Sector: None (see reasoning)

The text does not address any specific sectors related to politics, government services, healthcare, private enterprises, academic research, international cooperation, non-profits, or emerging sectors in relation to AI. Instead, it deals with banking data structures and processes that are largely unrelated to these sectors, so all sector scores reflect a complete lack of relevance.


Keywords (occurrence): automated (4); algorithm (3)

Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text pertains to donor testing regulations under the FDA, specifically procedures for testing human specimens for communicable diseases. It does not mention AI, algorithms, or any related technologies. As such, the categories Social Impact, Data Governance, System Integrity, and Robustness are not applicable, since AI's influence on society, data management, system security, and performance benchmarks is not at issue. The text may inform medical practices, but it involves no considerations of AI systems or their governance. Therefore, all categories are scored 1: not relevant.


Sector: None (see reasoning)

The text outlines regulations for donor testing related to communicable diseases within healthcare. While this directly impacts healthcare practices, there is no mention of AI technologies influencing these regulations or their enforcement in any sector related to politics, government, or judicial systems. The text does not discuss AI implications for healthcare or any impact from AI technologies, leading to a score of 1 across the relevant sectors.


Keywords (occurrence): algorithm (2)