4161 results:
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The text primarily outlines reporting and recordkeeping requirements for monitoring systems related to environmental compliance. There is no explicit mention of AI technologies or of their role in the systems or processes described. While automated systems and monitoring may suggest a context where AI could be relevant, the absence of direct references to AI technologies diminishes the relevance to the specified categories. The focus is on compliance and data management without addressing broader societal implications, data governance, system integrity, or robustness in relation to AI.
Sector: None (see reasoning)
The text does not directly reference AI applications in the specified sectors, such as political systems, government operations, or healthcare. It does mention monitoring systems used to ensure compliance in a regulatory context, which could tangentially relate to compliance in government agencies, but the connection is weak. The absence of AI-related content in the context of these sectors is evident.
Keywords (occurrence): automated (3)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The text provided primarily discusses regulations related to hazardous waste, environmental management, and procedures for incorporation by reference within legal frameworks. It does not pertain to AI technologies, their impact on society, data governance principles, system integrity, or benchmarks for AI performance. Therefore, there are no relevant AI-related portions in the text impacting the categories of Social Impact, Data Governance, System Integrity, or Robustness.
Sector: None (see reasoning)
The text is focused on environmental regulations and the procedural aspects of incorporating relevant materials into legal standards, with no mention or implication of political activities, government operations, legal systems, healthcare, or other specified sectors relevant to AI use. As such, none of the sectors related to AI utilization are applicable here.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The provided text from the Environmental Protection Agency primarily focuses on regulations regarding air quality and emissions, particularly in relation to stationary sources in Arizona. It does not mention topics directly related to Artificial Intelligence (AI), such as automated decision-making or algorithms, which are crucial for the relevance of the categories outlined. Given the absence of references to AI technologies or implications regarding their application in environmental regulations, none of the categories related to social impact, data governance, system integrity, or robustness can be substantiated as relevant to this legislation.
Sector: None (see reasoning)
Similarly, the text does not reference AI applications or regulations in any of the sectors. It focuses purely on air quality regulations without engaging with the complexities of AI in sectors such as healthcare, government services, or any other mentioned. The lack of direct relevance to AI ensures that all sector categorizations must receive a low relevance score.
Keywords (occurrence): automated (4)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
This text primarily discusses recordkeeping and reporting requirements related to regulated substances and does not mention artificial intelligence or any of its associated terms directly. As a result, there is a minimal to non-existent connection to the AI-related categories. The absence of references to AI concepts like automated decision-making, algorithms, or any form of AI processing indicates that none of the categories should be linked to the content of this document.
Sector: None (see reasoning)
The text outlines administrative and reporting obligations related to environmental regulations concerning specific substances and does not reference the use of AI or its applications in any of the specified sectors. There is no discussion about how AI might influence politics, government operations, healthcare, judicial matters, or the private sector. Therefore, all sectors score a 1, indicating no relevance.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The text primarily discusses regulations and operating limits regarding environmental protection in the context of emissions capture systems and control devices. The text does not explicitly mention Artificial Intelligence or any related technologies. It focuses on procedural and efficiency mandates for reducing volatile organic compounds in spray booths, which does not have a direct impact on the categories of Social Impact, Data Governance, System Integrity, or Robustness as defined. As such, the relevance of the text to these categories is low.
Sector: None (see reasoning)
The text pertains to environmental regulations and operational standards for emission controls within manufacturing processes rather than areas directly related to the specified sectors like Politics and Elections, Government Agencies and Public Services, or Healthcare. There are no references to AI applications or regulations in the context of government services, judicial systems, or healthcare settings, nor any implications for nonprofit organizations or international standards in AI. Therefore, the relevance across all sectors is minimal.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The text does not address AI concepts directly, nor does it involve the implications of AI on social issues, data handling in AI systems, AI system integrity, or performance benchmarks for AI. It primarily deals with government contractors, fraud, and legal frameworks related to contract management, without any mention of AI technologies or their implications, which makes it not relevant to any of the categories.
Sector: None (see reasoning)
The document is a glossary that outlines various regulations and definitions relevant to government contracting, fraud, and legal processes. It does not address specific sectors like politics, healthcare, or commercial enterprises directly, nor does it mention the application of AI in these areas, which renders it not relevant to the provided sectors.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register
The text primarily discusses regulations and compliance standards for gaming operations, especially concerning internal controls and definitions relevant to Class II gaming. It does not explicitly mention or pertain to any aspects of AI technology. The terms related to AI such as algorithms, automated systems, or intelligent decision-making do not appear, nor does the context suggest relevance to the use of AI in gaming operations or surveillance technology within this regulatory framework. Therefore, it seems to be unrelated to the categories concerning Social Impact, Data Governance, System Integrity, or Robustness.
Sector: None (see reasoning)
The text outlines regulations for Class II gaming on Indian lands, focusing on operational standards and definitions of various gaming-related terms. Its primary context revolves around the gaming industry, compliance measures, and accounting standards, which do not directly involve the nine sectors defined. No references to the political implications, public services, healthcare applications, or use of AI in judicial systems are evident. Hence it does not fit into any sector categorically and therefore receives a score of 1 for all sectors.
Keywords (occurrence): automated (2)
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register
The text discusses specific manufacturing drawback rulings from U.S. Customs and Border Protection. However, it does not address any AI-related issues or implications. Consequently, it lacks relevance to the categories concerning social impact, data governance, system integrity, or robustness regarding AI. There are no mentions of algorithms, machine learning, or any keyword related to AI in the provided text. Therefore, all categories score a 1 due to their complete irrelevance.
Sector: None (see reasoning)
The text primarily deals with customs, regulations, and applications for manufacturing drawback rulings. It does not mention or imply any connection to the sectors defined, such as politics and elections, healthcare, or any sectors that could leverage AI use. As such, it receives a score of 1 across all sectors for lack of relevance.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register
The provided text focuses primarily on the procedures, definitions, and requirements related to counterintelligence evaluations and polygraph examinations within the Department of Energy (DOE). While it emphasizes the safeguarding of employee rights and privacy, its notable mentions concern secure access to classified information rather than any explicit connection to AI. The lack of references or implications involving AI technology means that the relevance to the defined categories is minimal to non-existent. Thus, the categories will score very low, as the text does not address the impact of AI on society, data governance, the integrity of AI systems, or the robustness of AI benchmarks.
Sector: None (see reasoning)
The text does not specifically address the use or regulation of AI in any sector. While it discusses the DOE, counterintelligence, and polygraph standards, none of these directly involve AI applications or implications. Consequently, all sectors score the lowest due to the absence of AI context in the legislative focus.
Keywords (occurrence): automated (1)
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register
The provided text primarily discusses agency responsibilities related to employee deductions and contributions to the Fund, which do not directly reference artificial intelligence or its implications within these processes. There are no explicit mentions of AI-related technologies or concepts relevant to the categories of Social Impact, Data Governance, System Integrity, or Robustness. Hence, it is clear that this text is not relevant to these categories.
Sector: None (see reasoning)
The text does not address the specific use or regulation of AI in any of the outlined sectors, such as Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified. Therefore, it is not applicable to any sector mentioned.
Keywords (occurrence): automated (1)
Description: A bill to direct the Secretary of Health and Human Services and the Secretary of Education to coordinate and distribute educational materials and resources regarding artificial intelligence and social media platform impact, and for other purposes.
Collection: Legislation
Status date: June 20, 2024
Status: Introduced
Primary sponsor: Edward Markey
(sole sponsor)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (June 20, 2024)
Societal Impact (see reasoning)
The bill directly addresses the coordination and distribution of educational materials and resources regarding artificial intelligence (AI) and its impact on social media. This connection indicates relevance primarily to the Social Impact category, as it highlights the societal effects of AI, particularly concerning the educational aspect of understanding its influence on social media platforms. The absence of specific provisions regarding data governance, system integrity, or robustness in the provided description leads to lower relevance in these areas.
Sector:
Government Agencies and Public Services (see reasoning)
Given the focus on education and coordination between the Secretary of Health and Human Services and the Secretary of Education, the bill is quite relevant to the Government Agencies and Public Services sector. It also addresses technology's impact on society, which is relevant to Public Services because of the educational initiatives discussed. There is limited direct relevance to other sectors, such as Politics and Elections or Healthcare, as these areas are not explicitly mentioned in the description.
Keywords (occurrence):
Description: Regulates the use of deep fakes and artificial intelligence technology in political advertising. (gov sig)
Collection: Legislation
Status date: June 20, 2024
Status: Vetoed
Primary sponsor: Royce Duplessis
(2 total sponsors)
Last action: Vetoed by the Governor. (June 20, 2024)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text clearly addresses the implications and regulations surrounding the use of artificial intelligence and deep fake technology specifically in the context of political advertising. It emphasizes the need for transparency and accountability to maintain the integrity of the electoral process. Because it specifically outlines the requirements for disclosing the use of AI technologies that could manipulate media representations of candidates, it is highly relevant to the 'Social Impact' category, which focuses on AI's societal implications, particularly in terms of misinformation and public trust. It also pertains to 'System Integrity' due to the legislative efforts aimed at ensuring transparency and ethical standards in the electoral process. Moreover, the regulation of AI to prevent fraud and misinformation indicates its importance for 'Robustness'. However, the relevance to 'Data Governance' is minimal, as there are no explicit mentions of data management or collection aspects. Therefore, the scores reflect stronger ties to Social Impact, System Integrity, and Robustness compared to Data Governance.
Sector:
Politics and Elections (see reasoning)
The text primarily addresses the intersection of AI with political processes, specifically the use of deep fakes and AI technologies in political advertising. Hence, it is highly relevant to the 'Politics and Elections' sector. There are no mentions or implications regarding AI in government public services, healthcare, or other sectors, which renders those sectors largely irrelevant. Although technology may have indirect influence on private enterprises, labor, and employment, that is not the focus of this specific legislation, leading to a low relevance score in that sector. Therefore, the most fitting sector is Politics and Elections, with lower relevance assigned to others.
Keywords (occurrence): deepfake (3)
Description: To amend the Controlled Substances Act to require electronic communication service providers and remote computing services to report to the Attorney General certain controlled substances violations.
Collection: Legislation
Status date: July 2, 2024
Status: Introduced
Primary sponsor: Angela Craig
(7 total sponsors)
Last action: Referred to the Committee on Energy and Commerce, and in addition to the Committee on the Judiciary, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (July 2, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text includes specific references to AI-related technology, particularly in the context of how communication service providers can use algorithms and machine learning to identify and report illegal activities related to controlled substances. It addresses the capabilities of these algorithms and their implications for reporting mechanisms and accountability, which tie directly into the categories defined. Therefore, the categories of Social Impact, Data Governance, and System Integrity are relevant.
Sector:
Government Agencies and Public Services (see reasoning)
The text does not explicitly address any specific sectors, as it centers on legal amendments concerning controlled substances and reporting duties for providers. While there are implications for Government Agencies and Public Services in terms of enforcement, it does not directly discuss the regulation or use of AI in sectors like Politics, Healthcare, or others. Therefore, all sector scores will be low, with the most relevant being Government Agencies and Public Services, given the reporting requirements to the Attorney General.
Keywords (occurrence): machine learning (2) algorithm (2)
Description: An act to add Chapter 11.1 (commencing with Section 21760) to Division 8 of the Business and Professions Code, relating to social media platforms.
Collection: Legislation
Status date: May 21, 2024
Status: Engrossed
Primary sponsor: Akilah Weber
(sole sponsor)
Last action: In committee: Held under submission. (Aug. 15, 2024)
Societal Impact
Data Governance (see reasoning)
This act concerns the management of digital content and specifically mentions technologies related to digital content forgery such as deepfakes. It emphasizes accountability and standards for social media platforms in handling provenance data, which relates to the impact of AI and digital methods on public trust and information verification. This makes the Social Impact category very relevant. Additionally, while the act does not directly discuss the governance of data specifically related to biases or inaccuracies within datasets, it does establish guidelines for content integrity and verification, which slightly leans toward Data Governance. It does not address system integrity through mandates like human oversight, nor does it set performance benchmarks, making Robustness and System Integrity less relevant.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation addresses the impact of digital technologies on social media platforms, focusing on the handling of provenance data, which is central to online content authenticity. However, it does not specifically address the political implications of AI in decision-making or electoral processes, nor does it focus on regulation within the healthcare, judicial, or employment sectors. Government Agencies and Public Services is indirectly relevant, since the act touches on how state departments may interact with digital content, and its implications for privacy and content management make it somewhat relevant to public services, though not strongly enough to warrant a higher score. The most relevant sector is therefore Government Agencies and Public Services, with the remaining sectors only very indirectly related and receiving low scores.
Keywords (occurrence):
Description: A bill to require transparency with respect to content and content provenance information, to protect artistic content, and for other purposes.
Collection: Legislation
Status date: July 11, 2024
Status: Introduced
Primary sponsor: Maria Cantwell
(3 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (July 11, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text primarily addresses AI due to its focus on transparency regarding AI-generated content, such as deepfakes and synthetic media. This makes it relevant to the Social Impact category as it discusses the potential repercussions of deepfake technology on artists, journalists, and the public at large, including unfair competition in the digital marketplace. It is also related to Data Governance as it mandates the provision of content provenance information, which ties into secure data management practices in AI systems. Furthermore, the text touches upon aspects like cybersecurity and standards development, which connect to System Integrity and Robustness, respectively. However, the primary emphasis on public trust, transparency, and protecting content creators makes Social Impact the most relevant category.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)
The bill has clear implications for sectors such as Private Enterprises, Labor, and Employment, as it deals with how AI may affect competition and integrity within digital marketplaces that involve artists' and journalists' works. It also relates to Government Agencies and Public Services concerning the expected regulatory measures and standards to be developed under the act. The mention of the Under Secretary consulting with various institutions links it to International Cooperation and Standards. However, the bill does not explicitly target healthcare, judicial processes, or electoral concerns, thus lowering relevance in those sectors. That said, the strongest associations are with Private Enterprises, Labor, and Government Agencies.
Keywords (occurrence): artificial intelligence (11) deepfake (1) algorithm (1)
Description: For legislation relative to consumer health data. Consumer Protection and Professional Licensure.
Collection: Legislation
Status date: Feb. 16, 2023
Status: Introduced
Primary sponsor: Robyn Kennedy
(7 total sponsors)
Last action: Reporting date extended to Tuesday December 31, 2024, pending concurrence (Aug. 5, 2024)
Data Governance (see reasoning)
The text primarily focuses on consumer health data and establishes a framework for its governance. It covers privacy, consent, and security policies surrounding consumer health data, and its definition of Consumer Health Data encompasses the use of algorithms or machine learning. However, the emphasis is on health data management rather than broader AI-specific legislation. Therefore, the relevance to categories primarily concerned with AI is present but not strong, especially regarding data minimization and processing within AI systems. As such, it is most relevant to the Data Governance category, addressing the accuracy, consent, and processing of data that AI systems may utilize. The other categories have limited relevance since they do not directly engage with the socio-ethical, robustness, or systemic integrity aspects of AI legislation.
Sector:
Healthcare (see reasoning)
The legislation is specifically targeting consumer health data, making it closely tied to healthcare practices. Although it may impact various sectors indirectly due to its consumer protections and implications for data management, the strongest connection is to the healthcare sector where AI potentially plays a part in diagnostics and data handling. It does not explicitly address political campaigns, judicial applications, or employment practices, and it lacks broader implications for academic institutions or international standards. Therefore, it scores highest in Healthcare, while other sectors are less relevant.
Keywords (occurrence): machine learning (1)
Description: To provide for Department of Energy and Department of Agriculture joint research and development activities, and for other purposes.
Collection: Legislation
Status date: Dec. 5, 2023
Status: Engrossed
Primary sponsor: Frank Lucas
(11 total sponsors)
Last action: Received in the Senate and Read twice and referred to the Committee on Energy and Natural Resources. (Dec. 5, 2023)
Data Governance
Data Robustness (see reasoning)
The text discusses joint research and development activities between the Department of Energy and the Department of Agriculture, mentioning 'machine learning' and 'artificial intelligence' specifically. The relevance to 'Social Impact' is somewhat limited since the text does not address societal implications directly, focusing instead on technical advancements. For 'Data Governance', while there are mentions of data management and security in the collaborative processes, it does not deeply delve into issues of data protection or governance metrics, hence a lower relevance. 'System Integrity' is not explicitly mentioned, as the text lacks discussions on security, transparency, or oversight of AI technologies. 'Robustness', while relevant due to the mention of 'optimizing algorithms', does not prominently focus on performance benchmarks or auditing processes, so the score is moderate.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text primarily relates to the intersection of the Department of Energy and the Department of Agriculture, focusing on collaborative research efforts. It has some implications for 'Healthcare' in terms of agricultural advancements potentially impacting food production systems. 'Government Agencies and Public Services' is relevant due to the involvement of federal agencies in the research and development processes outlined in the text. 'Private Enterprises, Labor, and Employment' may have indirect relevance owing to potential impacts on agricultural practices and technologies. However, there are no direct references to political/electoral impacts, judicial applications, or specific employment regulations, so scores have been calibrated accordingly.
Keywords (occurrence): artificial intelligence (1)
Description: A bill to provide for Department of Energy and Department of Agriculture joint research and development activities, and for other purposes.
Collection: Legislation
Status date: Nov. 14, 2023
Status: Introduced
Primary sponsor: Ben Lujan
(3 total sponsors)
Last action: Read twice and referred to the Committee on Energy and Natural Resources. (Nov. 14, 2023)
Societal Impact
Data Robustness (see reasoning)
The document primarily focuses on joint research and development activities between the Department of Energy and the Department of Agriculture. The explicit mention of machine learning and artificial intelligence in relation to optimizing algorithms for agricultural and energy purposes indicates considerable relevance to AI's social impact, especially concerning its potential implications for efficiency and sustainability. However, the focus on high-level research activities suggests less direct relevance to data governance, system integrity, or robustness, as these aspects are not prominently detailed in this text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text highlights collaborative research that involves machine learning and AI, particularly relating to agricultural practices and energy efficiency, which may affect multiple sectors. However, the focus on joint activities between the Department of Energy and the Department of Agriculture is primarily sector-specific to government agencies and public services, with some aspects relevant to agriculture. The advanced technologies and methodologies discussed could have implications for private enterprises in agriculture but do not directly address the broader impacts on sectors such as healthcare, politics, or education.
Keywords (occurrence): artificial intelligence (1)
Description: A bill to support National Science Foundation education and professional development relating to artificial intelligence.
Collection: Legislation
Status date: May 22, 2024
Status: Introduced
Primary sponsor: Maria Cantwell
(2 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 486. (Aug. 1, 2024)
Societal Impact (see reasoning)
The NSF AI Education Act of 2024 explicitly supports education and professional development in the field of artificial intelligence. Key aspects include the provision of scholarships for students studying AI and its applications across various sectors. This focus on educational initiatives aligns strongly with the Social Impact category because of its potential influence on societal education about AI, workforce training, and the enhancement of skills in emerging technologies. The Act does not directly cover data governance, system integrity, or robustness, as it primarily concerns educational policy rather than technical governance or compliance. Thus, the relevance to the categories varies significantly, with Social Impact being the most pertinent.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The legislation explicitly details programs and scholarships targeting artificial intelligence education within higher education institutions. Since it focuses on educating future AI professionals and fostering collaborations between academia and industry, it is primarily relevant to the Academic and Research Institutions sector. Its implications for vocational training and workforce skills also give it relevance to the Private Enterprises, Labor, and Employment sector, especially concerning employment prospects. The document does not explicitly address the use of AI in other sectors like Healthcare or Government Agencies, indicating a lower relevance for those categories. Thus, Academic and Research Institutions is the strongest sector fit based on the text's content.
Keywords (occurrence): artificial intelligence (61) deep learning (1)
Description: Relative to Women in Animation.
Collection: Legislation
Status date: Aug. 5, 2024
Status: Introduced
Primary sponsor: Greg Wallis
(sole sponsor)
Last action: From printer. (Aug. 6, 2024)
Societal Impact (see reasoning)
The text prominently discusses the impact of AI technology on the animation industry, indicating challenges and opportunities due to its emergence. This intersection is crucial for understanding the social implications of AI as it relates to employment and creativity in animation, thereby making it highly relevant to the Social Impact category. It also touches on concerns regarding representation and equity within the animation field but does not directly address data governance, system integrity, or robustness, as it does not delve into data management, security of AI systems, or AI performance benchmarks.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text is primarily focused on the animation industry and its workforce, specifically highlighting the contributions of Women in Animation in closing the gender gap and the effects of AI. While it mentions broader impacts of AI on employment within animation, it doesn't explicitly reference political processes, government services, judiciary considerations, healthcare applications, or employment specifically in a corporate context. Therefore, its relevance is concentrated on the entertainment sector and does not align with the other defined sectors.
Keywords (occurrence): artificial intelligence (1)