5035 results:


Summary: The bill amends various federal laws to enhance workers' compensation for federal employees, establish ethics databases, improve cybersecurity training, and develop strategies for issues like digital divide and heat health risks.
Collection: Congressional Record
Status date: Dec. 17, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
System Integrity
Data Robustness (see reasoning)

The text mentions the establishment of the National Artificial Intelligence Research Resource and the strengthening of requirements for the Director of the National Institute of Standards and Technology regarding trustworthy artificial intelligence systems. This indicates direct relevance to the social impact of AI through research and development, and to system integrity through the focus on trustworthy systems. Because the legislation emphasizes coordination, testing, and trust in AI systems, it is highly relevant to both the social impact and system integrity categories, but less relevant to data governance and robustness, as those aspects are not specifically addressed in the text.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text primarily addresses various legislative initiatives, with references to AI through the establishment of resources for AI research and the testing of trustworthy AI systems. While it relates indirectly to government public services by enhancing agency operations through AI initiatives and coordination efforts, it lacks specificity with respect to sectors such as healthcare, private enterprises, or the judicial system. The clear references to AI indicate relevance to government agencies, but these sections do not comprehensively address more focused sectors such as healthcare or academic institutions, resulting in moderate to low relevance overall.


Keywords (occurrence): artificial intelligence (1)

Description: Ignite, commended
Summary: The bill commends Ignite for acquiring Reliant Technologies, recognizing their contributions to defense and space industries and enhancing capabilities in AI and machine learning solutions.
Collection: Legislation
Status date: April 15, 2025
Status: Passed
Primary sponsor: James Lomax (sole sponsor)
Last action: Joint Rule 11 (April 15, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text revolves around a commendation of Ignite's acquisition of Reliant Technologies, particularly highlighting their work in artificial intelligence and machine learning (AI/ML). The discussion of 'cutting-edge solutions in AI and machine learning,' 'AI-driven tools,' and specific applications in areas such as defense underscores the focus on technological contributions and their potential social implications, notably in support of the Warfighter and government partners. The Social Impact category is therefore relevant. The Data Governance category has some relevance given the mention of secure and compliant AI-driven tools, which suggests attention to data integrity. However, the emphasis on operational capabilities makes the System Integrity and Robustness categories less applicable, as those aspects are not central to the text. Overall, the Social Impact category scores highest because the text bears directly on the societal implications of the AI and machine learning advancements it describes.


Sector:
Government Agencies and Public Services (see reasoning)

The text primarily addresses the acquisition of a company (Reliant Technologies) by Ignite that specializes in AI and related technologies with applications in defense and government services. The emphasis on AI/ML solutions in contexts related to defense and space operations suggests strong relevance to Government Agencies and Public Services, as these technologies are ultimately aimed at enhancing government capacity. The Healthcare, Judicial System, Academic and Research Institutions, Nonprofits and NGOs, and other sectors are not substantially mentioned or implied in the text. Thus, the Government Agencies and Public Services sector receives the highest relevance score here.


Keywords (occurrence): artificial intelligence (1) machine learning (1)

Summary: The "Chance to Compete Act of 2024" promotes merit-based reforms in federal hiring by replacing degree requirements with skills and competency assessments, enhancing job applicant evaluations.
Collection: Congressional Record
Status date: Dec. 16, 2024
Status: Issued
Source: Congress

Category: None (see reasoning)

The CHANCE TO COMPETE ACT OF 2024 focuses primarily on modernizing the federal civil service hiring process through merit-based reforms that emphasize skills and competencies over traditional degree requirements. Although the text does mention 'calls for technical assessments', it does not specifically address artificial intelligence or any AI-related technologies. Therefore, the relevance to the defined categories is limited. The impact of these reforms on society, data governance, system integrity, and robustness is not detailed within the document, indicating a minimal connection to the AI landscape.


Sector: None (see reasoning)

The bill is primarily about federal hiring practices and does not focus on specific sectors such as politics, government agencies, judicial systems, healthcare, private enterprises, academic institutions, international cooperation, or NGOs. There is a generic mention of examining agencies and job classifications but lacks substantive discussion on AI implications within these sectors. Thus, the relevance is very weak across all sectors.


Keywords (occurrence): automated (1)

Summary: The bill honors Grayford F. Payne for his 38 years of federal service, recognizing his contributions and leadership at the U.S. Department of the Interior and other agencies before his retirement.
Collection: Congressional Record
Status date: Dec. 16, 2024
Status: Issued
Source: Congress

Category: None (see reasoning)

The text is a recognition of Grayford F. Payne on his retirement and does not contain any direct references to AI concepts or terminology associated with the categories. Therefore, it has no relevance to the categories of Social Impact, Data Governance, System Integrity, or Robustness.


Sector: None (see reasoning)

The text is a personal commendation for an individual’s career in federal service without any references to AI applications in sectors such as Politics and Elections, Government Agencies, Healthcare, or any others listed. As such, it holds no relevance to the sectors defined.


Keywords (occurrence): automated (1)

Description: A bill to prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes.
Summary: The "Protect Elections from Deceptive AI Act" aims to prohibit the distribution of deceptive AI-generated media concerning Federal election candidates to prevent misinformation and protect electoral integrity.
Collection: Legislation
Status date: March 31, 2025
Status: Introduced
Primary sponsor: Amy Klobuchar (5 total sponsors)
Last action: Read twice and referred to the Committee on Rules and Administration. (March 31, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the prohibition of materially deceptive AI-generated audio and visual media, particularly in the context of Federal elections. It focuses on AI technologies such as machine learning, deep learning, and the transformation of media that can mislead voters. This falls into the Social Impact category due to its implications for misinformation and public trust. The Data Governance category is also relevant as it touches on issues of data integrity and the responsible use of AI in media that influences electoral processes. The System Integrity category is pertinent as it emphasizes the need for transparency regarding AI-generated content in a political context. However, it does not explicitly discuss the benchmarks or regulatory compliance that would relate to the Robustness category, making that score lower.


Sector:
Politics and Elections (see reasoning)

The bill is highly relevant to the Politics and Elections sector since it specifically addresses the use of AI in political campaigns, targeting deceptive media that could influence electoral outcomes. While there is a mention of the implications for public misinformation, the bill does not directly apply to other sectors like Government Agencies or Healthcare, which are less relevant in this context. The focus remains on electoral integrity and public trust relating to AI in a political framework.


Keywords (occurrence): artificial intelligence (1) machine learning (1) deep learning (1)

Description: An Act providing for parental consent for virtual mental health services provided by a school entity.
Summary: The bill mandates parental consent for virtual mental health services provided by school entities in Pennsylvania, ensuring that students under 18 receive such services only with guardian approval.
Collection: Legislation
Status date: June 27, 2024
Status: Engrossed
Primary sponsor: Wayne Langerholc (12 total sponsors)
Last action: Referred to EDUCATION (June 27, 2024)

Category:
Societal Impact
Data Robustness (see reasoning)

This text discusses parental consent for virtual mental health services provided by schools, specifically mentioning AI's role in delivering behavioral health support. This connection emphasizes the impact of AI on society, particularly concerning minors and mental health, making 'Social Impact' a highly relevant category. The text does not elaborate on data governance or the integrity of AI systems, so 'Data Governance' and 'System Integrity' receive lower relevance. While there is no explicit mention of performance benchmarks or auditing, the mention of AI's supportive role links it indirectly to 'Robustness.'


Sector:
Healthcare
Academic and Research Institutions (see reasoning)

The legislation addresses the application of AI in a school setting, focusing on mental health services for minors. While it does not explicitly reference the political processes or direct implications for the judicial system, its relevance to 'Healthcare' is notable given that these services involve mental health support. There is limited discussion of AI use in 'Government Agencies and Public Services' since the focus is primarily within school entities. The presence of AI applications in mental health solidifies relevance in 'Healthcare' and links slightly to the broader application of AI systems in education, thus touching on 'Academic and Research Institutions.' However, the intersection with sectors like 'International Cooperation' or 'Nonprofits' is less compelling.


Keywords (occurrence): artificial intelligence (1) automated (1)

Description: To enhance bilateral defense cooperation between the United States and Israel, and for other purposes.
Summary: The United States-Israel Defense Partnership Act of 2025 aims to enhance defense cooperation between the U.S. and Israel through joint initiatives, including technology development, counter unmanned systems, and increased funding.
Collection: Legislation
Status date: Feb. 12, 2025
Status: Introduced
Primary sponsor: Joe Wilson (105 total sponsors)
Last action: Referred to the Committee on Armed Services, and in addition to the Committee on Foreign Affairs, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (Feb. 12, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses the enhancement of bilateral defense cooperation between the United States and Israel, mentioning specific technologies such as artificial intelligence for countering unmanned systems and improving warfare capabilities. This indicates a significant emphasis on emerging technologies, including AI, that could have various social impacts related to security and political collaboration. Additionally, the text outlines plans for cooperative programs that will influence system integrity and robustness through collaboration and development of countermeasures to evolving threats. Overall, the text is highly relevant to AI-related legislation as it addresses both the social dimensions of AI use in defense as well as technical concerns regarding integrity and robustness of systems.


Sector:
Government Agencies and Public Services
Academic and Research Institutions
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)

The provisions of the text relate to government practices and defense, specifically highlighting the collaboration between the United States and Israel in the context of technological advancements that encompass artificial intelligence. This depicts a governmental focus on AI within international defense partnerships and emerging technologies. Though it relates to defense and security sectors prominently, mention of AI in other sectors is minimal, making this primarily relevant to government agencies and public services, as well as international cooperation. The text does not directly address judicial or health sectors, nor does it specifically mention labor or education contexts. Overall, primary relevance lies in the defense and governmental context.


Keywords (occurrence): artificial intelligence (1)

Description: The Children First Act
Summary: The Children First Act prioritizes child welfare in North Carolina by expanding affordable child care access, establishing health and safety protections, and addressing workforce challenges in child care.
Collection: Legislation
Status date: March 25, 2025
Status: Introduced
Primary sponsor: Lindsey Prather (34 total sponsors)
Last action: Ref To Com On Rules, Calendar, and Operations of the House (March 26, 2025)

Category:
Societal Impact (see reasoning)

The text of The Children First Act discusses various measures aimed at enhancing the welfare of children, particularly in the domains of child care, health, safety, and digital protections. While it addresses critical issues such as digital exploitation and algorithmic manipulation by social media platforms, these concerns are contextual to the broader ramifications of children's technological interactions rather than the focus of legislation on AI itself. The bill therefore touches only slightly on the broader social impact of AI, through concerns about children's safety in digital environments and how algorithmic strategies affect their well-being, and it earns a higher relevance score in the Social Impact category than in the others, since data governance, system integrity, and robustness are not explicitly mentioned in the legislative text. The significance of the AI-related concerns lies in algorithmic exploitation, which raises slight concerns about data governance and system integrity regarding children's vulnerabilities without fully aligning with the more defined standards of those categories.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation focuses primarily on child welfare and safety, addressing access to child care and protections against digital exploitation. The references to how algorithms engage with children, and the implications thereof, suggest relevance to the Government Agencies and Public Services sector, since the bill involves state responses and initiatives for safeguarding minors. However, there are limited connections to specific guidelines or legislative aspects concerning political campaigns, healthcare, the judiciary, private enterprise, research, or international standards. The legislation is highly relevant to children and has implications for public policy discussions, but it does not regulate by sector beyond the scope of public health initiatives. Therefore, while significant, its associations with the sectors are moderate, particularly in the governmental context.


Keywords (occurrence): artificial intelligence (2) algorithm (2)

Description: Establishes the artificial intelligence training data transparency act requiring developers of generative artificial intelligence models or services to post on the developer's website information regarding the data used by the developer to train the generative artificial intelligence model or service, including a high-level summary of the datasets used in the development of such system or service.
Summary: The Artificial Intelligence Training Data Transparency Act mandates developers of generative AI models to disclose data sources and usage on their websites, enhancing transparency and accountability in AI development.
Collection: Legislation
Status date: March 27, 2025
Status: Introduced
Primary sponsor: Andrew Gounardes (sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (March 27, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly addresses transparency requirements for data used in training generative AI models, making it highly relevant to the Data Governance category. The legislation focuses on the secure and accurate management of data within AI systems, as it outlines the necessity for developers to disclose various aspects of the datasets used in AI training, including ownership, types, and whether personal information is included. The Social Impact category also scores highly due to the act's implications for consumer protection and accountability in the use of AI, especially since the disclosure of data sources can help mitigate biases and unfair practices in AI outputs. The System Integrity and Robustness categories are less relevant; while the act promotes some transparency in data handling, which bears on integrity, it does not directly address secure coding practices or benchmarks for AI performance.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

This legislation pertains primarily to government oversight over AI technologies and their applications, making it relevant to the Government Agencies and Public Services sector as it mandates accountability from developers concerning AI data usage. However, it intersects with the Academic and Research Institutions sector as well, as it may affect how AI research entities manage their datasets. Other sectors like Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor and Employment, International Cooperation and Standards, Nonprofits and NGOs are less relevant because the act does not specifically address issues like political campaigning, judicial assessments, healthcare applications, or international regulations directly.


Keywords (occurrence): artificial intelligence (20) automated (1)

Description: To establish the Task Force on Artificial Intelligence in the Financial Services Sector to report to Congress on issues related to artificial intelligence in the financial services sector, and for other purposes.
Summary: The Preventing Deep Fake Scams Act establishes a Task Force on Artificial Intelligence in the Financial Services Sector to address AI-related issues and enhance consumer protections against fraud and identity theft.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Brittany Pettersen (7 total sponsors)
Last action: Referred to the House Committee on Financial Services. (Feb. 27, 2025)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text of the Preventing Deep Fake Scams Act explicitly discusses artificial intelligence, particularly in the context of its implications and applications in the financial services sector. The mention of deep fakes specifically relates to potential scams and fraud, which directly connects to social impact as it addresses consumer protection and the psychological effect of scams on individuals. Additionally, the act emphasizes the importance of establishing standards for AI use in financial services, which aligns with concerns regarding data governance, as the safety and accuracy of customer data is paramount to prevent financial crimes. The establishment of the Task Force also speaks to the necessity of maintaining system integrity by overseeing the implementation of AI solutions in a regulated manner. Finally, there is an indirect link to robustness through the recommendations for ensuring AI systems are performant and compliant within financial sectors, although this is a less direct connection compared to other categories.


Sector:
Government Agencies and Public Services (see reasoning)

The Preventing Deep Fake Scams Act pertains significantly to the financial sector, where AI is being leveraged for various services, including fraud detection and customer service innovations such as voice banking. The legislation is focused on addressing the particular challenges that arise from the use of AI in finance, such as deep fake scams. There is a clear emphasis on legislation affecting consumer protection within the financial services context, showcasing how AI technology integrates into financial operations and raises unique threats that necessitate regulatory oversight. Consequently, this text directly addresses the Government Agencies and Public Services sector, as it outlines the role of a task force made up of government officials from financial oversight entities. While there is some relevance to other sectors like Private Enterprises due to the mention of third-party vendors providing AI services, the emphasis is predominantly on government-led initiatives and consumer protection related to financial services.


Keywords (occurrence): artificial intelligence (11) machine learning (1) deepfake (1)

Description: Requiring each unit of State government to conduct certain inventories and assessments by December 1, 2024, and annually thereafter; prohibiting the Department of Information Technology from making certain information publicly available under certain circumstances; prohibiting a unit of State government from deploying or using a system that employs artificial intelligence under certain circumstances; etc.
Summary: The Artificial Intelligence Governance Act of 2024 mandates Maryland state agencies to conduct annual inventories and impact assessments of AI systems, aiming to ensure ethical and responsible AI use in government functions.
Collection: Legislation
Status date: April 4, 2024
Status: Engrossed
Primary sponsor: Jazz Lewis (24 total sponsors)
Last action: Referred Rules (April 5, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The Artificial Intelligence Governance Act of 2024 is highly relevant to the category of Social Impact as it directly addresses the implications and ethical concerns related to the use of AI systems in state government. It emphasizes responsible and trustworthy AI use, which highlights societal accountability and potential impacts on civil rights and liberties (found in the definitions of high-risk AI). Moreover, it discusses the need for assessments to guard against AI-driven discrimination, a direct concern about the social implications of AI technology. This alignment with principles of fairness and equity underscores its significant relevance to Social Impact. Data Governance is also very relevant, as the Act mandates regular data inventories and impact assessments of AI systems used by state government. It ensures that data necessary for AI operation is collected accurately and responsibly, thus addressing potential pitfalls of data mismanagement and bias in AI. The references to compliance with regulations and to the collection and sharing of data further solidify this category's relevance. System Integrity is relevant as the legislation sets forth requirements for human oversight and the monitoring of AI systems to ensure their safe and effective operation, and it outlines the necessary policies and procedures for the use of AI, addressing the integrity and transparency of those implementations in state government. Robustness is somewhat relevant because of the Act's focus on establishing performance evaluations and audits for AI systems and ensuring compliance with new benchmarks and standards; however, this emphasis is less pronounced than in the other three categories, making Robustness only marginally relevant in the context of this legislation.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation is closely tied to the sector of Government Agencies and Public Services since it explicitly involves requirements for state government units to conduct assessments and inventories regarding their AI systems. The intent is to enhance the operational efficiency of public services through proper governance of AI technologies, ensuring safe deployment in state functions. It is less relevant to sectors like Politics and Elections or Healthcare because there is no direct mention of AI's role in electoral processes or healthcare applications. The emphasis is firmly within public administration and governance contexts, highlighting the relevance of AI regulation in governmental operations.


Keywords (occurrence): artificial intelligence (49) machine learning (1) automated (5)

Description: An Act providing for parental consent for virtual mental health services provided by a school entity.
Summary: The bill mandates parental consent for virtual mental health services provided by school entities in Pennsylvania, ensuring that students under 18 receive appropriate approval before accessing these services.
Collection: Legislation
Status date: April 11, 2025
Status: Introduced
Primary sponsor: Wayne Langerholc (13 total sponsors)
Last action: Referred to EDUCATION (April 11, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text references several aspects of artificial intelligence, particularly in the context of virtual mental health services. Specifically, the act defines artificial intelligence and includes it as a component of behavioral health support. This directly relates to the potential impacts of AI on society as mental health services may be influenced or augmented by AI-driven tools, touching upon concerns such as accountability and the psychological effects on individuals. Given the mention of AI's role in behavioral health, there are clear implications for both social impact (pertaining to mental health and the ethical use of AI) and system integrity (related to the oversight and safety of AI-assisted services). The legislation demonstrates the necessity for oversight and establishes a parental consent framework, indicating relevant governance of AI applications. However, there are no indications of new benchmarking or auditing processes specifically for AI performance, which would affect the robustness category.


Sector:
Healthcare
Academic and Research Institutions (see reasoning)

The text primarily relates to the education sector as it discusses virtual mental health services provided in a school context, necessitating parental consent. However, it also touches on aspects relevant to healthcare due to the mental health support provided, which is interconnected with both education and healthcare sectors. The inclusion of AI in these services could represent intersectional relevance. Yet, the main focus here is on schools and mental health services for students, which does not deeply engage with other sectors like politics, government agencies, or the judiciary.


Keywords (occurrence): artificial intelligence (1) automated (1)

Description: Requiring each unit of State government to conduct certain inventories and a certain assessment on or before certain dates; prohibiting the Department of Information Technology from making certain information publicly available under certain circumstances; requiring the Department, in consultation with a certain subcabinet, to adopt certain policies and procedures concerning the development, procurement, deployment, use, and assessment of systems that employ artificial intelligence by units o...
Summary: The Artificial Intelligence Governance Act of 2024 mandates Maryland state agencies to inventory AI systems, conduct impact assessments, and adopt governing policies for responsible AI use, ensuring accountability and ethical practices in implementation.
Collection: Legislation
Status date: May 9, 2024
Status: Passed
Primary sponsor: Katie Hester (19 total sponsors)
Last action: Approved by the Governor - Chapter 496 (May 9, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The AI Governance Act of 2024 addresses multiple facets of artificial intelligence use within state government. For Social Impact, the act emphasizes the responsible, ethical, and beneficial use of AI technologies, including considerations of algorithmic discrimination and civil rights, making it very relevant as it highlights the societal implications of AI. In terms of Data Governance, the act mandates annual data inventories specifically for data used in AI systems to ensure the secure management of information related to these technologies, and thus scores highly. System Integrity is also highly relevant, as the act requires transparency, oversight, and regulation of AI system implementation and monitoring, relating directly to the security of these systems. Lastly, while Robustness is relevant because of the emphasis on compliance and procedural frameworks for AI systems, the act does not focus explicitly on performance benchmarks, so this category cannot be rated as highly as the other three.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

This legislation pertains primarily to Government Agencies and Public Services, as it outlines the responsibilities of state governmental units regarding the use and assessment of AI technologies. There is less direct relevance to sectors like Politics and Elections, Judicial System, Healthcare, etc., though the implications of AI use in overall governance and public service delivery are significant. Therefore, while it touches on various sectors indirectly, the core focus remains on enhancing governmental processes, making it very relevant to the Government Agencies and Public Services sector. Other sectors scored lower due to a lack of direct focus in the text.


Keywords (occurrence): artificial intelligence (46) machine learning (1) automated (6)

Summary: The bill H.R. 8753 addresses persistent challenges faced by communities with shared ZIP Codes by directing the USPS to assign unique ZIP Codes, improving mail delivery and geographic identity.
Collection: Congressional Record
Status date: Dec. 17, 2024
Status: Issued
Source: Congress

Keywords (occurrence): artificial intelligence (1)

Summary: The "Equal Treatment of Public Servants Act of 2024" aims to amend Social Security rules, eliminating pension offsets, revising benefit calculations for public service workers, and enhancing reporting on noncovered earnings.
Collection: Congressional Record
Status date: Dec. 17, 2024
Status: Issued
Source: Congress

Keywords (occurrence): automated (3)

Description: Prohibiting the use of motor vehicle kill switches; providing exceptions; providing a minimum mandatory sentence for attempted murder of specified justice system personnel; providing correctional probation officers with the same firearms rights as law enforcement officers; prohibiting a person from depriving certain officers of digital recording devices or restraint devices, etc.
Summary: The bill prohibits the use of motor vehicle kill switches, establishes penalties for related offenses, enhances protections for law enforcement personnel, and sets requirements for testing infectious diseases in arrestees.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Criminal Justice (2 total sponsors)
Last action: CS by Criminal Justice read 1st time (April 3, 2025)

Keywords (occurrence): artificial intelligence (4) automated (2)

Summary: The bill enhances U.S. Customs and Border Protection by increasing officer numbers and reporting requirements, along with improving disaster relief definitions and procurement processes for artificial intelligence.
Collection: Congressional Record
Status date: Dec. 16, 2024
Status: Issued
Source: Congress

Keywords (occurrence): artificial intelligence (1)