

Description: Relating to electronic health record requirements.
Summary: This bill establishes regulations for electronic health records in Texas, ensuring they are stored within the U.S., outlining required medical history content, and emphasizing biological sex documentation while restricting certain personal information.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Lois Kolkhorst (sole sponsor)
Last action: Left pending in committee (March 18, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily deals with the requirements for electronic health records (EHR), with a notable focus on how artificial intelligence (AI) is used in these records. Specifically, section 183.005 outlines responsibilities for health care practitioners using AI for diagnostic recommendations, mandating that they verify AI-provided information before it is recorded in the EHR, which addresses accountability and accuracy. This relates directly to the Social Impact category regarding consumer protection and potential health disparities stemming from AI use. It also sits within Data Governance, as it incorporates elements of data management for AI applications in healthcare. Moreover, there are references to maintaining the integrity of information and ensuring access controls under certain statutory frameworks, which connect to System Integrity. However, the text does not delve into robustness standards or certification practices related to AI benchmarks, so it aligns less with the Robustness category.
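As a purely illustrative sketch (not drawn from the bill's text, and using hypothetical names such as AIRecommendation and ElectronicHealthRecord), the verification duty described above can be pictured as a gate that blocks an AI-generated recommendation from being written to the record until a practitioner has signed off:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AIRecommendation:
    text: str                             # the AI-generated diagnostic suggestion
    model: str                            # identifier of the AI tool that produced it
    verified_by: Optional[str] = None     # practitioner who reviewed the output
    verified_at: Optional[datetime] = None

class ElectronicHealthRecord:
    """Toy EHR store that refuses unverified AI output (hypothetical illustration)."""

    def __init__(self) -> None:
        self.entries: list[AIRecommendation] = []

    def record(self, rec: AIRecommendation) -> None:
        # Gate: an AI recommendation is written to the record only after
        # a human practitioner has verified it.
        if rec.verified_by is None or rec.verified_at is None:
            raise ValueError("AI recommendation must be verified by a practitioner before recording")
        self.entries.append(rec)

# Usage sketch
ehr = ElectronicHealthRecord()
rec = AIRecommendation(text="Possible iron-deficiency anemia", model="example-model")
rec.verified_by, rec.verified_at = "Dr. A. Example", datetime.now()
ehr.record(rec)
```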


Sector:
Healthcare (see reasoning)

The text specifically addresses the use of AI in healthcare settings, emphasizing electronic health records and the responsibilities of healthcare practitioners who use AI. It targets rules and guidelines governing how medical facilities and practitioners handle health records that include AI-generated recommendations, which establishes a clear link to the Healthcare sector. The references to governmental entities controlling access to these records add some further relevance, but the bill does not explicitly target other sectors such as Politics and Elections or the Judicial System, where AI regulation outside healthcare might apply. Other sectors like Private Enterprises or Academia are touched upon only marginally, if at all. Therefore, the greatest applicability remains within Healthcare.


Keywords (occurrence): artificial intelligence (2), algorithm (1)

Description: Relating to electronic health record requirements.
Summary: This bill establishes requirements for electronic health records in Texas, mandating storage within the U.S., outlining data inclusion and access protocols, and ensuring medical histories reflect biological sex accurately.
Collection: Legislation
Status date: March 12, 2025
Status: Introduced
Primary sponsor: Greg Bonnen (sole sponsor)
Last action: Filed (March 12, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

This text emphasizes the requirements and regulations surrounding electronic health records (EHR), with a significant focus on the use of artificial intelligence (AI) in determining diagnostics and treatment suggestions based on patient medical records. Because AI is mentioned explicitly in the context of healthcare decisions, there is strong relevance to the Social Impact category, particularly regarding consumer safety and healthcare accuracy. The text also contains data governance aspects related to how electronic health records are managed and stored, especially concerning accuracy and access restrictions shaped by AI tools. The System Integrity category is likewise relevant because of the need to ensure human oversight of AI processes in healthcare. Robustness is less relevant because the text does not discuss performance benchmarks or compliance measures in any depth. Overall, this legislation directly addresses social issues (the accuracy of AI in treatment), data management (EHR requirements), and the need for human oversight, which reinforces the relevance of these categories.


Sector:
Healthcare (see reasoning)

The text primarily pertains to healthcare as it deals explicitly with electronic health records and AI usage in medical contexts. It outlines important legal frameworks that govern how healthcare practitioners interact with AI technology and manage patient information, especially concerning diagnostic procedures and recommendations. While the legislation touches upon AI implications in healthcare, there is minimal relevance to politics, the judicial system, private enterprises, academic institutions, nonprofits, or international standards, as it mainly focuses on specific healthcare practice regulations. As such, the strongest alignment is with the healthcare sector, reflecting a direct influence of AI within medical practice.


Keywords (occurrence): artificial intelligence (2), algorithm (1)

Description: Creates the Meaningful Human Review of Artificial Intelligence Act. Sets forth provisions prohibiting a State agency, or any entity acting on behalf of an agency, from utilizing or applying any automated decision-making system, directly or indirectly, without continuous meaningful human review when performing any of the agency's specified functions. Requires impact assessments to be performed by State agencies seeking to utilize or apply an automated decision-making system with continuous mea...
Summary: The Meaningful Human Review of Artificial Intelligence Act mandates continuous meaningful human oversight for automated decision-making systems used by state agencies in Illinois, requiring impact assessments and ensuring accountability and protection of individual rights.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Abdelnasser Rashid (sole sponsor)
Last action: Rule 19(a) / Re-referred to Rules Committee (March 21, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The Meaningful Human Review of Artificial Intelligence Act focuses on the utilization of automated decision-making systems within state agencies with a requirement for continuous human oversight. This oversight directly relates to the Social Impact category, as it addresses concerns about the impact of AI-driven decisions on individual rights and welfare, ensuring that AI applications do not negatively affect civil liberties or public assistance benefits. The requirement for meaningful human review and impact assessments aligns with concerns regarding the ethical, social, and legal implications of using AI in sensitive government applications. This text is also relevant to the System Integrity category due to its emphasis on the security and reliability of decision-making processes reliant on AI platforms, mandating oversight and evaluations to maintain integrity and accountability. However, while data governance aspects such as rights related to personal data and evaluations of biases are addressed in the text, they are not the primary focus, making its relevance to this category moderate. There is no explicit mention of performance benchmarks or legislation that develops new AI methodologies, which makes Robustness less relevant. Overall, the act emphasizes social responsibility and the integrity of government applications of AI, which are key concerns in today's digital governance landscape.


Sector:
Government Agencies and Public Services (see reasoning)

This legislation pertains directly to government agencies and public services, specifically how they utilize AI in decision-making processes that significantly affect the rights and welfare of individuals, particularly in public assistance contexts. The prohibition of automated decisions without meaningful human review speaks to the accountability and transparency expected in public sector operations when employing technology. There are specific terms related to the governance of automated systems and their impact assessments, reinforcing the role of AI in enhancing or compromising the integrity of public services. The text does not address sectors like Politics and Elections, Judicial System, Healthcare, Private Enterprises, Academic Institutions, or International Standards directly, making their relevance minimal in this context. Additionally, it does not touch upon nonprofit organizations, nor does it suggest any hybrid or emerging sector applications. The primary focus remains on government operations, thus aligning specifically with the Government Agencies and Public Services sector.


Keywords (occurrence): artificial intelligence (3), machine learning (1), automated (40)

Description: For legislation to further regulate the operation of autonomous vehicles. Transportation.
Summary: The bill regulates the operation of autonomous vehicles in Massachusetts, requiring a human safety operator to be present during transport to ensure safety and compliance with federal standards.
Collection: Legislation
Status date: Feb. 16, 2023
Status: Introduced
Primary sponsor: Jessica Giannino (4 total sponsors)
Last action: Accompanied a study order, see H4530 (April 29, 2024)

Category:
System Integrity (see reasoning)

This bill addresses the regulation and safety of autonomous vehicles, which are intrinsically linked to artificial intelligence (AI) technologies. The term 'autonomous vehicles' implies the use of AI algorithms and automated decision-making processes for navigation and control. The legislation emphasizes the need for human oversight, which hints at concerns regarding the integrity and safety of AI systems in this context. The focus is primarily on system integrity due to mandates for human intervention and adherence to federal standards, alongside safety measures. However, it does not discuss broader social implications or data governance concerning AI-driven systems, leading to a lower score for those categories.


Sector:
Government Agencies and Public Services (see reasoning)

The text explicitly addresses legislation related to autonomous vehicles, which falls under the transportation sector but also impacts government regulations concerning public safety. It does not specifically mention political campaigns, healthcare, or other sectors. Thus, it is primarily relevant to government operations related to public safety and transportation. The score for this sector reflects its focus on vehicle safety and regulation.


Keywords (occurrence): automated (2), autonomous vehicle (2)

Summary: The bill focuses on advancing bipartisan legislation for artificial intelligence, aiming to promote innovation while implementing safeguards to prevent misuse, ensuring the U.S. remains a technological leader.
Collection: Congressional Record
Status date: Dec. 10, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text primarily discusses the legislative efforts and bipartisan commitment around AI, indicating its social implications and the need for governance of its impact. It highlights the importance of creating AI legislation that prioritizes innovation while ensuring safeguards against potential harms, touching on areas of social impact such as healthcare and climate change. However, it lacks specific details about data governance, system integrity measures, or robustness benchmarks. Thus, while it is relevant to Social Impact, it is less relevant to the other categories.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

While the text addresses AI in a governmental context, it does not delve into specifics about how AI is used or regulated within any individual sector. It indicates a commitment to improving AI legislation, which can apply across sectors but does not offer concrete examples pertinent to any one sector. The focus remains on legislative processes rather than sector-specific applications of AI. Therefore, its relevance to the sectors is limited, with the strongest ties to government agencies based on its commentary about bipartisan talks.


Keywords (occurrence): artificial intelligence (2)

Description: Department of Law; Division of Emerging Technologies, Cybersecurity, and Data Privacy established. Establishes within the Department of Law a Division of Emerging Technologies, Cybersecurity, and Data Privacy to oversee and enforce laws governing cybersecurity, data privacy, and the use of artificial intelligence (AI) and other emerging technologies. The bill requires the Division to submit an annual report to the Joint Commission on Technology and Science (JCOTS) by November 1 of each year d...
Summary: The bill establishes a Division of Emerging Technologies, Cybersecurity, and Data Privacy within the Virginia Department of Law to oversee and enforce related laws, conduct audits, and promote compliance and education.
Collection: Legislation
Status date: Jan. 7, 2025
Status: Introduced
Primary sponsor: Bonita Anthony (sole sponsor)
Last action: Left in Appropriations (Feb. 4, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text predominantly discusses the establishment of a Division of Emerging Technologies, Cybersecurity, and Data Privacy that specifically addresses the use of artificial intelligence (AI). This clearly indicates relevance to both the Social Impact and Data Governance categories, as it pertains to laws governing cybersecurity, data privacy, and the implications of AI use. Additionally, the mention of compliance audits and investigations of AI-related laws aligns with System Integrity concerns. However, the focus is on governance and compliance rather than on the performance benchmarks or adherence to international standards that the Robustness category primarily covers, so relevance there is at most moderate.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text details the establishment of a Division to oversee various technological implementations including AI within government frameworks. This indicates relevance to Government Agencies and Public Services since it applies to state governance through the newly formed Division. It does not specifically address the use of AI in the Political context, nor does it detail implications related to Healthcare or the Judicial System. There is no emphasis on sectors like Nonprofits or Academic Institutions, thus they are not relevant either. However, it can be considered somewhat relevant to Private Enterprises, Labor, and Employment as the compliance and regulatory aspects could indirectly impact businesses that utilize AI, but this connection is more peripheral. As a result, the strongest affiliations are identified with Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (3), automated (3)

Summary: The bill addresses a potential scandal involving Elon Musk and Dogecoin, alleging manipulation that resulted in a $37 billion profit after the announcement of a government commission named after the cryptocurrency, suggesting conflicts of interest in cryptocurrency governance.
Collection: Congressional Record
Status date: Dec. 6, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses various concerns surrounding artificial intelligence (AI), particularly the potential existential risks if AI develops self-awareness and ambition. It emphasizes the need for proactive measures to ensure AI remains a tool rather than becoming a creature with its own volition. It also touches on the societal impact of AI and the importance of funding research to mitigate risks associated with AI systems, which speaks directly to several elements of the Social Impact category. Data Governance is relevant insofar as the text hints at managing AI's data securely, but no specific regulations are mentioned. System Integrity is relevant because the speaker calls for oversight and safety measures in AI development. Robustness is implied by the calls for auditing and standards for AI efficacy. Therefore, the Social Impact category is particularly relevant given the text's focus on the broader implications of AI development for society, while System Integrity is relevant due to the mentions of safety, oversight, and regulation.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text touches upon the broader impact of AI but does not specifically address its implications for any particular sector. Discussions around the impact of AI on society and its potential ethical considerations emphasize its relevance to the general discourse on AI without digging deep into sector-specific applications or regulations. The dialogue on AI's dangers and lessons learned could apply to multiple sectors, yet its application in sectors like healthcare, government, or international cooperation is absent in this text.


Keywords (occurrence): artificial intelligence (11)

Description: Law-enforcement agencies; use of certain technologies and interrogation practices; forensic laboratory accreditation. Directs the Department of Criminal Justice Services to establish a comprehensive framework for the use of generative artificial intelligence (AI), machine learning systems, audiovisual surveillance technologies, and custodial and noncustodial interrogations of adults and juveniles by law-enforcement agencies, which shall include (i) developing policies and procedures and publi...
Summary: The bill amends Virginia law regarding law enforcement practices, focusing on the use of specific technologies, interrogation methods, and requirements for forensic laboratory accreditation. It aims to improve accountability and oversight in criminal justice processes.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Jackie Glass (sole sponsor)
Last action: Left in Public Safety (Feb. 5, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the use of generative artificial intelligence (AI) and machine learning systems by law enforcement agencies. This indicates a direct relevance to Social Impact as it addresses how AI technologies will affect interrogation practices and the potential social implications of their use. The legislation's mention of developing policies for AI in law enforcement suggests a focus on ethical considerations, accountability, and societal consequences, thus scoring high in this category. In terms of Data Governance, the legislation mentions policies and procedures for the application of AI in law enforcement which implies a concern for the management and accuracy of data used in these systems. This relevance to data handling necessitates a moderately high score. System Integrity is relevant due to the legislation's focus on establishing a framework which would likely include human oversight, security measures, or interoperability of the AI systems deployed. The focus on frameworks for AI management also implies a certain level of integrity and accountability in these technologies. Robustness is less directly relevant as the text primarily addresses the application rather than the performance benchmarks of AI. Overall, the Social Impact category stands out most due to the implications for society and law enforcement interactions with technology, followed by Data Governance due to the mention of policies and procedures for AI utilization.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The text primarily addresses the application of AI and machine learning technologies within law enforcement which positions it strongly in the 'Judicial System' sector due to its direct implications for policing practices and interrogation methods. Additionally, there is a broader implication for 'Government Agencies and Public Services' because this legislation pertains to the operational frameworks that govern law enforcement agencies as part of public service delivery. Other sectors such as Healthcare and Private Enterprises are not applicable as they do not relate to the content of this legislation. Thus, the scores reflect the clear focus on judicial and governmental applications of AI.


Keywords (occurrence): artificial intelligence (8), machine learning (9), automated (2)

Description: Minnesota Consumer Data Privacy Act modified to make consumer health data a form of sensitive data, and additional protections added for sensitive data.
Summary: The Minnesota Consumer Data Privacy Act is amended to classify health data as sensitive, incorporating additional protections for such data to enhance consumer privacy and security.
Collection: Legislation
Status date: March 24, 2025
Status: Introduced
Primary sponsor: Steve Elkins (6 total sponsors)
Last action: Introduction and first reading, referred to Judiciary Finance and Civil Law (March 24, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses consumer data privacy in relation to health data, which necessitates significant consideration of its impact on society and individual rights. It affects how personal data is processed, stored, and shared, particularly concerning sensitive health data. The bill modifies existing privacy regulations to categorize health data as sensitive, implying society-wide implications for consumer trust and data protection. However, it does not explicitly detail biases, discrimination, or ecological impacts directly attributable to AI, which affects its score in this category. Furthermore, while the bill covers data protection and consumer rights, its current wording does not involve systemic accountability mechanisms or ethical frameworks for AI usage, lowering its relevance for System Integrity. Hence, its implications for social impact are significant, but less so for the other categories, which focus more on algorithmic concerns and system integrity. Overall, this justifies a score of 4 in Social Impact, as the bill is very relevant to the broad question of how consumer health data affects societal norms, privacy, and security. The relevance to Data Governance is tied to how health data is managed and protected, scoring a 5, as the text describes specific measures for legal entities regarding personal and sensitive data. For System Integrity, while the bill touches on the definition of processing, which relates to automated data use, it does not mandate enough oversight or human intervention in AI processes, resulting in a score of 2. It similarly does not emphasize performance benchmarks or compliance auditing, yielding a score of 2 for Robustness.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text distinctly addresses the governance of consumer data protection related to health data, which firmly situates it in the Healthcare sector because of its implications for medical privacy rights and health-related data handling. The amended bill includes various provisions that align closely with healthcare's regulatory interests in data privacy and sensitivity. There are also implications for how such data relates to broader public services, tying the text to Government Agencies and Public Services, since it covers legal entities that could include government healthcare providers. However, there is little that explicitly pertains to the Politics and Elections or Judicial System sectors, primarily because the text does not discuss AI involvement or oversight in those areas. Thus, Healthcare receives a score of 5 and Government Agencies and Public Services a score of 3, while the remaining sectors either do not connect or have only superficial relevance, reflecting a score of 1.
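As an illustrative aside, the 1-5 relevance scores cited in the reasoning above could be captured in a small data structure. The BillRelevance class, its field names, and the threshold of 3 for listing a category or sector are assumptions made for this sketch, not part of the tracker itself; only the score values are taken from the reasoning text.

```python
from dataclasses import dataclass, field

@dataclass
class BillRelevance:
    """Hypothetical container for the tracker's apparent 1-5 relevance rubric."""
    title: str
    category_scores: dict[str, int] = field(default_factory=dict)
    sector_scores: dict[str, int] = field(default_factory=dict)

    def assigned(self, threshold: int = 3) -> dict[str, list[str]]:
        # Labels at or above the (assumed) threshold mirror the lists shown per record.
        return {
            "categories": [k for k, v in self.category_scores.items() if v >= threshold],
            "sectors": [k for k, v in self.sector_scores.items() if v >= threshold],
        }

# Scores as stated in the reasoning for the Minnesota Consumer Data Privacy Act above.
mn_privacy = BillRelevance(
    title="Minnesota Consumer Data Privacy Act (as amended)",
    category_scores={"Social Impact": 4, "Data Governance": 5,
                     "System Integrity": 2, "Robustness": 2},
    sector_scores={"Healthcare": 5, "Government Agencies and Public Services": 3},
)
print(mn_privacy.assigned())
# -> {'categories': ['Social Impact', 'Data Governance'],
#     'sectors': ['Healthcare', 'Government Agencies and Public Services']}
```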


Keywords (occurrence): automated (2)

Description: To amend the Internal Revenue Code of 1986 to establish a credit for investments in innovative agricultural technology.
Summary: The Supporting Innovation in Agriculture Act of 2025 establishes a 30% tax credit for investments in innovative agricultural technology projects, aimed at enhancing specialty crop production and efficiency through advanced technologies.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Mike Kelly (19 total sponsors)
Last action: Referred to the House Committee on Ways and Means. (Feb. 27, 2025)

Category:
Societal Impact
Data Robustness (see reasoning)

The text largely discusses a tax credit for investments in innovative agricultural technology. The major references to AI appear in the phrases 'machine learning systems and artificial intelligence systems designed as part of or sold in connection with controlled environment agriculture technology' and 'advanced analytics, machine learning, and artificial intelligence systems designed as part of or sold in connection with precision agriculture technology.' This indicates clear relevance to the impact of AI on agricultural efficiency and innovation. 1. Social Impact: The bill does not directly address social impacts such as fairness or consumer protections but focuses on enhancing agricultural technology, which can have indirect social implications. 2. Data Governance: While it mentions data management in connection with machine learning, it does not explicitly address data governance issues like privacy, bias, or intellectual property. 3. System Integrity: AI system integrity is not a primary focus of this text, as it mostly emphasizes investment credits rather than security or transparency mandates. 4. Robustness: The bill discusses both AI and machine learning, but it does not set forth benchmarks or compliance standards for AI systems, focusing instead on financial incentives for agricultural technology innovation. Overall, relevance clearly peaks with the proposed AI tools in agriculture but lacks broader implications for system integrity or governance. The strong mentions of AI-related technology warrant moderate to high relevance for the categories listed above.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The legislation relates directly to agricultural innovation, stressing applications of AI in efficient agricultural practices. 1. Politics and Elections: There are no relevant mentions of politics or elections in the bill. 2. Government Agencies and Public Services: Although it touches on innovative technology, it does not target government services or agencies. 3. Judicial System: The bill does not reference judicial use of AI. 4. Healthcare: No mentions of healthcare or medical applications are present in the text. 5. Private Enterprises, Labor, and Employment: While it suggests the potential for economic impact from agricultural innovation, it does not specifically address labor or employment issues directly. 6. Academic and Research Institutions: The bill's framework can influence academic research in agriculture, yet it lacks explicit initiatives targeting educational institutions. 7. International Cooperation and Standards: There are no international implications addressed or mentioned in the text. 8. Nonprofits and NGOs: It doesn't discuss how nonprofits or NGOs might engage with the legislation. 9. Hybrid, Emerging, and Unclassified: It touches on the adoption of innovative systems but remains very focused on agricultural technology rather than broader hybrid or unclassified sectors. Overall, the prominence of the agricultural sector in innovation makes that the most relevant classification, leaning towards moderate relevance.


Keywords (occurrence): artificial intelligence (2), machine learning (2), automated (1)

Description: A BILL to be entitled an Act to amend Title 51 of the Official Code of Georgia Annotated, relating to torts, so as to provide for a cause of action for appropriating an individual's indicia of identity; to provide for a right to protection against the appropriation of an individual's indicia of identity; to provide for assignment or licensing of such right; to provide for damages; to provide for other remedies; to provide for exceptions; to provide for definitions; to provide for statutory co...
Summary: Senate Bill 354 establishes a legal framework in Georgia to protect individuals from the unauthorized appropriation of their identity, including their likeness and voice, allowing for civil action and damages against violators.
Collection: Legislation
Status date: March 21, 2025
Status: Introduced
Primary sponsor: Sally Harrell (sole sponsor)
Last action: Senate Read and Referred (March 25, 2025)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses the protection of individuals from the unauthorized appropriation of their identity, including their likeness, voice, and other unique features. This is closely related to the 'Social Impact' category as it seeks to establish rights and protections to prevent misuse of personal identifiers, which can have significant implications for privacy, consent, and the commercial exploitation of personal identity. The mention of algorithms and machine learning in the context of identity appropriation indicates a direct link to how AI technologies can be implicated in these issues, further justifying a high relevance score for the 'Data Governance' category which discusses the secure and ethical management of data related to individuals' identities. The text does not address security, oversight, compliance with standards, or performance benchmarks specifically related to AI systems, suggesting lower relevance for 'System Integrity' and 'Robustness'.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text is focused on the appropriation of identity and does not specifically apply to political campaigns, government functions, the judicial system, healthcare, or other direct applications of AI that would characterize legislation for distinct sectors. However, the discussion of algorithms and identity hints at concerns that could apply across various sectors, including public services and private enterprises, although these are not the primary focus. Given that it touches on aspects only indirectly related to the functioning and application of AI in society, it does not fit robustly into any sector but may loosely relate to Private Enterprises due to its implications for the commercialization of identity. Therefore, the sector scores reflect this indirect relevance.


Keywords (occurrence): machine learning (1), algorithm (2)

Description: An act to add Chapter 22.6 (commencing with Section 22601) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: Senate Bill 243 mandates operators of chatbot platforms to implement safety measures for minors, including preventing harmful engagement patterns and reporting suicidal ideation, aiming to protect youth online.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Josh Becker (3 total sponsors)
Last action: April 1 set for first hearing canceled at the request of author. (March 25, 2025)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the implications of chatbot use among minors and promotes accountability of operators regarding their AI systems. It highlights the societal impact of AI through the accountability measures and mental health concerns associated with the chatbot interactions with minors. This relevance is significant because it deals directly with the potential psychological and material harm caused by AI systems, warranting a high score in the Social Impact category. In terms of Data Governance, the bill mandates that operators report data related to suicidality in minors to the State Department of Health Care Services, which addresses the management of sensitive data and ensures data transparency, thus also being relevant to this category. The System Integrity category is moderately relevant due to the requirement for audits ensuring compliance, but it does not address broader issues of security or control in AI systems comprehensively. Robustness is less relevant as there are no specific mentions of performance standards or benchmarks for AI systems. Overall, the text primarily revolves around social considerations and data governance related to AI.


Sector:
Healthcare
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)

The legislation centers on protections for children and minors, creating specific regulations for the use of AI technology, particularly chatbots, in environments where minors are users. It mandates consideration of their mental health and safety, indicating a strong legislative focus on this population. Although the bill does not fall squarely under typical sectors such as Government Agencies, Healthcare, or Education, it does regulate AI in interactions with minors, so it may fit best under Hybrid, Emerging, and Unclassified. The text's focus on operators' responsibilities for preventing harm and providing transparency about AI technology adds to its relevance. Each established sector is somewhat disconnected from the details of this legislation, which leads to lower scores for those sectors and pushes the bill toward the Hybrid, Emerging, and Unclassified classification.


Keywords (occurrence): artificial intelligence (3), chatbot (20)

Description: A bill to impose a tax on certain trading transactions to invest in our families and communities, improve our infrastructure and our environment, strengthen our financial security, expand opportunity and reduce market volatility.
Summary: The Tax on Wall Street Speculation Act imposes a tax on trading transactions to fund community investments, infrastructure improvements, and enhance financial security, while aiming to reduce market volatility.
Collection: Legislation
Status date: June 14, 2023
Status: Introduced
Primary sponsor: Bernard Sanders (2 total sponsors)
Last action: Read twice and referred to the Committee on Finance. (June 14, 2023)

Category: None (see reasoning)

The text primarily discusses the imposition of a tax on trading transactions related to securities and derivative instruments without explicit mention of AI technologies or concepts. While terms like 'algorithm' are found in the context of derivatives, they do not relate directly to AI, machine learning, or other highlighted technologies. Therefore, the categories of Social Impact, Data Governance, System Integrity, and Robustness do not apply to the core purpose of this legislation.


Sector: None (see reasoning)

The text does not address any of the listed sectors, such as Politics and Elections or Government Agencies and Public Services. Its focus is strictly on financial transactions and taxation, with no applicable context or frameworks within the specified sectors. Thus, each sector category is scored as not relevant.


Keywords (occurrence): algorithm (1)

Description: Revised for 1st Substitute: Making 2023-2025 fiscal biennium operating appropriations and 2021-2023 fiscal biennium second supplemental operating appropriations.Original: Making 2023-2025 fiscal biennium operating appropriations.
Summary: The bill appropriates funding for various state agencies and services during the 2023-2025 fiscal biennium, establishing budgets for operational needs, including salaries and designated programs, aimed at ensuring effective governance.
Collection: Legislation
Status date: May 16, 2023
Status: Passed
Primary sponsor: Christine Rolfes (3 total sponsors)
Last action: Effective date 5/16/2023. (May 16, 2023)

Category: None (see reasoning)

The text primarily pertains to fiscal appropriations and budgetary measures, without substantive discussion of AI-related terms or of legislation affecting AI technologies. Given that the text focuses on budget allocations and legislative procedures, its content does not address the social impacts, data governance issues, system integrity, or robustness aspects that would be relevant to AI-related discussions. Hence, it is deemed not relevant to any of the categories.


Sector: None (see reasoning)

The text does not contain any references or relevance to AI applications within the identified sectors. There are no indications of AI's regulation in politics and elections, public services, judicial matters, healthcare, private enterprises, academic settings, international cooperation, or nonprofit activities. It discusses budgetary provisions but nothing related to AI use or regulation in these sectors.


Keywords (occurrence): artificial intelligence (2), automated (13), algorithm (1)

Description: A bill to ensure that large online platforms are addressing the needs of non-English users.
Summary: The LISTOS Act aims to ensure large online platforms adequately support non-English users by enhancing content moderation, transparency, and equitable resource allocation across languages to improve user safety and access.
Collection: Legislation
Status date: June 1, 2023
Status: Introduced
Primary sponsor: Ben Lujan (5 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (June 1, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The LISTOS Act primarily addresses the needs of non-English users of online platforms and emphasizes the fairness and inclusivity of AI-driven content moderation processes. The AI-related elements are limited to specific mentions in the context of ensuring that automated content detection and filtering systems are equitable across languages. While the Act acknowledges the need for algorithmic transparency and equal treatment in non-English content moderation, it does not focus heavily on social impacts, data governance, system integrity, or robustness specific to AI frameworks. Therefore, relevance varies across categories but is most aligned with Social Impact, given the direct references to user safety and to equitable treatment where AI processes could potentially cause harm.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)

The LISTOS Act is primarily concerned with ensuring that online platforms serve non-English-speaking communities effectively. It mentions the use of automated systems for content moderation and relates indirectly to broader issues in areas such as healthcare, public services, and nonprofit organizations that serve users across diverse languages. However, the bill does not directly focus on sectors like academia or the judicial system. Overall, it fits best under the Government Agencies and Public Services sector, reflecting its focus on the role of technology in public service delivery.


Keywords (occurrence): automated (10)

Description: To amend the Food, Agriculture, Conservation, and Trade Act of 1990 to include additional priorities as research and extension initiatives, and for other purposes.
Summary: The Land Grant Research Prioritization Act of 2023 amends existing legislation to add new research priorities in agriculture, focusing on mechanized harvesting, artificial intelligence applications, invasive species management, and aquaculture development.
Collection: Legislation
Status date: June 15, 2023
Status: Introduced
Primary sponsor: Scott Franklin (9 total sponsors)
Last action: Referred to the Subcommittee on Conservation, Research, and Biotechnology. (Aug. 21, 2023)

Category:
Societal Impact (see reasoning)

The text mainly focuses on amendments to the Food, Agriculture, Conservation, and Trade Act of 1990 to include agricultural applications of artificial intelligence as a prioritized area for research and extension initiatives. Given the explicit mention of artificial intelligence and its application in agriculture, the text aligns closely with societal impacts such as enhancing specialty crop production and potentially affecting agricultural labor and practices. It does not, however, delve deeply into data governance, system integrity, or robustness measures as they relate to these AI applications, giving it lower relevance for those categories. It primarily falls under Social Impact due to its implications for agricultural practice and efficiency.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)

The legislation explicitly highlights the application of artificial intelligence in agricultural settings, making it significantly relevant to agriculture. While agriculture is not among the predefined sectors, the implications of this bill relate directly to agricultural practices and innovations. There is no specific mention of AI applications in sectors such as Politics and Elections or Healthcare, indicating limited relevance there. Because the bill's agricultural focus cannot be clearly categorized under the explicitly defined sectors, and based on its emphasis on technology in agriculture, its relevance can be regarded as moderate.


Keywords (occurrence): artificial intelligence (2)

Description: A bill to prohibit certain uses of automated decision systems by employers, and for other purposes.
Summary: The "No Robot Bosses Act" prohibits employers from relying solely on automated decision systems for employment-related decisions, ensuring compliance with discrimination laws and promoting human oversight in hiring and management.
Collection: Legislation
Status date: July 20, 2023
Status: Introduced
Primary sponsor: Robert Casey (5 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (July 20, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The No Robot Bosses Act encompasses specific provisions related to automated decision systems used in employment contexts. The act's focus on prohibiting certain uses of AI-driven systems directly relates to social impacts, as it addresses fairness, transparency, and human oversight in employment decisions, thereby reducing potential discrimination and ensuring accountability for AI actions. Therefore, the Social Impact category is highly relevant. The act also touches on the governance of data through mandates for transparency, disclosures, and the testing of automated decision systems, making Data Governance relevant as well. The emphasis on oversight and reporting in the use of AI systems further aligns with the System Integrity category. Although there are aspects of performance measurement embedded in the act, they are not as central to the provisions outlined, making Robustness the least relevant category. Overall, this act strongly relates to the ethical, legal, and practical implications of AI in employment settings, indicating significant relevance to the categories concerned with social impact, data governance, and system integrity.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

This act primarily deals with the implications of AI in the realm of employment, making it highly relevant to the Private Enterprises, Labor, and Employment sector. It outlines prohibitions and requirements specifically targeted at employers who utilize AI in decision-making, highlighting concerns over fairness, transparency, and the rights of employees and candidates. While it may have implications for broader government action regarding employment regulations, its core focus is narrow enough to keep it primarily in the workforce arena, thus making it less relevant to Government Agencies and Public Services and Nonprofits and NGOs. Other sectors like Politics and Elections or International Cooperation and Standards do not have a pertinent connection to this act either. Therefore, the scoring reflects a strong relevance to the employment sector and moderate relevance to other sectors.


Keywords (occurrence): artificial intelligence (1), machine learning (1), automated (42)

Description: Requires certain disclosures by automobile insurers relating to the use of telematics systems in determining insurance rates and/or discounts.
Summary: The bill mandates automobile insurers to disclose their telematics systems' scoring methodologies and ensure non-discriminatory practices in determining insurance rates, enhancing transparency and consumer access to data.
Collection: Legislation
Status date: Jan. 5, 2023
Status: Introduced
Primary sponsor: Kevin Thomas (sole sponsor)
Last action: REFERRED TO INSURANCE (Jan. 3, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily addresses telematics systems used by automobile insurers, focusing on how these systems gather data to determine insurance rates. The relevance to the four categories reflects this focus: Social Impact is very relevant due to provisions that call for testing against discrimination, consumer data access, and the implications for various protected classes. Data Governance is also essential given the emphasis on secure data management and mandates against unauthorized use of collected data. System Integrity is relevant since the legislation mandates transparency around algorithms and risk factors. Robustness, however, is less relevant as the text does not discuss benchmarks or performance standards per se; it's more about the general procedures related to data and discrimination than about performance certifications or compliance audits.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation has direct implications for the insurance sector as it involves telematics, which deeply relates to how insurance rates are calculated. It promotes fairness and reduces potential bias in insurance algorithms, making it highly relevant to Private Enterprises, Labor, and Employment, given the potential impact of AI on workers' rights and employment practices in related fields. Government Agencies and Public Services is also moderately relevant as the regulations involve both insurer and governmental oversight by requiring a report to the superintendent. The other sectors, such as Politics and Elections or Healthcare, do not connect to this legislation.


Keywords (occurrence): algorithm (2)

Description: A bill to require application stores to publicly list the country of origin of the applications that they distribute, and to provide consumers the ability to protect themselves.
Summary: The Know Your App Act mandates application stores to disclose the country of origin for apps, enhancing consumer awareness and protection against privacy risks and foreign surveillance.
Collection: Legislation
Status date: May 18, 2023
Status: Introduced
Primary sponsor: Tim Scott (5 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (May 18, 2023)

Category:
Societal Impact
Data Governance (see reasoning)

The 'Know Your App Act' centers on transparency and consumer protection regarding the disclosure of the country of origin of applications. The provisions related to user awareness, privacy risks associated with foreign application developers, and potential national security implications imply a social impact focus, particularly in relation to consumer protections and accountability of application developers. However, the text primarily emphasizes data handling and user privacy rather than the direct societal consequences of AI applications. Therefore, the relevance to the Social Impact category is moderately significant. Additionally, there's significant emphasis on accurate information and the management of user data, aligning closely with data governance considerations regarding critical data practices and protection measures. The text does not delve deeply into system integrity or robustness, indicating less relevance in those domains. Thus, while it possesses elements addressing AI use and privacy, those are more about application governance than the integrity or robust evaluations of AI systems themselves.


Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified (see reasoning)

The 'Know Your App Act' implies regulation of applications distributed by various developers that could touch multiple sectors, particularly where consumer privacy is concerned. However, since it does not specifically address AI's function within politics, public services, healthcare, labor, or the judicial system, or the nuanced implications of AI technology as applied within those settings, its relevance to any single sector is minimal. While the act may indirectly affect application practices in business and consumer contexts, it lacks direct references to any of the specified sectors with a defined focus on technology or AI. Therefore, while there are hints of relevance, the fundamental focus remains on consumer protection and transparency rather than on any one sector's specific use of AI.


Keywords (occurrence): algorithm (1)

Summary: The bill exempts intra-company, intra-organization, and intra-governmental transfers of unclassified defense articles to dual and third-country national employees, simplifying export processes within approved frameworks.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses exemptions pertaining to the transfer of classified and unclassified defense articles between various entities, particularly focusing on regulations and requirements for dual nationals and employees of foreign organizations. There is no explicit mention or implication regarding the social impact of AI, data governance specific to AI systems, integrity of AI systems, or robustness related to AI benchmarks. The focus is on defense articles, military equipment, and related compliance processes rather than any AI-related legislation or consequences. Therefore, the relevance to the AI categories is minimal.


Sector: None (see reasoning)

The text does not directly pertain to any of the nine specified sectors. It is centered on the regulations surrounding the transfer of military equipment and defense services, which might indirectly relate to government agencies but does not clearly fit into that category. Because the text does not touch upon the use or regulation of AI in politics, government agencies, the judicial system, healthcare, private enterprises, education, international standards, or NGOs, it is not relevant to any sector. Therefore, each sector receives a very low relevance score.


Keywords (occurrence): automated (1)