4829 results:
Description: Department of Law; Division of Emerging Technologies, Cybersecurity, and Data Privacy established. Establishes within the Department of Law a Division of Emerging Technologies, Cybersecurity, and Data Privacy to oversee and enforce laws governing cybersecurity, data privacy, and the use of artificial intelligence (AI) and other emerging technologies. The bill requires the Division to submit an annual report to the Joint Commission on Technology and Science (JCOTS) by November 1 of each year d...
Summary: The bill establishes a Division of Emerging Technologies, Cybersecurity, and Data Privacy within the Virginia Department of Law to oversee and enforce related laws, conduct audits, and promote compliance and education.
Collection: Legislation
Status date: Jan. 7, 2025
Status: Introduced
Primary sponsor: Bonita Anthony
(sole sponsor)
Last action: Left in Appropriations (Feb. 4, 2025)
Societal Impact
Data Governance
System Integrity
The text predominantly discusses the establishment of a Division of Emerging Technologies, Cybersecurity, and Data Privacy that specifically addresses the use of artificial intelligence (AI). This clearly indicates relevance to both the Social Impact and Data Governance categories, as it pertains to laws governing cybersecurity, data privacy, and the implications of AI use. Additionally, the mention of compliance audits and investigations of AI-related laws aligns with concerns of System Integrity. However, the focus is more on governance and compliance than on performance benchmarks or adherence to international standards, which is what the Robustness category primarily covers; thus, it is only moderately relevant there.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text details the establishment of a Division to oversee various technological implementations including AI within government frameworks. This indicates relevance to Government Agencies and Public Services since it applies to state governance through the newly formed Division. It does not specifically address the use of AI in the Political context, nor does it detail implications related to Healthcare or the Judicial System. There is no emphasis on sectors like Nonprofits or Academic Institutions, thus they are not relevant either. However, it can be considered somewhat relevant to Private Enterprises, Labor, and Employment as the compliance and regulatory aspects could indirectly impact businesses that utilize AI, but this connection is more peripheral. As a result, the strongest affiliations are identified with Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (3) automated (3)
Summary: The bill addresses a potential scandal involving Elon Musk and Dogecoin, alleging manipulation that resulted in a $37 billion profit after the announcement of a government commission named after the cryptocurrency, suggesting conflicts of interest in cryptocurrency governance.
Collection: Congressional Record
Status date: Dec. 6, 2024
Status: Issued
Source: Congress
Societal Impact
Data Governance
System Integrity
Data Robustness
The text discusses various concerns surrounding artificial intelligence (AI), particularly the potential existential risks if AI develops self-awareness and ambition. It emphasizes the need for proactive measures to ensure AI remains a tool rather than becoming a creature with its own volition. It also touches on the societal impact of AI and the importance of funding research to mitigate risks associated with AI systems, which speaks directly to several elements of the Social Impact category. Data Governance is relevant insofar as the text hints at managing AI's data securely, but no specific regulations are mentioned. System Integrity is implicated by the need for oversight in AI processes, as the speaker calls for safety measures in AI development. Robustness is also implied by the need for auditing and standards of AI efficacy. Therefore, the Social Impact category is particularly relevant given the text's focus on the broader implications of AI development for society, while System Integrity is relevant due to mentions of safety, oversight, and regulation.
Sector:
Government Agencies and Public Services
Academic and Research Institutions
The text touches upon the broader impact of AI but does not specifically address its implications for any particular sector. Discussions around the impact of AI on society and its potential ethical considerations emphasize its relevance to the general discourse on AI without digging deep into sector-specific applications or regulations. The dialogue on AI's dangers and lessons learned could apply to multiple sectors, yet its application in sectors like healthcare, government, or international cooperation is absent in this text.
Keywords (occurrence): artificial intelligence (11)
Description: Law-enforcement agencies; use of certain technologies and interrogation practices; forensic laboratory accreditation. Directs the Department of Criminal Justice Services to establish a comprehensive framework for the use of generative artificial intelligence (AI), machine learning systems, audiovisual surveillance technologies, and custodial and noncustodial interrogations of adults and juveniles by law-enforcement agencies, which shall include (i) developing policies and procedures and publi...
Summary: The bill amends Virginia law regarding law enforcement practices, focusing on the use of specific technologies, interrogation methods, and requirements for forensic laboratory accreditation. It aims to improve accountability and oversight in criminal justice processes.
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Jackie Glass
(sole sponsor)
Last action: Left in Public Safety (Feb. 5, 2025)
Societal Impact
Data Governance
System Integrity
The text explicitly discusses the use of generative artificial intelligence (AI) and machine learning systems by law enforcement agencies. This indicates a direct relevance to Social Impact as it addresses how AI technologies will affect interrogation practices and the potential social implications of their use. The legislation's mention of developing policies for AI in law enforcement suggests a focus on ethical considerations, accountability, and societal consequences, thus scoring high in this category. In terms of Data Governance, the legislation mentions policies and procedures for the application of AI in law enforcement which implies a concern for the management and accuracy of data used in these systems. This relevance to data handling necessitates a moderately high score. System Integrity is relevant due to the legislation's focus on establishing a framework which would likely include human oversight, security measures, or interoperability of the AI systems deployed. The focus on frameworks for AI management also implies a certain level of integrity and accountability in these technologies. Robustness is less directly relevant as the text primarily addresses the application rather than the performance benchmarks of AI. Overall, the Social Impact category stands out most due to the implications for society and law enforcement interactions with technology, followed by Data Governance due to the mention of policies and procedures for AI utilization.
Sector:
Government Agencies and Public Services
Judicial system
The text primarily addresses the application of AI and machine learning technologies within law enforcement which positions it strongly in the 'Judicial System' sector due to its direct implications for policing practices and interrogation methods. Additionally, there is a broader implication for 'Government Agencies and Public Services' because this legislation pertains to the operational frameworks that govern law enforcement agencies as part of public service delivery. Other sectors such as Healthcare and Private Enterprises are not applicable as they do not relate to the content of this legislation. Thus, the scores reflect the clear focus on judicial and governmental applications of AI.
Keywords (occurrence): artificial intelligence (8) machine learning (9) automated (2)
Description: Minnesota Consumer Data Privacy Act modified to make consumer health data a form of sensitive data, and additional protections added for sensitive data.
Summary: The Minnesota Consumer Data Privacy Act is amended to classify health data as sensitive, incorporating additional protections for such data to enhance consumer privacy and security.
Collection: Legislation
Status date: March 24, 2025
Status: Introduced
Primary sponsor: Steve Elkins
(6 total sponsors)
Last action: Introduction and first reading, referred to Judiciary Finance and Civil Law (March 24, 2025)
Societal Impact
Data Governance
The text primarily addresses consumer data privacy in relation to health data, with significant implications for society and individual rights: it governs how personal data, particularly sensitive health data, is processed, stored, and shared. The bill modifies existing privacy regulations to categorize health data as sensitive, which carries society-wide implications for consumer trust and data protection. However, it does not explicitly address biases, discrimination, or ecological impacts attributable to AI, which lowers its score in this category. Likewise, while the bill covers data protection and consumer rights, its current wording does not establish systemic accountability mechanisms or ethical frameworks for AI usage, reducing its relevance to System Integrity. Overall, this justifies a score of 4 for Social Impact, as the handling of consumer health data bears directly on societal norms, privacy, and security. Data Governance scores a 5, since the text describes specific measures legal entities must take with personal and sensitive data. For System Integrity, although the definition of processing touches on automated data use, there is little focus on oversight or mandated human intervention in AI processes, resulting in a score of 2. The bill likewise does not emphasize performance benchmarks or compliance auditing, yielding a score of 2 for Robustness.
Sector:
Government Agencies and Public Services
Healthcare
The text distinctly addresses the governance of consumer data protection related to health data, which firmly situates it in the Healthcare sector given its implications for medical privacy rights and health-related data handling. The amended bill includes various provisions that align closely with healthcare's regulatory interest in data privacy and sensitivity. There are also implications for broader public services, tying the text to Government Agencies and Public Services, as the legal entities it mentions could include government healthcare providers. However, little in the text explicitly pertains to the Politics and Elections or Judicial System sectors, primarily because it does not discuss AI involvement or oversight in those areas. Thus, Healthcare receives a score of 5, Government Agencies and Public Services a score of 3, and the remaining sectors either do not connect or have only superficial relevance, reflecting a score of 1.
Keywords (occurrence): automated (2)
Description: To amend the Internal Revenue Code of 1986 to establish a credit for investments in innovative agricultural technology.
Summary: The Supporting Innovation in Agriculture Act of 2025 establishes a 30% tax credit for investments in innovative agricultural technology projects, aimed at enhancing specialty crop production and efficiency through advanced technologies.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Mike Kelly
(19 total sponsors)
Last action: Referred to the House Committee on Ways and Means. (Feb. 27, 2025)
Societal Impact
Data Robustness
The text largely discusses a tax credit for investments in innovative agricultural technology. The major references to AI appear in the context of 'machine learning systems and artificial intelligence systems designed as part of or sold in connection with controlled environment agriculture technology' and of 'advanced analytics, machine learning, and artificial intelligence systems designed as part of or sold in connection with precision agriculture technology.' This indicates a clear relevance to the impact of AI on agricultural efficiency and innovation.
1. Social Impact: The bill may not directly address social impacts such as fairness or consumer protections, but it focuses on enhancing agricultural technology, which can have indirect social implications.
2. Data Governance: While it mentions data management in connection with machine learning, it does not explicitly address data governance issues like privacy, bias, or intellectual property.
3. System Integrity: AI system integrity is not a primary focus in this text, as it mostly emphasizes investment credits rather than security or transparency mandates.
4. Robustness: The bill discusses both AI and machine learning, but it does not set forth benchmarks or compliance standards aimed at AI systems in this context, focusing instead on financial incentives for agricultural technology innovation.
Overall, relevance clearly peaks with the proposed AI tools in agriculture but lacks broader implications for system integrity or governance. The strong mentions of AI-related technology warrant moderate to high relevance in this category.
Sector:
Private Enterprises, Labor, and Employment
The legislation relates directly to agricultural innovation, stressing applications of AI in efficient agricultural practices.
1. Politics and Elections: There are no relevant mentions of politics or elections in the bill.
2. Government Agencies and Public Services: Although it touches on innovative technology, it does not target government services or agencies.
3. Judicial System: The bill does not reference judicial use of AI.
4. Healthcare: No mentions of healthcare or medical applications are present in the text.
5. Private Enterprises, Labor, and Employment: While it suggests the potential for economic impact from agricultural innovation, it does not specifically address labor or employment issues directly.
6. Academic and Research Institutions: The bill's framework can influence academic research in agriculture, yet it lacks explicit initiatives targeting educational institutions.
7. International Cooperation and Standards: There are no international implications addressed or mentioned in the text.
8. Nonprofits and NGOs: It does not discuss how nonprofits or NGOs might engage with the legislation.
9. Hybrid, Emerging, and Unclassified: It touches on the adoption of innovative systems but remains focused on agricultural technology rather than broader hybrid or unclassified sectors.
Overall, the prominence of the agricultural sector in innovation makes that the most relevant classification, leaning towards moderate relevance.
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (1)

Description: A BILL to be entitled an Act to amend Title 51 of the Official Code of Georgia Annotated, relating to torts, so as to provide for a cause of action for appropriating an individual's indicia of identity; to provide for a right to protection against the appropriation of an individual's indicia of identity; to provide for assignment or licensing of such right; to provide for damages; to provide for other remedies; to provide for exceptions; to provide for definitions; to provide for statutory co...
Summary: Senate Bill 354 establishes a legal framework in Georgia to protect individuals from the unauthorized appropriation of their identity, including their likeness and voice, allowing for civil action and damages against violators.
Collection: Legislation
Status date: March 21, 2025
Status: Introduced
Primary sponsor: Sally Harrell
(sole sponsor)
Last action: Senate Read and Referred (March 25, 2025)
Societal Impact
Data Governance
The text primarily addresses the protection of individuals from the unauthorized appropriation of their identity, including their likeness, voice, and other unique features. This is closely related to the 'Social Impact' category as it seeks to establish rights and protections to prevent misuse of personal identifiers, which can have significant implications for privacy, consent, and the commercial exploitation of personal identity. The mention of algorithms and machine learning in the context of identity appropriation indicates a direct link to how AI technologies can be implicated in these issues, further justifying a high relevance score for the 'Data Governance' category which discusses the secure and ethical management of data related to individuals' identities. The text does not address security, oversight, compliance with standards, or performance benchmarks specifically related to AI systems, suggesting lower relevance for 'System Integrity' and 'Robustness'.
Sector:
Private Enterprises, Labor, and Employment
The text is focused on the appropriation of identity and does not specifically apply to political campaigns, government functions, the judicial system, healthcare, or other direct applications of AI that would characterize legislation for distinct sectors. However, the discussion around algorithms and identity hints at concerns that could apply across various sectors, including public services and private enterprises, although these are not the primary focus. Given that it touches on aspects only indirectly related to the functioning and application of AI in society, it does not fit robustly into any sector but may loosely relate to Private Enterprises due to implications for the commercialization of identity. Therefore, the sector scores reflect this indirect relevance.
Keywords (occurrence): machine learning (1) algorithm (2)
Description: Specifying that hurricane mitigation grants funded through the My Safe Florida Home Program may be awarded only under certain circumstances; revising the surplus required for certain insurers applying for their original certificates of authority and to maintain their certificates of authority, respectively; specifying prohibitions for persons who were officers or directors of an insolvent insurer, attorneys in fact of a reciprocal insurer, or officers or directors of an attorney in fact of a ...
Summary: The bill amends Florida insurance laws, focusing on improving hurricane mitigation through grants for inspections and enhancements to homes, and establishing regulations around insurers’ claims processes and officer qualifications.
Collection: Legislation
Status date: Feb. 28, 2025
Status: Introduced
Primary sponsor: Banking and Insurance
(3 total sponsors)
Last action: CS by Banking and Insurance read 1st time (March 19, 2025)
System Integrity
This text primarily discusses insurance regulations, particularly hurricane mitigation grants and insurer requirements. The provision that 'artificial intelligence, machine learning systems, or algorithms' cannot be the sole basis for claim denials indicates an intersection with AI considerations. However, the document mainly focuses on insurance processes and does not delve into broader societal impacts or specific data governance concerns related to AI utilization. Therefore, its relevance to Social Impact, Data Governance, System Integrity, and Robustness ties back primarily to the context of insurance claims decision-making rather than to direct implications for societal norms or the technical integrity of AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The text is significantly relevant to the insurance sector due to its focus on hurricane mitigation insurance, claims processing, and handling grant applications. It outlines the eligibility requirements and stipulations related to homeowners and insurance practices. While it does mention AI and algorithms, the primary context is within the insurance domain rather than how AI operates or affects different sectors. It does, however, subtly touch upon the principles of fairness and accountability in claims decisions, which might indicate broader implications for regulation in the sector. Therefore, its most related sector is Private Enterprises, Labor, and Employment, but it also has elements relevant to Government Agencies and Public Services due to the involvement of state regulation.
Keywords (occurrence): artificial intelligence (6) machine learning (6) algorithm (7)
Description: An act to add Chapter 22.6 (commencing with Section 22601) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Summary: Senate Bill 243 mandates operators of chatbot platforms to implement safety measures for minors, including preventing harmful engagement patterns and reporting suicidal ideation, aiming to protect youth online.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Josh Becker
(3 total sponsors)
Last action: April 1 set for first hearing canceled at the request of author. (March 25, 2025)
Societal Impact
Data Governance
System Integrity
The text explicitly discusses the implications of chatbot use among minors and promotes accountability of operators regarding their AI systems. It highlights the societal impact of AI through the accountability measures and mental health concerns associated with the chatbot interactions with minors. This relevance is significant because it deals directly with the potential psychological and material harm caused by AI systems, warranting a high score in the Social Impact category. In terms of Data Governance, the bill mandates that operators report data related to suicidality in minors to the State Department of Health Care Services, which addresses the management of sensitive data and ensures data transparency, thus also being relevant to this category. The System Integrity category is moderately relevant due to the requirement for audits ensuring compliance, but it does not address broader issues of security or control in AI systems comprehensively. Robustness is less relevant as there are no specific mentions of performance standards or benchmarks for AI systems. Overall, the text primarily revolves around social considerations and data governance related to AI.
Sector:
Healthcare
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified
The legislation directly affects children and minors, creating specific regulations for the use of AI technology, particularly chatbots, in environments where minors are users. It mandates considerations for their mental health and safety, indicating a strong legislative focus on this population. Although the bill does not fall squarely under typical sectors like Government Agencies, Healthcare, or Education, it does regulate AI in interactions with minors, so it may fit best under Hybrid, Emerging, and Unclassified. The text's focus on operators' responsibilities for preventing harm and providing transparency about AI technology adds to its relevance. Because each established sector is somewhat disconnected from the details of this legislation, those sectors receive lower scores, and the bill leans toward the Hybrid, Emerging, and Unclassified category.
Keywords (occurrence): artificial intelligence (3) chatbot (20)
Description: A bill to impose a tax on certain trading transactions to invest in our families and communities, improve our infrastructure and our environment, strengthen our financial security, expand opportunity and reduce market volatility.
Summary: The Tax on Wall Street Speculation Act imposes a tax on trading transactions to fund community investments, infrastructure improvements, and enhance financial security, while aiming to reduce market volatility.
Collection: Legislation
Status date: June 14, 2023
Status: Introduced
Primary sponsor: Bernard Sanders
(2 total sponsors)
Last action: Read twice and referred to the Committee on Finance. (June 14, 2023)
The text primarily discusses the imposition of a tax on trading transactions related to securities and derivative instruments without explicit mention of AI technologies or concepts. While terms like 'algorithm' are found in the context of derivatives, they do not relate directly to AI, machine learning, or other highlighted technologies. Therefore, the categories of Social Impact, Data Governance, System Integrity, and Robustness do not apply to the core purpose of this legislation.
Sector: None
The text does not address sectors such as Politics and Elections, Government Agencies and Public Services, or any of the other sectors listed. Its focus is strictly on financial transactions and taxation, with no applicable context or frameworks within the specified sectors. Thus, each sector is scored as not relevant.
Keywords (occurrence): algorithm (1)
Description: Revised for 1st Substitute: Making 2023-2025 fiscal biennium operating appropriations and 2021-2023 fiscal biennium second supplemental operating appropriations.Original: Making 2023-2025 fiscal biennium operating appropriations.
Summary: The bill appropriates funding for various state agencies and services during the 2023-2025 fiscal biennium, establishing budgets for operational needs, including salaries and designated programs, aimed at ensuring effective governance.
Collection: Legislation
Status date: May 16, 2023
Status: Passed
Primary sponsor: Christine Rolfes
(3 total sponsors)
Last action: Effective date 5/16/2023. (May 16, 2023)
The text primarily pertains to fiscal appropriations and budgetary measures without any mention or explicit relation to AI-related terms or legislation affecting AI technologies. Given that the text focuses on budget allocations and legislative procedures, its content does not address social impacts, data governance issues, system integrity, or robustness aspects that would be relevant for AI-related discussions. Hence, it is deemed not relevant for all the categories.
Sector: None
The text does not contain any references or relevance to AI applications within the identified sectors. There are no indications of AI's regulation in politics and elections, public services, judicial matters, healthcare, private enterprises, academic settings, international cooperation, or nonprofit activities. It discusses budgetary provisions but nothing related to AI use or regulation in these sectors.
Keywords (occurrence): artificial intelligence (2) automated (13) algorithm (1)
Description: A bill to ensure that large online platforms are addressing the needs of non-English users.
Summary: The LISTOS Act aims to ensure large online platforms adequately support non-English users by enhancing content moderation, transparency, and equitable resource allocation across languages to improve user safety and access.
Collection: Legislation
Status date: June 1, 2023
Status: Introduced
Primary sponsor: Ben Lujan
(5 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (June 1, 2023)
Societal Impact
Data Governance
System Integrity
The LISTOS Act primarily addresses the needs of non-English users of online platforms and emphasizes the fairness and inclusivity of AI-driven content moderation processes. However, the AI-related elements are limited to specific mentions in the context of ensuring that automated content detection and filtering systems are equitable across languages. While the Act acknowledges the necessity of algorithmic transparency and the equal treatment of non-English content moderation, it does not overwhelmingly focus on significant social impacts, data governance, system integrity, or robustness specific to AI frameworks. Therefore, relevance varies across categories but is notably highest for Social Impact, given the direct references to user safety and to equitable treatment in AI processes that could otherwise cause harm.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs
The LISTOS Act is primarily concerned with ensuring that online platforms serve non-English speaking communities effectively. It mentions the use of automated systems for content moderation and relates indirectly to broader issues in sectors such as healthcare, public services, and nonprofit organizations that serve users across diverse languages. However, the bill does not directly target academia or the judicial system. Overall, it best fits under the Government Agencies and Public Services sector, reflecting its focus on the role of technology in public service delivery.
Keywords (occurrence): automated (10)
Description: To amend the Food, Agriculture, Conservation, and Trade Act of 1990 to include additional priorities as research and extension initiatives, and for other purposes.
Summary: The Land Grant Research Prioritization Act of 2023 amends existing legislation to add new research priorities in agriculture, focusing on mechanized harvesting, artificial intelligence applications, invasive species management, and aquaculture development.
Collection: Legislation
Status date: June 15, 2023
Status: Introduced
Primary sponsor: Scott Franklin
(9 total sponsors)
Last action: Referred to the Subcommittee on Conservation, Research, and Biotechnology. (Aug. 21, 2023)
Societal Impact
The text mainly focuses on amendments to the Food, Agriculture, Conservation, and Trade Act of 1990 to include agricultural applications of Artificial Intelligence as a prioritized area for research and extension initiatives. Given the explicit mention of Artificial Intelligence in terms of its application in agriculture, it aligns closely with issues of societal impacts—like enhancing specialty crop production and potentially affecting agricultural labor and practices. It does not, however, delve deeply into data governance, system integrity, or robustness measures as they relate to these AI applications, thus having lower relevance for those categories. It primarily falls under Social Impact due to its implications for agricultural practice and efficiency.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Hybrid, Emerging, and Unclassified
The legislation explicitly highlights the application of artificial intelligence in agricultural settings, making it significantly relevant to the Agriculture sector. While agriculture is not explicitly listed as a sector in the predefined sectors, the implications of this bill relate directly to agricultural practices and innovations. There is no specific mention of AI application in sectors such as Politics and Elections or Healthcare, indicating limited relevance there. Therefore, the Agriculture sector's relevance is implied by the content of the text but cannot be clearly categorized under the explicitly defined sectors. Based on the focus on technology in agriculture, the relevance can be regarded as moderate.
Keywords (occurrence): artificial intelligence (2)
Description: A bill to prohibit certain uses of automated decision systems by employers, and for other purposes.
Summary: The "No Robot Bosses Act" prohibits employers from relying solely on automated decision systems for employment-related decisions, ensuring compliance with discrimination laws and promoting human oversight in hiring and management.
Collection: Legislation
Status date: July 20, 2023
Status: Introduced
Primary sponsor: Robert Casey
(5 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (July 20, 2023)
Societal Impact
Data Governance
System Integrity
The No Robot Bosses Act encompasses specific provisions related to automated decision systems used in employment contexts. The act's focus on prohibiting certain uses of AI-driven systems directly relates to social impacts, as it addresses fairness, transparency, and human oversight in employment decisions, thereby reducing potential discrimination and ensuring accountability for AI actions. Therefore, the Social Impact category is highly relevant. The act also touches on the governance of data through mandates for transparency, disclosures, and the testing of automated decision systems, making Data Governance relevant as well. The emphasis on oversight and reporting in the use of AI systems further aligns with the System Integrity category. Although there are aspects of performance measurement embedded in the act, they are not as central to the provisions outlined, making Robustness the least relevant category. Overall, this act strongly relates to the ethical, legal, and practical implications of AI in employment settings, indicating significant relevance to the categories concerned with social impact, data governance, and system integrity.
Sector:
Private Enterprises, Labor, and Employment
This act primarily deals with the implications of AI in the realm of employment, making it highly relevant to the Private Enterprises, Labor, and Employment sector. It outlines prohibitions and requirements specifically targeted at employers who utilize AI in decision-making, highlighting concerns over fairness, transparency, and the rights of employees and candidates. While it may have implications for broader government action regarding employment regulations, its core focus is narrow enough to keep it primarily in the workforce arena, thus making it less relevant to Government Agencies and Public Services and Nonprofits and NGOs. Other sectors like Politics and Elections or International Cooperation and Standards do not have a pertinent connection to this act either. Therefore, the scoring reflects a strong relevance to the employment sector and moderate relevance to other sectors.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (42)
Description: Requires certain disclosures by automobile insurers relating to the use of telematics systems in determining insurance rates and/or discounts.
Summary: The bill mandates automobile insurers to disclose their telematics systems' scoring methodologies and ensure non-discriminatory practices in determining insurance rates, enhancing transparency and consumer access to data.
Collection: Legislation
Status date: Jan. 5, 2023
Status: Introduced
Primary sponsor: Kevin Thomas
(sole sponsor)
Last action: REFERRED TO INSURANCE (Jan. 3, 2024)
Societal Impact
Data Governance
System Integrity
The text primarily addresses telematics systems used by automobile insurers, focusing on how these systems gather data to determine insurance rates. The relevance to the four categories reflects this focus: Social Impact is very relevant due to provisions that call for testing against discrimination, consumer data access, and the implications for various protected classes. Data Governance is also essential given the emphasis on secure data management and mandates against unauthorized use of collected data. System Integrity is relevant since the legislation mandates transparency around algorithms and risk factors. Robustness, however, is less relevant as the text does not discuss benchmarks or performance standards per se; it's more about the general procedures related to data and discrimination than about performance certifications or compliance audits.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
The legislation has direct implications for the insurance sector as it involves telematics, which deeply relates to how insurance rates are calculated. It promotes fairness and reduces potential bias in insurance algorithms, making it highly relevant to Private Enterprises, Labor, and Employment, given the potential impact of AI on workers' rights and employment practices in related fields. Government Agencies and Public Services is also moderately relevant as the regulations involve both insurer and governmental oversight by requiring a report to the superintendent. The other sectors, such as Politics and Elections or Healthcare, do not connect to this legislation.
Keywords (occurrence): algorithm (2)
Description: A bill to require application stores to publicly list the country of origin of the applications that they distribute, and to provide consumers the ability to protect themselves.
Summary: The Know Your App Act mandates application stores to disclose the country of origin for apps, enhancing consumer awareness and protection against privacy risks and foreign surveillance.
Collection: Legislation
Status date: May 18, 2023
Status: Introduced
Primary sponsor: Tim Scott
(5 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (May 18, 2023)
Societal Impact
Data Governance
The 'Know Your App Act' centers on transparency and consumer protection regarding the disclosure of the country of origin of applications. The provisions related to user awareness, privacy risks associated with foreign application developers, and potential national security implications imply a social impact focus, particularly in relation to consumer protections and accountability of application developers. However, the text primarily emphasizes data handling and user privacy rather than the direct societal consequences of AI applications. Therefore, the relevance to the Social Impact category is moderately significant. Additionally, there's significant emphasis on accurate information and the management of user data, aligning closely with data governance considerations regarding critical data practices and protection measures. The text does not delve deeply into system integrity or robustness, indicating less relevance in those domains. Thus, while it possesses elements addressing AI use and privacy, those are more about application governance than the integrity or robust evaluations of AI systems themselves.
Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified
The 'Know Your App Act' regulates applications distributed by various developers and could touch multiple sectors, particularly with respect to consumer privacy. However, because it does not specifically address AI's function within politics, public services, healthcare, labor, or the judicial system, nor the nuanced implications of AI technology applied in those settings, its relevance to the sector categories is minimal. Its loose association with emerging technology suggests that, while the act may indirectly affect application practices in business and consumer contexts, it lacks direct references to any of the specified sectors. The fundamental focus remains on consumer protection and transparency rather than on any one sector's specific exposure to AI.
Keywords (occurrence): algorithm (1)
Summary: The bill exempts intra-company, intra-organization, and intra-governmental transfers of unclassified defense articles to dual and third-country national employees, simplifying export processes within approved frameworks.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register
The text primarily discusses exemptions pertaining to the transfer of classified and unclassified defense articles between various entities, particularly focusing on regulations and requirements for dual nationals and employees of foreign organizations. There is no explicit mention or implication regarding the social impact of AI, data governance specific to AI systems, integrity of AI systems, or robustness related to AI benchmarks. The focus is on defense articles, military equipment, and related compliance processes rather than any AI-related legislation or consequences. Therefore, the relevance to the AI categories is minimal.
Sector: None
The text does not directly pertain to any of the nine specified sectors. It is centered on the regulations surrounding the transfer of military equipment and defense services, which might indirectly relate to government agencies but does not clearly fit into that category. Because the text does not touch upon the use or regulation of AI in politics, government agencies, the judicial system, healthcare, private enterprises, education, international standards, or NGOs, it is not relevant to any sector. Therefore, each sector receives a very low relevance score.
Keywords (occurrence): automated (1)
Description: A bill to prohibit the use of Federal funds to launch a nuclear weapon using an autonomous weapons system that is not subject to meaningful human control, and for other purposes.
Summary: The "Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023" prohibits federal funding for launching nuclear weapons using autonomous systems without meaningful human control, emphasizing the necessity of human oversight in such critical decisions.
Collection: Legislation
Status date: May 1, 2023
Status: Introduced
Primary sponsor: Edward Markey
(4 total sponsors)
Last action: Read twice and referred to the Committee on Armed Services. (May 1, 2023)
Societal Impact
System Integrity
Data Robustness
The text explicitly discusses the implications of using autonomous weapons systems, particularly regarding nuclear weapons and the necessity for meaningful human control in their operation. It directly addresses the risks associated with Artificial Intelligence (AI) in military applications, highlighting concerns around accountability, safety, and international humanitarian law. Therefore, its relevance to Social Impact is extremely high as it concerns the consequences of AI on societal safety and military ethics. Similarly, because the bill emphasizes human oversight and sets forth restrictions on AI operations in warfare, it significantly relates to System Integrity. However, it is less focused on data governance and robustness, which are not central to the arguments or provisions mentioned in the text. Hence, Data Governance does not receive as high a score.
Sector:
Government Agencies and Public Services
The text primarily revolves around the deployment and regulation of AI in a highly critical context: nuclear weapons and military strategy. It does not delve into specific applications of AI in broader sectors like politics, healthcare, or employment. However, its implications for military policy and the autonomy of armed forces make it relevant to governmental concerns about AI in public services and defense. Therefore, Government Agencies and Public Services is the only sector that scores highly. Other sectors, such as Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified, show no significant engagement with AI in the context provided by this bill.
Keywords (occurrence): artificial intelligence (3)
Description: Reinserts the provisions of the bill as amended by Senate Amendment No. 1 with the following changes. Provides that each disclosing State department or agency (rather than only department) shall execute a single master data use agreement that includes all data sets and is in accordance with the applicable laws, rules, and regulations pertaining to the specific data being requested. Provides that the State department or agency may require the names of any authorized users who will access or us...
Summary: The Access to Public Health Data Act facilitates certified local health departments' access to essential public health data for disease prevention and control, ensuring data privacy and security safeguards.
Collection: Legislation
Status date: Aug. 4, 2023
Status: Passed
Primary sponsor: Anna Moeller
(28 total sponsors)
Last action: Public Act . . . . . . . . . 103-0423 (Aug. 4, 2023)
Data Governance
System Integrity
The text focuses on public health data and its accessibility while emphasizing data protection, privacy, and specific regulations regarding data sharing between state departments and local health authorities. However, it does not directly address social impacts of AI, such as biases or ethical implications. The lack of AI terminology diminishes the relevance to Social Impact. Similarly, while it outlines governance relating to public health data, the legislation does not delve into the nuances of data management within AI systems, making Data Governance moderately relevant. On the other hand, it does highlight safeguards and risk management concerning data security, aligning with System Integrity, albeit indirectly. Lastly, while it mentions agreements and protocols similar to those used in AI contexts, it does not specify benchmarks or standards applicable to AI, thus scoring lower in Robustness. Overall, while there are elements pertinent to governance and integrity, the text does not engage directly with AI in the broader contexts presented by the categories.
Sector:
Government Agencies and Public Services
The text primarily addresses public health and the management of health data accessibility, which is closely tied to Government Agencies and Public Services, as it discusses the functions and responsibilities of state health departments and the access to health data by certified local health departments. It does not engage with the use of AI within Political and Electoral processes, nor does it address implications for the Judicial System, Healthcare as a sector in itself beyond data management, Private Enterprises, or Academic Institutions specifically. The text aligns with contexts of data governed by health services but does not explicitly reference AI regulations in those sectors. Given this focus and contextual relevance, Government Agencies and Public Services is rated higher than others.
Keywords (occurrence): automated (2)
Summary: The bill establishes requirements for adjunctive predictive cardiovascular indicators, ensuring clinical validation and accurate interpretation of cardiovascular data to improve patient outcomes without independently directing therapy.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register
Societal Impact
Data Governance
System Integrity
Data Robustness
The text discusses the FDA regulations concerning adjunctive predictive cardiovascular devices that utilize software algorithms to analyze cardiovascular vital signs. This directly touches upon Data Governance through its emphasis on the secure and accurate management of data used in these AI-driven devices, ensuring data quality, and patient representation. It also pertains to System Integrity due to the specifications regarding the validation and verification of algorithms, ensuring transparency and control over device outputs. Moreover, the need for clinical data assessments regarding user interpretation and effectiveness connects this text to aspects of Social Impact, as it involves risks to patient health and the potential for misinterpretation. Robustness is relevant as the document emphasizes benchmarks such as sensitivity and specificity for the algorithms. Hence, the text is relevant to all four categories, with stronger connections to Data Governance and System Integrity due to the focus on data management and algorithm validation. Overall, this will influence the establishment of standards and regulations that govern AI in medical devices.
Sector:
Healthcare
Private Enterprises, Labor, and Employment
The text primarily addresses regulations for medical devices that utilize AI technology, with significant implications for the healthcare sector. The discussion around algorithmic predictions and clinical data assessments directly aligns the text with Healthcare. The risk of misinterpretation and the requirement for thorough verification and validation also relate to the need for quality assurance in health-related applications of AI. While there are related aspects for private enterprises and international cooperation, the primary focus remains on healthcare applications, emphasizing how AI interfaces with patient data and the healthcare system. Thus, other sectors are relevant only tangentially compared to Healthcare.
Keywords (occurrence): algorithm (6)
Description: Creates the Artificial Intelligence Consent Act. Provides that if a person creates an image or video that uses artificial intelligence to mimic or replicate another person's voice or likeness in a manner that would otherwise deceive an average viewer, and displays the content for public viewing, the creator must provide a disclosure on the bottom of the image or video that the image or video is not authentic and does not reflect the original voice or likeness of the person being depicted, unl...
Summary: The AI Consent Act mandates that creators disclose when AI-generated content mimics a person's voice or likeness without consent, allowing individuals to take legal action for violations.
Collection: Legislation
Status date: Feb. 17, 2023
Status: Introduced
Primary sponsor: Stephanie Kifowit
(sole sponsor)
Last action: Rule 19(a) / Re-referred to Rules Committee (March 10, 2023)
Societal Impact
Data Governance
System Integrity
The text explicitly addresses the implications of AI in creating potentially deceptive content through the reproduction of individuals' voices and likenesses without consent. It highlights consumer protections and legal ramifications, thus engaging deeply with societal impacts, particularly surrounding trust and misinformation caused by AI technologies. For Data Governance, while there is mention of rights and action, the focus on the regulation of AI usage in a consent-based manner constitutes a more indirect relationship. System Integrity is relevant due to the underlying themes of transparency and accountability in the AI systems that produce such content. However, the strong focus remains on the social implications of misinformation and representation. Robustness is less relevant as it primarily focuses on performance metrics rather than the ethical implications of AI-generated content.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
This legislation is particularly relevant to themes within the domain of Private Enterprises, Labor, and Employment. It addresses how businesses might utilize AI technologies in producing media containing individuals' likenesses and voices. Moreover, it also pertains to aspects of Government Agencies and Public Services since public institutions may need to navigate AI implications in the public domain. It appears less relevant for sectors like Healthcare and Academic Institutions as the text doesn't directly address AI applications in these fields.
Keywords (occurrence): artificial intelligence (4)