4830 results:
Description: Minnesota Consumer Data Privacy Act modification classifying consumer health data as a form of sensitive data and adding protections for sensitive data
Summary: The bill modifies the Minnesota Consumer Data Privacy Act to classify consumer health data as sensitive data, enhancing protections for such data within privacy regulations.
Collection: Legislation
Status date: March 24, 2025
Status: Introduced
Primary sponsor: Bonnie Westlin
(2 total sponsors)
Last action: Referred to Commerce and Consumer Protection (March 24, 2025)
Societal Impact
Data Governance (see reasoning)
The text details modifications to the Minnesota Data Privacy Act, specifically addressing consumer health data and its classification as sensitive data. It emphasizes the additional protections for sensitive data, such as biometric and genetic information. The inclusion of 'algorithms' and 'machine learning' in connection with health data solidifies its relevance for data governance, particularly regarding how personal data is managed securely and without bias. While there is a significant emphasis on data governance, the text also indirectly relates to social impacts by discussing consumer protections and impacts on health-related data handling. However, system integrity and robustness are less directly addressed, as the text primarily revolves around consumer rights and data privacy rather than technical standards or performance benchmarks for AI systems.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The text is highly relevant to the Healthcare sector, as it primarily concerns consumer health data and its protections under modified privacy laws. It also touches on government operations by describing how the legislation affects the handling of data by entities operating in Minnesota, which could include both public and private organizations involved in healthcare. However, its focus on consumer protections means it is less applicable to sectors like Private Enterprises or Academic Institutions, as it primarily discusses regulatory compliance rather than sector-specific applications of AI. The inclusion of terms like 'algorithms' and 'profiling' links to broader implications on AI use in the healthcare space, enhancing its relevance to that sector.
Keywords (occurrence): automated (2)
Description: Prohibition on the use of cameras for traffic safety enforcement
Summary: This bill prohibits the use of traffic safety cameras for enforcement in Minnesota, cancels existing grants for such systems, and amends related statutes, aiming to enhance transportation safety without automated monitoring.
Collection: Legislation
Status date: March 20, 2025
Status: Introduced
Primary sponsor: Bill Lieske
(2 total sponsors)
Last action: Referred to Transportation (March 20, 2025)
Societal Impact
System Integrity (see reasoning)
The text explicitly discusses the prohibition of traffic safety cameras, including those that utilize artificial intelligence to enhance their functionality. The regulation focuses on traffic enforcement methods that include automated systems, which ties into societal impacts like privacy and safety. However, it does not deeply address data governance, system integrity, or robustness in AI beyond this mention. The primary focus is on social implications of AI in traffic enforcement, minimizing potential biases and accountability issues. Therefore, while the other categories have some relevance, the strongest connections are with social impact related to safety and individual rights.
Sector:
Government Agencies and Public Services (see reasoning)
The text primarily addresses traffic enforcement and safety, situating its relevance predominantly within public services. As it discusses the passage of legislation impacting how local and state authorities implement and manage traffic enforcement technologies, it fits well within this sector. The impacts described also influence broader public safety issues, but it does not explore other sectors like healthcare or academia, which are not applicable to the content of this bill.
Keywords (occurrence): automated (3)
Description: For legislation to establish a commission (including members of the General Court) relative to state agency automated decision-making, artificial intelligence, transparency, fairness, and individual rights. Advanced Information Technology, the Internet and Cybersecurity.
Summary: The bill establishes a commission in Massachusetts to study and regulate government use of automated decision-making systems, focusing on transparency, fairness, and individual rights.
Collection: Legislation
Status date: Feb. 16, 2023
Status: Introduced
Primary sponsor: Sean Garballey
(4 total sponsors)
Last action: Accompanied a new draft, see H4024 (Aug. 3, 2023)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text addresses the establishment of a commission dedicated to examining automated decision-making systems that utilize artificial intelligence (AI) within government operations in Massachusetts. This means it is deeply tied to societal issues surrounding AI technology, such as fairness, transparency, accountability, and individual rights, which directly aligns with the focus of the Social Impact category. The reference to evaluating existing systems and making recommendations for their use also indicates clear relevance to the principles of data governance, especially concerning bias and fairness in AI systems. The legislation points to a structured approach to system integrity—ensuring that automated systems are auditable and transparent—and robustness through recommendations for best practices and standards. Consequently, all categories exhibit significant relevance to the AI-related portions of the text.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The proposed legislation is centered around governmental organizations and agencies in Massachusetts, as it specifically discusses AI and automated decision-making in relation to public services. This makes it extremely relevant to the sector of Government Agencies and Public Services, as it seeks insights and oversight on the application of AI in delivering government services. The legislation also indirectly touches upon various aspects of the Judicial System, as decisions made by automated systems can impact legal rights and due processes for individuals, but this connection is slightly weaker compared to the primary focus on government agencies. The legislation does not sufficiently engage with other sectors listed and does not present explicit ties to politics and elections, healthcare, private enterprises, academic fields, international standards, nonprofits, or hybrid sectors.
Keywords (occurrence): artificial intelligence (2) machine learning (2) automated (27) algorithm (1)
Description: To address the needs of workers in industries likely to be impacted by rapidly evolving technologies.
Summary: The "Investing in Tomorrow's Workforce Act of 2023" aims to provide grants for training programs that support workers potentially displaced by automation, enhancing their skills for in-demand jobs and addressing equity concerns.
Collection: Legislation
Status date: Sept. 5, 2023
Status: Introduced
Primary sponsor: Bradley Schneider
(sole sponsor)
Last action: Referred to the House Committee on Education and the Workforce. (Sept. 5, 2023)
Societal Impact
Data Governance (see reasoning)
The text explicitly addresses automation and its impact on the workforce. It discusses the need for training due to the displacement caused by automation technologies. The mention of automation aligns with the concerns of Social Impact, as it recognizes the socioeconomic challenges posed by job losses warranting a legislative response. Automation is noted to particularly affect marginalized groups, emphasizing the social ramifications of advancing technology. However, it does not dive deeply into the structure of AI systems or broader ethical implications on a societal level, which could dilute its relevance to Social Impact. Data Governance is also relevant to an extent due to the implications of managing and securing data during training processes in automation but is not the primary focus. System Integrity is touched upon through the mention of developing technology-based skills but lacks the depth to warrant relevance. Robustness is noted in relation to preparing the workforce for evolving technologies but again lacks specific focus or benchmarks. Overall, the text most strongly contributes to Social Impact, followed by some relevance to Data Governance with a lesser degree for System Integrity and Robustness.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily fits within the sector of Private Enterprises, Labor, and Employment, as it addresses issues concerning job displacement and the need for worker training in industries impacted by automation. It discusses direct consequences on workers and their pathways to employment, demonstrating clear relevance to the labor sector. Elements of Government Agencies and Public Services are present through the potential utilization of federal grants and training programs, but it is not as central as the labor aspect. There are limited implications for sectors like Healthcare, Politics and Elections, and others unless directly linked to workforce development, making them less relevant. As such, the text shows strong relevance to Labor and Employment while indicating some applicability to Government Agencies.
Keywords (occurrence): autonomous vehicle (1)
Description: Requires employers and employment agencies to notify candidates for employment if machine learning technology is used to make hiring decisions prior to the use of such technology.
Summary: The bill mandates employers and agencies to inform job candidates when machine learning technology is used in hiring decisions, specifying criteria and data sources involved, to enhance transparency in the hiring process.
Collection: Legislation
Status date: July 7, 2023
Status: Introduced
Primary sponsor: Linda Rosenthal
(7 total sponsors)
Last action: Referred to Labor (Jan. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the use of automated employment decision tools, particularly focusing on their implications for hiring practices within employment agencies and employers. It discusses the definitions related to automated systems powered by machine learning and artificial intelligence, which directly relates to the social impact of AI in terms of informing candidates about the use of such technologies, potentially impacting fairness and transparency in hiring. This strongly connects to the Social Impact category as it revolves around candidate rights and employer responsibilities regarding AI-utilized hiring practices. However, the text also addresses the management of data concerning candidates, making it relevant to the Data Governance category, especially in terms of data collection and candidate notifications about data use. The focus on automated tools does touch upon System Integrity, particularly regarding transparency in automated processes, but this is less prominent compared to the other categories. Robustness is less applicable here since it mainly focuses on benchmarks or standards for AI performance rather than the regulatory aspects present in this law.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily pertains to the Private Enterprises, Labor, and Employment sector by regulating the implications of machine learning in hiring practices. As it aims to safeguard candidates' rights against potentially opaque AI-driven decisions, it is closely tied to employment and labor legislation. There's moderate relevance to Government Agencies and Public Services since overseeing employment practices often involves government regulation, but this is secondary. The Judicial System may also come into play regarding the right of candidates to seek legal recourse for discrimination or transparent processes. However, the direct focus remains on the employment sector, making that the most relevant.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (9)
Description: Amends the Artificial Intelligence Video Interview Act. Makes a technical change in a Section concerning the short title.
Summary: The bill amends the Artificial Intelligence Video Interview Act to make a technical change in its short title, clarifying its reference for employment-related matters in Illinois.
Collection: Legislation
Status date: Feb. 2, 2023
Status: Introduced
Primary sponsor: Omar Aquino
(6 total sponsors)
Last action: Senate Floor Amendment No. 1 Pursuant to Senate Rule 3-9(b) / Referred to Assignments (June 26, 2023)
The text pertains specifically to an amendment of the Artificial Intelligence Video Interview Act. By explicitly mentioning 'Artificial Intelligence', it signifies relevance to how AI is utilized in the context of employment practices, particularly through video interviewing. However, the text does not delve into the social impact, data governance, system integrity, or robustness of AI systems or their implications. It is primarily a technical change, which limits its relevance to the broader categories. The relevance to 'Social Impact' is minimal, since there are no discussions of societal implications or protections related to AI usage in video interviewing processes. 'Data Governance' is also not applicable here, since the text doesn't discuss data management or accuracy in AI applications. Similarly, there are no security or oversight mandates in this context, limiting the relevance to 'System Integrity' and 'Robustness'. Therefore, while AI is mentioned, the overall applicability to these categories is weak.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text discusses the 'Artificial Intelligence Video Interview Act', which indicates its relevance to the employment sector specifically, as it relates to how AI is integrated into employment practices through video interviews. However, it does not address broader impacts or implications in other sectors. Other sectors such as Healthcare or Government Agencies are not mentioned or relevant here. Thus, the only relevant sector is Private Enterprises, Labor, and Employment, though the details are limited. The legislation touches upon employment but doesn't elaborate on regulations affecting the labor market or employment practices significantly, which leads to a lower relevance score.
Keywords (occurrence): artificial intelligence (2)
Description: A bill to establish the National Artificial Intelligence Research Resource, and for other purposes.
Summary: The CREATE AI Act of 2024 establishes the National Artificial Intelligence Research Resource (NAIRR) to improve access to AI resources for research, fostering diversity and innovation within the field.
Collection: Legislation
Status date: July 27, 2023
Status: Introduced
Primary sponsor: Martin Heinrich
(4 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 721. (Dec. 17, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The CREATE AI Act of 2023 focuses on establishing the National Artificial Intelligence Research Resource (NAIRR) to improve access to artificial intelligence resources, promote diversity in AI research, and support AI development. It emphasizes the importance of the equitable distribution of AI research resources, which addresses social aspects of AI effects. Data governance is highly relevant, as the act includes mandates regarding data repositories and managing datasets and protocols. System integrity is relevant due to the establishment of governance structures and evaluation criteria for the NAIRR. Robustness is also relevant since the act focuses on performance indicators and evaluation of AI resources and systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
International Cooperation and Standards (see reasoning)
The CREATE AI Act of 2023 touches on multiple sectors, primarily focusing on Academic and Research Institutions by promoting AI research and democratizing resources. It influences Government Agencies and Public Services with its implications for federal resource management and operational practices related to AI. The emphasis on diversity indicates relevance in Private Enterprises, Labor, and Employment sectors as well. While it may indirectly touch on International Cooperation and Standards, the primary relevance remains within academic and public service contexts.
Keywords (occurrence): artificial intelligence (45)
Description: A bill to provide a framework for artificial intelligence innovation and accountability, and for other purposes.
Summary: The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 aims to establish a framework for AI development and accountability, promoting transparency and setting standards for AI systems' deployment and use.
Collection: Legislation
Status date: Nov. 15, 2023
Status: Introduced
Primary sponsor: John Thune
(8 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 723. (Dec. 18, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This text directly addresses various aspects of artificial intelligence, including its development, application, and accountability. The mention of terms related to AI such as 'artificial intelligence systems', 'generative artificial intelligence', and specific provisions aimed at governing these technologies reveals the intention to create a comprehensive legal framework for AI that includes innovation, risk assessment, and consumer protection. The text covers both technical and ethical considerations of AI deployment, greatly contributing to societal implications and legal governance of AI technologies.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Healthcare
Academic and Research Institutions
International Cooperation and Standards
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses several different sectors where AI is employed. It is relevant to the Government Agencies and Public Services sector as it outlines standards, practices, and recommendations for the federal government on the use of AI systems. There are clear implications for accountability in potential legal and ethical frameworks that relate to the Judicial System, as well as elements that could impact Politics and Elections through proposed consumer education. Healthcare implications may also arise through the use of AI systems for decision making. The text does not specifically pertain to Private Enterprises since the focus is more on governmental and regulatory frameworks, therefore receiving lower relevance in that sector.
Keywords (occurrence): artificial intelligence (225) foundation model (1)
Description: Similar Bills
Summary: The bill appropriates funds for Massachusetts’ Fiscal Year 2023 to support various public services, including education, housing, and health programs, ensuring financial stability and addressing emerging needs.
Collection: Legislation
Status date: March 23, 2023
Status: Introduced
Primary sponsor: Aaron Michlewitz
(sole sponsor)
Last action: Text of an amendment, see H58 (March 23, 2023)
The text primarily outlines budgetary appropriations without delving into any specific impacts or regulations pertaining to AI's influence on society, data governance, integrity, or robustness. The only mention of AI is about the Massachusetts Technology Park Corporation potentially funding projects that involve AI and machine learning under a matching grant program. However, given the absence of detailed discussions on potential social impacts, data security, system integrity, or robustness metrics specifically related to AI, the relevance per category is minimal.
Sector:
Academic and Research Institutions (see reasoning)
The text does not focus on specific applications of AI within any of the sectors listed. It contains a general mention of AI in relation to funding for technology and innovation, but lacks explicit ties to governance, electoral processes, healthcare, or any other defined sectors. Given the lack of context or depth in the application of AI to the sectors, the relevance scores are quite low.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (1)
Description: Reinserts the provisions of Senate Amendment No. 1 with the following change. Adds the Attorney General or his or her designee to the Generative AI and Natural Language Processing Task Force.
Summary: The bill establishes a Generative AI and Natural Language Processing Task Force in Illinois to investigate AI technologies, recommend legislation, and assess their impact on various sectors by December 31, 2024.
Collection: Legislation
Status date: Aug. 4, 2023
Status: Passed
Primary sponsor: Abdelnasser Rashid
(16 total sponsors)
Last action: Public Act 103-0451 (Aug. 4, 2023)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the creation of a task force focused on generative artificial intelligence (AI) and natural language processing. It outlines the responsibilities of the task force, which include investigating generative AI, protecting consumer information related to it, assessing its use in public services and education, and evaluating its implications for cybersecurity and civil rights. These elements tie directly to societal implications, data governance, and system integrity concerns stemming from the deployment of AI technologies, making the text highly relevant to the categories. Given the focus on consumer protection, education, and civil rights, the Social Impact category stands out as particularly relevant. The proposed task force aims to address the societal implications of AI technologies, thus scoring highly in this category. Data Governance is relevant due to the mention of protecting consumer information. System Integrity is indicated by the focus on cybersecurity. Robustness is moderately relevant as it is less central to the task force's objectives but relates to performance assessments. Therefore, the overall score reflects a clear link to Social Impact and a noteworthy, albeit less direct, connection to the other categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text addresses legislation involving a task force dedicated to generative AI, which can be relevant across various sectors. In particular, the task force's assessment of AI in public service delivery impacts Government Agencies and Public Services significantly, thus earning a high score in that sector. The focus on education and the recommendations regarding the use of generative AI by students also points towards relevance to Academic and Research Institutions. Given the potential employment implications mentioned, Private Enterprises, Labor, and Employment is also relevant. However, the text does not mention any unique applications or regulations in the realms of Politics and Elections, the Judicial System, Healthcare, International Cooperation and Standards, Nonprofits and NGOs. Therefore, those sectors score lower. In summary, the Government Agencies and Public Services sector has a direct correlation, while the Academic and Research Institutions and Private Enterprises, Labor, and Employment possess indirect relevance.
Keywords (occurrence): artificial intelligence (6)
Description: To improve the cybersecurity of the Federal Government, and for other purposes.
Summary: The Federal Information Security Modernization Act of 2024 aims to enhance the cybersecurity of the Federal Government, introducing measures for better incident transparency, penetration testing, and implementing zero trust architecture.
Collection: Legislation
Status date: July 11, 2023
Status: Introduced
Primary sponsor: Nancy Mace
(5 total sponsors)
Last action: Placed on the Union Calendar, Calendar No. 790. (Dec. 19, 2024)
Data Governance
System Integrity (see reasoning)
The text primarily focuses on enhancing cybersecurity within the Federal Government, touching on the modernization of existing practices and the incorporation of advanced technologies. The mention of 'automation and artificial intelligence' indicates a recognition of AI's role in securing federal information systems, hence its relevance to the categories. However, the text does not extensively cover social impacts, data management, or system integrity beyond security measures, so relevance varies across categories, with lower emphasis on Social Impact and Robustness. Social Impact concerns broader societal implications and legislative initiatives, which are not predominant here. Data Governance applies insofar as the bill includes management practices around data security related to AI. System Integrity is relevant given the focus on cybersecurity measures ensuring the integrity of AI systems. Robustness is addressed only minimally, in relation to AI benchmarks and performance measures in cybersecurity, and thus has lower direct relevance.
Sector:
Government Agencies and Public Services (see reasoning)
The text does not dedicatedly address any specific sectors such as political campaigns or healthcare; instead, it pertains to a governmental focus on bolstering cybersecurity standards. Therefore, 'Government Agencies and Public Services' emerges as the most relevant, as the legislation directly pertains to how federal agencies must approach AI and automation in their cybersecurity frameworks. The discussed provisions enhance public service operations related to security and privacy, firmly categorizing it within government public service enhancements while the other sectors have limited relevance.
Keywords (occurrence): artificial intelligence (6) machine learning (1) automated (6)
Description: Establishes the crime of unlawful dissemination or publication of a fabricated photographic, videographic, or audio record as a class E felony.
Summary: The bill establishes the crime of unlawful dissemination of fabricated audiovisual records as a class E felony, aiming to protect individuals from harm caused by false representations and manipulated media.
Collection: Legislation
Status date: May 8, 2023
Status: Introduced
Primary sponsor: Clyde Vanel
(7 total sponsors)
Last action: Referred to Codes (Jan. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The legislation directly addresses the consequences of disseminating fabricated media using advanced synthetic media technologies, including deepfakes, which are explicitly identified by keywords related to AI. Hence, it has a strong relevance to all categories, especially Social Impact, as it seeks to mitigate potential harm to individuals caused by deceptive AI-generated content. In terms of Data Governance, it touches on the accuracy and integrity of information, although it doesn't delve deeply into data management practices. System Integrity and Robustness are relevant as well, but less so than Social Impact and Data Governance since the focus is on legal ramifications rather than the operational integrity or performance metrics of AI systems.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The legislation heavily influences several sectors, particularly those related to media and personal rights. It presents significant relevance to the Private Enterprises sector due to implications for social media companies and content platforms that could be held liable for the dissemination of harmful fabricated content. It also affects the Judicial System, as it establishes new legal standards regarding the authenticity of media and the consequences of misinformation. Furthermore, while it touches on Government Agencies, it does not significantly affect their operations. On the academic side, while it notes certain research exceptions, it does not make academic interests a priority. Political commentary is somewhat relevant, but not enough to place the bill in that category directly. Overall, sectors like Healthcare and International Cooperation do not pertain, as they are not mentioned.
Keywords (occurrence): artificial intelligence (1) synthetic media (1)
Description: A bill to direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.
Summary: The Algorithmic Accountability Act of 2023 mandates the Federal Trade Commission to require impact assessments for automated decision systems, ensuring accountability and consumer protection regarding their significant societal implications.
Collection: Legislation
Status date: Sept. 21, 2023
Status: Introduced
Primary sponsor: Ron Wyden
(12 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (Sept. 21, 2023)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This text sets forth the Algorithmic Accountability Act, which focuses heavily on the impact assessments of automated decision systems. The language directly relates to the oversight and accountability of AI and algorithm-based decision-making processes. The keywords relevant to AI include 'automated decision system' and 'augmented critical decision process.' Furthermore, the bill outlines requirements for these systems, solidifying its relevance to data governance, system integrity, and social impact, particularly concerning consumer protection, the ramifications of AI decisions, and ethical standards in algorithmic transparency.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)
The bill specifically addresses the use of automated decision systems that can affect areas such as consumer rights, employment, and financial services, making it highly relevant to various sectors. For instance, it acknowledges the significant effects of decision systems on consumers—central to sectors like Healthcare, Government Agencies, and Private Enterprises. The bill emphasizes accountability and assessments that impact consumer welfare, fitting well into sectors that require ethical considerations in the deployment of AI.
Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (44)
Description: A bill to protect the safety of children on the internet.
Summary: The Kids Online Safety Act aims to enhance child safety on the internet by requiring platforms to implement safeguards, provide transparency, and limit risky practices for users under 17.
Collection: Legislation
Status date: May 2, 2023
Status: Introduced
Primary sponsor: Richard Blumenthal
(73 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 287. (Dec. 13, 2023)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Kids Online Safety Act primarily focuses on ensuring the safety and welfare of minors on online platforms, but it also touches upon AI technologies, particularly in how these platforms utilize personalized recommendation systems and data collection practices. This is significant for the Social Impact category since it addresses potential harms and ethical considerations surrounding AI's influence over children, such as the potential for addiction or exposure to harmful content. In the context of Data Governance, the bill deals with the collection of personal data, user privacy, and mandating safeguards for minors, making this category relevant as well. System Integrity is moderately relevant since it discusses the need for transparency and control over automated systems used to make decisions regarding content presented to minors. The Robustness category is the least relevant here, as the bill does not explicitly focus on performance benchmarks or auditing for AI systems but is concerned more about the safeguarding aspects. Therefore, I conclude that Social Impact and Data Governance are the most relevant categories, followed by System Integrity.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The Kids Online Safety Act is very relevant to multiple sectors. In particular, it impacts 'Government Agencies and Public Services' as it mandates actions from platforms used by minors, which are typically regulated by government entities. It is equally relevant to 'Private Enterprises, Labor, and Employment' because the legislation applies directly to companies providing online platforms where minors may be present. Furthermore, it engages with 'Nonprofits and NGOs' focused on child welfare and safeguarding. The relevance to 'Politics and Elections' is less explicit since it does not directly address electoral processes, though the implications of online safety could extend to misinformation in children's content. The other sectors are not substantially intersected by this legislation. Therefore, the most pertinent sectors here are Government Agencies and Public Services, Nonprofits and NGOs, and Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (1), machine learning (1), automated (5), recommendation system (5), algorithm (18)
Description: To require the Assistant Secretary for Preparedness and Response shall conduct risk assessments and implement strategic initiatives or activities to address threats to public health and national security due to technical advancements in artificial intelligence or other emerging technology fields.
Summary: The Artificial Intelligence and Biosecurity Risk Assessment Act requires the Assistant Secretary for Preparedness and Response to evaluate risks posed by AI advancements to public health and national security, implementing necessary strategic actions.
Collection: Legislation
Status date: July 18, 2023
Status: Introduced
Primary sponsor: Anna Eshoo
(6 total sponsors)
Last action: Referred to the Subcommittee on Health. (July 21, 2023)
Societal Impact
Data Governance (see reasoning)
The text directly addresses the risks associated with advancements in artificial intelligence, emphasizing the need for strategic initiatives and assessments concerning public health and national security. It explicitly mentions AI and its potential to be misused in developing biohazards, which aligns closely with the social impact of AI on public safety and health. The proposed measures indicate a recognition of the societal implications of AI technologies, making the text highly relevant to the Social Impact category. Data Governance is also relevant, as the text discusses the risks tied to potential AI misuse in biological contexts, which could encompass data responsibility and the management of AI-generated content or data; the primary focus, however, remains on societal risks. System Integrity is not explicitly mentioned, though there is an implied concern over the transparency and control mechanisms needed to manage AI risks. Robustness is implicated only insofar as the bill contemplates risk assessments; it does not define performance benchmarks. The scores reflect this analysis accordingly.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text appears to be primarily oriented towards public health and national security, positioning it heavily under the Government Agencies and Public Services sector due to the role of the Assistant Secretary for Preparedness and Response. The mention of risks concerning public health directly relates to governmental approaches to AI in public health contexts. While it touches on emergency responses and strategic initiatives, there are no explicit references to politics, the judicial system, healthcare, private enterprises, or research institutions. International cooperation may be inferred in the context of global biological risks but is not directly addressed. Therefore, the scores reflect that focus with a stronger emphasis on government implications.
Keywords (occurrence): artificial intelligence (4)
Description: Prohibits the use of external consumer data and information sources being used when determining insurance rates; provides that no insurer shall unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression; or use any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discrimina...
Summary: This bill prohibits insurers in New York from using external consumer data to determine insurance rates, aiming to prevent unfair discrimination based on various personal characteristics.
Collection: Legislation
Status date: Dec. 13, 2023
Status: Introduced
Primary sponsor: Brian Cunningham
(sole sponsor)
Last action: referred to insurance (Jan. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text explicitly targets the use of AI-related technologies, specifically algorithms, machine learning processes, and predictive models, in the field of insurance. Its relevance to Social Impact is high due to its focus on discrimination and fairness standards, which addresses societal concerns around the use of such technologies. Data Governance is also very relevant, as the legislation requires insurers to ensure their data practices do not lead to discrimination, emphasizing secure and fair data management in AI. System Integrity has moderate relevance: although the law ensures ethical use of algorithms and models, it does not thoroughly address concerns such as security or compliance. Robustness is only slightly relevant, since the legislation does not set benchmarks for AI performance, though it does emphasize testing and assessing algorithms to prevent unfair discrimination.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text is primarily focused on the insurance sector and how AI is regulated within it. The relevance to Private Enterprises, Labor, and Employment is high as it outlines how insurers can operate in a manner that avoids discrimination in case management and pricing, directly impacting business practices. Government Agencies and Public Services is also relevant because it involves state oversight in regulating insurers and their practices with AI. Judicial System has some relevance since the implications of discrimination could potentially lead to legal challenges, but there is no direct mention of AI use in legal processes. Other sectors, like Healthcare, Politics and Elections, or Nonprofits and NGOs, do not show relevance here as the legislation is targeted towards insurance practices specifically.
Keywords (occurrence): machine learning (1), algorithm (6)
Description: To counter the military-civil fusion strategy of the Chinese Communist Party and prevent United States contributions to the development of dual-use technology in China.
Summary: The Preventing PLA Acquisition of United States Technology Act of 2023 aims to hinder the Chinese Communist Party's military-civil fusion strategy by restricting U.S. entities from collaborating with identified Chinese entities, thereby preventing technology transfer that could enhance military capabilities.
Collection: Legislation
Status date: April 28, 2023
Status: Introduced
Primary sponsor: Jim Banks
(sole sponsor)
Last action: Referred to the Subcommittee on Communications and Technology. (May 5, 2023)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the development of technologies including artificial intelligence in the context of preventing military-civil fusion strategies of the Chinese Communist Party. As such, it highlights potential social implications and ethical considerations, such as the misuse of AI technologies for military purposes which can harm social structures and trust. Data governance is relevant as it concerns research relationships and data sharing with entities that are determined to be concerns, which indirectly relates to the data aspect of AI. System integrity is touched upon with the need for oversight and prohibitions against specific partnerships and projects involving AI, ensuring that the development of AI does not become compromised by military objectives. Robustness is less relevant as the bill does not focus on the performance benchmarks or certification of AI technologies but rather on restrictions on specific entities and technologies.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The legislation specifically addresses how AI technology ties into defense and national security, particularly regarding the prevention of technology exchange with Chinese entities that might result in dual-use technologies, including military applications. The text is not specifically about the judicial system, healthcare, private enterprises, or nonprofits; however, it is relevant in the context of government agencies as it outlines prohibitions and regulatory frameworks that will affect federal institutions and their technological collaborations. Academic institutions are included due to their role in research, particularly involving funding and partnerships that could contribute to military ends.
Keywords (occurrence): artificial intelligence (1)
Description: Amends the Freedom of Information Act. Modifies the definition of "private information" by providing that medical records include electronic medical records and the information contained within or extracted from an electronic medical records system operated or maintained by a Health Insurance Portability and Accountability Act covered entity. Exempts from disclosure all protected health information that may be contained within or extracted from any record held by a covered entity, including i...
Summary: HB2888 amends the Freedom of Information Act in Illinois to redefine "private information" to include electronic medical records and enhance protections against the disclosure of personal health information, in line with HIPAA compliance.
Collection: Legislation
Status date: Feb. 16, 2023
Status: Introduced
Primary sponsor: Lilian Jimenez
(sole sponsor)
Last action: Chief Sponsor Changed to Rep. Lilian Jiménez (Feb. 22, 2023)
This text primarily discusses amendments to the Freedom of Information Act relating to the definition of private information, specifically concerning medical records, including electronic medical records. Although it touches on data privacy and the importance of safeguarding sensitive information, it does not directly discuss AI, algorithms, or any topics that fall under the scope of AI's social impacts, integrity, data governance, or robustness. The relevance of each category is therefore deduced as follows:
- Social Impact: Not relevant, as the text does not specifically address societal impacts of AI, such as bias, fairness, or misinformation.
- Data Governance: Only slightly relevant; the text does mention policies regarding private information and data management, but it does not entail frameworks specifically related to AI governance.
- System Integrity: Slightly relevant as well, since the text covers rules related to the privacy and security of data but does not discuss AI system integrity or security protocols directly.
- Robustness: Not applicable; there is no mention of performance benchmarks or compliance standards related to AI.
Sector:
Healthcare (see reasoning)
The text makes no specific mention of, or implications for, most of the sectors listed. It focuses predominantly on medical records and data privacy rather than any direct application of AI technologies in particular sectors. Because of this:
- Politics and Elections: Not relevant, as the text does not discuss political campaigns or elections.
- Government Agencies and Public Services: Slightly relevant because it addresses public records, but it does not discuss AI applications in government services.
- Judicial System: Not relevant; the text does not address AI regulations connected to legal systems.
- Healthcare: Moderately relevant due to the content's focus on electronic medical records; however, it does not touch on AI specifically.
- Private Enterprises, Labor, and Employment: Not relevant, as it does not discuss AI's impact on labor or employment laws.
- Academic and Research Institutions: Not relevant, since there is no discussion of research or educational institutions and their use of AI.
- International Cooperation and Standards: Not relevant; no content on international AI cooperation.
- Nonprofits and NGOs: Not relevant; the text does not mention organizations of this type.
- Hybrid, Emerging, and Unclassified: Not relevant; the text does not fit any emerging AI applications.
Keywords (occurrence): automated (2)
Description: An act to amend the Budget Act of 2024 by amending Items 0110-001-0001, 0120-011-0001, 0250-496, 0509-001-0001, 0509-495, 0511-001-0001, 0515-495, 0515-496, 0521-101-3228, 0521-131-0001, 0530-001-0001, 0540-001-0001, 0540-101-0001, 0540-495, 0552-001-0001, 0555-495, 0650-001-0001, 0650-001-0140, 0650-001-0890, 0650-001-3228, 0650-001-9740, 0650-101-0890, 0650-101-3228, 0650-490, 0650-495, 0690-103-0001, 0690-496, 0820-001-0001, 0820-001-0367, 0820-001-0567, 0820-015-0001, 0840-495, 0860-002-0...
Summary: The Budget Act of 2024 amends previous budget items, reallocating funds and making appropriations for various state government operations while ensuring immediate effectiveness as a budget bill.
Collection: Legislation
Status date: March 23, 2023
Status: Engrossed
Primary sponsor: Jesse Gabriel
(sole sponsor)
Last action: Re-referred to Com. on B. & F.R. (July 1, 2024)
Data Governance (see reasoning)
The text mainly revolves around budget amendments and appropriations for various government functions. While it includes a mention of 'Generative Artificial Intelligence' (GenAI) in relation to pilot projects led by the Government Operations Agency and discusses the implications and operational guidelines for these projects, the rest of the text does not have significant relevance to the broader implications of AI on society, data governance, system integrity, or robustness. The provisions regarding GenAI could touch on social impact and data governance, such as the implications for personal information, but these concepts are not significant enough throughout the text to warrant high relevance scores for the broader categories. As such, a score of 2 for Social Impact and 3 for Data Governance is appropriate due to the specific mention of AI-related projects and guidelines, while System Integrity and Robustness are not discussed, so they will receive a score of 1.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation references AI in a specific operational context within governmental functions, particularly the Government Operations Agency. It discusses conducting generative AI pilot projects but does not directly reference applications in the healthcare system, judicial processes, or educational institutions, nor does it address sectors like private enterprises or international cooperation. The relevance is therefore primarily aligned with government operations rather than the other defined sectors. A score of 4 for Government Agencies and Public Services is appropriate given the Government Operations Agency's direct use of AI, while the other sectors receive much lower relevance scores. The specific nature of the bill and its focus limit its applicability to other sectors.
Keywords (occurrence): artificial intelligence (8), automated (5)
Description: A bill to require that social media platforms verify the age of their users, prohibit the use of algorithmic recommendation systems on individuals under age 18, require parental or guardian consent for social media users under age 18, and prohibit users who are under age 13 from accessing social media platforms.
Summary: The Protecting Kids on Social Media Act mandates age verification for social media users, prohibits algorithmic recommendations for users under 18, requires parental consent for minors, and restricts access for users under 13.
Collection: Legislation
Status date: April 26, 2023
Status: Introduced
Primary sponsor: Brian Schatz
(11 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (April 26, 2023)
Societal Impact
Data Governance (see reasoning)
This document explicitly addresses the use of 'algorithmic recommendation systems,' highlighting concerns about the impact of these systems on minors, including age verification requirements and parental consent. This speaks to the societal implications of AI, particularly in relation to the safeguarding of children and the ethical considerations of algorithmically driven content. Therefore, the Social Impact category is highly relevant. The Data Governance category receives a moderately relevant score as it touches upon personal data requirements for age verification and the management of personal data related to minors. The System Integrity category is slightly relevant due to the mention of overseeing technological aspects such as data management and security in the context of age verification, but it does not emphasize system integrity measures in a significant way. Robustness is deemed not relevant since there are no mentions of performance benchmarks or auditing for AI systems. Overall, the bill centers more on social implications and data governance rather than system integrity or robustness.
Sector: None (see reasoning)
The bill is primarily concerned with the protection of minors on social media, which relates closely to social impact concerns, particularly how AI-driven algorithms can influence minors. It does not directly address the role of AI in elections, judicial systems, healthcare, or other defined sectors, so Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are all not relevant. The Government Agencies and Public Services sector receives a slightly relevant score because the bill involves governmental oversight through mandated regulations.
Keywords (occurrence): automated (1), recommendation system (1)