4951 results:


Description: STATE AFFAIRS -- Adds to existing law to establish the Artificial Intelligence Advisory Council.
Summary: The bill establishes the Idaho Artificial Intelligence Advisory Council to oversee and assess AI systems used by state agencies, ensuring ethical practices and evaluating their impact on residents.
Collection: Legislation
Status date: March 1, 2024
Status: Engrossed
Primary sponsor: Environment, Energy and Technology Committee (sole sponsor)
Last action: Introduced, read first time; referred to: State Affairs (March 4, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text establishes the Artificial Intelligence Advisory Council, which has significant relevance to how artificial intelligence impacts society and individuals, thus fitting into the Social Impact category. The council's mandate includes responsibilities such as monitoring AI systems in government, understanding their effects on rights and privileges, and safeguarding against discrimination, all of which are central to the social implications of AI deployment. In terms of Data Governance, the requirement for inventory reports on automated decision systems indicates an approach to secure and accurate data management practices. The council’s responsibilities also touch upon System Integrity, particularly with respect to overseeing the development and ethics of AI systems, and ensuring they don’t infringe on citizens' rights. There’s a clear link to Robustness as well, because the council's findings might lead to new benchmarks and recommendations for AI performance within government systems based on their assessments. Nonetheless, the overwhelming emphasis on societal impact, governance of data pertaining to AI, and the standards of system integrity suggests a higher relevance for Social Impact and Data Governance in particular.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The establishment of an advisory council that will monitor and make recommendations on the use of artificial intelligence in government indicates strong relevance to Government Agencies and Public Services. The council's directives on oversight of automated decision-making systems directly impact how state agencies deploy AI technologies. Regarding Politics and Elections, while AI may have implications in this sector, the document does not explicitly address these aspects. Judicial System relevance could be inferred due to oversight of automated decision systems potentially affecting legal rights, yet it is not directly indicated in the text. Healthcare and other sectors do not apply as the legislation specifically focuses on state government functions. Thus, Government Agencies and Public Services stands out as the most relevant sector.


Keywords (occurrence): automated (28) algorithm (2)

Description: An Act To Create A New Section Of Law To Provide That If Any Political Communications Were Generated In Whole Or In Part By Synthetic Media Using Artificial Intelligence Algorithms, Then Such Political Communications Shall Have A Clear And Prominent Disclaimer Stating That The Information Contained In The Political Communication Was Generated Using Artificial Intelligence Algorithms; To Amend Section 23-15-897, Mississippi Code Of 1972, To Provide That If Any Published Campaign Materials Or P...
Summary: The bill mandates that any political communications or campaign materials generated by artificial intelligence must include a clear disclaimer indicating their AI origin, enhancing transparency in political discourse.
Collection: Legislation
Status date: March 5, 2024
Status: Other
Primary sponsor: Timaka James-Jones (10 total sponsors)
Last action: Died In Committee (March 5, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly pertains to the regulation of political communications generated by AI algorithms, predominantly focusing on transparency in political discourse. This ties closely with the Social Impact category as it addresses the implications of AI-generated content on political communication, misinformation, and public trust. The act seeks to mitigate the potential negative social consequences of AI use in election campaigns by enforcing disclaimers, thereby promoting accountability. The Data Governance category is also relevant due to the implications for data used in creating synthetic media and ensuring transparency in its generation, although less so than Social Impact. System Integrity is relevant in that it discusses the transparency and accuracy of disclosures in AI-generated communications, but does not explicitly tackle broader issues of security or oversight. The Robustness category is the least relevant, as it does not deal directly with benchmarks or performance standards for AI systems. Overall, the greatest emphasis appears to be on mitigating the social impacts of AI, especially in relation to misinformation and trust in the electoral process.


Sector:
Politics and Elections (see reasoning)

The Act fundamentally addresses the use of AI in the political sphere, emphasizing the necessity for disclaimers in political communications that are AI-generated. This places it squarely in the Politics and Elections sector, as it seeks to regulate the use of AI in political contexts and enhance transparency in communication within that domain. There is a minor indirect reference to the broader implications for Public Services if these regulations affect how elections are managed and communicated, but the primary focus remains on political communications. The act does not explicitly address other areas such as Healthcare or the Judicial System, which diminishes relevance in those areas. Therefore, while other sectors might have minor relevance, the main impact is solidly within Politics and Elections.


Keywords (occurrence): artificial intelligence (10) synthetic media (4)

Description: Establishes Artificial Intelligence in Education Task Force within DOE; provides requirements for such task force.
Summary: The bill establishes an AI in Education Task Force to evaluate artificial intelligence in K-12 and higher education, create policy recommendations, and develop a statewide computer science education strategic plan in Florida.
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Choice & Innovation Subcommittee (2 total sponsors)
Last action: Died in Education & Employment Committee (March 8, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly mentions the establishment of a task force focused on evaluating artificial intelligence in education, indicating relevance to societal impacts such as policy recommendations and ethical considerations. The task force will also assess the ethical, legal, and data privacy implications of AI in education, which aligns with aspects of the Social Impact category. It touches on the importance of workforce development in relation to AI, suggesting implications for employment and societal changes as AI becomes integrated into education. The need for educational standards and potentially ensuring fairness in AI curricula further elevates its relevance to the Social Impact category. On the other hand, while the text mentions aspects like data privacy and ethical implications of AI use, it does not go deeply into the policy frameworks or specific mandates that would fit strongly into the Data Governance or System Integrity categories. Overall, the focus is mostly about the social implications arising from the integration of AI into education, leading to an understanding that the Social Impact category is very relevant. Data Governance and System Integrity, while present in context, do not receive as strong a connection, especially since they are not the primary focus of the legislation. Robustness is not applicable as the legislation is not about performance benchmarks or regulatory compliance measures for AI systems.


Sector:
Academic and Research Institutions (see reasoning)

The focus on establishing an AI in Education Task Force, evaluating the uses of AI technologies in K-12 and higher education, and making policy recommendations indicates relevance specifically to the education sector. The task force aims to address the needs of the educational landscape in relation to AI, making a clear connection to policies surrounding education and technology. While it indirectly touches on government agencies due to its administrative aspects, the primary focus remains in the education realm, particularly the effects of AI applications in educational settings. The text does not address the sectors of Politics and Elections, Judicial System, Healthcare, Private Enterprises, Labor, and Employment, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified, leaving its applicability confined to the education sector, captured here under Academic and Research Institutions.


Keywords (occurrence): artificial intelligence (4) machine learning (1)

Description: An act to amend Section 33548 of the Education Code, relating to pupil instruction.
Summary: Assembly Bill No. 2876 mandates the inclusion of media and artificial intelligence literacy in California's educational curriculum and instructional materials for grades K-8, promoting critical skills for digital citizenship.
Collection: Legislation
Status date: May 9, 2024
Status: Engrossed
Primary sponsor: Marc Berman (2 total sponsors)
Last action: Read second time. Ordered to third reading. (June 25, 2024)

Category:
Societal Impact
System Integrity (see reasoning)

The text explicitly discusses the inclusion of Artificial Intelligence literacy as part of the curriculum for grades K-8 in California. This highlights the direct impact of AI on education and students, particularly focusing on knowledge and skills related to AI's principles and applications, which are essential for understanding technology in a modern context. It also touches upon ethical considerations, which could link to societal impacts such as responsible use and trust in technology. This makes the relevance to 'Social Impact' very strong. However, while the inclusion of AI literacy does pertain to issues of teaching responsible technology use and ethical considerations, it does not focus explicitly on discrimination, psychological harm, or misinformation, which would strengthen its connection to that category. Therefore, I will score it highly for Social Impact. 'Data Governance' relates less directly as the text emphasizes curriculum content rather than the management or regulatory aspects of data in AI. 'System Integrity' is somewhat relevant as it involves the integrity of the educational materials that cover AI literacy, but not to the extent of needing oversight or standards. 'Robustness' does not apply here as there is no mention of benchmarks or performance measures for AI itself.


Sector:
Academic and Research Institutions (see reasoning)

The text focuses on AI literacy as part of the education curriculum, which indicates a direct relevance to the 'Academic and Research Institutions' sector because it involves teaching foundational knowledge about AI and its applications to school-age children. The bill does not mention AI use in political campaigns, government service delivery, healthcare, or other specific sectors like judicial systems or nonprofits, making those categories less relevant. The focus on education does not quite fit into 'Politics and Elections' or 'Hybrid, Emerging, and Unclassified', which are broader and less specific. 'Private Enterprises, Labor, and Employment' is also not applicable here as the text doesn't address workplace implications directly.


Keywords (occurrence): artificial intelligence (7)

Description: Requires BPU to provide funding for purchase and installation of photovoltaic technologies for age-restricted community clubhouse facilities from societal benefits charge.
Summary: The bill mandates the New Jersey Board of Public Utilities to fund the purchase and installation of photovoltaic systems for age-restricted community clubhouses, using funds from the societal benefits charge.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Last action: Introduced in the Senate, Referred to Senate Environment and Energy Committee (Jan. 9, 2024)

Category: None (see reasoning)

The text primarily focuses on the allocation of funding for photovoltaic technologies, which relates mainly to energy efficiency and renewable energy but does not notably address artificial intelligence (AI) or its relevant implications. Therefore, the AI-related categories score low. Direct references to AI are absent, leaving minimal connections with the categories; as such, none are relevant for inclusion.


Sector: None (see reasoning)

The bill is concerned with funding for renewable energy technologies specifically targeting age-restricted communities. It does not make any references to sectors such as healthcare, governance, or judicial implications related to AI. Consequently, relevance to the sectors specified is negligible. The legislation does not discuss AI in relation to politics, government services, the judicial system, healthcare, employment, academic settings, or international standards, and thus, very low scores are appropriate.


Keywords (occurrence): algorithm (1)

Description: Consumer protection: identity theft; identity theft protection act; modify. Amends secs. 3, 12 & 12b of 2004 PA 452 (MCL 445.63 et seq.); adds secs. 11a, 11b, 20, 20a, 20b & 20c & repeals secs. 15 & 17 of 2004 PA 452 (MCL 445.75 & 445.77).
Summary: The bill amends Michigan's Identity Theft Protection Act to enhance consumer safeguards against identity theft. It mandates security procedures for personal data, establishes breach notification requirements, and updates definitions related to personal information.
Collection: Legislation
Status date: May 30, 2024
Status: Introduced
Primary sponsor: Rosemary Bayer (sole sponsor)
Last action: Referred To Committee On Finance, Insurance, And Consumer Protection (May 30, 2024)

Category:
Data Governance
System Integrity (see reasoning)

The text primarily focuses on consumer protection, specifically regarding identity theft and the security measures associated with personal information. While AI is mentioned in the context of automation (e.g., 'algorithmic process' in the definition of 'encrypted'), it is not a central theme. Instead, the legislation is more concerned with the safeguarding and management of personal data and breaches related to identity theft. Thus, the relevance to the categories can be outlined as follows:

1. Social Impact (Score: 2): While the legislation addresses identity theft, it does not specifically engage with broader societal impacts of AI technology such as bias, misinformation, or consumer protections related specifically to AI products. The minimal connection stems from the acknowledgment of personal data and its potential misuse by automated systems, which is not elaborated upon.
2. Data Governance (Score: 4): The bill emphasizes the protection and proper management of personal information, including mandates for security procedures, indicating a solid connection to data governance with respect to data accuracy, security, and responsibility.
3. System Integrity (Score: 3): There is mention of implementing security measures to safeguard personal information but not a strong focus on system integrity in the context of AI systems directly. It addresses data security within the context of data breaches rather than the integrity of the AI systems themselves.
4. Robustness (Score: 2): The text does not address AI performance benchmarks or regulatory compliance for AI systems specifically, making its relevance in this category weak.

Overall, the focus on data management and protection is more pronounced in the text, leading to a higher score for Data Governance.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text outlines protections against identity theft and emphasizes the importance of safeguarding personal data, which has implications for various sectors. The sector relevance can be assessed as follows:

1. Politics and Elections (Score: 1): There is no reference to political campaigns or electoral processes.
2. Government Agencies and Public Services (Score: 4): The involvement of state agencies in protecting personal data directly relates to government operations and public service delivery. The legislation applies to agencies managing personal information, adding relevance to this category.
3. Judicial System (Score: 2): While identity theft could intersect with legal outcomes, there is limited emphasis on judicial use or regulation of AI.
4. Healthcare (Score: 2): The text briefly mentions medical information but does not engage with AI applications in healthcare settings.
5. Private Enterprises, Labor, and Employment (Score: 3): The protections discussed have implications for businesses in managing consumer data and addressing security breaches, tying it moderately to this sector.
6. Academic and Research Institutions (Score: 1): There is no mention of AI's role in educational contexts.
7. International Cooperation and Standards (Score: 1): The text does not address international standards or cooperation.
8. Nonprofits and NGOs (Score: 2): Nonprofits may deal with personal information but there is no specific mention of their relationship with this legislation.
9. Hybrid, Emerging, and Unclassified (Score: 2): The bill does not fit neatly into emerging or hybrid sectors, but does touch on the automated concerns of data management.

The most pronounced relevance is to Government Agencies and Public Services due to its foundational focus on safeguarding personal information handled by state entities.
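The numbered scores above can be read as a simple threshold rule: labels scoring at or above a cutoff become the entry's assigned categories or sectors. The sketch below is a hypothetical illustration only; the threshold of 3 and the function name are assumptions inferred from the scored examples, not part of the source methodology.

```python
# Hypothetical sketch of how 1-5 relevance scores might map to the assigned
# labels shown in each entry. The threshold of 3 is an assumption inferred
# from the scored examples above, not stated in the source.
from typing import Dict, List

def assigned_labels(scores: Dict[str, int], threshold: int = 3) -> List[str]:
    """Return labels whose score meets the threshold, highest score first."""
    return [name for name, score in sorted(scores.items(), key=lambda kv: -kv[1])
            if score >= threshold]

sector_scores = {
    "Politics and Elections": 1,
    "Government Agencies and Public Services": 4,
    "Judicial System": 2,
    "Healthcare": 2,
    "Private Enterprises, Labor, and Employment": 3,
    "Academic and Research Institutions": 1,
    "International Cooperation and Standards": 1,
    "Nonprofits and NGOs": 2,
    "Hybrid, Emerging, and Unclassified": 2,
}

print(assigned_labels(sector_scores))
# ['Government Agencies and Public Services', 'Private Enterprises, Labor, and Employment']
```

Under that assumed cutoff, the output matches the two sectors listed for this entry.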


Keywords (occurrence): automated (1)

Description: Ballot processing; electronic adjudication; limitation
Summary: Senate Bill 1360 amends Arizona election laws to limit the use of electronic voting systems, prohibiting artificial intelligence in ballot processing and ensuring strict certification and oversight procedures for voting equipment.
Collection: Legislation
Status date: Feb. 21, 2024
Status: Engrossed
Primary sponsor: Frank Carroll (6 total sponsors)
Last action: House read second time (Feb. 27, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses the use of Artificial Intelligence (AI) within the context of electronic voting systems. It prohibits the use of any AI or learning software in the certification and processing of ballots, which has substantial societal implications in terms of transparency, trust, and integrity of election processes. This indicates a significant concern for the social impact of AI and its potential for misuse in electoral procedures, hence a high relevance for the Social Impact category. There are also mentions of standards and regulations regarding voting equipment, which could relate to Data Governance, but the primary focus is clearly on the supervision and impact of AI-related technologies in a societal context. System Integrity is somewhat relevant as the bill emphasizes security and control over voting hardware and processes, but this is not as strong as the social concerns. There is an absence of discussion surrounding benchmarks or performance metrics for AI, thus scoring low relevance for Robustness.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The text directly addresses the regulatory framework for the use of electronic voting systems, which falls predominantly within the realm of Politics and Elections. The content highlights the mechanisms for ensuring that the use of AI in such systems is controlled and prohibited, which speaks specifically to electoral processes. Although it does touch upon potential implications for government agency operations in administering elections, this is a secondary consideration. There are no references to AI applications in healthcare, labor, or other sectors covered in the provided descriptions, so those categories score lower. Thus, the highest relevance is for the Politics and Elections sector.


Keywords (occurrence): artificial intelligence (2)

Description: Amends the Courses of Study Article of the School Code. In provisions concerning bullying and cyber-bullying, provides that, beginning with the 2025-2026 school year, the term "cyber-bullying" includes bullying through the distribution by electronic means or the posting of a digital replica of an individual who is engaged in an activity in which the depicted individual did not engage in, including, but not limited to, sexually explicit digitized depictions of the individual. Defines "artifici...
Summary: The bill amends Illinois school code to expand the definition of cyber-bullying to include digital replicas created using AI, specifically for the 2025-2026 school year. Its aim is to address and mitigate bullying's impact in educational environments.
Collection: Legislation
Status date: May 14, 2024
Status: Introduced
Primary sponsor: Janet Yang Rohr (11 total sponsors)
Last action: Referred to Rules Committee (May 15, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text presents a bill aimed at amending school policies related to bullying, explicitly addressing bullying through digital means using artificial intelligence and generative AI technologies. The definition of cyber-bullying is broadened to include the use of digital replicas produced using AI, which raises significant social concerns regarding misinformation, emotional harm, and psychological effects on individuals. Therefore, this strongly relates to the category of Social Impact. Regarding Data Governance, the text also touches upon managing and reporting instances of bullying and collecting non-identifiable data, indicating a concern for data collection and integrity, albeit slightly less emphasized. System Integrity is not highly relevant since the legislation mainly focuses on educational policies rather than technical specifications or security measures for AI systems. Robustness is not relevant since the text does not address performance benchmarks or auditing of AI systems. Overall, the focus is primarily on social impacts stemming from AI usage in cyber-bullying contexts.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The bill primarily pertains to the educational sector by addressing bullying in schools through technology and AI. Its intent to provide guidance on acceptable behavior, the role of AI in digitally replicating individuals, and the mandates for reporting incidents of cyber-bullying show the bill's relevance to educational institutions. There are indications of implications for student well-being, data concerns, and the necessity for information management on bullying cases, but it does not pertain to most sectors outside of education. While its main focus is on education, it raises important issues regarding public service provision within schools, so it also relates to the Government Agencies and Public Services sector. Other sectors such as the Judicial System, Healthcare, Nonprofits and NGOs, and International Cooperation and Standards are not directly influenced by this legislation.


Keywords (occurrence): artificial intelligence (4) automated (1) algorithm (1)

Description: A bill to amend title XI of the Social Security Act to establish a pilot program for testing the use of a predictive risk-scoring algorithm to provide oversight of payments for durable medical equipment and clinical diagnostic laboratory tests under the Medicare program.
Summary: The Medicare Transaction Fraud Prevention Act aims to establish a pilot program testing a predictive risk-scoring algorithm to oversee Medicare payments for durable medical equipment and lab tests, helping prevent fraud.
Collection: Legislation
Status date: Jan. 18, 2024
Status: Introduced
Primary sponsor: Mike Braun (2 total sponsors)
Last action: Read twice and referred to the Committee on Finance. (Jan. 18, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text discusses a pilot program that aims to use a predictive risk-scoring algorithm related to oversight of Medicare payments for durable medical equipment and diagnostic laboratory tests. There is a clear connection to the social impact of AI due to the potential effects on healthcare transactions and patient interactions. Data governance is highly relevant as the legislation emphasizes the secure and ethical use of a predictive algorithm, including aspects that involve collecting patient data and addressing accuracy. System integrity also plays a crucial role as the use of the algorithm necessitates human oversight and transparency in decision-making related to Medicare payments. Finally, given the focus on testing algorithms against established benchmarks, robustness is relevant as it speaks to the need for evaluating AI performance in this setting.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The legislation directly pertains to the healthcare sector, as it deals with the oversight of Medicare payments, which is a critical aspect of providing medical services and ensuring appropriate billing practices. It also touches on predictive algorithms in a healthcare context, which further emphasizes its relevance to this sector. It does not specifically mention government agencies in a broad sense but does involve the Secretary and Medicare, further grounding its importance in governmental healthcare operations. As a result, it is rated highly for healthcare, moderately for government agencies, and not applicable to the other sectors. Its use of algorithms aimed at preventing fraud does not bring it within the Politics and Elections or Judicial System sectors either.


Keywords (occurrence): algorithm (11)

Description: To improve the National Oceanic and Atmospheric Administration's weather research, support improvements in weather forecasting and prediction, expand commercial opportunities for the provision of weather data, and for other purposes.
Summary: The Weather Research and Forecasting Innovation Reauthorization Act of 2023 aims to enhance NOAA's weather research, forecasting capabilities, and commercial weather data opportunities for improved public safety and resource management.
Collection: Legislation
Status date: May 1, 2024
Status: Engrossed
Primary sponsor: Frank Lucas (30 total sponsors)
Last action: Received in the Senate and Read twice and referred to the Committee on Commerce, Science, and Transportation. (May 1, 2024)

Category: None (see reasoning)

This legislation focuses primarily on weather research and forecasting. While artificial intelligence (AI) is not explicitly mentioned in the text, the process of improving weather forecasts and predictions inherently leans on data analysis and computational modeling techniques, which may include AI methodologies like machine learning. Furthermore, sections discussing innovative observations and expanding computational resources suggest a potential intersection with AI, particularly in the contexts of data management and predictive modeling. However, without direct mention of AI or relevant technologies, the relevance to the categories remains limited but possible. Therefore, Social Impact may be slightly relevant due to the implications of weather forecasting on public safety; Data Governance could apply if we consider data collection aspects; System Integrity may be relevant if we consider security aspects of weather data; Robustness may apply through the potential for certification of forecasting technologies. Overall, the connection to AI is minimal and indirect, leading to lower relevance scores.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The Weather Act primarily relates to weather research and forecasting. It does not specifically address use cases for AI in the legally defined sectors. However, the legislation may have implications for sectors such as Government Agencies and Public Services, given that it pertains to the NOAA's operations, public safety, and disaster management, as well as Academic and Research Institutions through the collaboration with academic partners. The relevance to Politics and Elections is weak since there is no mention of AI in political contexts; Judicial System lacks any direct ties; Healthcare does not appear to intersect; Private Enterprises may have some minor relevance given commercial data aspects; International Cooperation may pertain, but only indirectly. Overall, the sector relevance is low due to the absence of direct implications for defined sector areas.


Keywords (occurrence): artificial intelligence (4) machine learning (7) automated (3)

Description: Recognizing the accomplishments of the University of Florida and designating February 14, 2024, as "Gator Day" at the Capitol, etc.
Summary: The bill designates February 14, 2024, as "Gator Day" at the Capitol to recognize the University of Florida's achievements in education, research, healthcare, and support for veterans.
Collection: Legislation
Status date: Feb. 14, 2024
Status: Passed
Primary sponsor: Keith Perry (sole sponsor)
Last action: Adopted (Feb. 14, 2024)

Category:
Societal Impact (see reasoning)

The text primarily recognizes the accomplishments of the University of Florida, including its education in artificial intelligence (AI), which is mentioned in the context of offering AI courses and support for K-12 AI education. Given the limited mention of AI in terms of societal impacts or governance, there are minimal direct connections to the categories provided. The reference to education in AI aligns closely with the category of Social Impact as it concerns AI's role in enhancing education, but it lacks depth in discussing issues related to AI-driven discrimination, psychological harm, misinformation, or systemic accountability. Data Governance isn't pertinent, as there are no references to data management or accuracy. System Integrity and Robustness are similarly not relevant, as there are no discussions about security measures or performance benchmarks for AI systems. Therefore, the relevance to Social Impact is moderately notable due to the educational emphasis on AI, but overall, the document does not dive deep enough to warrant high scores across the board.


Sector:
Academic and Research Institutions (see reasoning)

The text does not address specific sectors in relation to the use or regulation of AI, focusing instead on the accomplishments of the University of Florida as an institution. While it mentions education and medical advancements, it does not specifically discuss the application or regulation of AI within these sectors. Consequently, the references to education and research could marginally relate to the Academic and Research Institutions sector, but they do not fulfill the criteria to a significant degree. All other sectors lack direct relevance as there are no discussions regarding government use, judicial applications, healthcare AI, employment impacts, or cooperation on an international level. The document does not explore AI's implications across most sectors, rendering the scores low across the board.


Keywords (occurrence): artificial intelligence (1)

Description: The purpose of this bill is to prohibit the use of deep fake images for the criminal invasion of privacy or the unlawful depiction of nude or partially nude minors or minors engaged in sexually explicit conduct; establishing such conduct as criminal offenses, subject to criminal penalties.
Summary: The bill criminalizes the use of deep fakes for nonconsensual disclosure of intimate images and unlawful depictions of minors, imposing penalties for such offenses.
Collection: Legislation
Status date: Feb. 27, 2024
Status: Engrossed
Primary sponsor: Jarred Cannon (11 total sponsors)
Last action: To Judiciary (Feb. 28, 2024)

Category:
Societal Impact (see reasoning)

The text primarily relates to the criminalization of deep fake images and addresses the possible harms stemming from their misuse, particularly concerning minors and privacy breaches. This aligns most closely with the Social Impact category, as it covers the repercussions of AI-driven technology on society, specifically the protection of individual rights and the prevention of misuse. It has a significant focus on establishing legal penalties for actions that can cause psychological or material harm due to deepfake technology. Data Governance could have a connection regarding the regulation of data, but the text is less focused on data management and more on criminal offenses. System Integrity and Robustness are less relevant, as this legislation does not discuss safeguard practices or performance benchmarks for AI systems in this context.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The text explicitly addresses the misuse of AI through deep fakes in the context of privacy violations and the protection of minors, which primarily aligns with the Government Agencies and Public Services sector as it speaks to the enforcement of laws regarding these concerns. The judicial implications of the proposed legislation tie into the Judicial System sector, as it involves legal definitions and criminal penalties. However, the core focus is on the societal consequences of deep fake misuse, which makes the Government Agencies and Public Services sector the more suitable categorization. The other sectors, such as Healthcare, Politics and Elections, and Academic Institutions, are not relevant here as they do not pertain to the text's content.


Keywords (occurrence): artificial intelligence (2) deepfake (6)

Description: To establish in the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security a task force on artificial intelligence, and for other purposes.
Summary: The CISA Securing AI Task Force Act establishes a task force within the Department of Homeland Security to enhance the security and safety of artificial intelligence technologies over five years.
Collection: Legislation
Status date: May 10, 2024
Status: Introduced
Primary sponsor: Troy Carter (2 total sponsors)
Last action: Referred to the Subcommittee on Cybersecurity and Infrastructure Protection. (May 10, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text of the CISA Securing AI Task Force Act explicitly pertains to artificial intelligence, focusing on its safe and secure design, development, deployment, and the unique challenges associated with AI in cybersecurity. The establishment of a task force dedicated to these aspects highlights the relevance of this legislation to addressing the social implications of AI, especially concerning safety and security measures. The text emphasizes privacy, civil rights, and civil liberties standards, indicating a concern for the social impact of AI systems. It also points to the need for data governance, particularly regarding the management and reliability of the AI-related data used by the Agency. However, while the call for secure AI use directly connects to system integrity, the focus on establishing a task force does not emphasize performance benchmarks or standards prevalent in robustness discussions. Thus, it is inferred that social impact and data governance take precedence as categories associated with this text.


Sector:
Government Agencies and Public Services
International Cooperation and Standards (see reasoning)

The text pertains primarily to the establishment of a task force within the Department of Homeland Security (DHS) with a focus on artificial intelligence. This indicates a significant implication for Government Agencies and Public Services, as it relates to federal efforts in coordinating AI safety and security. There is no explicit mention of politics and elections or judicial systems, and while it could potentially touch upon private enterprises in terms of software recommendations, it does not primarily focus on them. There is a mild relevance to International Cooperation and Standards due to the inter-agency coordination mentioned, but not enough to rank highly. Therefore, the most applicable sector for this legislation is Government Agencies and Public Services.


Keywords (occurrence): artificial intelligence (10)

Description: To establish requirements relating to credit scores and educational credit scores, and for other purposes.
Summary: The Free Credit Scores for Consumers Act of 2024 mandates consumer reporting agencies provide free credit scores and educational credit scores, enhance transparency about scoring differences, and offer consumers better educational resources on credit management.
Collection: Legislation
Status date: April 29, 2024
Status: Introduced
Primary sponsor: Joyce Beatty (sole sponsor)
Last action: Referred to the House Committee on Financial Services. (April 29, 2024)

Category: None (see reasoning)

The legislation primarily focuses on credit scores and the responsibilities of credit reporting agencies, with no direct references to AI technologies or systems. Although it mentions algorithms in terms of credit scoring models, it does not address AI implications such as bias, robustness, data governance, or systemic impacts of AI technologies. Therefore, no categories are deemed highly relevant to this bill.


Sector: None (see reasoning)

The legislation pertains to consumer reporting and credit scores but does not focus on specific sectors where AI applications are prominent. The only mention of relevant technology is related to credit scoring models, which might involve statistical algorithms but does not reflect a sectoral impact or use of AI technology specifically. Given this, none of the defined sectors such as healthcare, judicial systems, or government services are relevant.


Keywords (occurrence): algorithm (1)

Summary: The ARMS SALES NOTIFICATION bill mandates congressional notification before certain arms sales, ensuring oversight. It details a proposed $23 billion sale of F-16 aircraft and modernization services to Turkey.
Collection: Congressional Record
Status date: Feb. 1, 2024
Status: Issued
Source: Congress

Category: None (see reasoning)

This text predominantly discusses the notification of an arms sale, including technical specifications and geopolitical context. It does not explicitly address or pertain to aspects of AI, such as its societal impact, data governance, integrity of systems, or robustness in AI technologies. The entire content is centered around defense articles without a mention of AI frameworks or their implications. This results in scores of 1 across all four categories, indicating no relevance.


Sector: None (see reasoning)

The text mainly pertains to arms sales and defense contracts related to the military and foreign relations, without discussing AI applications within any sector. As such, there are no relevant connections to politics and elections, government activities concerning AI, the judicial system, healthcare, or other sectors mentioned. Thus, each sector rating results in a score of 1, indicating non-relevance.


Keywords (occurrence): automated (1)

Description: To promote a 21st century artificial intelligence workforce and to authorize the Secretary of Education to carry out a program to increase access to prekindergarten through grade 12 emerging and advanced technology education and upskill workers in the technology of the future.
Summary: The "Workforce of the Future Act of 2024" aims to enhance education in artificial intelligence and technology from prekindergarten to grade 12, while upskilling workers for future job demands.
Collection: Legislation
Status date: Sept. 16, 2024
Status: Introduced
Primary sponsor: Barbara Lee (2 total sponsors)
Last action: Referred to the Committee on Education and the Workforce, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (Sept. 16, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily focuses on the impact of artificial intelligence (AI) on workforce development and education. This directly contributes to the understanding of the social implications of AI, primarily regarding job displacement and the preparation of future workers. It also addresses the educational steps necessary to align the workforce with advancements in technology, indicating a strong relationship with social impact, as it aims to mitigate negative consequences of AI. Data governance is touched upon through discussions related to the collection and handling of workforce data required for analysis. However, there is limited focus on technology system integrity or robustness benchmarks, marking these categories as less relevant.


Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The sectors mainly addressed in the legislation include education and workforce development, with a clear emphasis on how AI will change job prospects. The act targets the intersection of education and technology, particularly in K-12 settings, aligning closely with both the education sector and workforce development aspects. While such elements indirectly touch upon government agencies, the direct focus remains on education and labor, suggesting limited relevance to other sectors. The legislation does not address specific applications of AI in healthcare, politics, or judicial systems, marking those sectors as less relevant.


Keywords (occurrence): artificial intelligence (18) algorithm (3)

Summary: The bill refers several measures to committees, including studies on AI in consumer safety, U.S. manufacturing for critical infrastructure, and mandatory safety standards for retractable awnings.
Collection: Congressional Record
Status date: May 15, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text includes a reference to the use of artificial intelligence in the context of establishing a pilot program by the Consumer Product Safety Commission. This directly relates to the category of Social Impact as it may concern the implications of AI technology in safety regulation, potentially involving ethical considerations and accountability. It is less relevant to Data Governance, System Integrity, and Robustness, as there is no explicit discussion about data management, security protocols, or performance benchmarks related to AI systems.


Sector:
Government Agencies and Public Services (see reasoning)

The text specifically mentions the use of AI in a Consumer Product Safety Commission program, which aligns it closely with Government Agencies and Public Services as it pertains to governmental oversight and service delivery. It does not mention any other sectors like Healthcare, Private Enterprises, Politics and Elections, etc., making it less relevant for those areas.


Keywords (occurrence): artificial intelligence (1)

Description: Creates the Digital Forgeries Act. Provides that an individual depicted in a digital forgery has a cause of action against any person who, without the consent of the depicted individual, knowingly distributes a digital forgery, creates a digital forgery with intent to distribute, or solicits the creation of a digital forgery with the intent to distribute: (i) in order to harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to an individual falsely depicted; (...
Summary: The Digital Forgeries Act establishes legal recourse for individuals depicted in unauthorized digital forgeries, allowing them to seek damages and protection against harmful distributions, while excluding clearly labeled AI-generated content.
Collection: Legislation
Status date: Feb. 5, 2024
Status: Introduced
Primary sponsor: Jennifer Gong-Gershowitz (sole sponsor)
Last action: House Committee Amendment No. 1 Rule 19(c) / Re-referred to Rules Committee (April 5, 2024)

Category:
Societal Impact (see reasoning)

The Digital Forgeries Act is primarily focused on the social implications of digital forgeries, particularly highlighting the dangers of AI-manipulated content. It emphasizes the potential harms such forgeries can inflict on individuals, including emotional, reputational, and economic damage. By defining 'digital forgery' in terms of AI's role in creating misleading content, the Act ensures accountability for those who misuse AI technologies. This clearly aligns with the goals of the Social Impact category, as it addresses consumer protections and the harm caused by AI. The Act also contains stipulations that aim to mitigate the negative consequences of AI-generated digital forgeries by emphasizing the importance of consent and accountability. Overall, the legislation exemplifies a proactive approach to the societal challenges posed by AI-generated content, which is central to this category. Therefore, I would rate Social Impact as 5 for its direct focus on the societal consequences of AI. Data Governance is less relevant as it does not directly tackle issues of data management or accuracy beyond the context of digital forgeries. The System Integrity considerations are minimal as the legislation does not delve deeply into security or transparency of AI systems, focusing more on the consequences of misuse rather than the integrity of the technology itself. Likewise, Robustness is not a primary focus, as the Act does not discuss performance benchmarks for AI systems. Thus, I would rate it 2 for Data Governance, 1 for System Integrity, and 1 for Robustness.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The Digital Forgeries Act explicitly addresses the creation and distribution of AI-generated content, indicating relevance across multiple sectors, most notably Politics and Elections given its implications for misinformation in political contexts. The act is also relevant to government agencies, as it outlines civil remedies that bear on government accountability for AI applications. However, its direct implications for the judicial system are vague, as it focuses primarily on civil actions. It does not directly pertain to healthcare, private enterprises, or academic institutions. International cooperation is not a focus, and nonprofits have at most a tangential connection. Therefore, I would rate Politics and Elections as 3, Government Agencies and Public Services as 4, and Judicial System as 2, with the other sectors receiving a score of 1.


Keywords (occurrence): artificial intelligence (4)

Description: An act to add Chapter 11.1 (commencing with Section 21760) to Division 8 of the Business and Professions Code, relating to social media platforms.
Summary: The bill aims to mandate California generative AI companies to adopt technical open standards and content credentials for verifying digital content authenticity, addressing issues like deepfakes.
Collection: Legislation
Status date: May 21, 2024
Status: Engrossed
Primary sponsor: Akilah Weber (sole sponsor)
Last action: In committee: Held under submission. (Aug. 15, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

This act concerns the management of digital content and specifically mentions technologies related to digital content forgery such as deepfakes. It emphasizes accountability and standards for social media platforms in handling provenance data, which relates to the impact of AI and digital methods on public trust and information verification. This makes the Social Impact category very relevant. Additionally, while the act does not directly discuss the governance of data specifically related to biases or inaccuracies within datasets, it does establish guidelines for content integrity and verification, which slightly leans toward Data Governance. It does not address system integrity through mandates like human oversight, nor does it set performance benchmarks, making Robustness and System Integrity less relevant.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation mentions the impact of digital technologies on social media platforms, focusing on the handling of provenance data, which is central to online content authenticity. However, it does not specifically address the political implications of AI in decision-making or electoral processes, nor does it focus on regulation within the healthcare, judicial, or employment sectors. Government Agencies and Public Services is only indirectly relevant, as the bill may touch on how state departments handle digital content. Its implications for privacy and content management make it somewhat relevant to public services, but not strongly enough to warrant a higher score. Thus, the most relevant sector is Government Agencies and Public Services, owing to its policies governing how state departments interact with digital content, while the other sectors are only very indirectly related and receive low scores.


Keywords (occurrence):

Summary: This bill condemns anti-Semitism at U.S. universities, highlighting university leadership failures in addressing harassment and intimidation of Jewish students and calling for institutional reforms to enhance student safety.
Collection: Congressional Record
Status date: May 23, 2024
Status: Issued
Source: Congress

Category: None (see reasoning)

The text primarily discusses issues surrounding anti-Semitism at U.S. universities, focusing on the responses of university leaders to this social issue. While it addresses leadership accountability and the social environment on campuses, it does not explicitly pertain to AI or related technologies. Given this lack of relevance to AI-specific legislation or impacts, the scores for each category reflect that disconnect, as none of the categories are applicable to the content. Thus, all categories are rated as 'Not Relevant'.


Sector: None (see reasoning)

The content of the text revolves around the political and social ramifications of anti-Semitism in universities rather than the influence or application of AI technologies. While it discusses the actions of university leaders in response to social issues, it does not engage with any AI-related sectors directly. Therefore, all sector scores are also rated as 'Not Relevant'.


Keywords (occurrence): artificial intelligence (3)