4160 results:


Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
Data Governance
System Integrity (see reasoning)

The text outlines detailed quality management requirements focused predominantly on image quality performance parameters in digitization processes, which may involve AI technologies in automated quality control. However, it does not explicitly discuss AI applications or their societal impacts, making it less relevant to the broader legislative themes of AI. The focus is on procedural and technical specifications rather than on ethical, societal, or governance issues related to AI usage.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation does not directly address specific sectors but touches on quality control processes that could be applied in various contexts, including government archives. The mention of automated techniques for verifying metadata accuracy hints at potential applications in governmental operations, but overall the text lacks explicit sectoral focus. Therefore, while it could be tangentially related to government operations, it does not clearly pertain to any one sector.


Keywords (occurrence): automated (2)

Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily focuses on performance testing and compliance requirements related to emissions in iron and steel foundries. It does not mention any AI-related technologies, systems, or implications. Therefore, none of the categories regarding Social Impact, Data Governance, System Integrity, or Robustness are relevant since they deal specifically with AI systems and their implications. The text strictly outlines regulatory and procedural frameworks for emissions testing, which does not involve the concerns or focuses of these categories.


Sector: None (see reasoning)

The text addresses the compliance requirements for emissions limits in iron and steel foundries and specifies performance tests and methodologies required by the Environmental Protection Agency (EPA). None of the sectors outlined pertain to AI regulation or its application, as the content centers solely on environmental standards and testing frameworks, which do not include political, governmental, healthcare, or other sectors involving AI technologies. As such, all sectors receive the lowest relevance score.


Keywords (occurrence): automated (4)

Description: A bill to amend the Federal Election Campaign Act of 1971 to provide further transparency for the use of content that is substantially generated by artificial intelligence in political advertisements by requiring such advertisements to include a statement within the contents of the advertisements if generative AI was used to generate any image, audio, or video footage in the advertisements, and for other purposes.
Collection: Legislation
Status date: March 6, 2024
Status: Introduced
Primary sponsor: Amy Klobuchar (2 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 389. (May 15, 2024)

Category:
Social Impact
System Integrity (see reasoning)

The text of the AI Transparency in Elections Act of 2024 explicitly addresses the use of artificial intelligence in political advertisements, including definitions and requirements related to generative artificial intelligence. The bill mandates that any advertisement employing content generated or significantly altered by AI must include clear disclaimers. It directly relates to Social Impact in terms of misinformation and public trust in election processes. The need for accountability in AI-generated media is also emphasized, impacting consumer protection regarding misinformation. The text is therefore strongly relevant to both Social Impact and System Integrity, while its relevance to Data Governance and Robustness is less pronounced, since the primary focus is on transparency in communications rather than data management or performance benchmarks. The respective scores reflect the identified significance of each area to the text.


Sector:
Politics and Elections (see reasoning)

The legislation focuses on political advertisements and their regulation, particularly in the electoral context. It includes the use of AI in political communication and seeks to enhance transparency to safeguard the integrity of electoral processes, thus making it highly relevant to the Politics and Elections sector. The mention of generative AI elements provides substantial context for regulation within this sector. Other sectors like Government Agencies and Public Services, Judicial System, and Healthcare show minimal direct relevance since the text does not primarily deal with those areas. Scores are allocated based on this evaluation of direct relevance to the respective sectors.


Keywords (occurrence): artificial intelligence (12) machine learning (2)

Description: Creates the Digital Forgeries Act. Provides that an individual depicted in a digital forgery has a cause of action against any person who, without the consent of the depicted individual, knowingly distributes a digital forgery, creates a digital forgery with intent to distribute, or solicits the creation of a digital forgery with the intent to distribute: (i) in order to harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to an individual falsely depicted; (...
Collection: Legislation
Status date: Feb. 5, 2024
Status: Introduced
Primary sponsor: Jennifer Gong-Gershowitz (sole sponsor)
Last action: House Committee Amendment No. 1 Rule 19(c) / Re-referred to Rules Committee (April 5, 2024)

Category:
Social Impact (see reasoning)

The Digital Forgeries Act is primarily focused on the social implications of digital forgeries, particularly highlighting the dangers of AI-manipulated content. It emphasizes the potential harms such forgeries can inflict on individuals, including emotional, reputational, and economic damage. By defining 'digital forgery' in terms of AI's role in creating misleading content, the Act ensures accountability for those who misuse AI technologies. This clearly aligns with the goals of the Social Impact category, as it addresses consumer protections and the harm caused by AI. The Act also contains stipulations that aim to mitigate the negative consequences of AI-generated digital forgeries by emphasizing the importance of consent and accountability. Overall, the legislation exemplifies a proactive approach to the societal challenges posed by AI-generated content, which is central to this category. Therefore, I would rate Social Impact as 5 for its direct focus on the societal consequences of AI. Data Governance is less relevant as it does not directly tackle issues of data management or accuracy beyond the context of digital forgeries. The System Integrity considerations are minimal as the legislation does not delve deeply into security or transparency of AI systems, focusing more on the consequences of misuse rather than the integrity of the technology itself. Likewise, Robustness is not a primary focus, as the Act does not discuss performance benchmarks for AI systems. Thus, I would rate it 2 for Data Governance, 1 for System Integrity, and 1 for Robustness.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

The Digital Forgeries Act explicitly addresses the creation and distribution of AI-generated content, making it relevant across multiple sectors but most notably to politics and elections, given its implications for misinformation in political contexts. The act is also relevant for government agencies, as it outlines legal remedies governing how AI is used within civil frameworks, showing its relevance to government accountability regarding AI applications. However, its direct implications for the judicial system are vague, as it focuses primarily on civil actions. It does not pertain to healthcare, private enterprises directly, or academic institutions. International cooperation is not a focus, and nonprofits may have only a tangential connection at best. Therefore, I would rate Politics and Elections as 3, Government Agencies and Public Services as 4, and Judicial System as 2, with the other sectors receiving a score of 1.


Keywords (occurrence): artificial intelligence (4)

Description: Prohibits motor vehicle insurers from discrimination on the basis of socioeconomic factors in determining algorithms used to construct actuarial tables, coverage terms, premiums and/or rates.
Collection: Legislation
Status date: May 4, 2023
Status: Introduced
Primary sponsor: Kevin Parker (3 total sponsors)
Last action: REFERRED TO INSURANCE (Jan. 3, 2024)

Category:
Social Impact
Data Governance (see reasoning)

The text is primarily concerned with discrimination in the insurance industry based on socioeconomic factors and the use of algorithms in setting actuarial tables and insurance rates. Although 'algorithm' is explicitly mentioned, the focus is more on the social implications of how these algorithms may be used rather than the technical aspects of AI. Consequently, the Social Impact category is highly relevant due to its focus on discrimination and equity in insurance policies. Data Governance is moderately relevant as it touches on the fairness of algorithms but does not delve into data security or management. System Integrity has low relevance since the text does not address transparency or security standards of the algorithms being discussed. Robustness has limited relevance as the text doesn’t consider performance benchmarks for these algorithms; it is primarily a matter of fairness rather than technical robustness.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text mainly revolves around the motor vehicle insurance sector, addressing insurers' use of algorithms to prevent socioeconomic discrimination. It is therefore highly relevant to the Private Enterprises, Labor, and Employment sector because of its implications for business operations and market fairness. Though it could apply to Government Agencies and Public Services through regulatory oversight, this connection is less direct than the private-sector focus. It does not bear a strong relationship to other sectors such as Healthcare or International Cooperation, given its specific focus on insurance and socioeconomic factors.


Keywords (occurrence): algorithm (1)

Description: License plate reader systems; civil penalty. Provides requirements for the use of license plate reader systems, defined in the bill, by law-enforcement agencies. The bill limits the use of such systems to scanning, detecting, and recording data about vehicles and license plate numbers for the purpose of identifying a vehicle that is (i) associated with a wanted, missing, or endangered person or human trafficking; (ii) stolen; (iii) involved in an active law-enforcement investigation; or (iv) ...
Collection: Legislation
Status date: Feb. 13, 2024
Status: Engrossed
Primary sponsor: Scott Surovell (3 total sponsors)
Last action: Constitutional reading dispensed (40-Y 0-N) (Feb. 13, 2024)

Category:
Social Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the use of automated high-speed cameras and computer algorithms in license plate reader systems employed by law enforcement, indicating direct relevance to AI through the terminology of automated systems and algorithms. The legislation stipulates requirements for the operation of these systems, highlighting accountability, data handling, and compliance with certain standards. This connects to System Integrity (given the regulations around oversight and access) and Social Impact (given the implications for data privacy, surveillance, and potential misuse). Data Governance is also relevant, as the bill mandates controls over data management and security measures. However, the text places little emphasis on new AI benchmarks or performance audits, which makes Robustness less relevant.


Sector:
Government Agencies and Public Services
Judicial System (see reasoning)

The text directly relates to Government Agencies and Public Services, as it defines how government law enforcement agencies use technology (license plate readers) to gather and manage data for public safety purposes. It discusses data retention, access management, and compliance with laws, connecting directly to the operations of public services. While aspects of AI regulation could touch on the Judicial System in terms of evidence admissibility, the text does not explicitly focus on judicial implications. Other sectors, such as Healthcare, Politics and Elections, and Nonprofits and NGOs, do not find direct relevance in the context provided by this text.


Keywords (occurrence): automated (1)

Description: Repealing provisions which comprise the Florida Motor Vehicle No-Fault Law; revising the motor vehicle insurance coverages that an applicant must show to register certain vehicles with the Department of Highway Safety and Motor Vehicles; revising minimum liability coverage requirements for motor vehicle owners or operators; revising authorized methods for meeting such requirements; revising financial responsibility requirements for owners or lessees of for-hire passenger transportation vehicl...
Collection: Legislation
Status date: March 8, 2024
Status: Other
Primary sponsor: Erin Grall (4 total sponsors)
Last action: Died in Banking and Insurance (March 8, 2024)

Category: None (see reasoning)

The legislation primarily pertains to motor vehicle insurance regulations and reforms in Florida. It does not explicitly address AI technology or its impact on society, data governance, system integrity, or robustness, therefore scoring low in all relevant categories. While there are mentions of autonomous delivery vehicles (which could imply a technological aspect), the overall text is focused on insurance requirements rather than any systemic or social implications of AI systems. The sections regarding insurance coverages and proof of financial responsibility do not involve considerations applicable to the AI categories defined.


Sector: None (see reasoning)

The sector classifications are broad and encompass AI's application across various fields. However, this text focuses on motor vehicle insurance and regulatory provisions without a focus on AI applications, making it non-relevant for the defined sectors. While autonomously operated vehicles are mentioned, the lack of specific AI applications or regulatory frameworks within those contexts results in a low score across all sectors evaluated.


Keywords (occurrence): automated (2) autonomous vehicle (1)

Description: An act to amend the Budget Act of 2024 by amending Items 0110-001-0001, 0120-011-0001, 0250-496, 0509-001-0001, 0509-495, 0511-001-0001, 0515-495, 0515-496, 0521-101-3228, 0521-131-0001, 0530-001-0001, 0540-001-0001, 0540-101-0001, 0540-495, 0552-001-0001, 0555-495, 0650-001-0001, 0650-001-0140, 0650-001-0890, 0650-001-3228, 0650-001-9740, 0650-101-0890, 0650-101-3228, 0650-490, 0650-495, 0690-103-0001, 0690-496, 0820-001-0001, 0820-001-0367, 0820-001-0567, 0820-015-0001, 0840-495, 0860-002-0...
Collection: Legislation
Status date: June 29, 2024
Status: Passed
Primary sponsor: Scott Wiener (sole sponsor)
Last action: Chaptered by Secretary of State. Chapter 35, Statutes of 2024. (June 29, 2024)

Category:
Social Impact
Data Governance
System Integrity
Robustness (see reasoning)

This text primarily focuses on amendments to the Budget Act of 2024, with only limited discussion of artificial intelligence. However, one part concerning Generative Artificial Intelligence (GenAI) outlines pilot projects and compliance requirements, which could imply relevance to multiple categories. Specifically, the mention of AI pilot projects ties into societal impacts, data governance, system integrity, and robustness, given the nature of managing AI-generated content and personal data. The text nonetheless lacks the comprehensive detail that would strongly associate it with systemic changes or measures in these categories beyond compliance and appropriations. Overall relevance to the categories is therefore limited but present; the funding of AI-related projects nudges these areas into slightly or moderately relevant status rather than distinct categorization.


Sector:
Government Agencies and Public Services (see reasoning)

The text relates to various state budget appropriations and funding mechanisms but does not delve deeply into specific sectors as they pertain to AI implementation or its consequences. An aspect concerning Generative AI indicates potential impacts on data governance, ethical applications, and possibly government efficiency when AI is integrated into public sectors, which supports the idea of budgetary commitments to AI in the Government Agencies and Public Services sector. However, the text lacks sufficient depth to categorize firmly, as references are more procedural than innovative. So, while the presence of AI-related content relates to some sectors, the relevance is weak without clear definitions and outcomes concerning AI use in these areas.


Keywords (occurrence): artificial intelligence (6) automated (5)

Description: STATE AFFAIRS AND GOVERNMENT -- DIGITAL ASSET KEYS -- PROHIBITION OF PRODUCTION OF PRIVATE KEYS - Prohibits the compelled production of a private key as it relates to a digital asset, digital identity or other interest or right.
Collection: Legislation
Status date: March 1, 2024
Status: Introduced
Primary sponsor: Louis Dipalma (4 total sponsors)
Last action: Committee recommended measure be held for further study (April 9, 2024)

Category:
Data Governance (see reasoning)

The text does not explicitly address the societal impacts of AI, such as bias, discrimination, or consumer protections. While it touches on digital identities and assets, which could be indirectly related to AI applications, the primary focus is on the legal handling and security of private cryptographic keys rather than AI issues. Therefore, the relevance to the Social Impact category is quite low. Data Governance loosely applies, as digital assets involve data security and identity management which can relate to the governance of AI data, but this is indirect. There's no mention of AI system integrity or benchmarking, so scores for System Integrity and Robustness also remain low.


Sector: None (see reasoning)

The legislation is primarily concerned with the regulation of digital asset keys and their compelled production. It does not explicitly or implicitly address any of the sectors such as politics, healthcare, or education in regard to AI's role. While the use of AI could be a topic within these frameworks, the text itself doesn't provide relevant content that falls under those categories, making the connection tenuous at best. Thus, scores for sectors related to AI are deemed low because there is a lack of relevance to the defined sectors.


Keywords (occurrence): algorithm (1)

Description: A bill to prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes.
Collection: Legislation
Status date: Sept. 12, 2023
Status: Introduced
Primary sponsor: Amy Klobuchar (6 total sponsors)
Last action: Placed on Senate Legislative Calendar under General Orders. Calendar No. 388. (May 15, 2024)

Category:
Social Impact
Data Governance
System Integrity (see reasoning)

The legislation explicitly addresses the impact of AI-generated media on elections, referring directly to the potential psychological and material harm that could arise from deceptive AI-generated content. It emphasizes accountability by prohibiting misleading AI-generated media related to candidates, indicating a legislative attempt to protect the integrity of the electoral process. This aligns closely with Social Impact, justifying a high relevance score. Data Governance has moderate relevance as well, since ensuring the integrity of data in political contexts can relate to AI outputs, though the focus on deception shifts it slightly away from core data governance concerns. System Integrity is moderately relevant because AI-generated content must undergo scrutiny and potential oversight, reflecting AI's role in maintaining a fair election process. Robustness is less relevant, as no specific performance benchmarks or standards for AI systems are outlined in the text.


Sector:
Politics and Elections (see reasoning)

This text directly pertains to the regulation of AI in the political realm, given its focus on preventing the spread of deceptive AI-generated content regarding federal electoral candidates. By explicitly dealing with AI technology's implications in politics and elections, the legislation shows a clear intent to protect voters and candidates from misinformation, thus scoring highly in this sector. Other sectors do not receive the same level of focus, as the legislation primarily centers around electoral processes and does not significantly address the roles of other sectors like healthcare, private enterprises, or government agencies.


Keywords (occurrence): artificial intelligence (1) machine learning (1) deep learning (2)

Description: An Act providing for civil liability for fraudulent misrepresentation of candidates; and imposing penalties.
Collection: Legislation
Status date: May 29, 2024
Status: Introduced
Primary sponsor: Tarik Khan (28 total sponsors)
Last action: Laid on the table (Sept. 23, 2024)

Category:
Social Impact
Data Governance (see reasoning)

The text directly discusses the fraudulent use of AI-generated content for political misrepresentation, indicating significant societal impacts, especially in elections. This fits well within Social Impact, as it raises concerns about misinformation, accountability of content creators, and potential harm to trust in political communications. It also touches upon consumer protections regarding AI-generated media in campaign advertisements. Furthermore, the text addresses the ethical implications of using AI in political campaigns, highlighting the importance of transparency and fairness, which further solidifies its relevance to Social Impact. Data Governance could also be relevant because of the emphasis on proper use and disclosure of synthetic content; however, the text is primarily focused on the consequences of misuse rather than data-management principles. System Integrity and Robustness are less applicable, as they revolve around operational security and performance standards rather than the specific issues the text aims to resolve, namely misinformation and its penalties. Overall, this piece primarily exemplifies Social Impact, with some relevance to Data Governance.


Sector:
Politics and Elections (see reasoning)

The text explicitly addresses the implications of using artificial intelligence in political campaign advertisements, particularly regarding fraudulent misrepresentation of candidates. This makes it highly relevant to the Politics and Elections sector, as it sets legal frameworks for the use of AI in electoral contexts. There is no mention of AI use in government agencies, healthcare, or other sectors, so those areas have little to no relevance. However, the text does touch upon accountability, fairness, and transparency, all of which are essential in electoral processes, reinforcing its strong categorization within Politics and Elections. Other sectors, such as the Judicial System (through enforcement of laws regarding AI misuse) and Private Enterprises, Labor, and Employment (through the responsibilities of political committees as covered persons), could arguably have slight relevance, but they are not the primary focus of this legislation. Thus, the text is predominantly pertinent to the Politics and Elections sector.


Keywords (occurrence): artificial intelligence (4) machine learning (1) automated (1)

Description: Establishes criminal penalties for production or dissemination of deceptive audio or visual media, commonly known as "deepfakes."
Collection: Legislation
Status date: June 28, 2024
Status: Engrossed
Primary sponsor: Herbert Conaway (16 total sponsors)
Last action: Received in the Senate without Reference, 2nd Reading (Sept. 19, 2024)

Category:
Social Impact
Data Governance (see reasoning)

The text explicitly addresses the production and dissemination of deceptive audio or visual media, popularly known as deepfakes, which leverages artificial intelligence technology. It outlines legal repercussions for such activities, indicating a direct impact on accountability for AI-derived content creation. Given these aspects, the Social Impact category is highly relevant due to its implications on societal trust and consumer protection, especially regarding misinformation. Data Governance is moderately relevant as it hints at data use and management within the framework of AI-generated content, particularly regarding rights and privacy. System Integrity is slightly relevant as it alludes to security aspects concerning the misuse of these AI technologies, while Robustness is not addressed as the legislation focuses on penalties rather than performance metrics or benchmarks for AI systems.


Sector:
Politics and Elections
Nonprofits and NGOs (see reasoning)

The legislation is primarily concerned with the implications of AI-generated media, and thus significantly impacts the Nonprofits and NGOs sector, particularly organizations focused on media ethics and misinformation. It has limited relevance to sectors like Government Agencies and Public Services, since it does not speak explicitly to government operations, or to Healthcare, another important area for AI policy. The Politics and Elections sector might also find relevance due to implications for political campaigns and misinformation, though this is indirect. There is no direct relevance to the Judicial System or Private Enterprises, as they are not the primary focus here, while the remaining sectors either do not apply at all or apply only slightly. The overall emphasis on ethical implications and consumer protection reflects a broader concern that intersects multiple sectors, notably Nonprofits and NGOs.


Keywords (occurrence): artificial intelligence (3)

Description: A bill to support the use of technology in maternal health care, and for other purposes.
Collection: Legislation
Status date: May 18, 2023
Status: Introduced
Primary sponsor: Robert Menendez (5 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (May 18, 2023)

Category:
Social Impact
Data Governance
System Integrity
Robustness (see reasoning)

The Tech to Save Moms Act is primarily focused on technology's role in improving maternal health care, incorporating elements of telehealth and other digital tools. Its relevance to AI lies particularly in the context of studying how innovative technology, including AI, impacts maternal health outcomes, especially regarding racial and ethnic biases. This touches on Social Impact in addressing disparities and biases, Data Governance in the focus on health data and its management, and System Integrity in the emphasis on privacy and security in using technology. Robustness can be associated with evaluating AI technologies and their performance, particularly in a medical context.


Sector:
Government Agencies and Public Services
Healthcare
Academic and Research Institutions (see reasoning)

The bill is explicitly focused on maternal health care, making it highly relevant to the Healthcare sector due to its direct implications for improving health outcomes, addressing disparities in maternal health, and utilizing technology for better care. It indirectly relates to Government Agencies and Public Services, as it involves grants to support health care improvements, and may touch on Academic and Research Institutions through the collaboration with the National Academies for studies on technology in maternity care. There is no significant direct connection to other sectors.


Keywords (occurrence): artificial intelligence (1)

Description: COMMERCIAL LAW -- GENERAL REGULATORY PROVISIONS -- AUTOMATED DECISION TOOLS - Requires companies that develop or deploy high-risk AI systems to conduct impact assessments and adopt risk management programs, would apply to both developers and deployers of AI systems with different obligations based on their role in AI ecosystem.
Collection: Legislation
Status date: March 22, 2024
Status: Introduced
Primary sponsor: Louis Dipalma (6 total sponsors)
Last action: Committee recommended measure be held for further study (April 11, 2024)

Category:
Social Impact
Data Governance
System Integrity
Robustness (see reasoning)

This legislation primarily addresses the development and deployment of high-risk AI systems, specifically focusing on required impact assessments and risk management programs. It discusses the implications of artificial intelligence (AI) in making consequential decisions that significantly affect individuals, illustrating the potential for bias in AI outputs and the need for transparency in AI deployment. Therefore, it has clear relevance to Social Impact due to its focus on accountability and the ethical implications of AI usage. Data Governance is also relevant as the law mandates accurate documentation and management of data used in AI systems to prevent bias and protect privacy. System Integrity is pertinent due to the consideration of risk management and oversight in AI processes. Robustness is relevant, but to a lesser extent, as it mentions general requirements for assessing the performance of AI systems without delving into specific benchmarks. Overall, the text's provisions primarily address social implications arising from AI deployment, with robust requirements for both developers and deployers of AI.


Sector:
Government Agencies and Public Services
Judicial System
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The legislation is highly relevant to various sectors that involve the application of AI technology. Specifically, it directly pertains to the Government Agencies and Public Services sector as it outlines regulatory requirements for developers and deployers of AI, which likely includes public service applications. It is also relevant to Private Enterprises, Labor, and Employment as it determines how businesses use AI systems to make significant decisions impacting individuals' lives, including employment and financial decisions. While it has implications for the Judicial System regarding potential bias in AI outcomes affecting legal aspects, its focus does not specifically target judicial processes. Healthcare may be considered due to the impact on AI systems used in healthcare decision-making, but it is not a primary focus. The legislation does not directly address political implications or nonprofit usage of AI, falling short in relevance to these sectors. Overall, the strongest connections are with Government Agencies and Public Services and Private Enterprises, Labor, and Employment.


Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (4)

Description: To require disclosures for AI-generated content, and for other purposes.
Collection: Legislation
Status date: Nov. 21, 2023
Status: Introduced
Primary sponsor: Thomas Kean (3 total sponsors)
Last action: Referred to the Subcommittee on Innovation, Data, and Commerce. (Nov. 24, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The AI Labeling Act of 2023 explicitly addresses the need for disclosures related to AI-generated content, which directly impacts societal perceptions and interactions with such media. This is especially relevant for the Social Impact category, as it connects to misinformation, trust issues, and consumer protection. The Data Governance category is also applicable because the Act outlines requirements for metadata related to AI-generated content and emphasizes accuracy in content identification. System Integrity is moderately relevant due to the emphasis on transparency and enforcement measures, while Robustness receives a lower relevance since the text primarily focuses on posting and identifying disclosures rather than performance benchmarks or compliance standards.


Sector:
Politics and Elections
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The AI Labeling Act has significant implications for multiple sectors. In Politics and Elections, the regulation of AI-generated media could influence campaigning and public discourse. The Government Agencies and Public Services sector is relevant because federal agencies would be involved in implementing the Act, though they are not explicitly targeted in the text. The Judicial System could be indirectly affected through legal considerations around false representation and consumer rights. The Healthcare sector is not relevant, as the Act does not pertain to healthcare AI applications. Private Enterprises, Labor, and Employment is relevant because businesses using such technology will need to comply with the disclosure guidelines. Academic and Research Institutions may find the working group's findings useful for future studies of AI ethics and standards. International Cooperation is only slightly relevant given the Act's focus on domestic disclosures, and Nonprofits are not addressed. Hybrid, Emerging, and Unclassified receives a low score because the text focuses on established frameworks for AI disclosure rather than broader innovations.


Keywords (occurrence): artificial intelligence (4) chatbot (3)

Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Collection: Legislation
Status date: July 10, 2024
Status: Introduced
Primary sponsor: Maria Salazar (21 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (July 10, 2024)

Category:
Societal Impact (see reasoning)

The TAKE IT DOWN Act specifically addresses the misuse of technology related to deepfakes, a form of AI-generated content. It emphasizes the need for accountability on digital platforms to prevent and mitigate the psychological and reputational harm caused by nonconsensual intimate visual depictions. This indicates a significant concern for the social impact of AI and the role it plays in the privacy and security of individuals. It does not deeply involve issues of data governance outside of consent and privacy aspects, nor does it address system integrity or robustness of AI systems directly. Therefore, the most fitting category is 'Social Impact.'


Sector: None (see reasoning)

The legislation pertains primarily to the use of deepfake technology within the context of individual rights and protections rather than any specific sector such as healthcare or government. However, it has implications for the technology sector, especially regarding the regulation of platforms that deal with user-generated content. Importantly, it addresses the impact of AI technologies on personal privacy within public and online spaces but does not engage with broader sector-specific applications. Therefore, it does not strongly fit within the predefined sectors.


Keywords (occurrence): deepfake (4)

Description: Requires disclosure of the use of artificial intelligence in political communications; directs the state board of elections to create criteria for determining whether a political communication contains an image or video footage created through generative artificial intelligence and to create a definition of content generated by artificial intelligence.
Collection: Legislation
Status date: July 19, 2023
Status: Introduced
Primary sponsor: Clyde Vanel (8 total sponsors)
Last action: print number 7904a (Feb. 27, 2024)

Category:
Societal Impact (see reasoning)

This text addresses the use of artificial intelligence in political communications, specifically the disclosure of AI-generated content. The disclosure requirements align closely with the Social Impact category, particularly regarding how AI could affect public trust and perceptions of political processes. The provisions also aim to mitigate misinformation produced by AI in political communications. However, the text does not explicitly address data collection, system security, or performance benchmarks, which limits its relevance to the Data Governance, System Integrity, and Robustness categories and results in lower scores for those. The text's explicit focus on societal implications justifies the higher score for Social Impact.


Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)

Given its focus on political communications, the legislation is highly relevant to the Politics and Elections sector, as it regulates AI within electoral contexts. Its treatment of misinformation and its regulatory oversight mechanisms also connect it to Government Agencies and Public Services, though to a lesser extent. The text does not address judicial systems or healthcare, which leads to lower scores in those areas, and it does not speak to the remaining sectors, resulting in minimal relevance for them.


Keywords (occurrence): artificial intelligence (3) automated (1)

Description: Requires certain disclosures by automobile insurers relating to the use of telematics systems in determining insurance rates and/or discounts.
Collection: Legislation
Status date: Jan. 5, 2023
Status: Introduced
Primary sponsor: Kevin Thomas (sole sponsor)
Last action: REFERRED TO INSURANCE (Jan. 3, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily addresses telematics systems used by automobile insurers, focusing on how these systems gather data to determine insurance rates. The relevance to the four categories reflects this focus. Social Impact is very relevant due to provisions that call for testing against discrimination, consumer access to collected data, and the implications for protected classes. Data Governance is also essential given the emphasis on secure data management and the prohibition of unauthorized use of collected data. System Integrity is relevant because the legislation mandates transparency around algorithms and risk factors. Robustness, however, is less relevant: the text does not discuss benchmarks or performance standards, concentrating instead on general procedures related to data and discrimination rather than performance certifications or compliance audits.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation has direct implications for the insurance sector, since telematics bears directly on how insurance rates are calculated. By promoting fairness and reducing potential bias in insurance algorithms, it is highly relevant to Private Enterprises, Labor, and Employment, given the potential impact of AI on workers' rights and employment practices in related fields. Government Agencies and Public Services is moderately relevant because the regulations involve governmental oversight, requiring insurers to report to the superintendent. Other sectors, such as Politics and Elections or Healthcare, are not connected to this legislation.


Keywords (occurrence): algorithm (2)

Description: Concerns social media privacy and data management for children and establishes New Jersey Children's Data Protection Commission.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Herbert Conaway (2 total sponsors)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Jan. 9, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses the privacy and data management concerns for children on social media platforms. It includes provisions for conducting Data Protection Impact Assessments, which inherently ties into the concept of assessing risks associated with AI algorithms used in social media, such as profiling and targeted advertising systems. This legislation emphasizes accountability for social media platforms in protecting children while navigating the intersection of AI technology and data handling, which hints at potential social impacts. Therefore, it is closely aligned with the Social Impact and Data Governance categories. Although it does cover security measures, the focus is primarily on data privacy rather than system integrity or performance benchmarks, yielding lower relevance in those areas. Hence, the scores reflect this differentiation.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text discusses regulations concerning social media platforms that children are likely to access, hence it is relevant to sectors involving children and digital interactions. This primarily aligns with Government Agencies and Public Services because it establishes a commission and sets legal requirements for online services likely used by children, which are inherently governmental in nature. The focus on data management and privacy for minors may also be tangentially relevant to the Healthcare sector, particularly concerning children's well-being, but this is less direct. Overall, the legislation reflects significant relevance to Government Agencies and Public Services, while other sectors are relevant to a lesser degree.


Keywords (occurrence): automated (1)

Description: Amends the Illinois Credit Union Act. Provides that a credit union regulated by the Department of Financial and Professional Regulation that is a covered financial institution under the Illinois Community Reinvestment Act shall pay an examination fee to the Department subject to the adopted by the Department. Provides that the aggregate of all credit union examination fees collected by the Department under the Illinois Community Reinvestment Act shall be paid and transferred promptly, accompa...
Collection: Legislation
Status date: May 22, 2024
Status: Enrolled
Primary sponsor: David Koehler (4 total sponsors)
Last action: Sent to the Governor (June 20, 2024)

Category: None (see reasoning)

The text primarily addresses regulations related to credit unions, specifically concerning examination fees and the structure of governance for credit unions in Illinois. However, it does not make any direct or explicit references to AI, algorithms, machine learning, or any related technologies. Therefore, all categories that encompass AI-related impacts, data management, system integrity, and performance benchmarks are deemed not relevant since the content does not touch upon AI topics, implications, or regulations. The legislation focuses solely on the financial activities and supervision of credit unions without implicating AI in any form.


Sector: None (see reasoning)

Similar to the analysis on the categories, the text focuses on legal and administrative details related to credit unions. There is no mention of AI applications or regulations across various sectors including politics, government, judicial, healthcare, etc. Therefore, it is not relevant to any of the sectors outlined. It only addresses credit unions and their examination fees, lacking references to the impact or control of AI in the specified sectors.


Keywords (occurrence): algorithm (1)