4162 results:


Description: Establishes School Funding Formula Evaluation Task Force.
Collection: Legislation
Status date: March 7, 2024
Status: Introduced
Primary sponsor: Andrea Katz (3 total sponsors)
Last action: Introduced, Referred to Assembly Education Committee (March 7, 2024)

Category:
Data Governance (see reasoning)

The text primarily discusses the establishment of a task force aimed at evaluating the school funding formula, focusing on methodologies for calculating state school aid and the impacts of these methodologies on various student demographics. It references the 'software program algorithm' used in determining funding rates, indicating limited relevance to AI-related concerns regarding data accuracy and algorithmic fairness. However, there are no explicit discussions on social impact metrics, data governance standards, system integrity and security measures, or robustness benchmarks pertaining to AI. Thus, while mentioning algorithms, it does not engage deeply enough with AI issues to warrant high scores in any category.


Sector:
Government Agencies and Public Services (see reasoning)

The text is relevant to the education sector as it establishes a task force aimed at evaluating the school funding formula, which directly affects educational institutions, but it does not specifically address the use of AI in educational settings. The references to algorithms in funding calculations could imply some relationship with data and analytics used in educational contexts, but the focus is primarily on budgeting and funding methodologies. Therefore, while related to education, it does not delve deeply into AI applications within this sector, resulting in moderate but not high relevance.


Keywords (occurrence): algorithm (1)

Description: Creating a charter of people's personal data rights.
Collection: Legislation
Status date: Jan. 31, 2023
Status: Introduced
Primary sponsor: Robert Hasegawa (7 total sponsors)
Last action: By resolution, reintroduced and retained in present status. (Jan. 8, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses issues related to personal data rights and privacy, focusing heavily on the implications of technology for individual privacy rights. It highlights the need for accountability regarding how personal information, potentially influenced by AI technologies, is collected and processed. Though AI is not explicitly named, the context of automated surveillance and the use of algorithms to manage personal data is heavily implied. The act emphasizes the urgent need to address privacy violations and introduces elements that could relate to fairness and bias, making the Social Impact category very relevant. Data governance emerges as the primary focus, as the act deals directly with how data is processed, shared, and managed under the legislation. System integrity is also pertinent due to the necessity for secure processing of the data and accountability of the entities handling it, while robustness is less relevant here since performance benchmarks for AI systems are not mentioned. Overall, the act aims to shape how technology, including AI, interacts with individuals in society, making it most relevant to Social Impact and Data Governance.


Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The text mainly discusses privacy rights that could apply broadly across various sectors where personal data is collected or utilized, but it particularly emphasizes consumer protection, which aligns well with Government Agencies and Public Services, as the act operates under state legislation affecting residents' rights. The privacy safeguards it requires also resonate with the Healthcare sector, where protection of personal data, especially sensitive information, is critical. Although the text could relate to Private Enterprises, it focuses more on individuals' rights than on corporate governance. The act does not directly address education, international standards, or nonprofits. Therefore, the sectors most relevant to the text are primarily Government Agencies and Public Services, with some relevance to Healthcare and potentially Private Enterprises.


Keywords (occurrence): artificial intelligence (3) automated (5)

Description: A bill to require the Director of the Defense Media Activity to establish a course of education on digital content provenance and to carry out a pilot program on implementing digital content provenance standards, and for other purposes.
Collection: Legislation
Status date: July 10, 2023
Status: Introduced
Primary sponsor: Gary Peters (sole sponsor)
Last action: Read twice and referred to the Committee on Armed Services. (July 10, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text explicitly addresses the use of artificial intelligence and machine learning techniques in the context of digital content forgery, highlighting the implications for securing and authenticating digital content. This connection is crucial to understanding the societal impact of AI technologies, especially regarding their potential for misuse. Furthermore, by laying out a course of education related to these technologies, the bill emphasizes creating standards that will influence how AI impacts security measures and information integrity. Therefore, the Social Impact category receives a high relevance score. Other categories related to governance of data, system integrity, and robustness are relevant to a lesser extent because the focus here is primarily on education and standards rather than detailed regulatory requirements or technical benchmarks.


Sector:
Government Agencies and Public Services
Judicial system
Academic and Research Institutions (see reasoning)

Given the focus on the Defense Media Activity, the text has significant relevance to the Government Agencies and Public Services sector, as it pertains directly to government-sponsored educational programs and initiatives aimed at managing digital content. It also touches on the use of emerging technologies such as AI in the context of defense, indirectly implicating the Judicial System and national security through its bearing on misinformation. However, it has no explicit relevance to other sectors such as Healthcare or Education. Thus, the Government Agencies and Public Services category scores the highest, reflecting the bill's core focus.


Keywords (occurrence): machine learning (1)

Description: HEALTH AND SAFETY -- THE RHODE ISLAND CLEAN AIR PRESERVATION ACT - Establishes regulations to prohibit stratospheric aerosol injection (SAI), solar radiation modification (SRM) experimentation, and other hazardous weather engineering activities.
Collection: Legislation
Status date: Jan. 26, 2024
Status: Introduced
Primary sponsor: Evan Shanley (2 total sponsors)
Last action: Committee recommended measure be held for further study (Feb. 6, 2024)

Category:
Societal Impact (see reasoning)

The text outlines regulations that specifically prohibit hazardous weather engineering activities such as stratospheric aerosol injection and solar radiation modification. It recognizes the potential hazards associated with these activities, including the role AI may play in them. As the text explicitly addresses the implications of artificial intelligence for health, safety, and atmospheric risks, this suggests a direct link to the 'Social Impact' category, as the legislation discusses how such practices could harm individuals and the environment. For 'Data Governance,' while the text mentions the use of data collection for monitoring pollution, it does not delve deeply into data management or privacy rights within AI systems. 'System Integrity' is not fully applicable here, as the focus is more on environmental safety than on the inherent security and transparency of AI systems. 'Robustness', which deals with benchmarks and compliance for AI performance, is also less relevant, as the legislation primarily concerns environmental impacts rather than AI system auditing or performance standards.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text implicates several sectors, albeit indirectly. The mention of AI hints at applications in public services and addressing potential environmental hazards. Therefore, the applicability to 'Government Agencies and Public Services' is relevant due to potential government oversight and regulation of weather engineering activities. The mention of health impacts connects it to 'Healthcare,' but the focus is not specifically on healthcare AI applications. 'Private Enterprises, Labor, and Employment' seems slightly relevant due to discussions on how these regulations impact corporate and industrial entities involved in weather engineering, although not directly addressing employment-related concerns. Other sectors such as 'Politics and Elections', 'Judicial System', 'Academic and Research Institutions', and 'International Cooperation and Standards' seem less related given the scope of the text, which does not explicitly address these areas.


Keywords (occurrence): artificial intelligence (2) machine learning (2)

Description: A bill to promote the economic security and safety of survivors of domestic violence, dating violence, sexual assault, or stalking, and for other purposes.
Collection: Legislation
Status date: Sept. 19, 2024
Status: Introduced
Primary sponsor: Patty Murray (11 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (Sept. 19, 2024)

Category:
Societal Impact (see reasoning)

The SAFE for Survivors Act of 2024 primarily focuses on promoting the economic security of survivors of domestic violence, sexual assault, and stalking. While this text does not explicitly reference AI technologies or issues directly related to AI, there are mentions of deepfakes, which are relevant to artificial intelligence and automated content generation. The relevance of AI to the Social Impact category could be inferred from discussions of how technological advancements, specifically AI, can be exploited for discriminatory or harmful purposes, such as deepfakes being used to harass or manipulate individuals. However, the overall focus of the text is more aligned with social welfare, victim rights, and safety rather than on the broader implications of AI and its impact on society. Thus, the relevance in this category is moderate. The Data Governance category could be considered less relevant since the text does not mention issues related to data management or privacy concerns pertinent to AI systems. System Integrity remains minimal given that the text does not address security, transparency, or regulatory processes for AI technologies. Robustness is likewise not relevant, as the text does not discuss AI performance metrics or regulatory compliance procedures for AI systems.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)

The SAFE for Survivors Act of 2024 does intersect with several sectors due to its focus on legislative protections and provisions for survivors of violence. The relevance to the Healthcare sector is notable in terms of mental health support for victims, as the act indirectly entails provisions for health coverage related to victimization. The Private Enterprises, Labor, and Employment sector is also significant, as the act addresses employment challenges and protections for survivors. The connection to Government Agencies and Public Services is evident since this legislation may involve state and federal agency compliance, and addressing workplace issues calls for public policy engagement. However, the act doesn't specifically target the utilization or regulation of AI within these contexts. Judicial System relevance is minimal unless considering claims filed by survivors, and other sectors such as Politics and Elections or International Cooperation do not apply.


Keywords (occurrence): deepfake (2)

Description: To improve the classification and declassification of national security information, and for other purposes.
Collection: Legislation
Status date: Oct. 17, 2023
Status: Introduced
Primary sponsor: Brad Wenstrup (2 total sponsors)
Last action: Referred to the Subcommittee on Energy, Climate and Grid Security. (Nov. 3, 2023)

Category:
System Integrity
Data Robustness (see reasoning)

The bill focuses primarily on improving the classification and declassification of national security information. The mention of using artificial intelligence and machine learning technologies (Section 7) indicates a direct interaction with technologies relevant to AI. However, the overall goal of the bill is more about enhancing efficiency in classification processes rather than addressing impactful socio-economic issues directly tied to AI, or data management and protection concerns. Hence, while the AI technology aspect is significant, it doesn't directly engage with broader social impacts, data governance, integrity of AI systems, or robustness as defined in the categories.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation discusses the application of AI and machine learning specifically in the public sector for classification and declassification processes, indicating its relevance to government operations. The mention of AI technologies and the context of national security point to a strong association with Government Agencies and Public Services, as these technologies are being leveraged to improve governmental functions. However, the bill does not provide a detailed exploration of AI's implications in political campaigns, healthcare, judicial processes, or broader AI impacts on society or the workplace. This context frames the legislation primarily within the Government Agencies and Public Services sector.


Keywords (occurrence): machine learning (1)

Description: Requires publishers of books created wholly or partially with the use of generative artificial intelligence to disclose such use of generative artificial intelligence before the completion of such sale; applies to all printed and digital books consisting of text, pictures, audio, puzzles, games or any combination thereof.
Collection: Legislation
Status date: Jan. 3, 2024
Status: Introduced
Primary sponsor: Nathalia Fernandez (sole sponsor)
Last action: REFERRED TO INTERNET AND TECHNOLOGY (Jan. 3, 2024)

Category:
Societal Impact (see reasoning)

The text directly addresses the disclosure requirements for books using generative artificial intelligence. Its focus on generative AI aligns closely with potential social impacts, such as transparency and consumer protection, indicating that the publication of AI-generated content could have psychological and informational effects on readers. As such, it pertains strongly to Social Impact. The legislation does not primarily discuss data management or the governance of AI, so Data Governance is less relevant. It does not mention system security or transparency measures that fall under System Integrity, nor does it establish benchmarks or certification processes for performance, hence Robustness is not applicable either. Overall, the key focus revolves around the social implications of generative AI disclosure in publishing, making it distinctly pertinent to the Social Impact category.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily relates to the publishing sector and its obligation to inform consumers about the use of generative artificial intelligence. Since it mandates disclosures applicable to any printed and digital book, it does not specifically fit into traditional sectors like Politics and Elections or Healthcare. However, it does intersect with Academic and Research Institutions when considering how generative AI might also impact educational materials. Overall, the legislation's primary relevance lies in reforming publishing practices, thereby placing it mainly under Private Enterprises, Labor, and Employment, with broader implications for the content creation landscape.


Keywords (occurrence): artificial intelligence (2) machine learning (2)

Description: Amends the Department of Innovation and Technology Act. Makes changes to the composition of the Task Force. Provides that the Task Force shall include 2 members (rather than one) appointed by the Speaker of the House of Representatives, one of whom shall serve as a co-chairperson.
Collection: Legislation
Status date: Feb. 9, 2024
Status: Introduced
Primary sponsor: Abdelnasser Rashid (sole sponsor)
Last action: Rule 19(a) / Re-referred to Rules Committee (April 5, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text directly mentions 'generative artificial intelligence' and discusses the establishment of a Task Force focused on the use of such technology in various sectors, including education and public services. It emphasizes the assessment of generative AI's impact on society, consumer protection, civil rights, and the workforce, which are all facets aligned with the Social Impact category. Additionally, the intent to recommend legislation regarding consumer information and assess public services indicates relevance to Data Governance. However, there is no discussion related to System Integrity or Robustness, as the focus is on the application of AI rather than the technical integrity or compliance standards of AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The legislation outlines the formation of a Task Force that involves various stakeholders from education, cybersecurity, and business perspectives. It addresses the implications of generative AI for education, public services, consumer protections, and civil rights, demonstrating significant relevance to the sectors identified. Particularly, it pertains to Academic and Research Institutions regarding school policies, Government Agencies and Public Services for service delivery improvement, and potentially to Private Enterprises, Labor, and Employment when considering how AI impacts these areas. Other sectors such as Politics and Elections, Judicial System, Healthcare, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not clearly connected within the text.


Keywords (occurrence): artificial intelligence (4)

Description: To prohibit the use of algorithmic systems to artificially inflate the price or reduce the supply of leased or rented residential dwelling units in the United States.
Collection: Legislation
Status date: June 5, 2024
Status: Introduced
Primary sponsor: Becca Balint (16 total sponsors)
Last action: Referred to the House Committee on the Judiciary. (June 5, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly mentions the use of algorithmic systems to manipulate rental housing prices, indicating a direct relevance to social impacts, such as affordability and housing equity. The legislation seeks to prevent the use of algorithms in a way that can perpetuate discriminatory practices in housing, thereby linking it strongly to the Social Impact category. The Data Governance category is also relevant as it implies a need for data integrity and accuracy related to the algorithmic pricing systems. System Integrity is significantly relevant due to the emphasis on controlling AI systems that affect market behavior. Robustness is less directly relevant despite the mention of algorithms, as it does not focus on benchmarks or compliance measures. Hence, the average scores reflect that while System Integrity is highly relevant, the bill is primarily a social issue, with significant implications for government agency oversight and transparency.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily relates to the regulation of AI in the housing sector, with implications for how algorithms can affect market behaviors and how they are regulated. The Government Agencies and Public Services sector is relevant, as the bill outlines the Federal Trade Commission's role in enforcing these regulations, impacting public housing policies. The Private Enterprises, Labor, and Employment sector is also pertinent due to its implications for rental property owners and housing market dynamics. The legislation does not strongly connect to areas such as healthcare, judicial systems, international cooperation, or nonprofit sectors. Therefore, the average scores reflect a strong emphasis on regulatory frameworks within the housing market and government oversight.


Keywords (occurrence): algorithm (1)

Description: Regulates use of artificial intelligence enabled video interview in hiring process.
Collection: Legislation
Status date: April 8, 2024
Status: Introduced
Primary sponsor: Kristin Corrado (sole sponsor)
Last action: Introduced in the Senate, Referred to Senate Labor Committee (April 8, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the regulation of artificial intelligence in the hiring process, particularly focusing on ensuring that employers inform applicants about AI usage and obtain consent. This directly relates to issues around social impact, namely discrimination through AI evaluations and consumer protection—ensuring fairness and accountability in AI hiring processes. Moreover, the requirement for employers to collect and report demographic data reinforces the relevance of social impact because it aims to identify any potential bias in AI systems. Thus, it is highly relevant to the social impact category. For data governance, the text outlines mandates for consent, the use of applicant data, and issues of data retention, which are relevant as they address the ethical collection and management of data processed by AI systems, hence a strong relevance score. The system integrity category is relevant as it mentions consent procedures and the obligation to delete data, indicating an emphasis on transparency and control over AI decisions. Robustness is less relevant as the text does not focus on performance benchmarks or auditing AI but rather on procedural and ethical considerations. Overall, the emphasis on the responsible use and implications of AI in hiring supports higher scores for social impact and data governance, with moderate relevance in system integrity.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text is explicitly related to the hiring process, which falls within labor and employment legislation due to its direct application to how artificial intelligence may be employed by employers when assessing job applicants. Furthermore, it addresses consumer protections and civil rights within hiring practices, making it relevant to private enterprises. There is also an element of public service as it mandates reporting to the Department of Labor, reinforcing the broader implications for government oversight and regulation in employment practices related to AI. Thus, the text is highly relevant to the Private Enterprises, Labor, and Employment sector. It holds moderate relevance to Government Agencies and Public Services since it mentions cooperation with a state department but is not primarily focused on the use of AI by government agencies. Other sectors do not apply due to the specific context regarding employment and hiring.


Keywords (occurrence): artificial intelligence (17)

Description: To direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.
Collection: Legislation
Status date: Sept. 21, 2023
Status: Introduced
Primary sponsor: Yvette Clarke (16 total sponsors)
Last action: Referred to the Subcommittee on Innovation, Data, and Commerce. (Sept. 22, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The Algorithmic Accountability Act of 2023 explicitly addresses impact assessments of automated decision systems, which fundamentally includes AI techniques such as machine learning. The definitions within the text acknowledge automated decision systems and augmented critical decision processes, clarifying their relationship with AI. This forms a basis for accountability and regulatory assessments directly connected to the implications of AI on various critical decisions, thereby underscoring the societal impact and the necessity for ethical guidelines around AI deployment. Accordingly, it is very relevant to both the Social Impact and System Integrity categories. Data Governance is relevant as the Act entails performance documentation and potential consumer protections through assessments, including aspects of data correctness and citizen engagement. Robustness is less relevant since the document does not focus explicitly on performance benchmarks or standards for AI systems, leaning more towards assessment and accountability instead of direct performance metrics or compliance benchmarks. Thus it garners a lower score.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)

The text involves applications across various sectors, particularly impacting Government Agencies and Public Services by necessitating regulatory compliance and assessments of automated decision systems used by public authorities. The implications for Healthcare also arise given that healthcare decisions are classified under critical decisions within the text. The potential implications on Private Enterprises are notable since they may also have to comply with this regulation if deploying automated systems that affect consumer decisions. Academic and Research Institutions may find relevance in the collaborative aspects of developing best practices for AI governance, but the main focus isn't central to academic settings. Other sectors have tenuous connections. Politics and Elections could be touched upon indirectly through automated decision systems used in campaign strategies, but the legislation does not directly address political mechanisms. Therefore, the strongest relevance lies in Government Agencies and Public Services and Healthcare, while other sectors score lower due to less direct connections.


Keywords (occurrence): artificial intelligence (3) machine learning (1) automated (44)

Description: As introduced, requires TACIR to conduct a study on approaches to the regulation of artificial intelligence and submit a report of such study, including recommended legislative approaches, to the speakers of each house and the legislative librarian no later than January 1, 2025. - Amends TCA Title 4 and Title 47.
Collection: Legislation
Status date: Jan. 31, 2024
Status: Introduced
Primary sponsor: Karen Camper (sole sponsor)
Last action: Taken off notice for cal in s/c Business & Utilities Subcommittee of Commerce Committee (March 19, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text of this act focuses primarily on the regulation, oversight, and study of artificial intelligence by the Tennessee advisory commission on intergovernmental relations (TACIR). The emphasis on regulation suggests an intention to address issues that could impact society at large, which corresponds with the Social Impact category. The study might also highlight concerns about data governance, particularly regarding how regulatory frameworks could affect data collection and management practices related to AI systems. However, the text doesn't explicitly delve into areas related to system security, integrity, or performance benchmarks, which limits its relevance to the System Integrity and Robustness categories. Overall, the act is primarily about establishing a framework for regulation and study rather than providing specific provisions or frameworks related to direct governance of AI systems or performance standards.


Sector:
Government Agencies and Public Services (see reasoning)

The act is directed towards a state regulatory body that will prepare a study concerning the frameworks and mechanisms for regulating AI, thereby impacting governance. It does not directly touch on sectors like healthcare, politics, or employment, nor does it specifically address the judicial system. However, the nature of the bill suggests governance is a primary area of focus without delving into specific applications of AI in these sectors, which results in low relevance scores across those categories. The emphasis is strongly on governmental study and recommendations, applying rather broadly to the operations of state governance.


Keywords (occurrence): artificial intelligence (1)

Description: A bill to prohibit the discriminatory use of personal information by online platforms in any algorithmic process, to require transparency in the use of algorithmic processes and content moderation, and for other purposes.
Collection: Legislation
Status date: July 13, 2023
Status: Introduced
Primary sponsor: Edward Markey (3 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (July 13, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This legislation explicitly addresses the social impact of algorithmic processes, particularly focusing on discrimination, transparency, and user rights. Its provisions requiring transparency and accountability from online platforms directly relate to the social consequences of AI through its influence on individuals and marginalized communities. The bill also discusses the risks associated with algorithmic decision-making, indicating concern for potential harm relevant to social dynamics, hence scoring high on social impact. In terms of data governance, the legislation stipulates specific requirements for data collection, transparency, and management of personal information, which aligns directly with secure and accurate data practices. The references to algorithmic processes and the associated data practices link it moderately to the robustness and system integrity categories, but the focus on transparency and accountability strongly underscores the impact on social justice, earning a very high relevance score. Overall, this bill illustrates a strong need for ethical considerations in algorithm usage, reinforcing its relevance to all areas of potential social harm tied to AI, data governance, and system integrity, making it notably impactful.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)

The bill’s focus on online platforms directly engages with topics relevant to Private Enterprises, Labor, and Employment as it touches on algorithmic usage that affects job advertising and opportunities. Its references to content moderation and algorithmic processes also suggest strong implications for Government Agencies and Public Services, as they highlight how government regulatory measures can be informed by the transparency standards set for these platforms. There is an indirect relation to Nonprofits and NGOs for their roles in advocating for fair use of technology. However, the main impact is on the private sector regarding their interaction with AI, algorithmic processes, and data, resulting in a moderately high score in that area. The emphasis on algorithmic transparency and anti-discrimination is also crucial concerning governmental oversight in public services.


Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (5)

Description: Establishes the Office of Artificial Intelligence Safety and Regulation within the Department of Commerce and Consumer Affairs to regulate the development, deployment, and use of artificial intelligence technologies in the State. Prohibits the deployment of artificial intelligence products in the State unless affirmative proof establishing the product's safety is submitted to the Office. Makes an appropriation.
Collection: Legislation
Status date: Jan. 19, 2024
Status: Introduced
Primary sponsor: Mike Gabbard (sole sponsor)
Last action: The committee on CPN deferred the measure. (Feb. 15, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses the creation of an Office of Artificial Intelligence Safety and Regulation to oversee the deployment and use of AI technologies. This direct regulatory action in the AI field is strongly relevant to the Social Impact category, as it addresses public safety, individual rights, and societal risks associated with AI, a core focus of the category. The requirement of prior safety proof for AI deployments serves as an accountability measure for developers, again emphasizing the social implications of AI use. Regarding Data Governance, the text also establishes standards and guidelines for data privacy and transparency in AI systems, which is critical for data collection practices. The stated balance between innovation and public safety also lends weight to system integrity concerns for AI technologies. Robustness, in the sense of performance benchmarks and auditing, receives less emphasis, making it the least relevant category. The text is therefore most relevant to Social Impact and Data Governance, with lesser relevance to System Integrity and Robustness.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)

This text establishes an office within the state government specifically tasked with regulating AI technologies, which places it squarely within the 'Government Agencies and Public Services' sector. The direct involvement of the Department of Commerce and Consumer Affairs in AI safety and regulation adds to the sector's fit. While the text does not reference political campaigns or direct implications for the judiciary or healthcare sectors, it does concern issues relevant to private enterprises and the workforce due to the regulation of AI deployment in business contexts. However, its primary focus remains on government regulation and oversight of AI technologies. This leads to a high score in 'Government Agencies and Public Services' but lower scores in other sectors where less direct relevance is noted.


Keywords (occurrence): artificial intelligence (49)

Description: Establishes and appropriates funds for an artificial intelligence government services pilot program to provide certain government services to the public through an internet portal that uses artificial intelligence technologies.
Collection: Legislation
Status date: Jan. 24, 2024
Status: Introduced
Primary sponsor: Sean Quinlan (14 total sponsors)
Last action: Referred to HET, FIN, referral sheet 3 (Jan. 26, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text explicitly establishes an artificial intelligence government services pilot program and appropriates funds for it. Given the focus on using AI technologies in government services, the legislation is closely tied to social impact: it determines how AI will interact with the public in essential services such as unemployment claims, death certificates, building permits, and driver's licenses. A provision for consultation with AI experts indicates concern for system integrity and reliable implementation, but the primary focus rests on the social impact of AI in government service delivery. Data governance is indirectly relevant, since handling sensitive records such as death certificates and licenses requires secure data practices, though this is less highlighted in the document. System integrity and robustness are minimally addressed, as security measures and performance benchmarks are secondary to the core aim of the legislation. Social Impact is therefore very relevant, Data Governance moderately relevant, and System Integrity and Robustness less so.


Sector:
Government Agencies and Public Services (see reasoning)

The text explicitly addresses the use of artificial intelligence in government services by state and county agencies, describing how AI is to be used to deliver public services and improve government operations; it is therefore highly relevant to the Government Agencies and Public Services sector. It does not mention Politics and Elections, the Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified sectors, so those receive lower relevance scores. Government Agencies and Public Services stands out as very relevant, while the others rank significantly lower.


Keywords (occurrence): artificial intelligence (7)

Description: STATE AFFAIRS AND GOVERNMENT -- DIGITAL ASSET KEYS -- PROHIBITION OF PRODUCTION OF PRIVATE KEYS - Prohibits the compelled production of a private key as it relates to a digital asset, digital identity or other interest or right.
Collection: Legislation
Status date: Feb. 29, 2024
Status: Introduced
Primary sponsor: Thomas Noret (6 total sponsors)
Last action: Committee recommended measure be held for further study (March 28, 2024)

Category:
Data Governance (see reasoning)

The text explicitly pertains to the regulation of private keys related to digital assets and digital identities. Its primary focus is the governance of cryptographic keys, which relates mostly to data management rather than to the social impact, integrity, or robustness of AI systems. While it touches on the algorithms used for encryption, which could be loosely related to data governance, it does not engage with AI directly. Social Impact has slight relevance through potential implications for consumer protection and privacy. Data Governance is moderately relevant, as the bill addresses secure management of cryptographic data. System Integrity is slightly relevant in implying security measures for private keys. Robustness does not meaningfully apply, as the legislation sets no benchmarks or performance standards for AI. Overall, the legislation centers on data and cryptographic key management without a strong AI focus.


Sector: None (see reasoning)

The legislation's primary focus is the regulation of digital asset keys rather than any direct application or regulation of AI within specific sectors. Its implications for digital identity and privacy concern cybersecurity and personal data management rather than the use of AI in those contexts. The bill therefore has only slight relevance to Government Agencies and Public Services and to Private Enterprises, owing to the digital asset context, and it does not address elections, healthcare, or academic settings. It provides a protective measure for digital assets and identities without delving into specific applications or implications of AI technology across sectors.


Keywords (occurrence): algorithm (1)

Description: To accelerate the identification of solutions to the challenges of the Joint Force by assigning to specific components of the Department of Defense certain responsibilities for the delivery of essential integrated joint warfighting capabilities, and for other purposes.
Collection: Legislation
Status date: Nov. 17, 2023
Status: Introduced
Primary sponsor: Darrell Issa (sole sponsor)
Last action: Referred to the House Committee on Armed Services. (Nov. 17, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The JADC2 Implementation Act centers on enhancing integrated joint warfighting capabilities within the Department of Defense. It explicitly assigns responsibilities to the Chief Digital and Artificial Intelligence Officer, indicating a direct role for AI in military operations and decision-making. The creation of a factory-based approach to software development and the need for operational awareness imply that AI and algorithmic capabilities will be integral to the Act's implementation. Because the legislation encompasses AI's role in military strategy, technology integration, and operational effectiveness, its relevance to all four categories (Social Impact, Data Governance, System Integrity, and Robustness) is substantial. The primary focus, however, is on leveraging AI for operational capability rather than on societal implications, and the scores reflect that prioritization.


Sector:
Government Agencies and Public Services (see reasoning)

This Act primarily targets enhancing the operational capacity of the Department of Defense through the integration of AI in military strategy. It describes multiple roles for AI in military settings, such as acquiring mission capabilities and ensuring operational effectiveness, which fall squarely within the Government Agencies and Public Services sector. The absence of references to politics, the judicial system, or specific healthcare applications limits relevance to those sectors.


Keywords (occurrence): artificial intelligence (2)

Description: Regulates use of automated tools in hiring decisions to minimize discrimination in employment.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Andrew Zwicker (sole sponsor)
Last action: Introduced in the Senate, Referred to Senate Labor Committee (Jan. 9, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The legislation regulates automated tools used in hiring decisions, targeting the potential for bias and discrimination inherent in these tools. It addresses how algorithms (including neural networks, decision trees, and other learning techniques) function in employment settings, connecting it strongly to the Social Impact category through its concern with discrimination and fairness in hiring. It also governs automated decision-making and requires bias audits, aligning with Data Governance, and it addresses System Integrity by emphasizing transparency, particularly the requirement that candidates be notified when such tools are used. Its accountability and oversight provisions, including standards and compliance procedures, relate to System Integrity and Robustness as well.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The text is highly relevant to Private Enterprises, Labor, and Employment as it regulates the use of automated decision-making tools in hiring processes, aimed at reducing discrimination in employment. It is less relevant to the other sectors such as Healthcare, Politics and Elections, or Judicial System, as those topics are not addressed in this text. While there are elements that could indirectly affect the Government Agencies and Public Services sector concerning compliance and governance norms, the direct implications are strongest for the employment sector.


Keywords (occurrence): automated (13)

Description: Requires school districts to provide instruction on artificial intelligence; requires Secretary of Higher Education to develop artificial intelligence model curricula.
Collection: Legislation
Status date: Oct. 21, 2024
Status: Introduced
Primary sponsor: Reginald Atkins (4 total sponsors)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Oct. 21, 2024)

Category:
Societal Impact (see reasoning)

The text explicitly mentions the instruction of artificial intelligence across K-12 education and the development of AI curricula at the higher education level. As such, it has direct implications for the social impact of AI by ensuring students are educated on AI concepts, which could influence their understanding and engagement with technology in the future. The emphasis on responsible and ethical use of AI also pertains to social responsibility. For Data Governance, while there are underlying themes of how data might be used in AI education, the bill does not specify mandates for handling data, thereby making it less relevant. For System Integrity and Robustness, though related to education about AI systems, the legislation primarily focuses on curricula development rather than ensuring security or benchmarks of AI technologies. Therefore, the primary relevance appears to be in the realm of Social Impact regarding educational transformation and fostering responsible use of AI.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The legislation primarily addresses the incorporation of artificial intelligence instruction in educational settings, relevant to both K-12 education and higher education systems. It mandates public educational institutions to offer programs specific to AI, which directly affects governmental operations in educational contexts. While there is a connection to workforce development and preparing students for careers in AI, it does not explicitly address sectors like healthcare, politics, or other industries. Its focal point remains on academic institutions rather than private enterprises, nonprofits, or other sectors. Thus, its strongest relevance lies within the academic and research realm, while it could moderately affect the governmental sector in education due to its regulatory nature.


Keywords (occurrence): artificial intelligence (30) automated (2)

Description: Amends the Right of Publicity Act. Grants additional enforcement rights and remedies to recording artists. Provides for the liability of any person who materially contributes to, induces, or otherwise facilitates a violation of a specified provision of the Act by another party after having reason to know that the other party is in violation. Defines "artificial intelligence" and "generative artificial intelligence". Changes the definition of "commercial purpose" and "identity".
Collection: Legislation
Status date: Feb. 7, 2024
Status: Introduced
Primary sponsor: Mary Edly-Allen (27 total sponsors)
Last action: Rule 3-9(a) / Re-referred to Assignments (April 19, 2024)

Category:
Societal Impact (see reasoning)

The text primarily focuses on the definitions and regulations surrounding the use of artificial intelligence in the context of publicity rights for recording artists. It defines 'artificial intelligence' and 'generative artificial intelligence', terms central to the impact of AI on identity and commercial use. The legislation addresses the implications of AI in commercial contexts, particularly identity rights and how these technologies could infringe on individuals' rights of publicity. This aligns closely with Social Impact, as the bill seeks accountability for potential misuse of AI-generated content that mimics identities. The text makes few references to data governance, system integrity, or robustness concerns such as data management or performance benchmarks, warranting at most moderate relevance in those areas. Social Impact therefore scores highest, given the protections proposed for identity rights in the age of generative AI, while the other categories' relevance is more indirect or limited.


Sector:
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)

The text is most relevant to Private Enterprises, Labor, and Employment, as it addresses the rights and remedies related to the commercial use of identities, directly affecting how private entities, particularly in entertainment and media, use AI for promotion and commerce. The act's implications could extend to how recording artists engage their audiences in relation to generative AI outputs in commercial contexts. The text may also have moderate relevance to Nonprofits and NGOs, since organizations advocating for artists' rights and protections could be affected by this legislation, although their role is not explicitly detailed. Other sectors, such as Politics and Elections or Government Agencies and Public Services, are less relevant, as the primary focus is commercial rights rather than governmental applications. The emphasis thus remains on private enterprises' use of AI.


Keywords (occurrence): artificial intelligence (7) automated (1)