4162 results:
Description: Creates the Protect Health Data Privacy Act. Provides that a regulated entity shall disclose and maintain a health data privacy policy that clearly and conspicuously discloses specified information. Sets forth provisions concerning health data privacy policies. Provides that a regulated entity shall not collect, share, or store health data, except in specified circumstances. Provides that it is unlawful for any person to sell or offer to sell health data concerning a consumer without first ob...
Collection: Legislation
Status date: May 16, 2023
Status: Introduced
Primary sponsor: Ann Williams
(19 total sponsors)
Last action: Rule 19(a) / Re-referred to Rules Committee (April 19, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses extensively on the implications of health data privacy, emphasizing secure data management, consumer rights, and consent related to health information. This is particularly relevant to Social Impact, as it addresses consumer protections and potential discrimination based on consent decisions, and it ensures that AI applications processing health data adhere to ethical standards. The Data Governance category is highly relevant as the Act involves regulations regarding the management of health data privacy, secure processing, and protections against unauthorized sharing. System Integrity is also pertinent given the provisions for secure handling and retention of private health data, although less directly than the other two categories. Robustness is less relevant as the text does not discuss benchmarks or performance metrics for AI systems. Overall, the legislation is primarily aimed at protecting individual rights and ensuring responsible handling of health data, which strongly correlates with Social Impact and Data Governance, with supporting relevance to System Integrity.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation predominantly concerns the regulation and management of health data privacy, impacting various sectors interacting with health services and consumer data. As such, it is highly relevant to the Healthcare sector, where AI is increasingly involved in processing health-related information. Additionally, impacts may extend to Private Enterprises, Labor, and Employment because organizations dealing with health data must comply with these regulations, affecting labor practices and corporate governance within health-related businesses. While some principles may influence other sectors, such as Government Agencies and Public Services, the primary focus remains within the healthcare sector specifically, hence scoring lower for those additional sectors. Overall, this bill is chiefly relevant to the Healthcare sector, with significant implications for Private Enterprises.
Keywords (occurrence): machine learning (1)
Description: A bill to promote a 21st century artificial intelligence workforce and to authorize the Secretary of Education to carry out a program to increase access to prekindergarten through grade 12 emerging and advanced technology education and upskill workers in the technology of the future.
Collection: Legislation
Status date: Sept. 12, 2024
Status: Introduced
Primary sponsor: Laphonza Butler
(3 total sponsors)
Last action: Read twice and referred to the Committee on Health, Education, Labor, and Pensions. (Sept. 12, 2024)
Societal Impact
Data Governance (see reasoning)
The bill 'Workforce of the Future Act of 2024' explicitly addresses various aspects of the workforce's relationship with artificial intelligence (AI). It includes measures to prepare the workforce for a job market increasingly influenced by AI, highlighting the importance of AI education and the skills necessary for collaboration alongside AI technologies. This indicates a significant social impact due to AI's role in potential job displacement and the creation of new job opportunities. The document lays out educational initiatives tied to advanced technology, making it very relevant to the category of Social Impact. Data governance is touched upon in the context of necessary data for analyzing the workforce's relationship with AI, implying the need for secure and accurate data management practices. System Integrity and Robustness categories appear less relevant, as the text focuses primarily on workforce implications and educational aspects rather than internal security, transparency, or performance benchmarks of AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The bill contains several references to AI's impact on jobs and emphasizes collaboration with educational institutions and workforce organizations. The focus on educational programs for emerging technologies makes it relevant to the Academic and Research Institutions sector. Furthermore, the mention of workforce development relates to Private Enterprises, Labor, and Employment as it explores job roles and skills needed in a technology-driven economy. Although there are implications for Government Agencies and Public Services in terms of educational programs, the text is primarily oriented toward educational institutions and workforce-related organizations, suggesting moderate relevance in those areas.
Keywords (occurrence): artificial intelligence (18) algorithm (3)
Description: Create new sections of Subchapter 1 of KRS Chapter 224 to make findings and declarations regarding the dangers of atmospheric polluting activities and the Commonwealth's authority to prohibit geoengineering; define terms; prohibit geoengineering; require the Department for Environmental Protection to issue a notice to any federal agency that has approved geoengineering activities that those activities cannot be lawfully carried out in the Commonwealth; require the department to prohibit forei...
Collection: Legislation
Status date: Feb. 16, 2024
Status: Introduced
Primary sponsor: Adrienne Southworth
(sole sponsor)
Last action: to Natural Resources & Energy (S) (Feb. 20, 2024)
Societal Impact (see reasoning)
This bill addresses the prohibition of geoengineering activities and outlines the consequences for violating this prohibition. It discusses the dangers posed by geoengineering to human health, the environment, and agriculture. The relevance to the categories can be interpreted as follows: Social Impact is highly relevant since the bill discusses the potential harm from geoengineering to individuals and society, which directly correlates to the impact of AI on society mentioned in this category. Data Governance is not directly relevant because the bill does not pertain to data management within AI systems but rather focuses on environmental concerns. System Integrity is somewhat relevant as it can relate to the secure and regulated management of technologies involved in geoengineering, but it is not the main focus of this legislation. Robustness is not relevant because it does not address benchmarks or standards for AI performance or compliance. The term 'artificial intelligence' is included in the definitions, implying a context where AI could be involved in atmospheric polluting activities, but it does not directly address AI performance or integrity issues.
Sector:
Government Agencies and Public Services (see reasoning)
The text mentions the potential application of artificial intelligence in atmospheric activities, particularly in geoengineering endeavors. However, the bill does not revolve around specific sectors like politics or healthcare; it deals instead with environmental legislation that may have broader implications for government policies or practices. Therefore, although AI's role could be considered in various legislative efforts, it is not specifically emphasized in the context of politics and elections, government services, health care, or others. The only context where it could have relevance is Government Agencies and Public Services, due to the Department for Environmental Protection's role in enforcement and oversight of geoengineering activities, which could imply regulatory oversight of AI in such contexts.
Keywords (occurrence): artificial intelligence (1)
Description: Online content discrimination prohibition
Collection: Legislation
Status date: Feb. 8, 2023
Status: Introduced
Primary sponsor: Torrey Westrom
(2 total sponsors)
Last action: Referred to Judiciary and Public Safety (Feb. 8, 2023)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text explicitly addresses the implications of algorithms related to discrimination in online content. It establishes provisions prohibiting online platforms from using algorithms to discriminate based on factors like race, sex, and political ideology, making it highly relevant to the area of Social Impact, particularly as it pertains to fairness and bias in AI systems. The text also considers accountability and consumer protections, which further aligns with the Social Impact category. Data Governance is relevant due to the mention of algorithms and interactions that might lead to biased data outputs, but the text is not centrally focused on data management practices. System Integrity receives moderate relevance since the text pertains to accountability in algorithmic actions but lacks specific mandates for the security and oversight of AI systems, while Robustness is only slightly relevant as there is no mention of performance benchmarks, compliance standards, or auditing processes related to AI systems. Thus, Social Impact achieves a score of 5, Data Governance a score of 3, System Integrity a score of 3, and Robustness a score of 2.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The legislation focuses on online platforms, which connects it primarily to the politics surrounding content regulation and discrimination. It explicitly mentions political ideology, indicating strong relevance to Politics and Elections, especially in light of the implications for online discourse and campaign activities. However, it also relates to Government Agencies and Public Services, as the act could impact how public institutions interact with technology platforms and manage content restrictions. There is less direct relevance to other sectors such as Healthcare, Private Enterprises, or the Judicial System, though a case may be made for some implications in these areas. Thus, Politics and Elections receives a score of 4, Government Agencies and Public Services a score of 3, and the remaining sectors score 1 or 2 due to their lack of direct connection to the text.
Keywords (occurrence): algorithm (2)
Description: Amends the Freedom of Information Act. Provides that proposals or bids submitted by engineering consultants in response to requests for proposal or other competitive bidding requests by the Department of Transportation or the Illinois Toll Highway Authority are exempt from disclosure under the Act.
Collection: Legislation
Status date: Feb. 9, 2023
Status: Introduced
Primary sponsor: Donald DeWitte
(2 total sponsors)
Last action: Added as Co-Sponsor Sen. Neil Anderson (Feb. 21, 2024)
The text primarily discusses amendments to the Freedom of Information Act (FOIA) in Illinois, specifically regarding the exemption of proposals or bids submitted by engineering consultants from disclosure. There are no explicit references to AI technologies or their impact on society, data governance, system integrity, or benchmarks for performance. As a result, the text does not align strongly with the defined categories on legislation specifically addressing these issues.
Sector: None (see reasoning)
The text does not focus on any of the nine specified sectors including politics, public services, or healthcare. It is centered around administrative procedures related to FOIA and does not delve into the use or regulation of AI within any sector. Therefore, relevance to the sectors is also minimal.
Keywords (occurrence): automated (2)
Description: A bill to direct the Secretary of Energy and the Administrator of the National Oceanic and Atmospheric Administration to conduct collaborative research to advance weather models in the United States, and for other purposes.
Collection: Legislation
Status date: Jan. 23, 2024
Status: Introduced
Primary sponsor: Ben Lujan
(2 total sponsors)
Last action: Read twice and referred to the Committee on Commerce, Science, and Transportation. (Jan. 23, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text contains significant references to AI and machine learning in the context of advanced weather models and advanced computing techniques, which are key themes in the legislation. The intent behind the bill is to enhance weather prediction capabilities using AI technologies, including machine learning. This aligns strongly with understanding the social impact of improved weather prediction (Social Impact), ensuring data integrity from weather data models (Data Governance), and integrity in the computational processes used (System Integrity). Additionally, the bill touches upon performance measures and benchmarks related to AI in weather modeling, which impacts robustness. Given these connections, it is clear that multiple categories are indeed relevant to the bill's provisions.
Sector:
Government Agencies and Public Services
Academic and Research Institutions
Hybrid, Emerging, and Unclassified (see reasoning)
The bill is primarily focused on the intersection of AI with the realm of environmental science and meteorology, indicating a sector that broadly falls into government agencies and public services. It also implicates public interest in terms of improved weather forecasting and climate prediction. While it does not specifically mention sectors such as healthcare, judicial systems, or political operations, its focus aligns closely with government operations that use advanced computing and AI technologies in public service contexts. This leads to a strong relevance for the Government Agencies and Public Services sector, potentially impacting other areas through enhanced predictive capabilities.
Keywords (occurrence): artificial intelligence (2) machine learning (1)
Description: Amend The South Carolina Code Of Laws By Adding Section 39-5-630 So As To Provide For Social Media Accountability And Define Terms; By Adding Section 39-5-630 So As To Prohibit Social Media Websites From Censoring Users' Religious Or Political Speech And To Provide Legal Remedies For Social Media Website Users.
Collection: Legislation
Status date: Jan. 10, 2023
Status: Introduced
Primary sponsor: Lawrence Grooms
(sole sponsor)
Last action: Referred to Committee on Labor, Commerce and Industry (Jan. 10, 2023)
Societal Impact (see reasoning)
The text explicitly addresses algorithms used by social media platforms to sort, filter, or categorize user content, which directly aligns with concerns about 'Algorithm/Algorithmic' impacts on individuals' political and religious expression. It sets out guidelines and legal remedies for those who believe they have been subject to unfair use of algorithms, thus highlighting the social ramifications of AI and algorithmic content management. Since this directly relates to social influence and accountability expectations regarding AI, I scored this category as 5. The accountability and governance aspects imply a high social impact of AI in terms of free speech and discrimination. The text does not delve deeply into data governance, system integrity, or robustness, as it primarily focuses on the social accountability and legal frameworks surrounding social media interactions. Therefore, these categories received lower scores. Overall, 'Social Impact' is crucial and receives the highest score, while 'Data Governance', 'System Integrity', and 'Robustness' are not generally addressed in this context, leading to low scores.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text has significant implications for the sector of Politics and Elections, given its focus on political speech and the regulation of algorithms that may affect political discourse and behavior on social media platforms. By prohibiting censorship of political content and requiring accountability from social media platforms, it intersects with political expression. However, there is no specific mention of the Judicial System or the operation of AI in healthcare, private enterprises or international cooperation, which suggests primarily a political focus. Thus, 'Politics and Elections' earns a high score, while others remain low due to lack of relevance.
Keywords (occurrence): algorithm (4)
Description: A bill to amend the Social Security Act to remove the restriction on the use of Coronavirus State Fiscal Recovery funds, to amend the Internal Revenue Code of 1986 to codify the Trump administration rule on reporting requirements of exempt organizations, and for other purposes.
Collection: Legislation
Status date: March 30, 2023
Status: Introduced
Primary sponsor: Mike Braun
(2 total sponsors)
Last action: Read twice and referred to the Committee on Finance. (March 30, 2023)
Data Governance (see reasoning)
The text of this bill largely focuses on amending tax administration aspects and financial practices related to the Internal Revenue Service (IRS) without addressing broader implications of AI on society, data governance, system integrity, or benchmarking of AI systems. However, the use of artificial intelligence and neural machine learning is mentioned in relation to improving the IRS audit process and analyzing tax gap data. This is a specific and limited integration of AI that primarily concerns operational efficacy rather than its societal impact, data practices, or standards for reliability. As such, while there is a mention of AI, the topics addressed in this bill do not deeply intersect with the category definitions.
Sector:
Government Agencies and Public Services (see reasoning)
The bill references the IRS's intentions to implement AI for improving audit outcomes, but it does not engage with broader sector-specific matters such as regulation of AI in public services or implications for labor markets. The mention of AI is specifically in the context of calculations and efficiency for the IRS, which may relate more closely to governance rather than any overarching changes in sector norms or practices. There is minimal relevance to other defined sectors that suggest a transformative AI influence. Hence, scores vary depending on the context in which AI is mentioned.
Keywords (occurrence): artificial intelligence (2) machine learning (1)
Description: Imposes liability for misleading, incorrect, contradictory or harmful information to a user by a chatbot that results in financial loss or other demonstrable harm.
Collection: Legislation
Status date: May 29, 2024
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to consumer affairs and protection (May 29, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text of the legislation explicitly addresses the liability of proprietors of chatbot systems for the provision of misleading, incorrect, or harmful information. This directly connects to the category of Social Impact, as it considers the implications of chatbots on users, their protection against potential harm, and the accountability of companies using these AI systems. It also pertains to System Integrity, given the mandates for ensuring chatbots provide accurate information aligned with company policies. Lastly, it involves some aspects of Data Governance, particularly in ensuring the information provided by the chatbots is accurate and responsible. Overall, the legislation indicates a significant concern for the effects of AI (specifically, chatbots) on individuals and society, making Social Impact and System Integrity the most relevant categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text directly relates to the use of AI in business settings, with a particular focus on how chatbots operate within commercial interactions. It addresses the accountability of AI systems in the context of their impact on users, which ties it directly to Private Enterprises, Labor, and Employment. While there are elements of governance, consumer protection, and potentially some intersection with Government Agencies and Public Services, the primary focus remains on the relationship between businesses and users through AI interactions. Thus, the most significant associations are with Private Enterprises and, to a lesser extent, Government Agencies, given their roles in implementing and enforcing these regulations.
Keywords (occurrence): artificial intelligence (1) chatbot (12)
Description: Establishes criteria for the use of automated employment decision tools; provides for enforcement for violations of such criteria.
Collection: Legislation
Status date: Jan. 9, 2023
Status: Introduced
Primary sponsor: Latoya Joyner
(11 total sponsors)
Last action: enacting clause stricken (Jan. 10, 2024)
Societal Impact (see reasoning)
The presented text establishes criteria for the use of automated employment decision tools, highlighting terms like 'automated employment decision tool', 'neural networks', 'machine learning algorithms', and conducting disparate impact analysis. This indicates a focus on the implications of AI and automated systems in employment contexts, effectively addressing many facets of AI’s social impact, such as bias in hiring processes, accountability for AI-driven outcomes, and consumer protection related to employment. The text does not address issues related to data governance, system integrity, or robustness explicitly; therefore, the scores will reflect that focus on social impact as it is central to the automated decision-making tools mentioned.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The reference to automated employment decision tools directly connects to the sector of Private Enterprises, Labor, and Employment, where it pertains to the use of AI in hiring processes, the regulatory framework around the evaluation of such tools, and their impact on labor markets. The text minimally relates to other sectors, as it primarily concentrates on employment and labor-related implications of automated tools. Therefore, the scoring reflects a strong relevance to the labor and employment sector, with lesser implications to the other sectors mentioned.
Keywords (occurrence): machine learning (1) automated (9)
Description: To amend title XI of the Social Security Act to establish a pilot program for testing the use of a predictive risk-scoring algorithm to provide oversight of payments for durable medical equipment and clinical diagnostic laboratory tests under the Medicare program.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: David Schweikert
(sole sponsor)
Last action: Referred to the Subcommittee on Health. (Feb. 2, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation focuses on the implementation of a predictive risk-scoring algorithm as part of oversight of Medicare payments, which clearly ties into the realm of artificial intelligence and machine learning. Because the bill concerns the accountability of the algorithm used to prevent fraud and mitigate harm to beneficiaries (Social Impact), the assessment of risks and appropriate handling of data (Data Governance), the maintenance of human oversight and security checks (System Integrity), and the establishment of standards for the algorithm's performance (Robustness), each category warrants thorough evaluation.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
This text deals with the use of AI specifically in the context of Medicare services, which falls primarily under the Healthcare sector. Additionally, considerations for fraud prevention and algorithm implementation involve elements of Government Agencies and Public Services, as the bill outlines how government oversight will be exercised via this pilot program. There is a slight mention of collaboration with industry representatives that could invoke aspects of the Private Enterprises sector. Hence, the Healthcare and Government Agencies and Public Services sectors are most relevant here.
Keywords (occurrence): algorithm (11)
Description: Requires the owner, licensee or operator of a generative artificial intelligence system to conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.
Collection: Legislation
Status date: May 3, 2024
Status: Introduced
Primary sponsor: Clyde Vanel
(2 total sponsors)
Last action: substituted by s9450a (June 6, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation pertains to the necessity of providing warnings about the outputs of generative AI systems, which has implications for social responsibility and consumer protection. This clearly connects to the Social Impact category as it addresses the effects of AI outputs on individuals and society at large. It also involves issues of data integrity and user trust. The Data Governance category is relevant due to the need to manage and oversee the accuracy of AI outputs, which relates to how AI systems handle information. The System Integrity category applies here as ensuring transparency and accountability in how outputs are generated contributes to the security and trustworthiness of AI systems. Lastly, the Robustness category is relevant as well since the legislation indirectly addresses quality assurance in AI outputs, which could lead to the establishment of benchmarks for generative AI systems to meet. Hence, all four categories have relevant connections.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses the implications of generative AI in the context of consumer protection, requiring businesses to display warnings on AI systems. This directly relates most strongly to the Private Enterprises, Labor, and Employment sector, as it regulates the behavior of companies utilizing generative AI technologies and dictates compliance practices for user awareness and safety. Additionally, there are implications for the Government Agencies and Public Services sector because public institutions may also need to adhere to such regulations if they utilize generative AI in their services. The text does not directly address other sectors (e.g., Healthcare or Judicial System) as it focuses mainly on business practices related to AI technology usage.
Keywords (occurrence): artificial intelligence (5) machine learning (1) automated (1)
Description: An act to amend Section 38750 of the Vehicle Code, relating to autonomous vehicles.
Collection: Legislation
Status date: Dec. 2, 2024
Status: Introduced
Primary sponsor: Cecilia Aguiar-Curry
(sole sponsor)
Last action: From printer. May be heard in committee January 2. (Dec. 3, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses on the specific legislation concerning the operation and regulation of autonomous vehicles, which directly relates to the development and implications of AI technologies. It addresses aspects such as definitions of autonomous technology and vehicles, requirements for testing, safety standards, and manufacturer responsibilities. The social impact may be relevant, as it touches on public safety and regulatory frameworks governing AI implementations in transportation, potentially affecting individuals' trust and interactions with these technologies. Data governance may also be relevant due to the mention of data collection protocols and compliance with privacy laws. System integrity is highly relevant since it discusses oversight and regulations ensuring the secure operation and accountability of autonomous vehicles. Robustness is somewhat less relevant since the bill mainly outlines operational standards without extensive benchmarks or performance metrics.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
This text is primarily relevant to the sector of Government Agencies and Public Services, as it outlines regulations that will govern the use of autonomous vehicles by the government and the processes established by the Department of Motor Vehicles. It also touches on the implications for private enterprises related to manufacturing and deploying autonomous vehicle technology. The judicial system could be slightly relevant, as regulations may touch on liability issues but the primary focus is not on legal adjudications. Healthcare and politics are not directly referenced or significantly implicated by this legislation. Academic and research institutions might have an indirect connection through research partnerships mentioned, but it is not a primary focus of this bill. International cooperation is not mentioned, making it less relevant.
Keywords (occurrence): automated (1) autonomous vehicle (26)
Description: To reform Federal Aviation Administration safety requirements for commercial air tour operators, and for other purposes.
Collection: Legislation
Status date: April 13, 2023
Status: Introduced
Primary sponsor: Jill Tokuda
(2 total sponsors)
Last action: Referred to the Subcommittee on Aviation. (April 14, 2023)
The text primarily focuses on safety improvements for air tour and parachuting operations regulated by the Federal Aviation Administration (FAA). It lacks significant references to AI technologies or their implications. While it mentions 'automated systems' in the context of pilot training and flight operations, it does not explicitly discuss AI or automated decision-making in a broader societal context, thereby limiting its relevance to the categories. Social Impact and System Integrity are slightly relevant due to considerations of safety and public service, while Data Governance and Robustness are not relevant as there's no mention of data collection, management, or AI performance standards.
Sector:
Government Agencies and Public Services (see reasoning)
The text pertains to the aviation sector, focusing on the regulation and safety protocols for air tours and parachuting activities. There is a limited but applicable connection to Government Agencies and Public Services due to the involvement of the FAA and legislative oversight. However, the connection to other sectors like Healthcare or Judicial Systems is non-existent. Overall, the relevance to the predefined sectors is low outside of government operation considerations.
Keywords (occurrence): automated (1) algorithm (1)
Description: Establishes a program for the licensure, regulation, and oversight of digital currency companies. Appropriates funds.
Collection: Legislation
Status date: Jan. 23, 2023
Status: Introduced
Primary sponsor: May Mizuno
(3 total sponsors)
Last action: Carried over to 2024 Regular Session. (Dec. 11, 2023)
The text primarily focuses on the licensure, regulation, and oversight of digital currency companies without direct reference to AI technologies or their social implications. It discusses the significance of digital currencies, regulation, and the infrastructure required for compliance. Since AI specifically is not the central theme—rather, it is digital currency—the categories related to the impact of AI, governance of data in AI, integrity of AI systems, or benchmarks for AI performance do not apply to this legislation. Therefore, all categories are assessed as not relevant.
Sector: None (see reasoning)
The text predominantly highlights digital currency, a topic distinct from the specified sectors. While it relates to financial institutions and regulatory oversight, it does not directly address any of the indicated sectors, such as politics, healthcare, or academia. Consequently, all sectors are rated as not relevant, since the text does not explore AI's use within any of them in the context of digital currency.
Keywords (occurrence): automated (1) algorithm (1)
Description: Relates to the use of automated decision tools by landlords for making housing decisions; sets conditions and rules for use of such tools.
Collection: Legislation
Status date: Nov. 3, 2023
Status: Introduced
Primary sponsor: Cordell Cleare
(sole sponsor)
Last action: PRINT NUMBER 7735A (April 2, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text focuses on the use of automated decision tools in the context of housing decisions made by landlords, which directly relates to how AI might impact fairness, bias, and accountability. The legislation mandates the conduct of disparate impact analyses, thereby addressing potential discrimination in housing applications influenced by AI. This aligns strongly with the Social Impact category. Because it also involves the regulation and oversight of systems (automated decision tools), it is relevant to System Integrity, although to a lesser degree. Data Governance is also applicable as the legislation dictates how data used in these tools must be treated, but the emphasis isn't solely on data management. Robustness has minimal relevance since the text doesn’t address AI performance benchmarking or regulatory compliance measures explicitly. Overall, the explicit focus on AI's societal implications earns a high score for the Social Impact category, while the other categories receive moderate to low scores based on their relevance to the text.
Sector:
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text relates primarily to the application of automated decision tools by landlords which are influential in housing decisions, placing it directly in the Private Enterprises, Labor, and Employment sector due to landlord-tenant dynamics. This can indirectly touch on Government Agencies and Public Services due to regulatory oversight, but the primary context lies in the landlord-tenant relationship rather than public sector applications. The Judicial System can be linked through the legal implications of enforcement and compliance under the oversight of the attorney general, although this is less direct. Healthcare, Politics and Elections, Academic Institutions, International Cooperation, Nonprofits, and the emerging sectors are not applicable as there are no direct mentions or implications in these areas within the text.
Keywords (occurrence): machine learning (1) automated (11)
Description: Establishes the offenses of virtual token fraud, illegal rug pulls, private key fraud and fraudulent failure to disclose interest in virtual tokens.
Collection: Legislation
Status date: Jan. 4, 2023
Status: Introduced
Primary sponsor: Kevin Thomas
(sole sponsor)
Last action: REFERRED TO CODES (Jan. 3, 2024)
The text primarily addresses fraud related to virtual tokens, with a specific focus on the legal definitions and penalties associated with such actions. Given this focus, the consideration of categories in relation to AI becomes nuanced. While the text mentions concepts like algorithms (in the context of private keys and transactions), it does not engage significantly with AI-related themes like system integrity, robustness, or the social implications of AI's use in fraud. The indirect mention of algorithms suggests a potential relevance to technology but lacks the depth or emphasis needed for higher categorization under any of the defined categories. As a result, the relevance to the categories is minimal, primarily due to the absence of explicit AI focus in the text.
Sector: None (see reasoning)
The text addresses the establishment of legal frameworks around cryptocurrencies and virtual tokens rather than focusing specifically on AI applications. While cryptocurrencies may utilize algorithmic processes and some AI technologies in broader contexts, the legislation itself is primarily concerned with fraud prevention in virtual transactions. This places the text outside the specific focus required to score highly in any sector related to AI applications. The mention of algorithms and blockchain technology has some linkage to data governance but is not sufficient to warrant a score above 2. Other sectors, including healthcare, judicial systems, and public services, are not related closely enough to warrant any scoring.
Keywords (occurrence): algorithm (1)
Description: Amends the Artificial Intelligence Video Interview Act. Makes a technical change in a Section concerning the short title.
Collection: Legislation
Status date: Jan. 12, 2023
Status: Introduced
Primary sponsor: Emanuel Welch
(sole sponsor)
Last action: Rule 19(a) / Re-referred to Rules Committee (March 27, 2023)
Societal Impact (see reasoning)
The text primarily revolves around a technical amendment to the Artificial Intelligence Video Interview Act. It directly references 'Artificial Intelligence' in relation to video interviewing processes in the realm of employment. Therefore, the relevance of this text within the category of 'Social Impact' can be considered significant since it touches on employment practices affected by AI. However, because the text does not delve into broader implications such as consumer protections or societal issues, it is not rated at the highest level. For 'Data Governance', 'System Integrity', and 'Robustness', the text does not provide any clarity or specifications regarding data management or AI performance standards, which leads to lower relevance in those areas.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text explicitly addresses legislation concerning employment and specifically the use of AI in video interviews, making it directly relevant to the sector of employment and labor. It does not reference other specific sectors such as healthcare, government agencies, or the judicial system, nor does it explore broader international implications. Therefore, it is rated as highly relevant to Private Enterprises, Labor, and Employment and not relevant to the other sectors.
Keywords (occurrence): artificial intelligence (2)
Description: Establishing an artificial intelligence task force.
Collection: Legislation
Status date: March 18, 2024
Status: Passed
Primary sponsor: Joe Nguyen
(19 total sponsors)
Last action: Effective date 3/18/2024. (March 18, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses the establishment of an artificial intelligence task force with a focus on assessing current uses and trends in AI. It highlights the societal impacts, risks, and potential harms of AI, which connects strongly with the Social Impact category. Concerns regarding bias, privacy rights, and the need for accountability are present, making this category highly relevant. The Data Governance category is relevant as the bill discusses guidelines for AI use, data privacy, and the importance of ethical considerations and protections against algorithmic discrimination. System Integrity is also applicable, as the bill mentions promoting transparency and accountability in the use of AI systems. Robustness receives a lower score since the discussion focuses more on guidelines and legislation than on directly measurable benchmarks for AI performance.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text addresses a broad range of areas where AI can create impact, necessitating evaluation and regulation. It mentions uses of AI in various sectors including public safety, healthcare, and labor, but it overall frames AI in the context of government oversight and public services, which aligns best with the Government Agencies and Public Services sector. The text emphasizes the importance of ethical guidelines and public safety in AI use, making it relevant. It also briefly touches upon impacts on vulnerable communities, adding some relevance to the Judicial System sector as it discusses potential harms and protections. The other sectors are less applicable since specific applications of AI in those contexts are not detailed here.
Keywords (occurrence): artificial intelligence (34) machine learning (5) foundation model (1)
Description: Amends the Election Code. In provisions concerning the prevention of voting or candidate support and conspiracy to prevent voting, provides that the term "deception or forgery" includes, but is not limited to the creation and distribution of a digital replica or deceptive social media content that a reasonable person would incorrectly believe is a true depiction of an individual, is made by a government official or candidate for office within the State, or is an announcement or communication ...
Collection: Legislation
Status date: Jan. 31, 2024
Status: Introduced
Primary sponsor: Mary Edly-Allen
(sole sponsor)
Last action: Referred to Assignments (Jan. 31, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text is primarily focused on the implications of AI in the context of elections, particularly how AI can be used to generate deceptive content. The mention of 'deception or forgery' directly relates to social impacts such as misinformation, which can affect public trust and democratic processes. It also outlines the legal definitions of AI-related deception, implicating concerns about accountability and potential biases in AI outputs. Thus, it has strong relevance to the 'Social Impact' category. Additionally, the legislation implicitly deals with the management of digital content generated by AI, which can relate to data governance, especially in the context of misinformation and how it may affect voter behavior. However, it lacks significant content regarding the collection and management of data or the direct governance of AI systems, so Data Governance is moderately relevant. The references to AI's potential to influence voting behavior and election integrity also suggest relevance to System Integrity, as they connect to the security and oversight of election processes, but this connection is less prominent, leading to a lower score here. Robustness is less relevant as it mainly involves performance standards for AI systems rather than the legal implications of their use in elections.
Sector:
Politics and Elections (see reasoning)
The text significantly addresses the intersection of AI and the electoral process, which is clearly situated within the realm of Politics and Elections. It outlines potential legal ramifications around the use of AI-generated content that could mislead voters, making it highly relevant to this sector. The references to government officials and agencies using AI in creating content further underline the importance of proper regulations in enhancing electoral integrity. While there are some tangential implications for other sectors, such as Government Agencies and Public Services due to the involvement of governmental announcements, the focus remains strongly tied to electoral processes. Consequently, no other sectors receive significantly relevant scores.
Keywords (occurrence): artificial intelligence (10) automated (2)