4400 results:
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
Data Governance
System Integrity (see reasoning)
The text outlines detailed quality management requirements predominantly focused on image quality performance parameters in digitization processes, which may involve the use of AI technologies, specifically in automated quality control processes. However, it does not explicitly discuss AI applications or their societal impacts, making it less relevant to the broader legislative themes of AI. The focus is more on procedural and technical specifications than on ethical, societal, or governance issues related to AI usage.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation does not directly address specific sectors but touches on quality control processes that could be applied in various contexts, including government archives. The mention of automated techniques for verifying metadata accuracy hints at potential applications in governmental operations, but overall the text lacks an explicit sectoral focus. Therefore, while it could be tangentially related to government operations, it does not pertain predominantly to any one sector.
Keywords (occurrence): automated (2)
Collection: Code of Federal Regulations
Status date: July 1, 2023
Status: Issued
Source: Office of the Federal Register
The text primarily focuses on performance testing and compliance requirements related to emissions in iron and steel foundries. It does not mention any AI-related technologies, systems, or implications. Therefore, none of the categories regarding Social Impact, Data Governance, System Integrity, or Robustness are relevant since they deal specifically with AI systems and their implications. The text strictly outlines regulatory and procedural frameworks for emissions testing, which does not involve the concerns or focuses of these categories.
Sector: None (see reasoning)
The text addresses the compliance requirements for emissions limits in iron and steel foundries and specifies performance tests and methodologies required by the Environmental Protection Agency (EPA). None of the sectors outlined pertain to AI regulation or its application, as the content centers solely on environmental standards and testing frameworks, which do not include political, governmental, healthcare, or other sectors involving AI technologies. As such, all sectors receive the lowest relevance score.
Keywords (occurrence): automated (4)
Collection: Congressional Record
Status date: Nov. 19, 2024
Status: Issued
Source: Congress
Societal Impact
Data Governance
System Integrity (see reasoning)
The text includes significant references to artificial intelligence, specifically within the context of addressing fraud and scams enabled by AI. This has direct implications for social impact, as it discusses protecting consumers from potential harms caused by AI-enabled systems. The mention of measures to safeguard against consumer fraud aligns closely with the themes of accountability and consumer protection within AI legislation. The emphasis on ensuring fairness and preventing discrimination in AI decision-making processes lends strong relevance to the social impact category, suggesting the need for societal oversight in AI applications. The data governance category also plays a role, since fraud prevention requires accurate data handling, though this is less emphasized in comparison to social impact. System integrity is relevant due to the need for reliable and secure AI systems that provide transparency and mitigate risks. Robustness is not as evidently addressed, since there are no references to performance benchmarks or auditing standards. Therefore, social impact and data governance will be scored higher, with system integrity moderately relevant due to the context of security and oversight within AI applications. Robustness will receive a lower score due to the lack of explicit references.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text specifically mentions the regulation of artificial intelligence in consumer protection contexts, particularly around fraud and scams. This directly relates to how AI is employed in the private sector, affecting consumers and businesses alike, and demonstrates its relevance to the Private Enterprises, Labor, and Employment sector. Additionally, the Committee on Commerce, Science, and Transportation's focus on protecting consumers from AI-enabled fraud hints at governmental oversight in public services. However, while the healthcare sector is crucial in AI discussions, it does not pertain to this text. Hence, the scoring for Private Enterprises is high, and the score for Government Agencies is also notable due to the preventive measures discussed. The remaining sectors (Politics, Judicial, Academic, International, Nonprofits, and Hybrid/Emerging) have little to no relevance and will be scored accordingly.
Keywords (occurrence): artificial intelligence (2)
Description: An act to add Chapter 25.1 (commencing with Section 22757.20) to Division 8 of the Business and Professions Code, relating to artificial intelligence.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Rebecca Bauer-Kahan
(sole sponsor)
Last action: Read first time. To print. (Feb. 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly establishes regulations specifically governing AI systems intended for children. Due to its focus on adverse impacts, risk assessments, and protections for minors, the relevance to the Social Impact category is extremely high. The regulation of AI systems underscores the importance of managing psychological and material harm caused by these technologies, directly aligning the act with issues of fairness, accountability, and consumer protection in AI applications. The Data Governance category is also highly relevant, as the act sets criteria for AI system classification, addresses risk evaluation related to personal information, and establishes compliance requirements to ensure children's data privacy. The System Integrity category is moderately relevant, as the act touches on oversight mechanisms but is not as focused on the inherent security or transparency of AI systems. Robustness is slightly relevant, mainly because the act implies performance benchmarks without specifically detailing any new benchmarks or audit standards for AI systems. Overall, this act is focused on ethical development and ensuring safety for children using AI technology, making it primarily relevant to the Social Impact and Data Governance categories.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
This legislation is particularly focused on children's interactions with AI technology, impacting several sectors related to their welfare. The most direct connection is with Government Agencies and Public Services, as it mandates the establishment of the LEAD for Kids Standards Board and outlines responsibilities for developers and deployers regulated by state authorities. The Healthcare sector is somewhat less relevant, though indirectly related to children's health impacts due to AI technology. The Private Enterprises, Labor, and Employment sector is relevant since it discusses developer obligations and business practices concerning AI products intended for children. Academic and Research Institutions relate to the act in terms of gathering relevant expertise for standards development. Other sectors like Politics and Elections, Judicial System, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors do not directly pertain to the focus of this legislation, resulting in lower relevance scores. This bill primarily influences sectors that are involved directly with child welfare and business practices governing AI technology aimed at minors.
Keywords (occurrence): artificial intelligence (16)
Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Maria Salazar
(10 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (Jan. 22, 2025)
Societal Impact
Data Governance (see reasoning)
The text centers on the regulation of nonconsensual intimate visual depictions, particularly those that involve digital forgery or deepfakes created through AI technologies. This clearly ties into the Social Impact category, as it addresses psychological and reputational harm caused by nonconsensual uses of AI-generated imagery. Furthermore, it encompasses accountability for technologies that could lead to exploitation, aligning with existing issues around fairness and bias. There are also elements that touch upon data governance, particularly in how identity and consent are managed and safeguarded within AI systems. However, the primary focus remains on individual and societal implications. The System Integrity and Robustness categories are less relevant here, as the text does not lay out specific safeguards, compliance measures, or performance benchmarks for AI itself; rather, it focuses on the negative societal ramifications stemming from misuse of such technologies.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The legislation's focus on the regulation of digital forgeries created by AI expands into the political discourse surrounding technology's role in public safety and individual rights, thus moderately connecting to Politics and Elections. It has strong relevance to the category of Government Agencies and Public Services, considering that government oversight and enforcement via the Federal Trade Commission is elaborated in the enactment and enforcement sections, indicating a direct impact on public service mechanics. The regulation doesn’t specifically address the Judicial System but aligns with broader legal implications. The healthcare sector, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and the Hybrid, Emerging, and Unclassified categories do not relate closely to the text, rendering them significantly less relevant. Overall, it prominently intersects with social, governmental, and legal frameworks.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: An act to amend Section 17075.10 of the Education Code, relating to school facilities.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Christopher Cabaldon
(sole sponsor)
Last action: Introduced. Read first time. To Com. on RLS. for assignment. To print. (Feb. 20, 2025)
Societal Impact (see reasoning)
The text explicitly mentions the use of machine learning twice in relation to automating aspects of the school facilities permitting process. This aligns closely with the Social Impact category as it discusses the implications of AI technology (in this case, machine learning) on health and safety projects in schools, potentially affecting students and communities. The mention of health and safety, alongside automated decision-making processes for facilitating construction and permitting, also suggests a concern for accountability and bias that could arise from AI applications in public services. There's little focus on data governance, system integrity, or robustness as standalone themes beyond the mention of machine learning, which does not imply broader legislative perspectives on data management or system quality. Therefore, Social Impact is rated highly relevant, while the other categories receive lower relevance scores.
Sector:
Government Agencies and Public Services (see reasoning)
The relevance of the sectors varies based on the content of the text. The most direct relevance is to Government Agencies and Public Services, as the bill involves state agencies (the Department of Education, the State Architect, and the State Allocation Board) in the implementation of machine learning technologies for public safety projects in schools. While there is an implication of impact on the education sector, it does not specifically reference legislation concerning educational policy, thereby limiting the rating for Academic and Research Institutions. Healthcare has no direct connection, and the other sectors are not pertinent in the context of this text. Thus, only Government Agencies and Public Services receives a high score.
Keywords (occurrence): machine learning (4)
Description: An act relating to restricting electronic monitoring of employees and the use of employment-related automated decision systems
Collection: Legislation
Status date: Feb. 19, 2025
Status: Introduced
Primary sponsor: Monique Priestley
(9 total sponsors)
Last action: Read first time and referred to the Committee on General and Housing (Feb. 19, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation directly addresses the use and implications of automated decision systems and electronic monitoring in employment contexts, making it highly relevant to the Social Impact and System Integrity categories. The focus on automated decision systems specifically highlights potential societal implications, concerns about fairness and bias, and the protection of employees within automated workplace environments. Additionally, the impact assessment and notice requirements relate closely to data governance and system integrity principles, ensuring transparency and accountability for AI usage in employment settings. The bill seeks to establish safeguards that protect employee rights and privacy with regard to automated systems, specifically addressing concerns around accountability, the handling of sensitive data, and fairness in decision-making processes.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
This text is particularly relevant to several sectors due to its implications for labor practices and employee rights in the context of AI technologies. It explicitly deals with the regulation of automated decision systems in employment contexts, setting standards for transparency and fairness in decision-making. This affects the Private Enterprises, Labor, and Employment sector significantly, as well as the Government Agencies and Public Services sector because it sets a precedent for government regulations that may influence various public and private organizations. Additionally, implications may extend to the Judicial System as cases may arise challenging the legality of the use of automated decision aids in hiring or employment evaluations. While its direct relevance to sectors like Healthcare and Academic Institutions is less clear, the principles discussed could be applicable in contexts where AI-driven decisions also exist.
Keywords (occurrence): artificial intelligence (1) automated (42) algorithm (2)
Description: An act to add Section 12817 to the Government Code, relating to artificial intelligence.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Steve Padilla
(sole sponsor)
Last action: Introduced. Read first time. To Com. on RLS. for assignment. To print. (Feb. 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses mental health in relation to artificial intelligence, focusing on how AI can improve mental health outcomes, as well as assessing ethical standards and potential risks of using AI in mental health settings. This direct emphasis on the societal implications and individual impact of AI technologies places it strongly within the 'Social Impact' category. Additionally, the act involves the evaluation and management of data and frameworks concerning AI tools in mental health, which suggests relevance to 'Data Governance' as well. The mention of appointing a working group implies a level of oversight regarding system integrity, but this is less explicit than the implications for social impact and data governance. While there are elements that could touch on robustness, such as references to best practices, they are not as prominent. Overall, the text lends itself best to the 'Social Impact' category for its clear focus on individual well-being and ethical implications of AI in mental health.
Sector:
Healthcare
Academic and Research Institutions (see reasoning)
The legislation targets the use of artificial intelligence in mental health, which directly relates to the healthcare sector. By evaluating AI's role in treatment and diagnosis, addressing potential risks, and proposing training frameworks for mental health professionals, it highlights its importance in healthcare settings. The focus on stakeholder engagement and input suggests the bill's aim to inform healthcare practices and regulatory measures. Although there are components that could overlap with potential implications for government agencies, the primary emphasis remains within the healthcare sector, thus solidifying its classification in that field.
Keywords (occurrence): artificial intelligence (13) automated (1)
Description: An act to amend Section 24011.5 of the Vehicle Code, relating to vehicles.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Lena Gonzalez
(sole sponsor)
Last action: Introduced. Read first time. To Com. on RLS. for assignment. To print. (Feb. 20, 2025)
Societal Impact
System Integrity (see reasoning)
The text primarily discusses consumer protection in the context of vehicles equipped with partial driving automation features. The relevance to AI comes from the references to 'partial driving automation features', which are commonly powered by AI algorithms. However, the discussion is more focused on consumer notices and liability issues rather than the broader societal impacts of AI, the management of data, integrity of AI systems, or robustness of benchmarks. For Social Impact, while consumer protection is essential, it’s limited in scope regarding AI's broader societal effects. For Data Governance, there's a mention of sharing information related to these features, but it does not delve into the core issues surrounding data management or accuracy. System Integrity is somewhat relevant as it deals with transparency in features, but it lacks a deep focus on security and oversight of AI systems. Robustness is not relevant as the text does not discuss performance benchmarks or compliance standards for AI systems. Overall, the relevance to the categories is minimal to moderate at best.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text is primarily focused on regulations pertaining to passenger vehicles with automation features, which closely relates to the Private Enterprises, Labor, and Employment sector, as it affects manufacturers and dealers. The consumer notice aspect connects with regulation and accountability for these entities. There is limited relevance to other sectors such as Politics and Elections, Government Agencies and Public Services, or the Judicial System, as the legislation does not appear to directly address those contexts. Healthcare, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified sectors are not connected to the focus of this legislation. Thus, the scoring reflects the primary impact on the business and consumer landscape regarding automotive technologies.
Keywords (occurrence): autonomous vehicle (1)
Description: Requires bureau within DOC to study economic impact of automation, artificial intelligence & robotics on employment in state; specifies contents of study; requires bureau to consult with specified entities in conducting study; requires bureau to submit report to Governor & Legislature by specified date; requires bureau to conduct this study at specified intervals of time.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Leonard Spencer
(sole sponsor)
Last action: Filed (Feb. 20, 2025)
Societal Impact (see reasoning)
The text focuses on studying the economic impact of artificial intelligence (AI) and automation on employment. It explicitly mentions how AI may lead to job displacement and requires an analysis of its impact on different demographics, wages, and industry sectors. There are clear connections to social implications through discussions of job loss, training, and policy recommendations aimed at workforce resilience. Thus, it aligns closely with 'Social Impact.' The 'Data Governance' category is not applicable as the text does not concern data collection practices nor accuracy issues in datasets. 'System Integrity' and 'Robustness' are similarly less relevant as there are no mechanisms for safety, transparency, or performance benchmarks outlined in the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is highly relevant to the 'Private Enterprises, Labor, and Employment' sector because it addresses the impact of AI on jobs, employment practices, and the economy. It mentions specific industries and demographics affected by automation, making it pertinent. There is also relevance to 'Government Agencies and Public Services' because the study is conducted by a bureau within the Department of Commerce, linking it to government operations. Although not directly mentioned, the study relates to academic considerations through the inclusion of academic institutions in consultations, so it holds slight relevance to 'Academic and Research Institutions.' The other sectors like Politics and Elections, Judicial System, Healthcare, International Cooperation, Nonprofits, and Hybrid sectors have minimal or no connection to the content of this bill.
Keywords (occurrence): artificial intelligence (2) automated (1)
Description: AN ACT relating to elections; prohibiting the use of artificial intelligence in equipment used for voting, ballot processing or ballot counting; requiring certain published material that is generated through the use of artificial intelligence or that includes a materially deceptive depiction of a candidate to include certain disclosures; prohibiting, with certain exceptions, the distribution of synthetic media that contains a deceptive and fraudulent deepfake of a candidate; providing penalti...
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Bert Gurr
(4 total sponsors)
Last action: Read first time. Referred to Committee on Legislative Operations and Elections. To printer. (Feb. 20, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text primarily addresses the implications of artificial intelligence in the electoral process. It explicitly details the prohibition of AI usage in voting equipment, mandates disclosures for AI-generated material, and establishes penalties for the distribution of deceptive deepfakes. Therefore, it is highly relevant to the Social Impact category due to its focus on the fairness and integrity of elections, the protection of candidates' reputations, and preventing misinformation. The relevance to Data Governance is moderate, as it discusses accuracy and transparency in published materials, but does not delve deeply into data management issues. System Integrity is also moderately relevant since it emphasizes the need for securing electoral processes against automation, but does not focus on inherent system security measures. The Robustness category is less relevant as there are no discussions about benchmarks or auditing measures in AI performance within this context.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text directly relates to the Politics and Elections sector as it outlines regulations on the use of AI in electoral processes, aiming to protect the integrity and fairness of elections. The Government Agencies and Public Services sector is also relevant as it discusses government regulations regarding voting equipment. However, the other sectors like Judicial System, Healthcare, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified are not applicable here as the content is primarily focused on election-related provisions and AI's role in that particular context. Thus, the scores reflect that restriction.
Keywords (occurrence): artificial intelligence (9) machine learning (1) deepfake (6) synthetic media (10)
Description: Education and workforce data ecosystem in the Commonwealth; Virginia Education and Workforce Data Governing Board and Office of Virginia Education and Workforce Data established. Establishes in the executive branch of state government the 10-member Virginia Education and Workforce Data Governing Board and establishes with the Virginia Information Technologies Agency a supporting Office of Virginia Education and Workforce Data to (i) govern, administer, and support the ecosystem of education a...
Collection: Legislation
Status date: Jan. 8, 2025
Status: Introduced
Primary sponsor: Rodney Willett
(2 total sponsors)
Last action: Left in Education (Feb. 4, 2025)
Data Governance
System Integrity (see reasoning)
This text primarily addresses the establishment of the Virginia Education and Workforce Data Governing Board and the Office of Virginia Education and Workforce Data, focused on education and workforce data management within the Commonwealth. While it does mention 'data governance' and aspects such as 'data sharing' and 'data security,' there is no specific mention of AI technologies or their implications. Therefore, its relevance to the AI-related categories is low to moderate, as the focus is on governance rather than direct AI application. The relevance to Social Impact is slightly elevated due to potential implications for fairness in data use, but the text still does not explicitly tie into AI. Data Governance is the most relevant category given the focus on data management and governance, but it lacks AI-specific references. System Integrity sees some relevance due to the discussion of data security and transparency, while Robustness is marginally relevant due to the mention of benchmarks for data governance, though again without direct ties to AI performance metrics.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The primary focus of the legislation is on data governance related to education and workforce management within government frameworks. There are mentions of data sharing and the establishment of data governance structures, suggesting relevance to government agencies and public services. However, the focus on AI usage specifically in these contexts is not substantial. The reference to data analytics suggests some application in academic research and potentially informs policy-making, but the text does not explicitly address how AI fits into these sectors. Therefore, the text is moderately relevant to Government Agencies and Public Services, slightly relevant to Academic and Research Institutions, and not particularly relevant to other sectors. The legislation does not directly address AI in political contexts, healthcare, labor, or international cooperation, leaving 'Hybrid, Emerging, and Unclassified' as a potential category but with lower relevance. Thus, only Government Agencies and Public Services seems clearly pertinent in this case, given the role of government in education and workforce management.
Keywords (occurrence): artificial intelligence (2) machine learning (1) automated (2)
Description: Prohibits and imposes criminal penalty on disclosure of certain intentionally deceptive audio or visual media within 90 days of election.
Collection: Legislation
Status date: June 28, 2024
Status: Engrossed
Primary sponsor: Louis Greenwald
(14 total sponsors)
Last action: Reported from Senate Committee, 2nd Reading (Oct. 24, 2024)
Societal Impact (see reasoning)
The text predominantly addresses the use of intentionally deceptive audio or visual media, particularly in the context of elections. The connection to Social Impact is robust due to the potential for such media to influence public discourse, voter perception, and trust in democratic processes, which directly aligns with issues of misinformation and the role of AI in generating such media. Data Governance and System Integrity have limited relevance as the focus is more on the legal parameters surrounding disclosure and intent rather than the data management associated with AI systems. Robustness is not substantially relevant because the text does not discuss performance benchmarks or compliance standards for AI systems directly, rather it centers on the repercussions of deceptive practices. Overall, Social Impact is the most relevant category because it distinctly covers the implications of AI in public discourse and electoral integrity.
Sector:
Politics and Elections
Judicial system (see reasoning)
The text primarily pertains to the sector of Politics and Elections as it deals with the regulation of deceptive audio and visual media specifically around election periods. This regulation is crucial in ensuring fair electoral processes and protecting voters from misinformation. While there are mentions of technological providers, the primary impact and intent of the legislation are in the context of electoral integrity and voter protection, making it highly relevant to Politics and Elections. Other sectors such as Government Agencies, Healthcare, or Private Enterprises do not have specific connections with the legislation, as its focus remains strictly on electoral proceedings. Academic contexts or cooperative standards are also not mentioned here, so those sectors are deemed irrelevant.
Keywords (occurrence): artificial intelligence (2)
Description: Prohibit conduct involving computer-generated child pornography
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Brian Hardin
(sole sponsor)
Last action: Notice of hearing for February 06, 2025 (Jan. 28, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses computer-generated child pornography and the implications of its production, possession, and distribution using artificial intelligence. The inclusion of 'computer-generated' alongside 'artificial intelligence' directly aligns with AI-related legislation addressing potential harms and ethical concerns. Therefore, its relevance spans Social Impact, Data Governance (concerning data inputs and outputs), System Integrity (regarding the control and security of generated content), and Robustness (as it relates to legislative measures ensuring compliance and oversight of AI technologies used to create such material).
Sector:
Government Agencies and Public Services
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation pertains to the topic of child pornography, primarily relevant in the contexts of protecting minors, law enforcement, and public safety. While it touches upon the production mechanism (AI-generated content), it primarily aligns with the regulations that could be pertinent to the Judicial System (handling of cases involving such content) and Government Agencies and Public Services (in terms of enforcement and potential involvement of social services). It is less relevant to other sectors like Healthcare or Private Enterprises, and thus, receives lower scores in those areas.
Keywords (occurrence): artificial intelligence (1)
Description: No Use Of Ai For Rent Manipulation
Collection: Legislation
Status date: Jan. 29, 2025
Status: Introduced
Primary sponsor: Andrea Romero
(2 total sponsors)
Last action: HJC: Reported by Committee with Do Not Pass but with a Do Pass Recommendation on Committee Substitution, placed on temporary calendar (Feb. 18, 2025)
Societal Impact
Data Governance (see reasoning)
This bill explicitly addresses the manipulation of rent pricing using artificial intelligence, which has significant implications for social fairness and market dynamics. The legislation aims to mitigate potential harm caused by AI in rental markets, directly relating to the manipulation of consumer prices and competitive fairness. The prohibition of AI in this context highlights concerns about accountability for the outcomes produced by AI-driven pricing. Thus, it has a high relevance to the Social Impact category, particularly regarding fairness and consumer protection. Data Governance is also relevant as the bill touches on the management and usage of data by highlighting prohibited practices around AI coordination functions. System Integrity and Robustness are less relevant since the bill does not primarily focus on security, transparency, benchmarks, or compliance aspects of AI systems.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The bill is most relevant to the rental housing market, affecting persons seeking rental agreements as well as rental property owners, and connecting indirectly to the economic sector. While it touches on issues related to governance, such as the coordination and competitive practices of rental owners utilizing AI, it does not specifically address AI use in broad public services or employment contexts in the way other sectors might relate to legislative efforts. The specifics of AI manipulation in pricing do not extend neatly into sectors such as Healthcare or the Judicial System. Therefore, the evaluation focuses on the Private Enterprises, Labor, and Employment sector, as the manipulation has direct implications for rental business practices and their governance.
Keywords (occurrence): artificial intelligence (1) algorithm (1)
Description: Requiring that candidates, campaign finance entities, and specified other persons, or agents of candidates, campaign finance entities, or specified other persons, that publish, distribute, or disseminate, or cause to be published, distributed, or disseminated, to another person in the State campaign materials that use or contain synthetic media include a specified disclosure in a specified manner; and defining "synthetic media" as an image, an audio recording, or a video recording that has be...
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Anne Kaiser
(11 total sponsors)
Last action: Hearing 2/11 at 1:00 p.m. (Jan. 28, 2025)
Societal Impact (see reasoning)
The text explicitly involves synthetic media, which is created using artificial intelligence technologies. The legislation calls for the disclosure of synthetic media used in campaign materials to ensure transparency and inform the public about alterations made to images, audio, and video. This relevance connects directly to the broader implications of AI on society, particularly concerning misinformation, public understanding, and deception in political contexts. Thus, it fits within the 'Social Impact' category very well. The other categories, such as Data Governance, System Integrity, and Robustness, do not apply strongly since the focus is primarily on the social and ethical implications rather than data handling, system security, or performance metrics.
Sector:
Politics and Elections (see reasoning)
This legislation is primarily focused on the political sector, regulating how synthetic media is used within election campaigns to ensure that voters are informed about the nature of the media they encounter. It specifically addresses the role of AI in crafting potentially misleading campaign materials, making it pertinent to the political context. Though there may be tangential relevance to government and public services because campaign laws affect public interactions with government operations, the legislation's focus on election materials distinctly categorizes it within 'Politics and Elections' rather than the broader government sector.
Keywords (occurrence): synthetic media (4)
Description: Health care credentialing and billing oversight; Hospital Oversight Fund established; Independent Credentialing Review Board established; Medicaid Billing Oversight Task Force established; reports. Establishes a Hospital Oversight Fund, Independent Credentialing Review Board, and Medicaid Billing Oversight Task Force to fund and carry out oversight of credentialing, as that term is defined in the bill, for health care providers and Medicaid billing. The bill establishes protections for whistl...
Collection: Legislation
Status date: Jan. 15, 2025
Status: Introduced
Primary sponsor: Bonita Anthony
(sole sponsor)
Last action: Committee Referral Pending (Jan. 15, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text prominently addresses automated systems and machine learning within the context of health care credentialing and billing oversight. Specifically, it discusses the implementation of data analytics systems and advanced analytics that involve machine learning techniques to monitor Medicaid billing patterns and enhance credentialing oversight. This indicates a clear relevance to the effects of AI within health care, qualifying it under 'Social Impact' due to its implications for individual providers and patients, as well as operational transparency and accountability. The presence of such technologies also necessitates careful consideration of data governance related to privacy and security measures, aligning the text with both 'Data Governance' and 'System Integrity.' 'Robustness' could also be considered due to the focus on compliance with auditing and oversight benchmarks for these AI systems. However, the primary emphasis is on the social effects and governance surrounding the deployment of AI in healthcare settings.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text is directly related to the healthcare sector by establishing new oversight mechanisms that involve AI in credentialing and billing processes. It mentions AI applications specifically in the context of analyzing billing patterns and credentialing, which are critical to healthcare operations. Additionally, the establishment of a health care provider credentialing database suggests a strong regulatory framework involving AI. Therefore, its relevance to the healthcare sector is clear and prominent.
Keywords (occurrence): machine learning (1) automated (1)
Description: Revise election laws regarding disclosure requirements for the use of AI in elections
Collection: Legislation
Status date: Dec. 10, 2024
Status: Introduced
Primary sponsor: Janet Ellis
(sole sponsor)
Last action: (S) Hearing (S) State Administration (Jan. 18, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text explicitly addresses AI in the context of elections, particularly focusing on the use of deepfakes and generative artificial intelligence. It outlines legislation to regulate AI-generated content to prevent misinformation and disinformation during elections, which directly impacts public trust and the integrity of the electoral process. The relevance of 'Social Impact' is significant due to the concern over misinformation affecting voter perception. 'Data Governance' is less relevant since the primary focus is on the regulation of AI content rather than data management. 'System Integrity' is also slightly relevant as it touches on the importance of maintaining the security and credibility of election communications. However, the focus remains on social consequences rather than systemic security measures. Overall, 'Robustness' aligns with the legislation’s intentions to ensure new standards and accountability for AI use in elections, but it may not strongly focus on performance benchmarks directly.
Sector:
Politics and Elections (see reasoning)
The text is focused on legislation that pertains directly to the intersection of AI and politics, specifically through the lens of election integrity and transparency. It addresses how AI-generated content should be disclosed in electoral communications, emphasizing the regulation of AI in political contexts. Thus, 'Politics and Elections' is extremely relevant as it specifically deals with the regulation of AI in electoral processes. There is limited relevance to other sectors, as the legislation does not primarily focus on government agencies, judicial aspects, healthcare, or other sectors. Therefore, other sector categories would score lower due to their lack of direct association with the content of the text.
Keywords (occurrence): artificial intelligence (7) deepfake (4) synthetic media (2)
Description: Limit government use of AI systems
Collection: Legislation
Status date: Feb. 14, 2025
Status: Engrossed
Primary sponsor: Braxton Mitchell
(sole sponsor)
Last action: (S) First Reading (Feb. 14, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the limitations on the use of AI systems by government entities, including prohibitions against certain applications of AI, such as cognitive behavioral manipulation and the potential for discrimination, which directly ties to social impact. It also discusses the requirement for human oversight of AI recommendations, indicating a concern for system integrity. Additionally, the text outlines requirements for disclosures of AI-generated materials, relating to data governance issues concerning transparency and accountability. However, it does not prioritize the development of benchmarks or standards specifically associated with robustness in AI, as that is not a focus of the legislation.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The legislation primarily addresses the use of AI by government entities, thereby aligning closely with the sector focused on government agencies and public services. The specific mention of limiting the government's use of AI for cognitive manipulation and requiring human review links it to concerns about ethical governance in public service delivery. While there are components that could loosely connect to other sectors like political processes and the judicial system, the primary focus on government utilization of AI systems makes it most relevant to government agencies and public services.
Keywords (occurrence): artificial intelligence (12) machine learning (1)
Description: Grid Modernization Roadmap
Collection: Legislation
Status date: Jan. 27, 2025
Status: Introduced
Primary sponsor: Meredith Dixon
(3 total sponsors)
Last action: SCONC: Reported by committee with Do Pass recommendation (Feb. 14, 2025)
System Integrity (see reasoning)
The text primarily addresses the development of a roadmap for grid modernization and the establishment of a grant program to support various projects. It does mention the application of artificial intelligence to identify methane leaks, which aligns with the focus on new technologies. However, the overall emphasis is on grid modernization without deep engagement in broader societal impacts (like bias, discrimination, or misinformation often related to AI systems). Therefore, the relevance to the Social Impact category is limited. In terms of Data Governance, while there is a focus on improving system efficiency and reliability, it does not specifically address data management principles. The inclusion of AI in identifying methane leaks introduces a level of accountability and functional transparency, thus touching on System Integrity, yet it lacks an explicit detailed discussion. Robustness is not significantly addressed since the text does not focus on performance benchmarks or oversight bodies apart from general project descriptions. Overall, while AI is mentioned, it does not dominate any of the categories, resulting in lower scores across the board.
Sector:
Government Agencies and Public Services (see reasoning)
The text discusses the modernization efforts of New Mexico's electric grid and the grant program for entities involved in this project. It includes provisions for municipalities, state agencies, and educational institutions, indicating a direct impact on Government Agencies and Public Services. While it discusses the potential engagement of educational institutions, it does not clearly address the academic research use of AI, nor does it mention healthcare or the judicial system. Private Enterprises are indirectly involved as they may be impacted by improved grid technologies, but the text does not focus on them specifically. The absence of direct links to sectors like Politics and Elections, Nonprofits, or international cooperation further limits the applicability of those sectors. The most relevant sector is Government Agencies and Public Services, given the strong emphasis on local and state government projects.
Keywords (occurrence): artificial intelligence (1) automated (1)