4827 results:
Description: Generally revise usage of artificial intelligence in certain health insurance
Summary: The bill regulates the use of artificial intelligence by health insurance issuers, ensuring compliance with medical necessity criteria and protecting against discrimination, while promoting transparency and oversight.
Collection: Legislation
Status date: Feb. 18, 2025
Status: Introduced
Primary sponsor: Jill Cohenour
(sole sponsor)
Last action: (H) Fiscal Note Requested (Feb. 18, 2025)
Societal Impact
Data Governance (see reasoning)
The text is predominantly centered on the regulation of artificial intelligence usage in health insurance contexts. It explicitly outlines restrictions and requirements for health insurance issuers that employ AI and algorithmic systems, particularly in utilization review or management related to medical necessity. This focus places it directly under the Social Impact category, as it addresses consumer protections and discrimination concerns related to AI utilization in healthcare. The Data Governance category also applies as it emphasizes the compliance of AI with legal standards and proper handling of patient data. System Integrity and Robustness are less directly relevant since the text does not specifically tackle security protocols or performance benchmarks for AI systems; instead, it focuses on compliance and operational standards in the specific context of health insurance. Thus, Social Impact and Data Governance are scored higher due to their pivotal role in shaping the AI landscape in healthcare, while System Integrity and Robustness receive lower scores for their limited applicability here.
Sector:
Healthcare (see reasoning)
This bill is highly relevant to healthcare as it explicitly discusses the usage of AI within health insurance practices. It outlines the responsibilities and obligations that health insurance issuers have regarding AI, including ensuring equitable treatment, appropriate data use, and disallowing AI-driven bias. Given that the text specifically addresses AI applications in health insurance utilization review, it receives a high score under the Healthcare sector. It touches on regulations affecting interactions with AI, but it does not address any political, judicial, or research-oriented aspects, hence the lower scores in other sectors.
Keywords (occurrence): artificial intelligence (17) algorithm (14)
Description: To direct the Secretary of Agriculture to establish centers of excellence for agricultural security research, extension, and education, and for other purposes.
Summary: The American Agricultural Security Act of 2024 aims to establish centers of excellence for agricultural security research, education, and extension to enhance the U.S. agricultural sector's resilience against threats.
Collection: Legislation
Status date: May 17, 2024
Status: Introduced
Primary sponsor: Don Bacon
(3 total sponsors)
Last action: Referred to the House Committee on Agriculture. (May 17, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text highlights the establishment of centers of excellence for research, extension, and education in agricultural security, with a specific inclusion of 'artificial intelligence' in the realm of 'digital agriculture.' This signals the recognition of AI's role in enhancing agricultural practices. Consequently, the relevance of Social Impact stems from potential societal benefits tied to AI in agriculture, including workforce development and community engagement. For Data Governance, while the text does not explicitly address data management or privacy associated with AI, the application of AI in agriculture would require careful data policies and governance. System Integrity is moderately relevant due to the emphasis on cybersecurity, which intersects with AI's capabilities in safeguarding agricultural data and processes. Robustness is less relevant as the text does not focus on performance benchmarks for AI systems directly. Overall, Social Impact and Data Governance receive higher relevance scores, whereas System Integrity is acknowledged as relevant due to the cybersecurity emphasis.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The mention of 'artificial intelligence' in the context of digital agriculture makes it relevant to the sector of Private Enterprises, Labor, and Employment, as AI technologies are poised to influence agricultural labor practices and industry standards. Government Agencies and Public Services see relevance through the involvement of governmental departments and educational institutions in establishing the centers of excellence. There may also be connections to Academic and Research Institutions due to the emphasis on research and education activities. However, the text does not delve deeply into political implications, the judicial framework around AI, or specific healthcare applications, resulting in lower scores for those sectors. Overall, Private Enterprises, Labor, and Employment has significant applicability, with Government Agencies and Public Services and Academic and Research Institutions receiving moderate scores.
Keywords (occurrence): artificial intelligence (1)
Description: An act to add Title 23 (commencing with Section 3273.72) to Part 4 of Division 3 of the Civil Code, relating to social media platforms.
Summary: Senate Bill 771 mandates California social media platforms to limit harmful content and establishes liability for violations of personal rights, enhancing protections for vulnerable populations against hate and misinformation.
Collection: Legislation
Status date: Feb. 21, 2025
Status: Introduced
Primary sponsor: Henry Stern
(sole sponsor)
Last action: Re-referred to Com. on RLS. (March 25, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text pertains significantly to Social Impact as it addresses the consequences of AI and algorithmic content moderation on vulnerable populations, highlighting how social media algorithms can perpetuate hate speech and misinformation, leading to real-world harm. It also outlines civil liabilities for social media platforms if they fail to regulate such content responsibly. This focus on the societal implications of AI in social media clearly places it under the Social Impact category. Data Governance also scores highly because algorithms' performance in this context relates to the governance of content and user data, particularly as platforms may be held accountable if their algorithms contribute to civil rights violations. System Integrity scores moderately due to the text's mention of the need for transparency and accountability in how algorithms operate, although it doesn’t delve deeply into security or oversight measures. Robustness has limited relevance since the text does not discuss performance benchmarks or auditing of AI systems. Overall, the text heavily emphasizes societal impacts, with significant attention to content and data governance, while showing lesser relevance to system integrity and robustness.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation relates strongly to Government Agencies and Public Services, as it addresses the responsibility of social media platforms to ensure the safety and rights of their users, including minors. It emphasizes the role of social media in public discourse and societal welfare, which may affect governmental policies around education and child supervision. While the text references implications for specific communities and rights, it does not explicitly deal with the judicial system or healthcare sectors. There is a clear connection to the Private Enterprises, Labor, and Employment sector through the obligations placed on social media companies, though this is not the bill's primary focus. Academic and Research Institutions may have some relevance given the educational implications of pupils' social media use, but that connection is weaker. Broader ethical considerations touching international standards and practices are only indirectly implied, keeping relevance to that sector very low. Overall, the most pertinent sectors are Government Agencies and Public Services and, to a slightly lesser extent, Private Enterprises, Labor, and Employment.
Keywords (occurrence): artificial intelligence (3) algorithm (2)
Description: To criminalize unauthorized dissemination of intimate images that are digitally altered or created through the use of artificial intelligence.
Summary: This bill criminalizes the unauthorized dissemination of intimate images, including those altered or created by artificial intelligence, protecting individuals from harm and ensuring consent is prioritized.
Collection: Legislation
Status date: March 4, 2025
Status: Introduced
Primary sponsor: Judiciary Committee
(sole sponsor)
Last action: Referred to Joint Committee on Judiciary (March 4, 2025)
Societal Impact (see reasoning)
This legislation explicitly addresses the impact of AI on individuals by criminalizing the unauthorized dissemination of intimate images altered or created through AI. It recognizes the potential harms caused by AI-generated content, and aims to hold individuals accountable for misuse. The mention of psychological harm and emotional distress directly relates to social impacts stemming from AI misuse, making the Social Impact category highly relevant. Although data aspects are implied (e.g., concerning personal images), the primary focus is on dissemination and consent, which aligns best with Social Impact. Therefore, System Integrity and Robustness are not particularly relevant, as the focus of the legislation is more on the misuse and consequences of AI rather than the technical specifics of systems or benchmarks. Overall, the primary concern is societal harm and ethical implications of AI technology's capabilities, so the score reflects this alignment.
Sector:
Judicial system (see reasoning)
The legislation primarily concerns unauthorized image dissemination, which is relevant to the judicial system because it defines criminal acts and legal protections against misuse of AI technology. It does not directly address political campaigns, healthcare, public service functions, or employment practices. Because the act establishes legal consequences for actions enabled by AI, and those actions affect individuals, it pertains to the Judicial System, which would oversee and adjudicate such cases. The lack of direct implications for other sectors underscores the bill's focus on legality and societal impact rather than broad regulatory frameworks. Thus, Judicial System is the most fitting sector, while all others have limited or no applicability.
Keywords (occurrence): artificial intelligence (3) machine learning (1) algorithm (1)
Description: An Act to Ensure Human Oversight in Medical Insurance Payment Decisions
Summary: The bill mandates that health insurance carriers cannot deny claims based solely on artificial intelligence decisions, requiring human physician oversight in review processes starting January 1, 2026.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: Joseph Martin
(9 total sponsors)
Last action: Received by the Secretary of the Senate on March 6, 2025 and REFERRED to the Committee on HEALTH COVERAGE, INSURANCE AND FINANCIAL SERVICES pursuant to Joint Rule 308.2 (March 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text directly addresses the use of artificial intelligence in the context of medical insurance decisions, focusing on how AI can or cannot influence coverage determinations. This raises social impact implications, particularly concerning health outcomes and consumer protections. The requirement for human oversight and physician involvement exemplifies the importance of data governance regarding the accuracy and fairness of AI-driven decisions. The legislation's provisions also speak to system integrity by ensuring transparency in how AI is used in the decision-making process. However, it does not emphasize robustness in the development of benchmarks or performance metrics for AI applications. Overall, the legislation is highly relevant to social impact, data governance, and system integrity, with robustness remaining largely out of scope.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The text pertains particularly to the healthcare sector, as it details regulations on AI usage in medical insurance specifically. It also indirectly relates to government agencies since it mandates reporting and oversight by the state's Bureau of Insurance. While one could stretch its relevance to other sectors (like private enterprises through insurance carriers), its primary focus remains solidly within healthcare regulations. The explicit references to healthcare providers and the specifics of managing medical claims denote a strong alignment with the healthcare sector, whereas other sectors receive limited relevance.
Keywords (occurrence): artificial intelligence (8)
Description: Creates the Louisiana Atmospheric Protection Act
Summary: The Louisiana Atmospheric Protection Act prohibits weather modification activities, establishes penalties for violations, and creates an Atmospheric Protection Fund. Its goal is to prevent harmful atmospheric interventions.
Collection: Legislation
Status date: April 4, 2025
Status: Introduced
Primary sponsor: Kimberly Coates
(2 total sponsors)
Last action: Under the rules, provisionally referred to the Committee on Natural Resources and Environment. (April 4, 2025)
Description: Artificial Intelligence Developer Act established; civil penalty. Creates operating standards for developers and deployers, as those terms are defined in the bill, relating to artificial intelligence, including (i) avoiding certain risks, (ii) protecting against discrimination, (iii) providing disclosures, and (iv) conducting impact assessments and provides that the Office of the Attorney General shall enforce the provisions of the bill. The provisions of the bill related to operating standar...
Summary: The Artificial Intelligence Developer Act establishes regulations for developers and deployers of high-risk AI systems in Virginia, aiming to prevent algorithmic discrimination and ensure transparency through disclosures and risk assessments, with civil penalties for violations.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Michelle Maldonado
(sole sponsor)
Last action: Continued to 2025 with substitute in Communications, Technology and Innovation by voice vote (Feb. 5, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text addresses various operating standards for developers and deployers of high-risk artificial intelligence systems. It discusses implications for algorithmic discrimination, risk management, impact assessments, and transparency, indicating a direct relevance to AI's social impact and system integrity. The legislation establishes a framework for addressing the risks AI poses to consumers, particularly regarding fairness and accountability, making it very relevant to the Social Impact category. Additionally, the focus on assessing and managing risks related to AI systems aligns with the System Integrity category. While data governance aspects are touched upon, the emphasis is mostly on operational standards and risk management rather than data collection and management regulations. Robustness focuses on performance and auditing standards, which are less covered in this text. Therefore, Social Impact and System Integrity categories garner higher relevance scores.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation strongly centers on the implications of AI systems in society, especially regarding consumer protections, algorithmic discrimination, and regulations on developers and deployers of these systems. Hence, it is highly relevant to the sector of Government Agencies and Public Services as it falls within the realm of public safety and consumer protection involving government oversight. Additionally, it has implications for Private Enterprises due to the expectations set on developers and deployers in the business context. While AI's role in the Judicial System and Healthcare sectors could connect, the text does not specifically address these areas, leading to lower relevance for those sectors. Therefore, the Government Agencies and Public Services sector scores high, while Private Enterprises receives a moderate score due to its implications in regulating business practices.
Keywords (occurrence): artificial intelligence (65) machine learning (2) foundation model (2)
Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Summary: The TAKE IT DOWN Act requires platforms to remove nonconsensual intimate visual depictions and sets penalties for intentional publication of such content, aiming to combat digital exploitation.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Maria Salazar
(10 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (Jan. 22, 2025)
Societal Impact
Data Governance (see reasoning)
The text centers around the regulation of nonconsensual intimate visual depictions, particularly those that involve digital forgery or deepfakes created through AI technologies. This clearly ties into the Social Impact category as it addresses psychological and reputational harm caused by nonconsensual uses of AI-generated imagery. Furthermore, it encompasses accountability of technologies that could lead to exploitation, aligning with existing issues around fairness and bias. There are also elements that touch upon data governance, particularly in how identity and consent are managed and safeguarded within AI systems. However, the primary focus remains on individual and societal implications. System Integrity and Robustness categories are less relevant here, as the text does not lay out specific safeguards, compliance measures, or performance benchmarks for AI itself, rather it focuses on the ramifications of negative societal impacts stemming from misuse of such technologies.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The legislation's focus on the regulation of digital forgeries created by AI expands into the political discourse surrounding technology's role in public safety and individual rights, thus moderately connecting to Politics and Elections. It has strong relevance to the category of Government Agencies and Public Services, considering that government oversight and enforcement via the Federal Trade Commission is elaborated in the enactment and enforcement sections, indicating a direct impact on public service mechanics. The regulation doesn’t specifically address the Judicial System but aligns with broader legal implications. The healthcare sector, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and the Hybrid, Emerging, and Unclassified categories do not relate closely to the text, rendering them significantly less relevant. Overall, it prominently intersects with social, governmental, and legal frameworks.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: A BILL for an Act to provide for a legislative management study relating to the development of advanced technologies.
Summary: The bill mandates a legislative management study in North Dakota to analyze the development of advanced technologies, exploring funding sources and potential grant programs for innovation.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Engrossed
Primary sponsor: Josh Christy
(12 total sponsors)
Last action: Reported back amended, do pass, amendment placed on calendar 16 0 0 (April 10, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text contains provisions for establishing an advanced technology grant program, with specific emphasis on artificial intelligence, machine learning, and similar technologies. This direct focus on the development and social implications of AI gives the text strong relevance to the Social Impact category. The text does not explicitly address data governance practices, system integrity concerns, or robustness measures, though oversight and compliance considerations appear indirectly through the review process, making those categories less relevant.
Sector:
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The text mentions advanced technology in contexts that broadly touch the private sector but does not outline provisions for particular sectors such as healthcare or government agencies. Its focus on entrepreneurship and small-business innovation ties it to the private enterprises sector, though the bill is more overarching. It is therefore relevant, but sector-specific legislative considerations are not its primary focus.
Keywords (occurrence): machine learning (1)
Description: High-risk artificial intelligence; development, deployment, and use; civil penalties. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill has a delayed effective date of July 1, 2026.
Summary: The bill regulates high-risk artificial intelligence systems in Virginia, defining algorithmic discrimination, establishing operational standards for developers and deployers, and imposing civil penalties for non-compliance. It aims to protect consumers from discriminatory outcomes.
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Michelle Maldonado
(24 total sponsors)
Last action: Fiscal Impact Statement from Department of Planning and Budget (HB2094) (March 7, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This text establishes requirements and standards for the development, deployment, and use of high-risk artificial intelligence systems, emphasizing accountability for algorithmic discrimination, consumer protection, and operational standards. The references to 'high-risk artificial intelligence systems' and 'algorithmic discrimination' highlight the potential social impact and regulatory measures required to prevent discrimination and protect individuals. Furthermore, it outlines safety and responsibility frameworks for developers and deployers of AI, making it highly relevant to all categories specified. The need for documentation, risk management plans, and standards compliance directly impacts social welfare, data governance, system integrity, and the robustness of AI systems.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The text mentions developers and deployers of high-risk AI, which could include various sectors like healthcare, public services, or private enterprises but does not specifically restrict itself to any single sector. The focus on algorithmic discrimination and consumer rights suggests relevance to various sectors, especially those directly interfacing with consumers (like healthcare and public services) and risk management in business environments. However, since the language is broad and does not focus exclusively on any one sector, the scores reflect general applicability rather than direct regulation within specific sectors.
Keywords (occurrence): artificial intelligence (138) machine learning (2) automated (1) algorithm (1) autonomous vehicle (1)
Description: Blockchain technology; regulation; computational power
Summary: House Bill 2342 prohibits local governments in Arizona from restricting individuals' use of computational power or running blockchain nodes in residences, asserting state-level authority over this regulation.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Teresa Martinez
(sole sponsor)
Last action: House Committee of the Whole action: Do Pass Amended (Feb. 27, 2025)
The text primarily emphasizes the regulation of computational power related to blockchain technology. While artificial intelligence (AI) is mentioned as a potential use of computational power, the focus is largely on the prohibition of local regulations concerning blockchain technologies and their computational needs. Hence, it is not considered to have a significant impact on social issues regarding AI, nor does it delve into data governance, system integrity, or robustness as they pertain specifically to AI. The references to AI are incidental rather than central to the legislative intent, leading to low relevance scores for these categories.
Sector: None (see reasoning)
The text does not specifically address any of the nine sectors in detail; however, it touches on aspects that could relate to technology regulations without making specific implications for any particular sector. The mention of artificial intelligence suggests a relation to technology, but it does not clearly connect to politics, government operations, or any specific industries such as healthcare or education. Therefore, all categories are scored low for relevance.
Keywords (occurrence): artificial intelligence (2)
Description: Relating to an automated artificial intelligence review of library material purchased by public schools; providing an administrative penalty.
Summary: The bill mandates public schools to use automated AI to review library materials for sexual content before purchase, requiring parental consent for certain materials and imposing penalties for non-compliance.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Hillary Hickland
(sole sponsor)
Last action: Filed (March 11, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly discusses an automated artificial intelligence review process for evaluating library materials purchased by public schools. It details how AI will be used to assess whether materials should be flagged as sexually explicit or relevant. Therefore, this text strongly intersects with the social implications of AI technology in education and libraries, dealing with issues such as censorship, the role of AI in decision-making about educational content, and the concerns surrounding children's access to material. This lends significant relevance to the Social Impact category. The text also directly references oversight and transparency protocols related to the AI review process, implicating aspects of System Integrity, as it mandates that the AI system undergo human verification and audits. Meanwhile, Data Governance is relevant due to the emphasis on the accuracy and bias mitigation requirements in the use and management of data by the AI system. Robustness appears less relevant since the focus is primarily on the operational procedures rather than performance benchmarks or evaluations of AI technology. Based on this reasoning, scores will reflect the substantial relevance of the text to the Social Impact, Data Governance, and System Integrity categories.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The text's main application is within public education, focusing on how AI is deployed in libraries associated with public schools. Since it outlines the responsibilities of schools and the administrative oversight involved in the implementation of AI reviews, this primarily pertains to Government Agencies and Public Services. The sector of Academic and Research Institutions also has relevance, as it considers educational implications, guidelines around library materials, and the involvement of educators. However, there are only marginal mentions related to Politics and Elections and other sectors like Healthcare, Private Enterprises, or NGOs are not relevant to this particular bill. Therefore, the scores reflect the primary relevance to Government Agencies and Public Services, with a secondary yet notable connection to Academic and Research Institutions.
Keywords (occurrence): artificial intelligence (10) automated (12)
Description: Specifies that hurricane mitigation grants funded through My Safe Florida Home Program may be awarded only under certain circumstances; requires DFS to require that certain mitigation improvements be made as condition of reimbursing homeowner approved for grant; increases surpluses required for certain insurers applying for their original certificates of authority & maintaining their certificates of authority; specifies prohibitions for persons who were officers or directors of insolvent insu...
Summary: The bill establishes hurricane mitigation grants for homeowners, stipulating conditions for inspections and improvements. It also regulates insurers regarding financial stability and claim processes, aiming to enhance hurricane preparedness and property safety in Florida.
Collection: Legislation
Status date: Feb. 28, 2025
Status: Introduced
Primary sponsor: Insurance & Banking Subcommittee
(3 total sponsors)
Last action: CS Filed (April 10, 2025)
Description: An Act To Create New Section 37-13-215, Mississippi Code Of 1972, To Provide That, Beginning With The Entering Ninth-grade Class Of 2027-2028, A Public High School Student Shall, Before Graduation, Be Required To Earn One Unit Of Credit In A High-school Computer Science Course, Or One Unit Of Credit In An Industry-aligned Career And Technical Education (cte) With Embedded Computer Science Course; To Provide The State Graduation Requirements That May Be Satisfied By Either Of These Courses; To...
Summary: The MS Future Innovators Act mandates that Mississippi high school students must complete a computer science or CTE course with embedded computer science before graduation, starting with the 2027-2028 class.
Collection: Legislation
Status date: Feb. 11, 2025
Status: Engrossed
Primary sponsor: Chris Johnson
(2 total sponsors)
Last action: Referred To Education;Accountability, Efficiency, Transparency (Feb. 14, 2025)
Societal Impact (see reasoning)
The text explicitly mentions the inclusion of Artificial Intelligence (AI) in high school computer science curricula. This directly relates to the educational impact of AI and the need for young people to understand AI's implications for society, so relevance to the Social Impact category is strong. Data Governance is not applicable, as the legislation does not focus on data management or privacy in AI systems. System Integrity is likewise not captured, as the text neither addresses the security and transparency of AI systems nor discusses regulations ensuring oversight. Robustness is not relevant, since there is no focus on benchmarks, auditing, or regulatory compliance specific to AI systems. Overall, only the Social Impact category adequately applies, given the educational emphasis on emerging technologies like AI and their societal implications.
Sector:
Academic and Research Institutions (see reasoning)
The text pertains to the educational sector by mandating that high school students in Mississippi learn foundational computer science concepts, including AI. It does not address politics, government services, the judicial system, healthcare, business, international standards, or nonprofits; its focus is solely on educational curricula. Hence, Academic and Research Institutions is the most relevant sector, and the other sectors do not relate directly to the content of the legislation provided.
Keywords (occurrence): artificial intelligence (3)
Description: A BILL to be entitled an Act to amend Chapter 8 of Title 13 of the Official Code of Georgia Annotated, relating to illegal and void contracts generally, so as to prohibit certain agreements involving rental price-fixing as unenforceable contracts in general restraint of trade with respect to residential rental properties; to provide for a criminal penalty; to provide for statutory construction; to provide for a short title; to provide for an effective date and applicability; to provide for re...
Summary: The "End Rental Price-Fixing Act" prohibits price-fixing agreements among landlords regarding residential rental properties, classifying such contracts as unenforceable and imposing criminal penalties to protect market competition and residents' welfare.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Gabriel Sanchez
(5 total sponsors)
Last action: House Hopper (Feb. 27, 2025)
Societal Impact (see reasoning)
The text discusses illegal rental price-fixing and the utilization of computational processes, including machine learning and artificial intelligence, to manipulate rental prices. It primarily concerns the impacts of AI on market fairness and accountability, addressing how automated systems can exacerbate issues like price-fixing in the housing market. It therefore has significant relevance to Social Impact, as it directly addresses the implications of these AI technologies for individuals and society. Data Governance is slightly relevant due to the mention of processes that analyze and process data for price recommendations. System Integrity has minimal relevance, since the text does not focus on security or transparency measures for AI systems; the same holds for Robustness, as performance benchmarks are not discussed here. Overall, the key aspects relate to the social implications of AI in economic contexts.
Sector: None (see reasoning)
The text primarily pertains to housing and economic regulations concerning rental properties and does not explicitly address sectors such as Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, or Nonprofits and NGOs. There may be indirect implications for private enterprises through changes in market practices, but the text gives no direct indication of AI's application within any specific sector. The closest candidate is Private Enterprises, Labor, and Employment, given the coordination among landlords in setting rental prices, but even that connection is minimal. Hence, the text rates low on sector relevance across the board.
Keywords (occurrence): artificial intelligence (1) machine learning (1)
Description: Concerning public safety protection from the risks of artificial intelligence systems.
Summary: The bill enhances public safety by protecting workers in artificial intelligence from retaliation for reporting safety risks, requiring developers to establish internal reporting processes and ensuring transparency on workers' rights.
Collection: Legislation
Status date: Feb. 11, 2025
Status: Introduced
Primary sponsor: Manny Rutinel
(2 total sponsors)
Last action: Introduced In House - Assigned to Judiciary (Feb. 11, 2025)
Societal Impact (see reasoning)
The text specifically addresses public safety protections related to artificial intelligence systems, particularly workers' rights to report concerns about compliance and risks associated with these systems. This is crucial to societal dynamics and the interaction between individuals and AI technologies, indicating very strong relevance to the Social Impact category. The bill outlines organizational responsibilities but says little about data management or system integrity directly, so those categories receive lower relevance. The focus on workers' rights to disclose risks underlines the bill's alignment with societal impact rather than technical governance or performance standards.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text directly relates to the use of AI in contexts relevant to public safety, but it does not explicitly mention political activities, healthcare, or judicial implications. It does invoke the responsibilities of AI developers toward their workers, giving it relevance to Private Enterprises, Labor, and Employment, and its public-safety framework gives it some relevance to Government Agencies and Public Services through its implications for accountability and oversight in AI applications. The remaining sectors show no significant application in this context.
Keywords (occurrence): artificial intelligence (5) foundation model (4)
Description: As introduced Bill 25-930 would require regulated entities to establish and make publicly available, a consumer health data privacy policy governing the collection, use, sharing, and sale of consumer health data with the consumer’s consent. It would establish additional protections and consumer authorizations for the sale of personal health data. It also establishes that regulated entities can only collect health data that is necessary for the purposes disclosed to the consumers and makes vio...
Summary: The Consumer Health Information Privacy Protection Act (CHIPPA) of 2024 mandates consumer consent for the collection and sharing of health data, ensuring transparency and accountability for entities handling such information.
Collection: Legislation
Status date: July 12, 2024
Status: Introduced
Primary sponsor: Phil Mendelson
(sole sponsor)
Last action: Referred to Committee on Health (Sept. 17, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Consumer Health Information Privacy Protection Act (CHIPPA) clearly addresses the secure and responsible collection, use, and sharing of consumer health data, which naturally intersects with data governance. The legislation focuses on ensuring consent, data privacy, and transparency in the management of health data, which is crucial where AI systems collect and process personal health information. While the text contains some principles related to system integrity through its consent and transparency requirements, it does not explicitly mandate security protocols or oversight measures applicable to AI systems, leading to a lower relevance score for that category. The Robustness category is less applicable, as the text does not directly address performance benchmarks or auditing processes for AI systems. The Social Impact category is more pertinent, since the legislation seeks to protect consumers from potential harm arising from AI practices related to health data misuse and to assure the ethical handling of personal information, which can influence societal trust in digital health platforms.
Sector:
Healthcare (see reasoning)
The CHIPPA Act specifically addresses consumer health data, which is inherently tied to the healthcare sector. The legislation outlines critical protections for consumer health data, requiring organizations to establish privacy policies and ensure informed consent. Its implications are significant for healthcare institutions and related entities that utilize AI technologies for processing health data. While it touches on aspects relevant to government agencies through the regulatory framework it sets, the primary focus remains on healthcare, thus making it most pertinent to that sector. Other sectors such as politics and elections or academic institutions are not directly addressed, and while the implications of data governance can influence various sectors, the clear focus of the legislation confines its primary relevance to healthcare.
Keywords (occurrence): machine learning (1)
Description: Amends the Medical Practice Act of 1987. Defines terms. Provides that a health facility, clinic, physician's office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that the communications meet certain criteria. Provides that a communication that is generated by generative artificial intelligence and read and reviewed by a human licensed or certified health c...
Summary: The bill amends the Medical Practice Act to regulate the use of generative artificial intelligence in patient communications, requiring disclaimers and contact information if not reviewed by a human provider.
Collection: Legislation
Status date: Feb. 7, 2025
Status: Introduced
Primary sponsor: Laura Fine
(sole sponsor)
Last action: Referred to Assignments (Feb. 7, 2025)
Societal Impact
Data Governance
System Integrity
Robustness (see reasoning)
The text directly addresses the use of generative artificial intelligence (AI) in healthcare settings, focusing on regulations for health facilities and healthcare providers regarding how AI-generated communications involving patient clinical information must be managed. This has significant implications for patient interactions, accountability, and transparency in the healthcare industry, thereby connecting strongly with the Social Impact category. Since it aims to protect patients and ensure transparent communications, it is very relevant to data governance regarding accuracy and potential biases in AI-generated outputs. It also relates to System Integrity since it establishes standards for human oversight when AI is involved in patient communications. Lastly, it involves aspects of Robustness as it outlines compliance requirements and potential penalties for violations of these standards. Hence, it will receive higher scores in both Social Impact and Data Governance.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text explicitly pertains to the healthcare sector by addressing the regulations surrounding the use of generative AI in medical settings, focusing on how healthcare providers should manage AI-generated communications. Given that it establishes compliance processes and requirements for using AI responsibly in healthcare, it is extremely relevant to healthcare. Additionally, aspects related to government oversight and regulatory compliance support its relevance to Government Agencies and Public Services. However, it does not directly influence or address sectors like politics, human resources in organizations, or other areas outside healthcare, yielding lower relevance scores there.
Keywords (occurrence): artificial intelligence (6) automated (1)
Description: AN ACT relating to criminal justice; expanding certain prohibitions relating to pornography involving minors; providing penalties; and providing other matters properly relating thereto.
Summary: Assembly Bill 126 expands laws against child pornography to include manipulated and AI-generated images of minors, stipulating penalties for violations to enhance protections for minors.
Collection: Legislation
Status date: Feb. 4, 2025
Status: Introduced
Primary sponsor: Melissa Hardy
(sole sponsor)
Last action: Notice of eligibility for exemption. (Feb. 9, 2025)
Societal Impact (see reasoning)
The text explicitly addresses the use of artificial intelligence in the context of pornography involving minors. It discusses new legal definitions and penalties for manipulating images via AI, including the creation of fictional characters that appear as minors generated through AI. This connection to AI directly relates to concerns surrounding Social Impact, particularly regarding harms and implications of AI-generated content. It relates less to Data Governance, System Integrity, or Robustness as the main focus of the legislation is on the social and legal repercussions of AI usage rather than data management or performance benchmarks.
Sector:
Government Agencies and Public Services
Judicial System (see reasoning)
The legislation has a notable impact on the realm of law through its punitive measures against AI-generated pornography involving minors, thus significantly engaging with the Judicial System. It also relates to Government Agencies and Public Services as it outlines penalties enforced by the legal system, but it is primarily concentrated on prevention and penalties rather than the broader governance of AI in public services. The other sectors such as Healthcare, Private Enterprises, and International Cooperation are not relevant as they do not pertain to the main focus of the document.
Keywords (occurrence): artificial intelligence (31) automated (1)
Description: Revise laws related to use of name, voice, and likeness of individuals and penalties for unauthorized use
Summary: This bill grants individuals property rights to their name, voice, and likeness, addressing unauthorized commercial use and establishing penalties for violations while allowing transfers after death.
Collection: Legislation
Status date: Feb. 28, 2025
Status: Engrossed
Primary sponsor: Jill Cohenour
(sole sponsor)
Last action: (S) Scheduled for 3rd Reading (April 12, 2025)
Societal Impact
Data Governance (see reasoning)
The text addresses the unauthorized use of individuals' names, voices, and likenesses, and penalizes such activities, especially in digital environments where algorithms and artificial intelligence technologies may be employed. This includes the use of 'digital voice replicas' and conditions under which these technologies can operate. Because the legislation directly pertains to the social implications of AI technology, especially regarding identity and likeness exploitation, it falls under the category of Social Impact. The text is moderately relevant to Data Governance concerning the management of individual rights and consent related to their likeness and voice. However, it is less directly tied to System Integrity and Robustness, as it does not discuss security or performance benchmarks of the AI systems themselves.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text discusses provisions related to the use of an individual's likeness and voice, particularly in digital formats, which is relevant to several sectors. In the context of Politics and Elections, the legislation implicitly addresses concerns about digital representations during campaigns, but it does not explicitly regulate AI's use in political or campaign processes, making that relevance only slight. For Government Agencies and Public Services, it may touch on how public entities must handle such representations but lacks explicit mention. The Private Enterprises, Labor, and Employment sector is moderately relevant, as businesses can be held accountable for unauthorized use of individuals' digital representations. Overall, the most pertinent sector is Private Enterprises, Labor, and Employment, though even there the relevance is moderate.
Keywords (occurrence): artificial intelligence (1) algorithm (3)