4825 results:
Description: Cell phone carrier; spam calls
Summary: House Bill 2342 prohibits cities and counties in Arizona from regulating the use of computational power or the running of blockchain nodes in residences, establishing statewide preemption of such activities.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Teresa Martinez
(sole sponsor)
Last action: House Committee of the Whole action: Do Pass Amended (Feb. 27, 2025)
The text primarily emphasizes the regulation of computational power related to blockchain technology. While artificial intelligence (AI) is mentioned as a potential use of computational power, the focus is largely on the prohibition of local regulations concerning blockchain technologies and their computational needs. Hence, it is not considered to have a significant impact on social issues regarding AI, nor does it delve into data governance, system integrity, or robustness as they pertain specifically to AI. The references to AI are incidental rather than central to the legislative intent, leading to low relevance scores for these categories.
Sector: None (see reasoning)
The text does not specifically address any of the nine sectors in detail; it touches on technology regulation generally without clear implications for any particular sector. The mention of artificial intelligence suggests a relation to technology, but it does not clearly connect to politics, government operations, or specific industries such as healthcare or education. Therefore, all sectors are scored low for relevance.
Keywords (occurrence): artificial intelligence (2)
Description: Regulates the use of artificial intelligence in aiding decisions on rental housing and loans; requires a study on the impact of artificial intelligence and machine learning on housing discrimination and redlining.
Summary: This bill regulates the use of artificial intelligence in rental housing and lending decisions, requires annual audits to assess discrimination risks, and mandates transparency for applicants regarding automated tools.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Introduced
Primary sponsor: Linda Rosenthal
(sole sponsor)
Last action: referred to housing (Jan. 30, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text clearly discusses the regulation of artificial intelligence (AI) and machine learning (ML) in the context of housing decisions, particularly focusing on the potential for discrimination and redlining. This aligns closely with the Social Impact category as it addresses societal concerns related to fairness and bias in automated decision-making for housing. The requirement for disparate impact analysis and the notifications required for applicants also demonstrate a strong focus on accountability and consumer protection, further supporting its relevance to this category. The Data Governance category is also relevant, as it pertains to the collection and management of personal data within the automated decision tools mentioned. However, the emphasis on impact analysis and consumer rights is much stronger in the Social Impact category. The System Integrity category has some relevance since it relates to the use of automated decision tools, but there is less emphasis on security and oversight in the text compared to the other categories. Robustness is minimally relevant since the text does not establish benchmarks or specific performance criteria for AI systems, focusing instead on their application and societal impacts. Therefore, the Social Impact category is assigned a high score due to its focused consideration of the potential impacts of AI, while Data Governance also receives a moderate score due to its linkage to data management in AI systems.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text touches on several key sectors. It is most relevant to Government Agencies and Public Services due to its focus on regulations that govern the use of AI by landlords and banks, with implications for public services related to housing and lending. Additionally, it has moderate relevance to Private Enterprises, Labor, and Employment, as it outlines how automated decision-making impacts both landlords and tenants. The judicial aspects are included as well since it specifies actions that can be taken against violations of this legislation, connecting it to the Judicial System sector. However, its strongest emphasis remains on the regulatory environment concerning housing and lending practices involving AI, thus linking closely to Government Agencies and Public Services. Other sectors such as Healthcare, Academic and Research Institutions, and Nonprofits and NGOs may not be applicable based on the contents of the text. Therefore, scores reflect a strong relevance to Government Agencies and a moderate relevance to Private Enterprises and the Judicial System.
Keywords (occurrence): artificial intelligence (1) machine learning (4) automated (22) algorithm (1)
Description: Creates the Artificial Intelligence Systems Use in Health Insurance Act. Provides that the Department of Insurance's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Provides that any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive model...
Summary: The AI Use in Health Insurance Act establishes regulatory oversight by the Illinois Department of Insurance on insurers' use of AI for consumer decisions, ensuring transparency and review processes to prevent adverse consumer outcomes.
Collection: Legislation
Status date: Jan. 31, 2025
Status: Introduced
Primary sponsor: Laura Fine
(sole sponsor)
Last action: Referred to Assignments (Jan. 31, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text directly deals with the use of AI systems in health insurance, particularly concerning how these systems affect consumer outcomes. It aligns closely with the Social Impact category due to its focus on protecting consumers from adverse determinations made by insurers using AI, including accountability and review processes for decisions based solely on AI systems. For Data Governance, the text addresses the management and regulation of AI systems used by insurers, highlighting the requirements for compliance and oversight to ensure fair treatment of consumers. System Integrity is also relevant as it discusses the regulatory oversight mechanisms put in place to monitor AI usage and ensure transparency in decision-making processes. Robustness is less applicable since the text does not focus extensively on performance benchmarks or certifications for AI systems, hence receiving a lower score. Overall, the text is most relevant to Social Impact and Data Governance.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
This legislative proposal explicitly involves the regulation of AI in the health insurance sector. It addresses the implications of AI on consumer decision-making, detailing how decisions made with AI must be properly regulated and overseen. Given the focus on ensuring fair practices and protecting consumers in the context of insurance, the legislation is highly relevant to the Healthcare sector. While it touches upon broader implications that might have relevance to Government Agencies, the legislation focuses primarily on interactions within the health insurance landscape without directly addressing aspects like employment or general public service delivery. Hence, relevance is strongest for Healthcare and moderate for Government Agencies.
Keywords (occurrence): artificial intelligence (3) machine learning (4) algorithm (1)
Description: An act relating to the regulation of social media platforms and artificial intelligence systems
Summary: The bill mandates annual registration for social media and AI providers in Vermont, enforcing privacy and safety standards to protect consumers, particularly minors, and grants regulatory authority to the Attorney General.
Collection: Legislation
Status date: Feb. 26, 2025
Status: Introduced
Primary sponsor: Angela Arsenault
(2 total sponsors)
Last action: Read first time and referred to the Committee on Commerce and Economic Development (Feb. 26, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly addresses AI within the realm of consumer protection and regulation, particularly accountability and privacy standards for artificial intelligence systems, which aligns closely with the Social Impact category. References to algorithmic discrimination and regulations on data handling also establish relevance to Data Governance. However, there is less emphasis on the system security and auditing processes central to System Integrity and Robustness, leading to lower scores in those areas.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
The text is highly relevant to Politics and Elections, given its focus on the regulation of platforms that influence public discourse, including social media and AI systems, which can affect political campaigning and public opinion. It also applies to Government Agencies and Public Services due to the regulatory role of the Attorney General alongside provisions for protecting consumers. The Judicial System is hinted at through references to consumer protections but is not the primary focus of the bill, resulting in a moderate score. There are no direct mentions of the remaining sectors, affirming their lower relevance scores.
Keywords (occurrence): artificial intelligence (12)
Description: Commission on Nightlife and Culture Walter “Heru” Peacock Confirmation Resolution of 2025
Summary: The bill confirms Walter “Heru” Peacock's appointment to the District's Commission on Nightlife and Culture to serve as a musician/producer representative, filling a vacancy until March 2028.
Collection: Legislation
Status date: Feb. 14, 2025
Status: Introduced
Primary sponsor: Phil Mendelson
(sole sponsor)
Last action: Referred to Committee on Business and Economic Development (March 4, 2025)
The text primarily concerns the nomination and confirmation of Walter 'Heru' Peacock to the Commission on Nightlife and Culture. The only AI-related portion is a brief mention of 'Machine Learning/Artificial Intelligence Data Labeling and Project Design' among the nominee's qualifications. While the nominee has some technical expertise in AI and machine learning, the resolution itself does not address the social impact of AI, data governance surrounding AI systems, system integrity, or the robustness of AI practices. The mention of AI does not lead to any discussion or implications related to the defined categories, making them less relevant overall. As such, the scores reflect minimal direct relevance to the AI-themed categories.
Sector: None (see reasoning)
The text concerns the appointment of a musician and producer to the Commission on Nightlife and Culture, with no direct references to political campaigns (Politics and Elections), actions undertaken by government agencies (Government Agencies and Public Services), usage by nonprofits (Nonprofits and NGOs), or the judiciary (Judicial System). While the nominee is involved in community and cultural activities, the resolution does not speak primarily to the sectors listed. Although there may be mild connections to broader societal engagement, it does not strongly align with any of the identified sectors, resulting in very low scores across all sectors.
Keywords (occurrence): artificial intelligence (2) machine learning (2)
Description: Schools; subject matter standards; computer science; updating references; permitting alternate diploma for certain students; repealer; effective date; emergency.
Summary: House Bill 1521 amends Oklahoma education standards to include updated computer science requirements, personal financial literacy, and allows alternate diplomas for certain students, enhancing educational opportunities while ensuring compliance with standard curriculum.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Dick Lowe
(2 total sponsors)
Last action: Policy recommendation to the Education Oversight committee; Do Pass, amended by committee substitute Common Education (Feb. 17, 2025)
The text predominantly discusses educational standards related to computer science but does not delve into the social impacts of AI on communities or society. Although it mentions computer science and technology, it does not address ethical concerns, accountability, or the broader societal implications that may arise from AI implementation in education. It also does not touch on data governance, such as data privacy or management within AI systems, nor does it address system integrity or robustness regarding AI functionalities. Therefore, it does not squarely fit into any of the established categories, leading to low relevance scores across the board.
Sector: None (see reasoning)
The text mentions computer science in the context of educational standards but does not specifically address the regulation or application of AI in any of the highlighted sectors. There is no mention of AI’s role in politics, public services, healthcare, or any other sector listed. The connection to academic and research institutions is nominal since it only highlights technology in schools rather than addressing deeper aspects of AI in academia. Therefore, all sectors receive low relevance scores.
Keywords (occurrence): artificial intelligence (1)
Description: A BILL to be entitled an Act to amend Title 40 of the Official Code of Georgia Annotated, relating to motor vehicles and traffic, so as to provide for the operation of miniature on-road vehicles on certain highways; to provide for standards for registration of such vehicles; to provide for issuance of license plates for miniature on-road vehicles; to provide for an annual licensing fee for such vehicles; to provide for issuance of certificates of title by the Department of Revenue for such ve...
Summary: The bill permits the operation of miniature on-road vehicles on specific highways in Georgia, establishes registration and licensing requirements, and allows local authorities to regulate or prohibit their use.
Collection: Legislation
Status date: Feb. 6, 2025
Status: Introduced
Primary sponsor: Rob Clifton
(6 total sponsors)
Last action: House Committee Favorably Reported By Substitute (Feb. 21, 2025)
This text primarily addresses the operation and regulation of miniature on-road vehicles, without explicit mention of artificial intelligence or related technologies. The references to 'automated driving systems' in the definition of 'minimal risk condition' suggest some consideration of automated technologies, but they are not the focus of the bill. While there is potential relevance in terms of safety and operational standards for autonomous vehicles, it lacks a comprehensive approach to the categories of social impact, data governance, system integrity, or robustness in relation to AI specifically. Thus, the categories receive low scores.
Sector: None (see reasoning)
The text does not specifically address any sector related to AI. While the bill briefly touches on 'automated driving systems,' it does not provide enough substance to warrant a connection to any of the defined sectors. The main focus is on regulatory measures for miniature on-road vehicles within the realm of motor vehicles, not on AI applications in any sector. Therefore, all scores are low as none of the sectors are clearly relevant.
Keywords (occurrence): automated (1) autonomous vehicle (1)
Description: Requires publishers of books created wholly or partially with the use of generative artificial intelligence to disclose such use of generative artificial intelligence before the completion of such sale; applies to all printed and digital books consisting of text, pictures, audio, puzzles, games or any combination thereof.
Summary: This bill mandates that publishers disclose the use of generative artificial intelligence in all books sold in New York, ensuring transparency for consumers regarding AI-generated content.
Collection: Legislation
Status date: Jan. 10, 2025
Status: Introduced
Primary sponsor: Jonathan Rivera
(sole sponsor)
Last action: referred to consumer affairs and protection (Jan. 10, 2025)
Societal Impact (see reasoning)
The text explicitly pertains to generative artificial intelligence and requires disclosures related to its use in published works. This affects society by promoting transparency in AI usage, addresses consumer protection regarding information about AI-created content, and supports fairness by helping consumers understand the nature of the products they are buying. Given these considerations, the text is highly relevant to the Social Impact category. For Data Governance, the text does not focus heavily on the collection, management, or secure handling of data; rather, it may suggest the need for accuracy in disclosures about the use of AI. System Integrity appears only tangentially relevant, as the text does not directly address the security, oversight, or transparency of AI systems themselves, but rather their usage in publishing. Robustness is not relevant here, as there are no benchmarks, compliance issues, or auditing processes mentioned. Overall, Social Impact is strongly relevant due to the bill's implications for consumer awareness and accountability in AI usage, while the relevance of the other categories is significantly less.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation directly affects the publishing industry and how it utilizes generative AI technology. The disclosure requirement engages consumers and the general public, indicating its significance. Political implications may exist but are not closely tied to the bill. The Government Agencies and Public Services sector does not feature directly because this is more about commercial publishing than government service delivery. The Healthcare sector and Judicial System are also minimally relevant, as they do not relate to the provisions about book publishing. Academic and Research Institutions might tangentially relate through potential research into AI applications; however, the more immediate relevance pertains to the publishing sector. Thus, this text fits best into the Private Enterprises, Labor, and Employment sector, given its direct implications for businesses in the book publishing field.
Keywords (occurrence): artificial intelligence (2) machine learning (2)
Description: Artificial Intelligence Transparency Act established. Requires developers of generative artificial intelligence systems made available in the Commonwealth to ensure that any generative artificial intelligence system that produces audio, images, text, or video content includes on such AI-generated content a clear and conspicuous disclosure that meets certain requirements specified in the bill. The bill also requires developers of generative artificial intelligence systems to implement reasonab...
Summary: The Artificial Intelligence Transparency Act establishes regulations aimed at promoting transparency and accountability in AI systems' use, ensuring accurate consumer information and preventing fraudulent practices related to AI technologies in Virginia.
Collection: Legislation
Status date: Jan. 11, 2025
Status: Introduced
Primary sponsor: Rozia Henson
(sole sponsor)
Last action: Committee Referral Pending (Jan. 11, 2025)
Societal Impact
Data Governance (see reasoning)
The AI-related portions of the text explicitly discuss the requirement for developers of generative AI systems to ensure transparency and accountability by clearly disclosing when content is generated by AI. This addresses issues of consumer protection and the potential for misinformation, which directly relates to societal impacts of AI. The legislation also implies accountability for outputs of AI systems, which further aligns with challenges pertaining to social trust and the influence of AI-generated content on public discourse. Therefore, the Social Impact category is rated as very relevant. The text also emphasizes the importance of accurate disclosures and could relate to data governance indirectly, but mainly focuses on societal implications rather than on the secure management of data itself. System Integrity and Robustness are less emphasized, as the text does not delve into security measures, human oversight, or performance benchmarks; therefore both categories score lower.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses the implications of generative AI for consumer protection and misinformation, which could be relevant across several sectors. Politics and Elections is not applicable since there is no mention of AI use in elections. Government Agencies and Public Services and Healthcare may be affected indirectly but are not directly addressed. The impact on Private Enterprises could also be noted given the implications for commercial practices. Academic and Research Institutions might be considered because of the transparency required of AI, but the text focuses more on commercial than educational aspects. These indirect references do not strongly establish relevance in the context of the legislation.
Keywords (occurrence): artificial intelligence (25) foundation model (1) chatbot (2)
Description: An Act To Establish The Artificial Intelligence Regulation (AIR) Task Force; To Provide For The Appointment Of Members Of The Task Force, Including Ex-officio Members; To Specify The Task Force's Purpose And Duties As A Regulatory Sandbox; To Direct The Task Force To Study And Evaluate Artificial Intelligence Applications, Risks And Policy Recommendations; To Require That The Task Force Will Report Its Findings And Any Recommendations To The Legislature Annually By December 1; To Authorize Fu...
Summary: The bill establishes the Artificial Intelligence Regulation (AIR) Task Force in Mississippi to evaluate AI applications, risks, and policies, promote responsible use, and report findings annually to the Legislature.
Collection: Legislation
Status date: Jan. 30, 2025
Status: Engrossed
Primary sponsor: Jill Ford
(2 total sponsors)
Last action: Title Suff Do Pass As Amended (March 3, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This text clearly focuses on the establishment of a task force charged with evaluating artificial intelligence applications and recommending policies regarding their use. Specifically, Section 1(a) emphasizes the need to responsibly oversee the use of AI tools and systems, which is directly relevant to the Social Impact category. The task force aims to ensure that AI technology aligns with state policies and goals while maintaining public trust, underscoring the importance of societal impacts. Additionally, the task force's role in studying AI risks and proposing revisions to the state code indicates strong relevance to the System Integrity category, as it involves regulatory oversight and ensuring ethical standards. Moreover, Section 6 indicates the need for accountability and transparency in AI technology usage, which ties into both System Integrity and Robustness, as it seeks to implement best practices and compliance frameworks for AI technologies. Thus, Social Impact, System Integrity, and Robustness are all highly relevant categories in this context.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Nonprofits and NGOs (see reasoning)
The task force comprises members from various influential sectors, such as government, healthcare, education, and private enterprise, who will address the implications of AI technology in these areas, making the Government Agencies and Public Services sector highly relevant. With healthcare professionals specifically named as members of the task force, there is notable relevance to the Healthcare sector as well. Additionally, the discussion of AI's societal and ethical implications may touch upon the Nonprofits and NGOs sector, especially concerning responsible AI deployment affecting communities. However, the other sectors do not see direct relevance in the text, given the legislative focus on AI regulation and governance broadly.
Keywords (occurrence): artificial intelligence (14) automated (1)
Description: Requires bureau within DOC to study economic impact of automation, artificial intelligence & robotics on employment in state; specifies contents of study; requires bureau to consult with specified entities in conducting study; requires bureau to submit report to Governor & Legislature by specified date; requires bureau to conduct this study at specified intervals of time.
Summary: The bill mandates a statewide study on the economic impact of automation and AI on employment in Florida, focusing on job displacement, industry effects, and workforce policies, with reports every three years.
Collection: Legislation
Status date: Feb. 20, 2025
Status: Introduced
Primary sponsor: Leonard Spencer
(sole sponsor)
Last action: 1st Reading (Original Filed Version) (March 4, 2025)
Societal Impact (see reasoning)
The text focuses on studying the economic impact of artificial intelligence (AI) and automation on employment. It explicitly mentions how AI may lead to job displacement and requires an analysis of its impact on different demographics, wages, and industry sectors. There are clear connections to social implications through discussions of job loss, training, and policy recommendations aimed at workforce resilience. Thus, it aligns closely with 'Social Impact.' The 'Data Governance' category is not applicable as the text does not concern data collection practices nor accuracy issues in datasets. 'System Integrity' and 'Robustness' are similarly less relevant as there are no mechanisms for safety, transparency, or performance benchmarks outlined in the text.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is highly relevant to the 'Private Enterprises, Labor, and Employment' sector because it addresses the impact of AI on jobs, employment practices, and the economy. It mentions specific industries and demographics affected by automation, making it pertinent. There is also relevance to 'Government Agencies and Public Services' because the study is conducted by a bureau within the Department of Commerce, linking it to government operations. Although not a focus of the bill, the requirement to consult academic institutions gives the study slight relevance to 'Academic and Research Institutions.' The other sectors, such as Politics and Elections, the Judicial System, Healthcare, International Cooperation, Nonprofits, and Hybrid sectors, have minimal or no connection to the content of this bill.
Keywords (occurrence): artificial intelligence (2) automated (1)
Description: Establishing the Privacy Protection and Enforcement Unit within the Division of Consumer Protection in the Office of the Attorney General; establishing a data broker registry; requiring certain data brokers to register each year with the Comptroller; imposing a tax on the gross income of certain data brokers for taxable years beginning after December 31, 2026; requiring the revenue from the data broker tax be used by Maryland Public Television to provide digital literacy support to students i...
Summary: The bill establishes a registry for data brokers, imposes a gross income tax on them, and creates a Privacy Protection and Enforcement Unit to safeguard individual privacy rights.
Collection: Legislation
Status date: Feb. 5, 2025
Status: Introduced
Primary sponsor: Jared Solomon
(sole sponsor)
Last action: Hearing 2/25 at 1:00 p.m. (Economic Matters) (Feb. 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily discusses legislation regarding data brokers, which impacts how personal data is managed and protected. The Privacy Protection and Enforcement Unit's establishment emphasizes the importance of addressing AI-related issues, particularly in the context of cybersecurity and digital privacy, indicating a significant intersection with the relevant categories of Social Impact and Data Governance. The text discusses mechanisms to protect individuals from unfair practices, which is central to Social Impact, while also focusing on data regulation relevant to Data Governance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation touches upon the use of AI in data management practices, particularly regarding the security and privacy of personal information, impacting several sectors. The Privacy Protection and Enforcement Unit is likely involved with Government Agencies and Public Services due to its governmental nature, and it emphasizes consumer rights which may affect Private Enterprises regarding their data practices. However, the act does not specifically target judicial processes, healthcare applications, or academic institutions, limiting its relevance to those sectors.
Keywords (occurrence): artificial intelligence (1) automated (1)
Description: Addressing technology used by employers in the workplace.
Summary: The bill regulates employer use of technology for electronic monitoring and automated decision-making in the workplace, ensuring employee privacy, consent, and data protection. It mandates transparency and imposes penalties for violations.
Collection: Legislation
Status date: Jan. 28, 2025
Status: Introduced
Primary sponsor: Shelley Kloba
(14 total sponsors)
Last action: Referred to Appropriations. (Feb. 21, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text addresses the use of AI and algorithmic decision-making in the workplace, specifically automated decision systems and the implications of electronic monitoring, highlighting its relevance to social impact, data governance, system integrity, and robustness. The act establishes requirements for notifying employees about monitoring practices, addresses potential biases in algorithms, and mandates assessments of automated decision systems, all of which connect directly to these categories. The emphasis on minimizing harm and ensuring fairness resonates with social impact, while the regulations on data usage and requirements for data integrity align closely with data governance. The need for oversight and assessments aligns with system integrity, and the relevance of benchmarking automated systems connects with robustness.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation has significant implications for several sectors. In the realm of Private Enterprises, Labor, and Employment, it specifically outlines the use of technology by employers and the rights of employees concerning automated decision-making processes. It also affects Government Agencies and Public Services, given that government agencies may be involved in monitoring and ensuring compliance with these regulations. While it could have some relevance to the Judicial System due to potential legal proceedings about privacy or employment rights, it is not as directly applicable there. It does not pertain in any significant way to other sectors such as Healthcare, Politics and Elections, or Academic and Research Institutions. Given these considerations, the highest relevance is in Private Enterprises, Labor, and Employment.
Keywords (occurrence): machine learning (2) automated (22) algorithm (2)
Description: Relative to disclosure requirements for synthetic media in political advertising. Election Laws.
Summary: The bill mandates clear disclosure for synthetic media used in political advertising, requiring notifications that content is AI-generated, to enhance transparency and combat misinformation.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Bradley Jones
(5 total sponsors)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance (see reasoning)
The text has a strong emphasis on the disclosure requirements for synthetic media in political advertising, directly relating to the societal implications of AI technologies. The use of terms like 'artificial intelligence' and 'generative artificial intelligence' points to a significant concern around the impact of these technologies on public trust and political discourse, which is at the core of the social impact category. The legislation also ensures accountability from creators of synthetic media, which is crucial for social responsibility in the use of AI. Therefore, it scores high in Social Impact. Data governance is relevant because it involves managing the accuracy and transparency of AI-generated content, but it is not the central focus of the text. System integrity and robustness do not emerge as strong themes within the legislation, making scores lower for those categories. Overall, the emphasis is primarily on disclosure and accountability regarding societal trust and misinformation, which fits well with the social impact category.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
This text specifically addresses the regulation of AI in the context of political advertising, indicating a strong association with the Politics and Elections sector due to its focus on disclosure requirements for synthetic media used to influence voter decisions. While there are elements that could touch on Government Agencies and Public Services in terms of how it may affect political operations, the primary intent is clearly aligned with election laws and political advertising. Therefore, it scores highest in Politics and Elections, with a moderate score for Government Agencies since the legislation may indirectly affect them. The other sectors do not have direct relevance given the focus on political advertising and synthetic media. There is little to no mention of healthcare, judicial implications, business regulation, or academic institutions in regard to this specific legislation, leading to low scores for those sectors.
Keywords (occurrence): artificial intelligence (4) synthetic media (5)
Description: For legislation to establish a commission to investigate AI in education. Education.
Summary: The bill establishes a commission in Massachusetts to investigate and propose guidelines for ethical AI use in education, ensuring effective, safe, and equitable implementation while addressing data protection and potential risks.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Jacob Oliveira
(sole sponsor)
Last action: House concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text focuses extensively on the use of AI in education and the implications of automated decision systems, particularly in how they affect students and teaching practices. The investigation aims to ensure ethical use, protect student data, and assess the societal impact of AI integration in educational settings, making it highly relevant to all categories. Social impact is particularly strong since it addresses ethical guidelines and disparities in AI use in classrooms. Data governance is relevant due to the focus on student data protection and compliance with regulations. System integrity is considered due to the emphasis on transparency, accountability, and auditability of AI systems. Robustness is included as the legislation discusses ensuring compliance with new benchmarks and evaluation systems for AI in education, which speaks to the need for continuous oversight and improvement of AI applications in this space.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
This legislation is directly related to the education sector, as it pertains to the use of AI in educational settings and in decision-making processes within school systems. It addresses both theoretical and practical aspects of AI and its societal implications, placing it squarely within the Academic and Research Institutions sector. The involvement of educators and researchers underscores its connection to academic discourse on AI deployment. Furthermore, the establishment of specific recommendations for ethical AI use ties it directly to educational policy.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (15)
Description: For legislation to prohibit algorithmic rent setting. Housing.
Summary: This bill prohibits landlords in Massachusetts from using algorithmic devices to set or adjust rental prices for residential units, aimed at preventing unfair rental practices.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Cindy Friedman
(2 total sponsors)
Last action: House concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly deals with algorithmic rent setting, which falls under the purview of Social Impact due to its relevance in addressing potential discrimination and fairness in housing, as it prohibits the use of AI in setting rental prices. It also touches on issues of accountability for businesses using AI to manage rental pricing. The Data Governance category is relevant as well, as the legislation might imply standards for handling nonpublic competitor data and ensuring that these datasets used in AI do not perpetuate biases. System Integrity is moderately relevant because it concerns the transparency of algorithmic processes in housing markets and the control over such AI application. Robustness is less relevant as it primarily refers to performance benchmarks rather than the context of this legislation, which is focused on prohibiting algorithmic decision-making in a specific industry. Overall, the strongest relevance is in Social Impact and Data Governance.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily addresses algorithmic decision-making in the real estate and housing sector, therefore it relates to Private Enterprises, Labor, and Employment since it impacts the business practices of landlords. Given that the legislation regulates the use of AI within a market system, it is moderately relevant to Government Agencies and Public Services as it may influence how housing services are managed and regulated at state levels. However, it doesn’t explicitly target government operations beyond that scope, leading to a lower score for that sector. Healthcare, Politics and Elections, Judicial System, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified do not apply in this context, so they receive a score of 1.
Keywords (occurrence): algorithm (1)
Description: Relative to artificial intelligence health communications and informed patient consent. Financial Services.
Summary: The bill mandates transparency in AI-generated health communications, requiring patient consent and disclosures about AI use in healthcare and insurance claims, aiming to ensure informed patient rights and reduce bias.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Bradley Jones
(4 total sponsors)
Last action: Senate concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text heavily focuses on the use of artificial intelligence in healthcare communications and patient consent, directly influencing how AI may impact society (Social Impact) and necessitating careful data management practices (Data Governance). Moreover, it emphasizes accountability and transparency in the usage of AI algorithms within insurance claims processes, which ties into the integrity of AI systems (System Integrity). However, it does not specifically address performance benchmarks for AI (Robustness). Therefore, the relevance is strong in Social Impact, Data Governance, and System Integrity, while less relevant in Robustness.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The legislation explicitly relates to the use of AI in the healthcare sector, focusing on informed consent and the disclosure of AI tools in claims processing, making it extremely relevant to Healthcare. There is also a significant implication for Government Agencies and Public Services due to its regulatory nature in overseeing AI use in the healthcare system. However, it does not pertain to other sectors such as Politics and Elections, the Judicial System, Private Enterprises, Labor, and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, or Hybrid, Emerging, and Unclassified, resulting in lower relevance scores for those sectors.
Keywords (occurrence): artificial intelligence (3) automated (1)
Description: Adopt the Artificial Intelligence Consumer Protection Act
Summary: The Artificial Intelligence Consumer Protection Act aims to regulate high-risk AI systems in Nebraska, ensuring they do not contribute to algorithmic discrimination and mandating transparency and risk assessments from developers and deployers.
Collection: Legislation
Status date: Jan. 22, 2025
Status: Introduced
Primary sponsor: Eliot Bostar
(sole sponsor)
Last action: Notice of hearing for February 06, 2025 (Jan. 28, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text primarily concerns the establishment of the Artificial Intelligence Consumer Protection Act, which explicitly focuses on issues relating to algorithmic discrimination and consumer protections associated with AI systems. This indicates a strong relevance to social issues arising from AI, such as discrimination, consumer safety, and fairness, making 'Social Impact' highly relevant. The text also encompasses aspects of data management related to AI, such as the responsibilities of developers regarding data bias and transparency, indicating moderate to high relevance for 'Data Governance.' Meanwhile, the strong focus on the performance and compliance of high-risk AI systems lends high relevance to 'System Integrity.' Lastly, while 'Robustness' pertains to performance benchmarks and auditing, those aspects are not as explicitly addressed in the text compared to the other categories, resulting in lower relevance here.
Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)
This legislation relates to multiple sectors directly affected by AI, particularly those concerning consumer rights and protections in public services, healthcare, and employment. The regulation of AI in these contexts indicates relevance to Healthcare, given its mention of AI systems affecting health services, and to Government Agencies, since it discusses the responsibilities developers have toward consumers, a public concern. However, while the act addresses algorithmic impacts, it does not distinctly classify under 'Politics and Elections,' 'Judicial System,' 'Private Enterprises,' 'Academic Institutions,' 'International Cooperation,' 'Nonprofits,' or 'Hybrid sectors,' as these are not explicitly discussed, resulting in lower relevance for those sectors. The primary focus remains consumer protection across multiple operational sectors.
Keywords (occurrence): artificial intelligence (130)
Description: A BILL for an Act to create and enact a new section to chapter 15-11 and a new chapter to title 54 of the North Dakota Century Code, relating to the state information technology research center, advanced technology review committee, compute credits grant program, and advanced technology grant fund; to provide for a transfer; and to provide an appropriation.
Summary: The bill establishes a state information technology research center and an advanced technology review committee in North Dakota, creating grant programs to support research and development in advanced technologies. It allocates funds and promotes collaboration across various sectors to enhance data science and technology within the state.
Collection: Legislation
Status date: Jan. 13, 2025
Status: Introduced
Primary sponsor: Josh Christy
(10 total sponsors)
Last action: Rereferred to Appropriations (Feb. 3, 2025)
Societal Impact
Data Robustness (see reasoning)
The text relates to the establishment of a state information technology research center focused on various advanced technologies, including AI and machine learning. The creation of a compute credits grant program also highlights the focus on funding initiatives that support advanced technology solutions, which explicitly include AI applications. The legislation does not directly tackle issues like bias, accountability, or other societal impacts (which would fit under Social Impact), nor does it focus on data governance or the security of AI systems. However, it deals broadly with advancing the capabilities and oversight of new technologies, linking to economic development aspects of AI integration and innovation within state services, and thus has some relevance across various categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
This bill predominantly addresses the role of state institutions in enhancing research, development, and application of advanced technologies, particularly within the context of the state's information technology operations. It does not delve directly into political campaign uses of AI or specific legal implications for the judicial system. However, it engages government agencies and public services through its references to state research centers, making it relevant to that sector. There is also some private-sector engagement through grant provisions and considerations for startups, giving it slight relevance in that area as well.
Keywords (occurrence): artificial intelligence (1) machine learning (3)
Description: Requires employers and employment agencies to notify candidates for employment if machine learning technology is used to make hiring decisions prior to the use of such technology.
Summary: This bill mandates employers and employment agencies in New York to inform job candidates when machine learning tools are used in hiring, detailing the tools and data utilized, ensuring transparency and candidate rights.
Collection: Legislation
Status date: Jan. 14, 2025
Status: Introduced
Primary sponsor: Linda Rosenthal
(6 total sponsors)
Last action: referred to labor (Jan. 14, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text focuses on regulations surrounding the use of automated decision-making in employment, particularly through machine learning technologies which may lead to concerns about fairness, transparency, and accountability in hiring processes. This is highly relevant to the Social Impact category, as it addresses potential biases and discrimination faced by candidates, as well as the transparency required from employers. In terms of Data Governance, the act emphasizes the proper collection and use of data related to candidates, further bolstering its relevance. System Integrity is somewhat relevant due to the implications of oversight and accountability for AI decision-making tools, but it is less direct than the other two categories. Robustness is not particularly relevant, as this text does not address performance benchmarks for AI systems, instead focusing on the procedural aspects of their use in hiring. Overall, the strong presence of AI-related language and the direct impacts of AI on individuals in hiring practices make Social Impact and Data Governance the most relevant categories.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
This legislation specifically addresses the use of AI in employment and hiring processes, indicating clear relevance to the Private Enterprises, Labor, and Employment sector. It ensures that employers notify candidates of AI use, thereby promoting ethical standards in hiring practices. The implications could extend to Government Agencies and Public Services, as they may also employ similar technologies or regulations, but the primary focus is on private-sector employment practices. Other sectors such as Healthcare or the Judicial System are not applicable here, as the text does not relate to those areas directly.
Keywords (occurrence): artificial intelligence (1) machine learning (1) automated (9)