4162 results:
Description: Amends the Freedom of Information Act. Provides that, for a public body that is a HIPAA-covered entity, "private information" includes electronic medical records and all information, including demographic information, contained within or extracted from an electronic medical records system operated or maintained by the public body in compliance with State and federal medical privacy laws and regulations, including, but not limited to, the Health Insurance Portability and Accountability Act and...
Collection: Legislation
Status date: Aug. 11, 2023
Status: Passed
Primary sponsor: Sara Feigenholtz
(4 total sponsors)
Last action: Public Act 103-0554 (Aug. 11, 2023)
Data Governance (see reasoning)
The text primarily concerns amendments to the Freedom of Information Act (FOIA), clarifying the definition of private information, especially for electronic medical records and compliance with medical privacy laws such as HIPAA. The legislation therefore pertains directly to data privacy and access, but it makes only limited explicit reference to AI technologies. Although certain phrases suggest automation of data processing, the text does not address the complexity of AI's role, its implications for social structures or data practices, or technological mechanisms such as algorithms or automated decision-making. Thus, it touches on AI but does not engage directly with its social implications, its governance, or a robustness framework.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text focuses on the legislative framework surrounding public records and privacy laws, primarily in relation to the healthcare sector and the management of medical data. While it addresses potential database automation and privacy impacts, it does not explicitly delineate how AI could be used or governed in these contexts. The relevant sectors are therefore predominantly healthcare and governmental oversight of public information, with no broader implications for AI in politics or other sectors.
Keywords (occurrence): automated (2)
Description: Prohibits the manufacture, modification, sale, transfer, equipping, use, or operation of a robotic device or an uncrewed aircraft equipped or mounted with a weapon within the state.
Collection: Legislation
Status date: May 21, 2024
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to codes (May 21, 2024)
System Integrity (see reasoning)
The text primarily discusses prohibitions on robotic devices and uncrewed aircraft equipped or mounted with weapons. It explicitly mentions the use of artificial intelligence in the operation of robotic devices (section 1, part 6). The most relevant category here is 'System Integrity', as it is crucial that AI systems (such as those involved in robotics and weapon automation) operate safely and securely. The text indirectly hints at social impact through the potential ethical concerns of armed robotic devices, but it does not engage directly with consumer protections or public trust issues related to AI. Therefore, while there are connections to both the Social Impact and System Integrity categories, the strongest link is to System Integrity. The relevance to Data Governance and Robustness is minimal, since the text does not engage with data management issues or performance benchmarks for AI systems. Hence, the score for System Integrity is higher than the others.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The text pertains to legislation regarding robotic devices and uncrewed aircraft, focusing specifically on their weaponization. Therefore, the sectors most relevant would include Government Agencies and Public Services since the act discusses law enforcement and public safety issues related to these devices. It may also have a tangential connection to the Judicial System due to legal accountability outlined in the act for violations. There are no notable connections to the other sectors as the text is primarily focused on legislation aimed at controlling technology rather than its application in various sectors like Healthcare or Private Enterprises. Therefore, Government Agencies and Public Services will receive a high score while Judicial System will receive a moderate score. Other sectors will score low due to lack of relevance.
Keywords (occurrence): artificial intelligence (1)
Description: Establishing the Technology Advisory Commission to study and make recommendations on technology and science developments and use in the State; and requiring the Commission to submit a report on its activities and recommendations to the Governor and the General Assembly by December 31 each year.
Collection: Legislation
Status date: March 18, 2024
Status: Engrossed
Primary sponsor: Terri Hill
(25 total sponsors)
Last action: Referred Education, Energy, and the Environment (March 18, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text establishes the Technology Advisory Commission, which is to study and make recommendations on technology and science in the state. It highlights concepts such as 'Algorithmic Decision Systems' and 'Responsible Artificial Intelligence', emphasizing ethical, transparent, and accountable design, development, and deployment of AI technologies. Given this focus, the legislation is clearly relevant to the potential social impacts of AI, responsible governance of data management, the integrity of the systems put in place, and the robustness of AI technologies against established standards. Hence, Social Impact is very relevant due to the emphasis on the ethical use of AI and on addressing physical harm; Data Governance is moderately relevant due to aspects of data processing and management; System Integrity is very relevant because of the need for oversight and security; and Robustness is moderately relevant as it deals with standards and benchmarks for AI performance.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The proposed commission has broad implications across various sectors. The mention of interdisciplinary members from educational institutions and government agencies suggests relevance to Government Agencies and Public Services, as it will advise on responsible AI use in public technology applications. The focus on Algorithmic Decision Systems indicates a consideration for Judicial System challenges where AI may be influential in legal decision-making processes. Given that AI technologies are increasingly influencing various sectors, it also touches upon aspects of Private Enterprises, Labor, and Employment. However, it does not strongly align itself with other sectors like Healthcare or International Cooperation. Thus, Government Agencies and Public Services scores high; Judicial System moderately due to its implications on legal frameworks; Private Enterprises scores moderately but with less emphasis; and the remaining sectors receive low scores.
Keywords (occurrence): artificial intelligence (5) machine learning (5) automated (1) algorithm (1)
Description: Creates the Digital Forgeries Act. Provides that an individual depicted in a digital forgery has a cause of action against any person who, without the consent of the depicted individual, knowingly distributes a digital forgery, creates a digital forgery with intent to distribute, or solicits the creation of a digital forgery with the intent to distribute: (i) in order to harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to an individual falsely depicted; (...
Collection: Legislation
Status date: Jan. 17, 2024
Status: Introduced
Primary sponsor: Mary Edly-Allen
(sole sponsor)
Last action: Referred to Assignments (Jan. 17, 2024)
Societal Impact (see reasoning)
The Digital Forgeries Act explicitly addresses AI in its definition and context concerning digital forgeries. The Act aims to protect individuals from harm caused by unauthorized digital content generated using AI, emphasizing the potential for AI systems to produce material that inaccurately represents people. It also recognizes the need to establish consent and accountability regarding AI-driven content to protect people from harm, placing it squarely within the realm of social implications that arise from AI technologies. These aspects clearly align with the Social Impact category. The legislation does not focus on the management of data or system security, hence it is not applicable to the Data Governance or System Integrity categories. While the Act references AI in terms of defining digital forgeries, it does not explicitly foster new metrics or benchmarks for AI performance and thus lacks alignment with the Robustness category.
Sector:
Judicial system (see reasoning)
The Digital Forgeries Act predominantly addresses the implications of AI-generated content for privacy and individual rights rather than a specific sector such as politics, healthcare, or business. There is, however, an implicit connection to the Judicial System, as the act establishes legal remedies for those harmed and provides a framework for civil action related to digital forgeries. It does not directly address AI applications in government functions, healthcare, or employment contexts. Relevance therefore remains low for most sectors; the only connection worth more than marginal relevance is to the Judicial System, given its enforcement of rights and remedies.
Keywords (occurrence): artificial intelligence (4)
Description: An Act amending Titles 18 (Crimes and Offenses) and 61 (Prisons and Parole) of the Pennsylvania Consolidated Statutes, in sexual offenses, further providing for the offense of unlawful dissemination of intimate image; in minors, further providing for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor; and making editorial changes to replace references to the term "child pornography" with references to the term "child sexual abuse m...
Collection: Legislation
Status date: June 10, 2024
Status: Engrossed
Primary sponsor: Tracy Pennycuick
(19 total sponsors)
Last action: Signed in Senate (Oct. 9, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
This legislation contains explicit references to 'Artificial Intelligence' and its role in the criminalization of the unlawful dissemination of intimate images and depictions, particularly in the context of sexually explicit material generated by AI. It establishes definitions that clarify the implications of AI technology in potentially harmful contexts and establishes legal consequences for its misuse. As such, the relevance to the Social Impact category is significant due to the societal issues associated with AI-generated intimate depictions. The role of accountability for developers and the implications for minors further bolster this relevance. Similarly, it pertains to Data Governance due to mentions of the need to manage the data used to generate such imagery responsibly. System Integrity also comes into play with measures for human oversight and security as it relates to accountability for AI systems being misused. Lastly, the discussion of defining standards for AI-generated content aligns with the Robustness category, although it is not as strongly detailed within the text. Therefore, I would assign high relevance to Social Impact (5) and Data Governance (4), moderate to System Integrity (3) and Robustness (3).
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
This legislation addresses the implications of AI in laws pertaining to sexual offenses, particularly how AI technology can facilitate harmful behaviors such as the unlawful dissemination of intimate images. It touches on the Politics and Elections sector through discussion of AI being used to influence electoral processes when targeting young individuals. It also indirectly affects the Government Agencies and Public Services sector, since it creates a need for government regulation of AI and its application in law enforcement. While it mentions elements relevant to the Judicial System in terms of enforcement and legal definitions, it does not explicitly address judicial applications of AI. Sectors such as Healthcare, Private Enterprises, Labor, and Education are less relevant on the face of this text, as they do not prominently feature discussions of AI applications. Overall, the most relevant sectors are Politics and Elections (3), due to potential implications for governance and regulation, and Government Agencies and Public Services (4), for the oversight needed to apply this legislation. The Judicial System sees moderate relevance (3) due to enforcement aspects, while other sectors rank lower.
Keywords (occurrence): artificial intelligence (11) automated (1)
Description: As introduced Bill 25-930 would require regulated entities to establish and make publicly available, a consumer health data privacy policy governing the collection, use, sharing, and sale of consumer health data with the consumer’s consent. It would establish additional protections and consumer authorizations for the sale of personal health data. It also establishes that regulated entities can only collect health data that is necessary for the purposes disclosed to the consumers and makes vio...
Collection: Legislation
Status date: July 12, 2024
Status: Introduced
Primary sponsor: Phil Mendelson
(sole sponsor)
Last action: Referred to Committee on Health (Sept. 17, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The Consumer Health Information Privacy Protection Act (CHIPPA) clearly addresses the secure and responsible collection, use, and sharing of consumer health data, which naturally intersects with data governance. The legislation focuses on ensuring consent, data privacy, and transparency in the management of health data, which is crucial where AI collects and processes personal health information. While the text contains some principles related to system integrity, such as ensuring consent and transparency, it does not explicitly mandate security protocols or oversight measures for AI systems, leading to a lower relevance score for that category. The robustness category is less applicable, as the text does not directly address performance benchmarks or auditing processes for AI systems. The social impact category is more pertinent, since the legislation seeks to protect consumers from potential harm caused by AI practices involving health data misuse and to ensure ethical handling of personal information, which can influence societal trust in digital health platforms.
Sector:
Healthcare (see reasoning)
The CHIPPA Act specifically addresses consumer health data, which is inherently tied to the healthcare sector. The legislation outlines critical protections for consumer health data, requiring organizations to establish privacy policies and ensure informed consent. Its implications are significant for healthcare institutions and related entities that utilize AI technologies for processing health data. While it touches on aspects relevant to government agencies through the regulatory framework it sets, the primary focus remains on healthcare, thus making it most pertinent to that sector. Other sectors such as politics and elections or academic institutions are not directly addressed, and while the implications of data governance can influence various sectors, the clear focus of the legislation confines its primary relevance to healthcare.
Keywords (occurrence): machine learning (1)
Description: As enacted, enacts the "Modernization of Towing, Immobilization, and Oversight Normalization (MOTION) Act." - Amends TCA Title 4; Title 5; Title 6; Title 7; Title 39; Title 47; Title 48; Title 55; Title 56; Title 62; Title 66 and Title 67.
Collection: Legislation
Status date: May 31, 2024
Status: Passed
Primary sponsor: Jake McCalmon
(22 total sponsors)
Last action: Comp. became Pub. Ch. 1017 (May 31, 2024)
The text primarily addresses revisions to parking regulations in Tennessee and does not explicitly mention AI, algorithms, or any related technology associated with the categories of Social Impact, Data Governance, System Integrity, or Robustness. The single mention of an 'automatic license plate reader' describes a tool that utilizes an algorithm but does not engage with any concepts directly related to AI ethics or governance as outlined in the categories. Overall, the core content of the act focuses on parking enforcement rather than the implications of AI technology.
Sector: None (see reasoning)
The text does not address specific sectors such as Politics and Elections, Government Agencies and Public Services, or any others that involve the use or regulation of AI technology. Instead, it proposes amendments relevant to parking enforcement and vehicle management, which do not inherently involve AI applications in any sector. The mention of an 'automatic license plate reader' does not align with the broader discussions typically associated with the defined sectors.
Keywords (occurrence): automated (1)
Description: To amend the Energy Independence and Security Act of 2007 to direct research, development, demonstration, and commercial application activities in support of supercritical geothermal and closed-loop geothermal systems in various supercritical conditions, and for other purposes.
Collection: Legislation
Status date: June 7, 2024
Status: Introduced
Primary sponsor: Frank Lucas
(2 total sponsors)
Last action: Subcommittee Hearings Held (July 23, 2024)
Data Governance
System Integrity (see reasoning)
The text does reference AI through terms such as 'machine learning algorithms,' indicating the use of AI to optimize and enhance geothermal research and applications. However, the primary focus of the legislation is geothermal energy rather than AI-related social impacts or regulatory frameworks. The mention of machine learning suggests some relevance to the Data Governance and System Integrity categories, but these are not the primary thrust of the bill. Since the bill does not otherwise engage with AI governance or oversight, the scores for Social Impact and Robustness remain lower.
Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)
The focus of this legislation is primarily on geothermal energy research and development, with only tangential mentions of AI. The references to AI are included within the context of enhancing geothermal technology rather than addressing specific sectors significantly. Therefore, while there is a slight connection to the Government Agencies and Public Services sector due to its regulatory nature, the overall impact on sectors like Healthcare or Private Enterprises does not seem relevant. The understanding is limited to applications within the energy sector, and its broader impact does not clearly translate to other sectors.
Keywords (occurrence): machine learning (1)
Description: This Act creates a new elections crime: use of deep fake technology to influence an election. Under this statute it would be a crime to distribute within 90 days of an election a deep fake that is an audio or visual depiction that has been manipulated or created with generative adversarial network techniques, with the intent of harming a party or candidate or otherwise deceiving voters. It is not a crime, nor is there a penalty, if the altered media contains a disclaimer stating This audio/v...
Collection: Legislation
Status date: June 27, 2024
Status: Enrolled
Primary sponsor: Cyndie Romer
(10 total sponsors)
Last action: Passed By House. Votes: 40 YES 1 NO (June 30, 2024)
Societal Impact (see reasoning)
The text primarily addresses the implications of deep fake technology in the context of elections. It outlines laws regarding the distribution of deep fakes that could mislead voters, highlighting accountability and consumer protection in electoral processes. This connection with misinformation, deception in public discourse, and implications for trust in democratic institutions aligns closely with the 'Social Impact' category. The legislation indirectly addresses potential biases arising from AI-generated content affecting political representation, which reinforces its relevance to addressing social impact. In contrast, the other categories, such as Data Governance, System Integrity, and Robustness, do not relate as strongly to the core focus of the legislation on election integrity and the social consequences of misinformation. The measures proposed do not delve into data handling practices, system security, or benchmarking the technologies used, which are the crux of the other categories, and therefore score lower on relevance.
Sector:
Politics and Elections
Judicial system (see reasoning)
The legislation specifically addresses how deep fake technology can be used in political contexts to influence election outcomes, making it extremely relevant to the 'Politics and Elections' sector. It establishes clear legal ramifications for the distribution of misleading media that could alter voter perception or election integrity. While the implications of AI in relation to public service delivery or nonprofit operations may tangentially touch on the legislation, the primary emphasis on election integrity and the role of misleading information directly aligns the text with the political sector. Consequently, sectors such as Government Agencies and Public Services or Nonprofits and NGOs do not receive higher scores, as the focus remains squarely on electoral processes and misinformation rather than broader social governance or nonprofit applications.
Keywords (occurrence): deepfake (11) synthetic media (5)
Description: To provide for the establishment of a program to certify artificial intelligence software used in connection with producing agricultural products.
Collection: Legislation
Status date: Dec. 14, 2023
Status: Introduced
Primary sponsor: Randy Feenstra
(4 total sponsors)
Last action: Referred to the House Committee on Agriculture. (Dec. 14, 2023)
Data Governance
System Integrity
Data Robustness (see reasoning)
The text is highly relevant to the category of Robustness because it establishes a program to certify artificial intelligence software used in agriculture, which implies a focus on performance benchmarks and compliance standards. The mention of adherence to the AI Risk Management Framework indicates an emphasis on operational robustness and safety in AI systems within agricultural applications. Similarly, it relates to System Integrity as certification inherently involves measures for ensuring the accuracy and reliability of AI software, fostering accountability and oversight. However, its relevance to Social Impact is limited because while the legislation addresses AI's application in agriculture, it does not focus on its broader societal effects or ethical implications. Data Governance is somewhat relevant as the certification process might involve data management concerns but is not the primary focus of the text. Overall, the text strongly emphasizes certification related to performance standards and operational integrity for specific AI applications.
Sector:
Government Agencies and Public Services (see reasoning)
The sector relevance of the text is primarily tied to the Agriculture sector because it explicitly addresses the use of artificial intelligence within agricultural practices. It emphasizes the certification of AI software in performing tasks related to agricultural products, which is critical for enhancing agricultural operations. While it might slightly touch upon regulatory concerns that could involve Government Agencies and Public Services, the primary focus remains on agriculture, giving less relevance to other sectors listed. Therefore, Agriculture should receive a high score, while other sectors receive lower scores due to their minimal impact or relevance to the text.
Keywords (occurrence): artificial intelligence (4) automated (1)
Description: An act to add Section 12100.1 to the Public Contract Code, relating to public contracts.
Collection: Legislation
Status date: Aug. 31, 2024
Status: Enrolled
Primary sponsor: Steve Padilla
(3 total sponsors)
Last action: Enrolled and presented to the Governor at 3 p.m. (Sept. 11, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text revolves around the procurement and risk management standards for automated decision systems (ADS) or automated decision tools (ADT) used by state agencies in California. It explicitly mentions the development of regulations concerning artificial intelligence (AI) and details about the risk management for AI systems in public contracts. This directly taps into issues relating to social impacts of AI systems on various critical sectors, making it very relevant to the category of Social Impact. Furthermore, there are clear processes described for data governance through requirements for risk assessment, security controls, and compliance with privacy laws, thus making it similarly relevant for Data Governance. The inclusion of clauses mandating risk assessments and monitoring for system integrity outlines the need for control over these AI systems, which positions this text strongly in the System Integrity category. The legislation also discusses ensuring that AI systems are effective and compliant with established standards, thus making it pertinent to the Robustness category as well. All these reasons lead us to conclude that the text warrants high relevance across all categories mentioned.
Sector:
Government Agencies and Public Services
Judicial system
Healthcare
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs (see reasoning)
The text refers to the implications of automated decision systems in various social domains, including access to employment, education, and even housing. This indicates a substantial relevance to sectors such as Government Agencies and Public Services, where AI may be utilized in delivering public services and managing government operations. The clear delineation of AI roles in areas such as health care, criminal justice, and higher education also suggests relevance to those sectors, particularly as the implications of AI in these areas can be critical. However, the text does not explicitly touch on all sectors, leaving Healthcare, Private Enterprises, and International Cooperation as less relevant. Judicial System relevance is also moderate but based on its focus on how decisions derived from AI affect legal outcomes and due process. Overall, it’s fair to assign high relevance to Government Agencies and Public Services, and moderately high to Judicial System and Healthcare due to the broader implications.
Keywords (occurrence): artificial intelligence (5) machine learning (1) automated (5)
Description: To direct the Secretary of Agriculture to establish centers of excellence for agricultural security research, extension, and education, and for other purposes.
Collection: Legislation
Status date: May 17, 2024
Status: Introduced
Primary sponsor: Don Bacon
(3 total sponsors)
Last action: Referred to the House Committee on Agriculture. (May 17, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text highlights the establishment of centers of excellence for research, extension, and education in agricultural security, with a specific inclusion of 'artificial intelligence' in the realm of 'digital agriculture.' This signals the recognition of AI's role in enhancing agricultural practices. Consequently, the relevance of Social Impact stems from potential societal benefits tied to AI in agriculture, including workforce development and community engagement. For Data Governance, while the text does not explicitly address data management or privacy associated with AI, the application of AI in agriculture would require careful data policies and governance. System Integrity is moderately relevant due to the emphasis on cybersecurity, which intersects with AI's capabilities in safeguarding agricultural data and processes. Robustness is less relevant as the text does not focus on performance benchmarks for AI systems directly. Overall, Social Impact and Data Governance receive higher relevance scores, whereas System Integrity is acknowledged as relevant due to the cybersecurity emphasis.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The mention of 'artificial intelligence' in the context of digital agriculture makes it relevant to the sector of Private Enterprises, Labor, and Employment, as AI technologies are poised to influence agricultural labor practices and industry standards. Government Agencies and Public Services see relevance through the involvement of governmental departments and educational institutions in establishing excellence centers. Additionally, there might be connections to Academic and Research Institutions due to the emphasis on research and education activities. However, the text does not delve deeply into political implications, the judicial framework around AI, or specific healthcare applications, resulting in lower scores for those sectors. Overall, Private Enterprises, Labor, and Employment has significant applicability, supported by Government Agencies and Public Services and Academic and Research Institutions receiving moderate scores.
Keywords (occurrence): artificial intelligence (1)
Description: Enacts into law major components of legislation necessary to implement the state public protection and general government budget for the 2024-2025 state fiscal year; establishes the crime of assault on a retail worker (Part A); establishes the crime of fostering the sale of stolen goods as a class A misdemeanor (Part B); adds to the list of specified offenses that constitutes a hate crime (Part C); authorizes the governor to close correctional facilities upon notice to the legislature (Part D...
Collection: Legislation
Status date: Jan. 17, 2024
Status: Introduced
Primary sponsor: Budget
(sole sponsor)
Last action: SUBSTITUTED BY A8805C (April 18, 2024)
The provided text primarily focuses on implementing state legislation aimed at public protection and adjustments to the penal code. The text does not engage with AI systems, their impacts on society, or legislation directly related to AI governance or integrity. As such, it appears to be completely unrelated to AI-specific issues, making it irrelevant for all categories concerning AI.
Sector: None (see reasoning)
The text outlines various legal amendments and public protection measures but lacks any discussion or reference to AI-related use cases or regulations within specific sectors. This absence of AI content likewise renders the text non-relevant to the identified sectors, leading to a score of 1 across all sectors.
Keywords (occurrence): automated (2)
Description: As enacted, enacts the "Tennessee Artificial Intelligence Advisory Council Act." - Amends TCA Title 4.
Collection: Legislation
Status date: May 29, 2024
Status: Passed
Primary sponsor: Patsy Hazlewood
(3 total sponsors)
Last action: Effective date(s) 05/21/2024 (May 29, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
The text discusses the creation of the Tennessee Artificial Intelligence Advisory Council, which focuses on guiding the state's use of artificial intelligence to improve government services and leverage AI for economic benefits. The emphasis on ethical use, economic implications, and transparency aligns closely with the impact of AI on society and individuals (Social Impact), as well as the governance and accuracy in the handling of AI and its data (Data Governance). Furthermore, there are references to expectations of governance frameworks and evaluation of AI risks, which speak to systemic integrity. The document does not delve deeply into benchmarking performance or compliance standards, thus Robustness is less relevant in comparison to the other categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions (see reasoning)
The bill is highly relevant to several sectors, particularly the Government Agencies and Public Services as it directly pertains to the use of AI in government. The focus is on how AI can improve the efficiencies of state and local government services and goes deeper into workforce development and economic implications of AI, suggesting a somewhat relevant connection to Private Enterprises, Labor, and Employment. There are also touchpoints on education and research related to AI, which lend some relevance to Academic and Research Institutions. However, other sectors like Politics and Elections, Healthcare, and Judicial System do not apply directly to the text’s content, thus receiving a lower score.
Keywords (occurrence): artificial intelligence (22)
Description: An act to add Section 38760 to the Vehicle Code, relating to vehicles.
Collection: Legislation
Status date: Aug. 28, 2024
Status: Enrolled
Primary sponsor: Matt Haney
(2 total sponsors)
Last action: Senate amendments concurred in. To Engrossing and Enrolling. (Ayes 65. Noes 4.). (Aug. 28, 2024)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text focuses on autonomous vehicles and their regulation, particularly in the context of incident reporting. Key AI-related terms, such as 'autonomous mode', indicate relevance to AI's social impact given the implications for safety, liability, and discrimination, particularly against vulnerable road users. The reporting and oversight requirements suggest a framework for accountability and safety in AI operations that affects individuals and society as a whole; understanding how AI technologies can harm or benefit users aligns with the Social Impact category, indicating strong relevance. The Data Governance category is also relevant, as the bill addresses the collection and management of incident-related data, including mandates for transparent reporting. System Integrity is relevant because the provisions specify operational performance and require manual override in problematic situations; however, the focus is primarily on reporting and regulation, without delving into internal security measures for the AI systems themselves, which limits relevance in this category. The Robustness category is less applicable, since the text does not address performance benchmarks for AI systems and instead focuses on reporting mechanisms.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text primarily addresses legislation concerning autonomous vehicles, which is directly relevant to several sectors. Relevance to the Politics and Elections sector is limited, as the text does not discuss AI's role in elections or political campaigns. Government Agencies and Public Services is highly relevant, as the bill governs how the DMV and other agencies must manage reports and data for autonomous vehicle incident reporting. The Judicial System is slightly relevant, as the bill touches on accountability, though it focuses on vehicle regulation rather than judicial applications. The Healthcare sector is not applicable, as there is no mention of healthcare applications. Within the Private Enterprises, Labor, and Employment sector, the bill reflects implications for manufacturers and their operational obligations, but does not strongly address employment or corporate governance. Academic and Research Institutions have minor relevance, as the legislation does not specifically engage educational contexts, even though innovations may come from research. International Cooperation and Standards is barely mentioned in the text and thus scores low. Nonprofits and NGOs have little relevance unless involved in advocacy or disability issues related to the legislation, and Hybrid, Emerging, and Unclassified could apply given the innovative nature of autonomous vehicles but likewise lacks a strong basis here.
Keywords (occurrence): automated (2) autonomous vehicle (41)
Description: Relative to prohibiting the unlawful distribution of misleading synthetic media.
Collection: Legislation
Status date: Dec. 11, 2023
Status: Introduced
Primary sponsor: Linda Massimilla
(11 total sponsors)
Last action: Refer for Interim Study: Motion Adopted Voice Vote 03/14/2024 House Journal 8 P. 5 (March 14, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The legislation centers on the unlawful distribution of misleading synthetic media and explicitly ties the definition of synthetic media to artificial intelligence algorithms. This relates directly to the 'Social Impact' category, as it addresses potential harm from misleading AI-generated content and its implications for public trust and election integrity. It also connects to 'Data Governance', since unauthorized use of AI to create misleading content can involve the management of data rights and personal consent. The bill's accountability and penalty provisions align with 'System Integrity', as they seek to establish clear rules, transparency, and control over AI systems that could significantly mislead individuals. These measures also represent an effort to set compliance standards for the distribution of AI-generated content, which relates to 'Robustness'. Overall, the legislation addresses both the societal consequences of AI-generated media and accountability within AI governance.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system (see reasoning)
The text is closely related to the sector of politics and elections, as it explicitly speaks about misleading synthetic media that can influence election outcomes. It addresses the deployment of AI in creating media that could harm electoral integrity, reflecting legislative intent in regulating AI's role in politics. Furthermore, it implicates government agencies and public services as the enforcement and compliance measures would likely involve public bodies. However, less direct relevance to other sectors such as healthcare or private enterprises suggests that while the bill intersects with several sectors, its core focus remains on political implications and public governance.
Keywords (occurrence): artificial intelligence (1) synthetic media (22)
Description: To provide for the future information technology needs of Massachusetts
Collection: Legislation
Status date: Jan. 10, 2024
Status: Introduced
Last action: New draft substituted, see H4642 (May 15, 2024)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text discusses the FutureTech Act, which encompasses various aspects of information technology development in Massachusetts. The AI-related sections specifically mention funding for AI projects and the implementation of AI and machine learning systems for state agencies. This clearly indicates a focus on the societal impacts of AI (e.g., enhancing public services and efficiency) and implies a need for oversight and governance regarding data usage in these systems. Thus, there is a significant relevance to both the Social Impact and Data Governance categories, suggesting strong considerations around the implications of AI on community dynamics and the responsibilities tied to managing information technology and data. The calls for security and efficiency also indicate a relevant connection to System Integrity, while benchmarks related to technology performance hint at potential relevance to Robustness, although possibly to a lesser extent. Overall, the legislation is deeply connected to the role and implications of AI in state governance and service provision.
Sector:
Government Agencies and Public Services
Healthcare
Hybrid, Emerging, and Unclassified (see reasoning)
The text addresses various sectors, including government operations and public services directly. It discusses the implementation of AI and machine learning within state agencies, enhances user experience across governmental services, and promotes transparency and efficiency in public service delivery. The work with municipal fiber broadband infrastructure also implicates government efficiency and citizen engagement. However, it does not strongly center on sectors like healthcare or judicial systems, which can lead to lower scores in those areas. The focus on AI applications broadly affects multiple facets of society, including economic aspects related to labor and employment, which could also connect to Private Enterprises. Given these observations, there are strong connections to Government Agencies and Public Services, with moderate to slight relevance to other sectors.
Keywords (occurrence): machine learning (1) chatbot (1)
Description: Requesting The Hawaii Professional Chapter Of The Society Of Professional Journalists To Recommend A Process That Individuals Can Utilize To Evaluate And Identify Whether Or Not News Sources Adhere To Ethical And Objective Standards.
Collection: Legislation
Status date: April 4, 2024
Status: Passed
Primary sponsor: Chris Lee
(8 total sponsors)
Last action: Certified copies of resolution sent, 05-31-24 (May 31, 2024)
Societal Impact (see reasoning)
The text discusses the relationship between AI advancements and the spread of misinformation, highlighting the need for ethical standards in news sourcing. This directly ties into the social impact of AI, specifically regarding misinformation and its effects on public understanding. Although the text touches on certain aspects of data governance indirectly, the main focus is on the implications of AI in journalism and the importance of ethical practices. Therefore, Social Impact is given a high relevance score while Data Governance, System Integrity, and Robustness scores remain low as they do not align directly with the core focus of the resolution.
Sector:
Government Agencies and Public Services
Nonprofits and NGOs (see reasoning)
The text primarily relates to the media sector and the dissemination of information. It discusses how AI relates to societal issues around misinformation, which can impact political discourse and public trust. However, it does not focus explicitly on legislative measures across sectors such as healthcare or judicial systems. The emphasis is drawn mainly towards media literacy and journalistic responsibility rather than a specific sector-based approach, such as Private Enterprises or Academic Institutions. Thus, while there are faint implications for governmental roles, the scores reflect the text's central focus on media ethics and standards impacted by AI without deep engagement with other sector-specific applications.
Keywords (occurrence): artificial intelligence (1)
Description: Creates the Second Chance Public Health and Safety Act and amends the Freedom of Information Act, the Civil Administrative Code of Illinois, and the Unified Code of Corrections. Contains declarations and findings. Creates the Department of Returning Resident Affairs and sets forth its powers in relation to returning residents (residents who have been detained, are defendants in criminal prosecutions, are incarcerated, or have been incarcerated) and other matters. Provides that the Department ...
Collection: Legislation
Status date: Jan. 18, 2023
Status: Introduced
Primary sponsor: Justin Slaughter
(23 total sponsors)
Last action: Rule 19(a) / Re-referred to Rules Committee (March 27, 2023)
The text describes the creation of the Department of Returning Resident Affairs and outlines measures to support returning residents, focusing on their successful reintegration. However, it does not explicitly address AI technology or its implications within the framework of returning resident affairs. The social impact primarily surrounds rehabilitation, welfare, and community support rather than the implications of AI. Additionally, there's no indication of regulation or oversight related to data management, system security, or benchmarking, which would have aligned with the Data Governance, System Integrity, or Robustness categories. Thus, none of the categories are highly relevant to the text provided.
Sector:
Government Agencies and Public Services (see reasoning)
The legislation pertains primarily to social services and rehabilitation for returning residents rather than directly to the sectors defined. While there are implications for government agencies and public services, it does not strategically address AI applications in these areas. The mention of a hotline and community organization partnerships may relate to governmental functions, but the focus on AI is notably absent. Therefore, the text holds slight relevance to the Government Agencies and Public Services sector, while remaining irrelevant to the remaining sectors.
Keywords (occurrence): automated (3)
Description: Requires hospitals and nursing homes to provide access to certain interpreter services for deaf and hard of hearing.
Collection: Legislation
Status date: May 6, 2024
Status: Introduced
Last action: Introduced, Referred to Assembly Aging and Human Services Committee (May 6, 2024)
Societal Impact (see reasoning)
The text introduces a requirement for hospitals and nursing homes to provide access to interpreter services, specifically mentioning 'artificial intelligence-assisted interpreting services'. This highlights a significant consideration of AI in the context of accessibility and healthcare. The focus on patient interaction with AI-assisted services ties it primarily to the 'Social Impact' category, as it addresses how AI can positively affect individuals who are deaf or hard of hearing. The relevance to 'Data Governance' is minimal as there is no detailed mention of data management practices. 'System Integrity' and 'Robustness' are also not applicable since the text does not discuss security, transparency, or performance metrics for AI systems.
Sector:
Healthcare (see reasoning)
The legislation pertains directly to healthcare settings, mandating that hospitals and nursing homes provide accessible services for specific patient needs. The mention of AI in this context points towards its use in healthcare facilities, making it highly relevant. Other sectors such as politics, education, or non-profits do not find relevance in this text, as it does not address their specific regulatory or operational concerns.
Keywords (occurrence): artificial intelligence (2)