Summary: The bill recognizes ITA International LLC for their innovative use of SBIR-developed technology, enhancing government and industry solutions, particularly in the Department of Defense, and improving efficiency through data analytics.
Collection: Congressional Record
Status date: Nov. 12, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance (see reasoning)

The text discusses the contributions of ITA International in developing analytic algorithms and AI techniques, specifically its work on decision support tools for the Navy and the implications for government operations. Because it highlights the benefits of AI for decision making and efficiency within defense operations, it ties significantly to the category of Social Impact, demonstrating how AI can improve outcomes in a military context and affect readiness and resource allocation. Data Governance is also relevant because the text addresses the management of data through analytic algorithms, though it is less focused on regulatory aspects. System Integrity is touched upon through mentions of security and operational effectiveness, but less explicitly. Robustness is less applicable because the text does not cover benchmarking or audits. The strongest relevance is with Social Impact due to the direct mention of AI applications in government.


Sector:
Government Agencies and Public Services (see reasoning)

The text centers on ITA International's application of AI in government, specifically within the Department of Defense. This strong focus on the transition of technology to government use directly aligns with the sector of Government Agencies and Public Services, highlighting how AI can improve efficiency and decision making in governmental operations. While there is a broad discussion of private enterprise methods, the primary emphasis remains on government applications, so there is less relevance to sectors like Healthcare, Private Enterprises, or Academic Institutions. Therefore, Government Agencies and Public Services is rated the highest, with only minor, indirect relevance to the others.


Keywords (occurrence): artificial intelligence (1)

Description: Requires artificial intelligence companies to conduct safety tests and report results to Office of Information Technology.
Summary: The bill mandates artificial intelligence companies in New Jersey to conduct annual safety tests on their technologies and report the findings to the Office of Information Technology, ensuring compliance with safety standards and bias assessments.
Collection: Legislation
Status date: Oct. 7, 2024
Status: Introduced
Primary sponsor: Troy Singleton (sole sponsor)
Last action: Introduced in the Senate, Referred to Senate Commerce Committee (Oct. 7, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This text emphasizes the need for safety tests to assess AI technologies, which aligns with multiple aspects of AI's societal impact, data governance, system integrity, and robustness. The requirement for annual reporting and minimum standards suggests a strong focus not only on the social implications of AI (such as biases and cybersecurity threats) but also on the effectiveness and safety of AI systems in general. Consequently, the relevance to 'Social Impact' is significant due to its implications for fairness, safety, and accountability. 'Data Governance' is highly relevant because the bill requires scrutiny of data sources, biases, and legal compliance, which is crucial for maintaining the integrity of AI data. Furthermore, the legislation directly addresses the safety and integrity of AI systems, which fits 'System Integrity.' Lastly, the structured testing and reporting measures align closely with 'Robustness,' which is aimed at developing benchmarks for AI performance and safety. Therefore, the text resonates with all four categories, particularly emphasizing the necessity of regulated AI development and deployment.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment
Hybrid, Emerging, and Unclassified (see reasoning)

The text lays out regulations concerning AI technologies, which indicates relevance across several sectors. In 'Politics and Elections,' the bill does not directly address issues related to political processes, thus scoring lower. 'Government Agencies and Public Services' is relevant because the bill involves oversight from state authorities, suggesting implications for public sector technology management. 'Judicial System' is moderately relevant, as compliance testing may be indirectly related to legal review but is not explicitly addressed. 'Healthcare' is not mentioned directly, resulting in a low score. 'Private Enterprises, Labor, and Employment' applies because the legislation affects business practices among AI firms; hence, some relevance persists. For 'Academic and Research Institutions,' although AI research might be influenced, it is not central in this text, leading to lower relevance. 'International Cooperation and Standards' does not apply as there is no mention of international collaboration. 'Nonprofits and NGOs' is also not relevant due to a lack of specific mention. Lastly, 'Hybrid, Emerging, and Unclassified' holds a degree of relevance due to the evolving nature of AI applicability, but it ranks lower than the others. Ultimately, the greatest weight is placed on government oversight of AI usage in public services, which most strongly affects 'Government Agencies and Public Services' and 'Private Enterprises, Labor, and Employment.'


Keywords (occurrence): artificial intelligence (23) machine learning (1)

Summary: The bill outlines the committee meetings scheduled for September 18, 2024, addressing various issues like food programs, tax reforms, cybersecurity, and public safety, among others.
Collection: Congressional Record
Status date: Sept. 17, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

This text discusses several legislative items in Congress that include provisions related to the use of artificial intelligence (AI) by the Federal Government. Given that AI is mentioned in the context of procurement, development, and use by government entities, it directly relates to questions of social impact, data governance, system integrity, and robustness of AI systems. The legislative focus seems to imply efforts to ensure AI systems are safe, responsible, and agile. Consequently, this would apply broadly across multiple AI-related categories.


Sector:
Government Agencies and Public Services
Judicial system (see reasoning)

The presence of provisions related to the use of AI by the Federal Government inherently links this text to the sector of Government Agencies and Public Services. Additionally, there are mentions of implications for cybersecurity, which connect to Judicial System concerns due to the nature of security and law enforcement. The discussions on AI suggest potential applications across various sectors, but the most direct relevance is to government services and operations.


Keywords (occurrence): artificial intelligence (1)

Summary: The bill honors Aditi Muthukumar for winning Rookie of the Year in the 2024 Congressional App Challenge for her app, SafeSpace, which supports youth mental health.
Collection: Congressional Record
Status date: Nov. 20, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text mentions a trained machine learning model used in Aditi Muthukumar's app, which pertains to AI. Therefore, the relevance of the categories can be assessed based on how they relate to this application of AI. Social Impact is relevant because the app addresses mental health resources for young people, which affects societal well-being. Data Governance is also relevant due to the use of machine learning, which typically involves data management practices to ensure the accuracy and safety of the information being processed. System Integrity may have some relevance as the app must maintain security regarding user data, but it's not explicitly mentioned. Robustness is less relevant here as the text does not discuss performance benchmarks or compliance standards concerning the app. Hence, Social Impact and Data Governance score higher, System Integrity scores moderately, and Robustness scores lower because relevant points are not mentioned.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The text does not refer directly to any of the specified sectors such as politics, health care, or education systems but the app has implications for mental health support in a general sense. However, the mention of an app designed for mental health resources suggests a relevance to Government Agencies and Public Services due to public health implications. There's a relationship to Academic and Research Institutions given the context of the app being developed for a STEM-focused competition, although it is less direct. Therefore, Government Agencies and Public Services may score higher, while Academic and Research Institutions receive a moderate score. All other sectors do not have enough relevance to score.


Keywords (occurrence): machine learning (1)

Description: An Act providing for civil liability for fraudulent misrepresentation of candidates; and imposing penalties.
Summary: The bill establishes civil liability and penalties for fraudulent misrepresentation of political candidates, particularly through artificially generated content. It aims to protect electoral integrity by regulating deceptive campaign advertisements.
Collection: Legislation
Status date: May 29, 2024
Status: Introduced
Primary sponsor: Tarik Khan (28 total sponsors)
Last action: Laid on the table (Sept. 23, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text directly discusses the fraudulent use of AI-generated content for political misrepresentation, indicating its significant societal impacts, especially in elections. This fits well within the realm of social impact as it raises concerns about misinformation, accountability of content creators, and potential harm to trust in political communications. It also touches upon consumer protections regarding AI-generated media in campaign advertisements. Furthermore, the text addresses the ethical implications of using AI in political campaigns, highlighting the importance of transparency and fairness, which further solidifies its relevance to social impact. Data governance could also be relevant because of the emphasis on the proper use and disclosure of synthetic content; however, it is primarily focused on the consequences of misuse rather than data management principles. System integrity and robustness are less applicable here as they revolve around operational security and performance standards rather than the specific issues the text aims to resolve - namely, misinformation and its penalties. Overall, this piece primarily exemplifies social impact, with some relevance to data governance.


Sector:
Politics and Elections (see reasoning)

The text explicitly addresses the implications of using artificial intelligence in political campaign advertisements, particularly regarding fraudulent misrepresentation of candidates. This makes it highly relevant to the Politics and Elections sector as it sets legal frameworks for the use of AI in electoral contexts. There is no mention or discussion of AI use in government agencies, healthcare, or other sectors, which places little to no relevance there. However, the text does touch upon accountability, principles of fairness, and transparency, all of which are essential in electoral processes, hence reinforcing its strong categorization within Politics and Elections. Other sectors such as Judicial System (related to the enforcement of laws regarding AI misuse) and Private Enterprises, Labor, and Employment could arguably have slight relevance when considering the responsibilities of political committees as covered persons, but they are not the primary focus of this legislation. Thus, the text is predominantly pertinent to the Politics and Elections sector.


Keywords (occurrence): artificial intelligence (4) machine learning (1) automated (1)

Description: License plate reader systems; civil penalty. Provides requirements for the use of license plate reader systems, defined in the bill, by law-enforcement agencies. The bill limits the use of such systems to scanning, detecting, and recording data about vehicles and license plate numbers for the purpose of identifying a vehicle that is (i) associated with a wanted, missing, or endangered person or human trafficking; (ii) stolen; (iii) involved in an active law-enforcement investigation; or (iv) ...
Summary: The bill regulates the use of license plate reader systems by law enforcement in Virginia, establishing guidelines for data usage, privacy protections, and civil penalties for violations, ensuring accountability.
Collection: Legislation
Status date: Feb. 13, 2024
Status: Engrossed
Primary sponsor: Scott Surovell (3 total sponsors)
Last action: Constitutional reading dispensed (40-Y 0-N) (Feb. 13, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly discusses the use of automated high-speed cameras and computer algorithms for license plate reader systems employed by law enforcement, indicating a direct relevance to AI under the terminology of Automated systems and Algorithms. The legislation stipulates requirements for the operation of these systems, highlighting accountability, data handling, and compliance with certain standards, connecting to the categories of System Integrity (due to the regulations around oversight and access) and Social Impact (considering implications for society regarding data privacy, surveillance, and the potential for misuse). Data Governance is also relevant as the bill mandates control over data management and security measures. However, there is less emphasis on developing new AI benchmarks or performance audits directly outlined in the text, which makes Robustness less relevant.


Sector:
Government Agencies and Public Services
Judicial system (see reasoning)

The text directly relates to Government Agencies and Public Services, as it defines how government law enforcement agencies use technology (license plate readers) to gather and manage data for public safety purposes. It discusses data retention, access management, and compliance with laws, connecting directly to the operations of public services. While aspects of AI regulation could touch on the Judicial System in terms of evidence admissibility, the text does not explicitly focus on the judicial implications. Other sectors like Healthcare, Politics and Elections, and Nonprofits and NGOs have no direct relevance in the context provided by this text.


Keywords (occurrence): automated (1)

Description: Prohibits collecting of certain costs associated with offshore wind projects from ratepayers.
Summary: This bill prohibits collecting certain costs from ratepayers associated with offshore wind projects in New Jersey, aiming to protect consumers from indirect financial burdens linked to these renewable energy initiatives.
Collection: Legislation
Status date: May 2, 2024
Status: Introduced
Primary sponsor: Paul Kanitra (sole sponsor)
Last action: Introduced, Referred to Assembly Telecommunications and Utilities Committee (May 2, 2024)

Category: None (see reasoning)

The text primarily focuses on prohibiting the recovery of certain costs related to offshore wind energy projects. It does not explicitly mention AI or technologies related to AI like algorithms, automated systems, or machine learning. Therefore, there are no relevant sections that would pertain to the regulation or impact of AI on society, data governance, system integrity, or performance benchmarks. The legislation is centered around financial obligations rather than technological implications.


Sector: None (see reasoning)

The text does not address any specific sector involving AI. As it deals with financial regulations associated with energy systems (specifically wind energy projects), it does not connect with sectors that utilize or regulate AI, such as government services, healthcare, or industries impacted by AI technologies. Thus, none of the defined sectors are applicable to the content.


Keywords (occurrence): algorithm (1)

Description: Authorizing a school district to receive grant funds for specified purposes; requiring grant recipients to select an artificial intelligence platform that meets certain requirements; revising eligibility requirements for a New Worlds Scholarship account; deleting responsibilities for the Department of Education relating to the New Worlds Reading Initiative; creating the Lastinger Center for Learning at the University of Florida; requiring that the progress monitoring system provide prekinderg...
Summary: The bill amends Florida education laws to enhance grant funding for artificial intelligence tools in schools, revise New Worlds Scholarship eligibility, and establish the Lastinger Center for Learning, aiming to improve student literacy and achievement.
Collection: Legislation
Status date: Jan. 18, 2024
Status: Introduced
Primary sponsor: Appropriations (3 total sponsors)
Last action: Laid on Table, refer to CS/HB 1361 (Feb. 7, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text relates to the integration of artificial intelligence (AI) platforms within educational structures, emphasizing its impact on learning and teacher workload reduction. This impact reflects the Social Impact category, specifically addressing how AI can enhance educational outcomes and the responsibilities of educators and administrators concerning new technologies. The Data Governance category is relevant due to potential issues arising from the use of AI for data collection and student progress monitoring. The System Integrity category is relevant because the legislation mandates that AI platforms maintain specific compliance and operational standards. Robustness is somewhat less relevant, as the focus is not predominantly on performance benchmarks or oversight measures, but rather on implementation in an educational context. Therefore, the scores reflect how the text relates to each category: a primary focus on social impact, substantial implications for data governance and system integrity, and lesser implications for robustness.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The sector of 'Academic and Research Institutions' receives the highest relevance score due to the text's focus on educational institutions, the deployment of AI technologies in learning environments, and references to a specific university initiative (Lastinger Center for Learning). The relevance to 'Government Agencies and Public Services' is also notable as the legislation involves state education departments and public services in education. 'Private Enterprises, Labor, and Employment' could be slightly relevant since the AI platforms could involve private-sector technologies, but the direct connection to labor practices is tenuous. Other sectors like Healthcare or Nonprofits have minimal relevance as they are not directly addressed in the context of this bill. Overall, the primary scores represent the direct connection of the bill to educational sectors and entities.


Keywords (occurrence): artificial intelligence (3) automated (1)

Description: An Act to repeal 165.88 (3m) (d); to amend 165.88 (4); and to create 165.88 (3p) of the statutes; Relating to: grants to schools to acquire proactive firearm detection software and making an appropriation. (FE)
Summary: The bill allocates $4 million in grants to schools for acquiring proactive firearm detection software, aiming to enhance school safety through collaboration with local law enforcement agencies.
Collection: Legislation
Status date: April 15, 2024
Status: Other
Primary sponsor: Calvin Callahan (26 total sponsors)
Last action: Failed to concur in pursuant to Senate Joint Resolution 1 (April 15, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily focuses on legislation for grants to schools for the acquisition of proactive firearm detection software that employs artificial intelligence (AI). This directly relates to the Social Impact category, as AI's role in enhancing school safety can influence how technology affects societal interactions and perceptions. Furthermore, the requirement for grants to ensure the software is developed in the United States and the guidelines for its operation contribute to considerations of data governance, specifically regarding the management of data derived from potentially sensitive environments like schools. The System Integrity category may also be relevant due to the mentioned requirements for the software's integration and safety protocols, but it is not the primary focus. The category of Robustness seems less relevant since the text does not establish benchmarks or performance standards for the AI system discussed.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The legislation has implications for both Government Agencies and Public Services due to the involvement of the Department of Justice in awarding grants and oversight, potentially impacting public safety. It also touches on the Private Enterprises, Labor, and Employment sector, as software companies may be involved in developing the required proactive firearm detection software. However, the primary focus is on schools and law enforcement, which are directly involved in the delivery of public services. The role of AI in this context does not align closely with sectors such as Healthcare, Academic and Research Institutions, or Nonprofits and NGOs, making them less relevant. The Politics and Elections sector does not apply as there are no elements of political campaigns or electoral processes mentioned in the text.


Keywords (occurrence): artificial intelligence (1)

Description: An act to add Chapter 5.9 (commencing with Section 11549.63) to Part 1 of Division 3 of Title 2 of the Government Code, relating to artificial intelligence.
Summary: The Generative Artificial Intelligence Accountability Act mandates California state agencies to assess risks of generative AI, ensure transparency in its use, and communicate AI-generated content clearly to the public, promoting fairness and accountability.
Collection: Legislation
Status date: Sept. 29, 2024
Status: Passed
Primary sponsor: Bill Dodd (5 total sponsors)
Last action: Chaptered by Secretary of State. Chapter 928, Statutes of 2024. (Sept. 29, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The Generative Artificial Intelligence Accountability Act explicitly addresses numerous aspects of AI, such as the use of generative AI (GenAI), bias, transparency, and accountability. The legislation acknowledges the positive potential of AI while stressing the need for protective measures to guard against risks, such as bias and misinformation, which align closely with the topics covered under Social Impact and System Integrity categories. Data governance is also relevant, given the requirements for careful oversight and management of how data is utilized by AI systems. Finally, the legislation emphasizes the need for processes to augment AI performance and safety, aligning it with the Robustness category.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
Academic and Research Institutions
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)

The text of the bill involves the use of AI in state governance, the implications for vulnerable communities, and the potential risks associated with deployment within critical infrastructure, which closely relates to Government Agencies and Public Services. Moreover, the bill also discusses enhancing public trust and smart governance through the use of AI, which solidifies its relevance to this sector. It does not directly address the legislative processes of Politics and Elections, nor does it specifically mention the Judicial System or Healthcare, thus they received lower scores. Focus on collaboration with academic institutions suggests a connection to Academic and Research Institutions, while the extensive regulatory considerations imply potential relevance to all sectors but in varying degrees.


Keywords (occurrence): artificial intelligence (16) automated (2)

Summary: The bill secures a $1.5 billion award for GlobalFoundries to build a new chip factory in Malta, NY, enhancing domestic semiconductor production and creating jobs, bolstering U.S. national security and economic stability.
Collection: Congressional Record
Status date: Nov. 20, 2024
Status: Issued
Source: Congress

Category:
Societal Impact
Data Robustness (see reasoning)

The text discusses the significance of semiconductor production, especially highlighting its importance for national security and economic stability, with a specific mention of artificial intelligence as one of the key sectors relying on these chips. However, it focuses more on economic implications and job creation than on the social impacts of AI or on data governance and system integrity strictly in relation to AI. Therefore, it is moderately relevant to the Social Impact and Robustness categories because of its relation to manufacturing and AI's role in future industries, while relevance to System Integrity and Data Governance is less prominent because the text does not directly address technological controls or data management within AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text is highly relevant to Government Agencies and Public Services as it discusses the legislative effort tied to the Chips and Science Act, which aims to strengthen U.S. semiconductor manufacturing and its implications on national economic security. It also relates to Private Enterprises, Labor, and Employment because it emphasizes job creation and the transformation of the labor force due to the semiconductor industry. While there are mentions of AI, they do not specifically pertain to the political or judicial sectors. Thus, it scores moderately for Government Agencies and Public Services and Private Enterprises, Labor, and Employment, and lower for other sectors.


Keywords (occurrence): artificial intelligence (1)

Summary: The bill involves various executive communications from multiple U.S. government departments, transmitting regulations and reports to Congress for review, mainly relating to agriculture, housing, energy, and environmental policies.
Collection: Congressional Record
Status date: Dec. 3, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text appears to involve various executive communications, primarily focused on regulations and final rule submissions across multiple departments. The one relevant section explicitly mentions 'Advancing the Responsible Acquisition of Artificial Intelligence in Government'. This suggests a direct connection to AI in government operations, but the remaining text does not relate to the societal impacts, data governance, system integrity, or robustness associated with AI technologies. The presence of the related terms justifies consideration under the Social Impact category, mainly due to its implications for responsible AI acquisition policies within the government context, signaling potential impacts on governance and ethical considerations. The other categories lack direct references to AI or its operational framework, leading to scores that reflect this discrepancy.


Sector:
Government Agencies and Public Services (see reasoning)

The text contains references to executive communications and rules pertaining to a wide array of governmental functions, with the notable exception being the mention of AI in government acquisition. The sector that best aligns with the presence of this term is 'Government Agencies and Public Services', as it deals directly with how government agencies are integrating and regulating AI for public services. Other sectors do not have significant mentions in the text, resulting in primarily low ratings for those areas. However, the clear reference to AI's engagement with government systems justifies a higher score under this specific sector.


Keywords (occurrence): artificial intelligence (1)

Description: To require covered platforms to remove nonconsensual intimate visual depictions, and for other purposes.
Summary: The TAKE IT DOWN Act mandates platforms to remove nonconsensual intimate visual depictions, addressing the exploitation of individuals, especially through deepfakes, and imposing penalties for violations.
Collection: Legislation
Status date: July 10, 2024
Status: Introduced
Primary sponsor: Maria Salazar (21 total sponsors)
Last action: Referred to the House Committee on Energy and Commerce. (July 10, 2024)

Category:
Societal Impact (see reasoning)

The TAKE IT DOWN Act specifically addresses the misuse of technology related to deepfakes, a form of AI-generated content. It emphasizes the need for accountability on digital platforms to prevent and mitigate the psychological and reputational harm caused by nonconsensual intimate visual depictions. This indicates a significant concern for the social impact of AI and the role it plays in the privacy and security of individuals. It does not deeply involve issues of data governance outside of consent and privacy aspects, nor does it address system integrity or robustness of AI systems directly. Therefore, the most fitting category is 'Social Impact.'


Sector: None (see reasoning)

The legislation pertains primarily to the use of deepfake technology within the context of individual rights and protections rather than any specific sector such as healthcare or government. However, it has implications for the technology sector, especially regarding the regulation of platforms that deal with user-generated content. Importantly, it addresses the impact of AI technologies on personal privacy within public and online spaces but does not engage with broader sector-specific applications. Therefore, it does not strongly fit within the predefined sectors.


Keywords (occurrence): deepfake (4)

Summary: H.R. 9639 asserts Congressional authority under Article I, Section 8 of the Constitution to legislate on a single subject: Artificial Intelligence.
Collection: Congressional Record
Status date: Sept. 17, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text explicitly mentions 'Artificial Intelligence,' making this legislation directly relevant to the category of Social Impact, as it implies a broader discourse on AI's implications on society. Despite this focus on AI, no other categories like Data Governance, System Integrity, or Robustness are mentioned or supported within the sparse text. This limits the relevance to mainly one category regarding social implications as outlined in the legislation.


Sector: None (see reasoning)

The reference to 'Artificial Intelligence' within the legislative context could potentially imply impacts across several sectors; however, the lack of additional context makes it difficult to align it specifically with areas like Politics and Elections or Government Agencies. The text simply does not provide enough substance to assess relevance in sectors outside of a general AI reference. Thus, the scores reflect a low relevance overall, with only a slight nod towards the Politics and Elections sector due to the legislative framework.


Keywords (occurrence): artificial intelligence (1)

Description: Concerns social media privacy and data management for children and establishes New Jersey Children's Data Protection Commission.
Summary: This bill mandates social media platforms in New Jersey to enhance privacy and data management for children, requiring risk assessments and establishing the New Jersey Children's Data Protection Commission to oversee these standards.
Collection: Legislation
Status date: Jan. 9, 2024
Status: Introduced
Primary sponsor: Herbert Conaway (2 total sponsors)
Last action: Introduced, Referred to Assembly Science, Innovation and Technology Committee (Jan. 9, 2024)

Category:
Societal Impact
Data Governance (see reasoning)

The text primarily addresses the privacy and data management concerns for children on social media platforms. It includes provisions for conducting Data Protection Impact Assessments, which inherently ties into the concept of assessing risks associated with AI algorithms used in social media, such as profiling and targeted advertising systems. This legislation emphasizes accountability for social media platforms in protecting children while navigating the intersection of AI technology and data handling, which hints at potential social impacts. Therefore, it is closely aligned with the Social Impact and Data Governance categories. Although it does cover security measures, the focus is primarily on data privacy rather than system integrity or performance benchmarks, yielding lower relevance in those areas. Hence, the scores reflect this differentiation.


Sector:
Government Agencies and Public Services
Healthcare (see reasoning)

The text discusses regulations concerning social media platforms that children are likely to access, hence it is relevant to sectors involving children and digital interactions. This primarily aligns with Government Agencies and Public Services because it establishes a commission and sets legal requirements for online services likely used by children, which are inherently governmental in nature. The focus on data management and privacy for minors may also be tangentially relevant to the Healthcare sector, particularly concerning children's well-being, but this is less direct. Overall, the legislation reflects significant relevance to Government Agencies and Public Services, while other sectors are relevant to a lesser degree.


Keywords (occurrence): automated (1)

Description: As introduced, requires each department of the executive branch to develop a plan to prevent the malicious and unlawful use of artificial intelligence for the purpose of interfering with the operation of the department, its agencies and divisions, and persons and entities regulated by the respective department; requires each department to report its plan, findings, and recommendations to each member of the general assembly no later than January 1, 2025. - Amends TCA Title 2; Title 4; Title 8;...
Summary: The bill mandates Tennessee executive branch departments to create plans by January 2025 to prevent the malicious use of artificial intelligence, ensuring its responsible application across agencies.
Collection: Legislation
Status date: Jan. 30, 2024
Status: Introduced
Primary sponsor: Raumesh Akbari (sole sponsor)
Last action: Assigned to General Subcommittee of Senate State and Local Government Committee (March 19, 2024)

Category:
Societal Impact
System Integrity
Data Robustness (see reasoning)

The text explicitly addresses the utilization and regulation of artificial intelligence within state departments to prevent malicious and unlawful use. This indicates a strong relevance to the Social Impact category, as it seeks to mitigate risks to individuals and entities that could arise from AI applications. The text does not delve into data management or system integrity concerns but does imply a need for robustness in AI planning and implementation, suggesting that there are overarching performance concerns that may fit into the Robustness category, albeit not directly stated.


Sector:
Government Agencies and Public Services
Academic and Research Institutions (see reasoning)

The legislation pertains predominantly to state executive agencies and their operation with AI, thus indicating considerable relevance to the Government Agencies and Public Services sector. It doesn't make any specific provisions for political use, judicial application, or private sector implications. This limits its relevance to other sectors while maintaining a moderate importance to the Academic and Research Institutions sector given that findings and recommendations could influence research and educational policies around AI. The remaining sectors receive a lower relevance score as they do not appear directly connected to the content of this Act.


Keywords (occurrence): artificial intelligence (3) machine learning (1) neural network (1)

Description: An Act amending Title 18 (Crimes and Offenses) of the Pennsylvania Consolidated Statutes, in computer offenses, providing for artificial intelligence; and imposing a penalty.
Summary: This bill mandates the use of watermarks on artificial intelligence-generated content to indicate its origin and imposes penalties for non-compliance, aiming to enhance transparency and accountability in digital media.
Collection: Legislation
Status date: Nov. 6, 2024
Status: Introduced
Primary sponsor: Johanny Cepeda-Freytiz (15 total sponsors)
Last action: Referred to CONSUMER PROTECTION, TECHNOLOGY AND UTILITIES (Nov. 6, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text specifically addresses the use of artificial intelligence in creating content and establishes penalties for failing to watermark AI-generated materials. This directly relates to social impact as it deals with accountability in AI-generated content and potential misinformation. It also implicates data governance due to the requirement for watermarks and the definitions provided for transparency. System integrity is somewhat relevant since it discusses the security of identity and likeness, but it is not the primary focus. Robustness is less relevant here as the main goal is more about legal compliance rather than benchmarking AI performance.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)

The text primarily pertains to the impact of AI in the creative industry regarding content generation and the necessity for watermarks, thereby fitting best with the Private Enterprises, Labor, and Employment sector due to its implications for businesses involved in content creation. It also relates to Government Agencies and Public Services because the law is delivered via legislative processes, impacting how public and private entities interact with AI-generated materials. There's less direct relevance to other sectors, such as Healthcare or the Judicial System.


Keywords (occurrence): artificial intelligence (10)

Summary: The bill addresses the youth mental health crisis, emphasizing bipartisan efforts to mitigate the negative impacts of social media on children, including introducing legislation to regulate social media use.
Collection: Congressional Record
Status date: Nov. 19, 2024
Status: Issued
Source: Congress

Category:
Societal Impact (see reasoning)

The text primarily discusses youth mental health and addresses the impact social media has on this issue. It emphasizes the effects of social media algorithms on mental health, which could relate to the discussion of social impact due to the societal consequences of social media use. It also introduces legislation tied to algorithm design and its implications for youth, directly connecting to the accountability of system developers and the psychological harm caused by these systems. However, there is minimal explicit focus on data governance, system integrity, or robustness regarding AI's more technical aspects, despite the mention of algorithms. Therefore, relevance to these categories is assigned lower scores overall.


Sector:
Government Agencies and Public Services
Healthcare
Private Enterprises, Labor, and Employment
Nonprofits and NGOs
Hybrid, Emerging, and Unclassified (see reasoning)

The text is heavily focused on the issues surrounding youth mental health and social media, which may not specifically relate to any defined sector. It touches upon legislation designed to address mental health concerns amplified by social media usage but does not explicitly mention regulations for specific sectors like healthcare or government efficiency. The discussions primarily reflect a national concern that encompasses multiple sectors but does not fit neatly into one, hence leading to lower scores for sector-specific relevance.


Keywords (occurrence): algorithm (1)

Description: Concerning fabricated intimate or sexually explicit images and depictions.
Summary: Substitute House Bill 1999 establishes penalties for disclosing fabricated intimate or sexually explicit images, particularly involving minors, and amends relevant criminal laws to enhance protections against such depictions.
Collection: Legislation
Status date: March 14, 2024
Status: Passed
Primary sponsor: Tina Orwall (16 total sponsors)
Last action: Effective date 6/6/2024. (March 14, 2024)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text specifically addresses the topic of fabricated intimate images, referencing the use of AI in the digitization process. This involves the creation or alteration of images using artificial intelligence and highlights the legal implications of such actions. Thus, it is directly related to issues concerning AI in a significant way, particularly in terms of societal impacts, data governance, system integrity, and robustness in the context of safeguarding minors from harmful content. Nevertheless, the main focus is on legal accountability rather than technical robustness or integrity standards, so some categories score higher than others based on direct relevance.


Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)

The text predominantly focuses on legal aspects related to fabricated images, which have implications across various sectors. However, the strongest relevance is likely with the Judicial System sector, given the context of legal measures against unauthorized intimate images, particularly those involving minors. Some references to educational and research contexts are noted but are not the primary focus. Therefore, while multiple sectors might touch on themes in the bill, the direct implications and obligations surrounding its content primarily connect to the judicial system.


Keywords (occurrence): artificial intelligence (3) automated (3)

Description: A BILL to be entitled an Act to amend Title 40 of the Official Code of Georgia Annotated, relating to motor vehicles and traffic, so as to provide for the operation of miniature on-road vehicles on certain highways; to provide for standards for registration of such vehicles; to provide for issuance of license plates for miniature on-road vehicles; to provide for an annual licensing fee for such vehicles; to provide for issuance of certificates of title by the Department of Revenue for such ve...
Summary: This bill provides regulations for operating miniature on-road vehicles in Georgia, including registration standards, temporary permit procedures, and local authority restrictions, enhancing the state's vehicular laws.
Collection: Legislation
Status date: Feb. 29, 2024
Status: Engrossed
Primary sponsor: J Collins (6 total sponsors)
Last action: Senate Read Second Time (March 11, 2024)

Category:
System Integrity (see reasoning)

The text primarily deals with the operation and regulation of miniature on-road vehicles, providing guidelines for registration, licensing, and operation within certain legal frameworks. However, it includes a mention of 'automated driving systems' in the context of defining 'minimal risk conditions' where vehicles might operate autonomously under certain failure scenarios. This reference suggests a minimal intersection with AI, particularly in understanding the implications of autonomous driving systems on safety, regulatory standards, and vehicle operation. Most aspects of the proposed legislation are administrative and regulatory rather than directly addressing the profound societal impacts or data governance issues related to AI as a broader concept. Thus, the categories of 'Data Governance' and 'Robustness' may not be fully applicable due to the lack of comprehensive focus or mandates on data management and performance benchmarks specific to AI systems. The category 'System Integrity' may be viewed as relevant considering the emphasis on operational standards and maintenance systems, although the emphasis is limited. Therefore, the relevance scores reflect these considerations.


Sector: None (see reasoning)

The text does not explicitly mention any of the sectors related to AI use or regulation. It primarily discusses motor vehicle regulation rather than AI applications in contexts like politics, government services, healthcare, or others listed. The closest applicable sector could be 'Government Agencies and Public Services', considering the involvement of the Department of Revenue and the electronic permit issuance system, but it is still tangential. Therefore, relevance to the sectors remains limited, with most sectors receiving scores indicating no meaningful connection to the text.


Keywords (occurrence): automated (1) autonomous vehicle (1)