Summary: The bill allows national banks to establish remote service units (automated facilities for banking functions) and deposit production offices to enhance banking access while ensuring effective risk management and oversight processes.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category:
System Integrity (see reasoning)

The text primarily addresses the establishment and operation of remote service units (RSUs) by national banks, referencing automated facilities that conduct banking functions. While this could link to aspects of automation and system integrity, the focus is on risk management and compliance with established practices rather than on broader societal impacts, data governance, or robustness considerations. Therefore, the relevance to AI-related impacts is minimal, touching only on the concept of automation in a financial setting.


Sector:
Government Agencies and Public Services (see reasoning)

The text describes the operational aspects of national banks and the use of automated systems within the banking sector. While it references oversight of risk management systems, it does not directly address AI usage in a way that specifically pertains to the operation of government services or other sectors such as healthcare or the judicial system. The mention of 'automated' systems connects it moderately to the banking sector but does not elaborate on specific regulations or practices regarding AI. Therefore, relevance is limited to a moderate connection with operations in financial services.


Keywords (occurrence): automated (3)

Description: Prohibits motor vehicle insurers from discrimination on the basis of socioeconomic factors in determining algorithms used to construct actuarial tables, coverage terms, premiums and/or rates.
Summary: The bill prohibits motor vehicle insurers in New York from using socioeconomic factors, such as income and education, to determine insurance rates and coverage, aimed at preventing discrimination.
Collection: Legislation
Status date: Jan. 11, 2023
Status: Introduced
Primary sponsor: Crystal Peoples-Stokes (12 total sponsors)
Last action: referred to insurance (Jan. 3, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text explicitly addresses algorithms in the context of determining pricing and coverage for motor vehicle insurance, which directly relates to AI usage in decision-making processes. The prohibition on using socioeconomic factors aligns with the principles of fairness and bias mitigation in AI systems, linking it to potential discrimination caused by algorithmic processes. This aspect ties in with the Social Impact category as there are implications for distributional justice, yet the core concern is around data governance and biases in algorithmic decision-making, which strongly correlates with the Data Governance category. System Integrity is relevant as it addresses the accuracy and fairness of the algorithms used, and aspects of Robustness may be considered to an extent since the algorithms need to align with new benchmarks for fairness. Overall, the focus on algorithmic discrimination and the implications of socioeconomic factors point predominantly to the importance of data governance.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

The legislation primarily pertains to the insurance sector and how motor vehicle insurers must operate within guidelines that prohibit discrimination through their algorithmic processes. It does not directly address other sectors such as Politics and Elections, Government Agencies and Public Services, or Healthcare. Although aspects of governance and regulatory compliance intersect slightly, the main focus is clearly on the Private Enterprises, Labor, and Employment sector, given its regulation of insurance companies and the fair treatment of clients based on algorithm-generated rates. The references to algorithms and data manipulation make it less relevant to sectors like Academic and Research Institutions or Nonprofits and NGOs, leading to a clear categorization within Private Enterprises, Labor, and Employment.


Keywords (occurrence): algorithm (1)

Description: Prohibits property/casualty insurers from discriminating based on race, color, creed, national origin, disability, age, marital status, sex, sexual orientation, education background or educational level attained, employment status or occupation, income level, consumer credit information or score, ownership or interest in real property, location, type of residence, including but not limited to single-family home, multi-family home, apartment, housing subsidized by state and/or federal programs...
Summary: The bill prohibits property and casualty insurers in New York from discriminating against individuals based on various personal characteristics when determining insurance rates, premiums, or coverage decisions.
Collection: Legislation
Status date: May 30, 2023
Status: Introduced
Primary sponsor: Jamaal Bailey (sole sponsor)
Last action: REFERRED TO INSURANCE (Jan. 3, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses legislation prohibiting discrimination in insurance practices based on several demographic and socio-economic factors, touching explicitly on how algorithms used for actuarial calculations must not incorporate discriminatory practices. This relates closely to social impact, as it aims to prevent harm and promote fairness within the insurance sector. The implications of using algorithms to determine rates directly intersect with issues of discrimination, thus addressing significant societal concerns around equity and inclusion. The references to algorithm adjustments reinforce the relevance to data governance as well, as they highlight the need for accurate and fair data use in insurance underwriting processes. System integrity may also be reflected in this legislation, as it addresses norms for algorithmic transparency and the integrity of data used in decision-making. Robustness is less applicable, as that category concerns performance benchmarks rather than the discrimination issues central to this text.


Sector:
Private Enterprises, Labor, and Employment (see reasoning)

This legislation directly impacts the insurance sector by outlining regulations that prevent discrimination based on various demographics and socio-economic factors. It emphasizes the necessity of fair practices in underwriting, which is crucial within private enterprises, especially in insurance. There are no direct indications that the legislation addresses or affects politics, healthcare, academic institutions, or international standards. The focus remains predominantly on private enterprise regulations, making the score for other sectors lower.


Keywords (occurrence): algorithm (1)

Description: Concerning reasonable exceptions to insurance rates for consumers whose credit information is influenced by extraordinary life circumstances.
Summary: This bill allows consumers whose credit ratings are affected by extraordinary life circumstances to request reasonable exceptions to their insurance rates, ensuring fair access to personal insurance coverage.
Collection: Legislation
Status date: Feb. 10, 2023
Status: Introduced
Primary sponsor: David Hackney (4 total sponsors)
Last action: House Rules "X" file. (Feb. 20, 2024)

Category: None (see reasoning)

The text discusses reasonable exceptions to insurance rates, specifically how credit history, which is determined using algorithms, can impact personal insurance premiums. While 'algorithm' is mentioned in relation to calculating insurance scores and other models, the primary focus is on consumer protection and insurance regulation rather than the broader impacts of AI on society, the secure management of AI data, the integrity of AI applications, or the robustness of AI development metrics. Thus, while the mention of algorithms shows some technical relevance, it is not substantial enough to score highly in any category, particularly since the text is more about the effect of credit histories on insurance than about the direct effects or governance of AI systems.


Sector: None (see reasoning)

The text does not specifically address AI within the context of the political sector, government operations, judicial use, healthcare, business practices, or educational institutions. The regulation of how insurers use algorithms derived from credit histories is mentioned, but the emphasis is on protecting consumer rights regarding insurance premiums rather than on direct applications of AI within those defined sectors. As such, the relevance to these sectors is minimal, reflected by low scores across all sectors.


Keywords (occurrence): algorithm (2)

Summary: The bill outlines a methodology for Public Housing Authorities to compare costs of maintaining public housing versus tenant-based assistance, detailing requirements for conversion plans and cost calculations over time.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text discusses the methodology for comparing costs associated with public housing and tenant-based assistance, focusing mainly on financial aspects. There are no explicit mentions of AI, automation, or related technologies. While the text does contain analytical methods and economic calculations that might involve algorithms (e.g., discounted cash flows, operating cost projections), these do not relate directly to AI systems. Therefore, it does not fit effectively into any of the categories, which primarily cover AI's social impact, data governance, and the technical integrity of AI systems in their interactions with citizens and regulatory frameworks.


Sector: None (see reasoning)

The text mainly pertains to the housing sector, especially public housing administration and financial management, and does not make any explicit connection to the defined sectors, such as policy-making in politics, judicial system applications, or public health administration. Therefore, it does not significantly relate to any of the predefined sectors, which focus mainly on AI applications or implications within those areas.


Keywords (occurrence): algorithm (2)

Description: Creates the Automated License Plate Recognition System Data Act. Provides that a law enforcement agency may use recorded automated license plate recognition system (ALPR) data and historical ALPR system data only for a legitimate law enforcement purpose. Provides that ALPR system data collected by law enforcement and historical ALPR system data collected by law enforcement may not be used, shared, sold, traded, or exchanged for any other purpose. Requires law enforcement agencies using an aut...
Summary: The ALPR System Data Act regulates the use and retention of Automated License Plate Recognition (ALPR) data by law enforcement in Illinois, ensuring it is used solely for legitimate law enforcement purposes and protected from unauthorized access or disclosure.
Collection: Legislation
Status date: Feb. 7, 2023
Status: Introduced
Primary sponsor: Laura Murphy (sole sponsor)
Last action: Rule 3-9(a) / Re-referred to Assignments (March 31, 2023)

Category:
Societal Impact
Data Governance (see reasoning)

The text discusses the usage and regulation of Automated License Plate Recognition (ALPR) systems, which employ algorithms to convert images of license plates into data. This represents a direct application of technology that incorporates AI concepts like algorithmic processing and automated data interpretation. Given its focus on the legal framework surrounding data collection, usage, and privacy, it pertains closely to the categories of Social Impact and Data Governance. However, the focus on the regulatory framework suggests less emphasis on broader systemic integrity or benchmark performance, pointing to lower relevance in those areas.


Sector:
Government Agencies and Public Services (see reasoning)

The text's provisions are focused primarily on the use of ALPR systems within law enforcement, which is aligned with the Government Agencies and Public Services sector. It regulates how these technologies can be used specifically by law enforcement agencies, prohibiting data misuse and ensuring policies are established to protect sensitive information. Additionally, it touches upon the implications for privacy and civil liberties, but does not directly address the Judicial System or other sectors, hence lower relevance in those contexts.


Keywords (occurrence): automated (9)

Summary: The bill establishes regulations for importing nonroad and stationary engines, vehicles, and equipment, emphasizing compliance with EPA emission standards and necessary documentation to prevent unauthorized importations.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily addresses regulations concerning the importation of nonroad and stationary engines, vehicles, and equipment, particularly in relation to compliance with emission standards mandated by the EPA. It does not address the social impacts of AI technology, such as its effects on discrimination, consumer protections, or misinformation. Additionally, while the text emphasizes documentation and regulation for engines and vehicles, it lacks any mention of the management or governance of data that AI systems might utilize. There is no discussion on the integrity, security, or performance benchmarks of AI systems, as the focus remains on emissions compliance rather than AI technologies. Overall, this text bears minimal relevance to the suggested categories concerning AI legislation.


Sector: None (see reasoning)

The text does not include any aspects that address AI-related uses within the various specified sectors. It is focused exclusively on the regulatory requirements for the import of engines and vehicles concerning environmental standards, which may involve equipment but does not relate to sectors like politics, healthcare, or academic institutions. There is no mention of AI applications or regulation across the sectors outlined. Therefore, the relevance to each sector is quite low.


Keywords (occurrence): automated (1)

Description: To impose a tax on certain trading transactions to invest in our families and communities, improve our infrastructure and our environment, strengthen our financial security, expand opportunity and reduce market volatility.
Summary: The "Tax on Wall Street Speculation Act" imposes a tax on specific trading transactions, aiming to fund community investments, enhance infrastructure, and reduce market volatility and financial insecurity.
Collection: Legislation
Status date: June 14, 2023
Status: Introduced
Primary sponsor: Barbara Lee (18 total sponsors)
Last action: Referred to the House Committee on Ways and Means. (June 14, 2023)

Category: None (see reasoning)

The text does not directly address any AI technologies or their implications, which leads to a lack of relevance across the categories. While it discusses trading transactions and taxation, none of the specific language related to AI, algorithms, or related technologies is present, indicating that this bill primarily concerns fiscal policies rather than technological advancements or their societal impacts.


Sector: None (see reasoning)

The bill is centered on financial transactions and taxation rather than addressing AI applications in any sector. There are no mentions of AI utilization in political, judicial, or other contexts listed in the categories. Given that the legislation does not engage with AI in any meaningful way, it shows no relevance to any of the specified sectors.


Keywords (occurrence): algorithm (1)

Description: An Act amending Title 18 (Crimes and Offenses) of the Pennsylvania Consolidated Statutes, in sexual offenses, providing for the offense of unlawful dissemination of artificially generated depiction; and, in minors, further providing for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor.
Summary: This Pennsylvania bill establishes penalties for unlawfully disseminating artificially generated sexual depictions, especially involving minors, and strengthens laws against child sexual abuse and explicit image transmission by minors.
Collection: Legislation
Status date: April 28, 2023
Status: Introduced
Primary sponsor: Ryan Mackenzie (13 total sponsors)
Last action: Referred to JUDICIARY (April 28, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text addresses the unlawful dissemination of artificially generated depictions, which directly pertains to the impact of AI in the context of sexual offenses. This raises concerns regarding potential harm to individuals, particularly minors, and the psychological impact such depictions may have on society. It emphasizes the accountability of those who create and distribute such content and introduces legal definitions crucial for understanding the implications of AI in this realm. Therefore, the relevance to Social Impact is very high. Data Governance is relevant as it connects to the management of data involved in the creation of these depictions, particularly regarding consent and privacy. System Integrity involves ensuring the underlying technology (AI) used to create such depictions is regulated and secure. Robustness is applicable as the legislation could necessitate benchmarks for the technology used in generating these depictions, although this aspect isn't as explicit. Overall, Social Impact receives the highest score due to the focus on harm, while Data Governance also ranks highly for its implications on privacy and accurate data management. System Integrity scores moderately due to its connection to the oversight of AI systems used in this context, and Robustness is less pertinent as it addresses performance benchmarks which aren't a primary concern of the text.


Sector:
Government Agencies and Public Services
Judicial system (see reasoning)

Given the nature of the Act, it is primarily concerned with protecting minors and regulating the dissemination of harmful content, making it highly relevant to Government Agencies and Public Services and to the Judicial System. It addresses societal harm (Social Impact) and the operational jurisdiction of government agencies in enforcing these laws. The Act does not specifically mention applications in academic contexts or political strategies, so relevance to Academic and Research Institutions or Politics and Elections is low. The Judicial System is relevant because the Act establishes the legal framework for addressing these offenses and may influence how AI-generated content is treated in legal contexts. Private Enterprises, Labor, and Employment is only slightly relevant, through potential implications for companies creating AI technologies. Overall, the sectors relevant to this legislation center on protecting individuals and minors from harm and ensuring proper governance of technology use.


Keywords (occurrence): artificial intelligence (6)

Summary: The bill outlines the criteria and application process for organizations to become HUD-approved housing counseling agencies, ensuring they meet specific qualifications, experience, and compliance standards to provide effective housing counseling services.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text provided focuses primarily on the criteria for the approval of housing counseling agencies by HUD and does not explicitly address AI-related topics. While it does mention the use of an 'automated housing counseling client management system,' this reference is insufficient to categorize the text under any of the AI-related categories. Social Impact, Data Governance, System Integrity, and Robustness discuss broader issues relating to AI's influence on society and the management of data within AI systems, which are not relevant to the text. No aspects of the text engage with AI principles in meaningful ways that relate to these categories.


Sector: None (see reasoning)

The text is primarily centered around regulations pertaining to housing counseling agencies rather than the sectors defined. It does mention the automated system used, but this is more about operational compliance than direct interaction with the specific sectors. It does not delve into how AI is regulated or used in politics, public services, healthcare, private enterprise, or the other sectors listed in a way that might establish relevance.


Keywords (occurrence): automated (1)

Summary: The bill establishes record preservation and confidentiality requirements for alternative trading systems, ensuring the protection of subscriber information and regulatory compliance in trading operations.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses recordkeeping and preservation requirements for alternative trading systems. It focuses on safeguarding trading information and ensuring confidentiality, without explicit mention of Artificial Intelligence (AI) or related technologies. The absence of terms relating to AI means that the legislation does not directly address social impact aspects like discrimination or consumer protection through AI, nor does it cover data governance, system integrity, or robustness as they relate to AI. Therefore, the text lacks relevance to the predetermined categories regarding AI legislation.


Sector: None (see reasoning)

The text is focused entirely on the alternative trading systems and does not specifically address the use or regulation of AI in politics, government, healthcare, or any other specified sector. While it outlines recordkeeping and operational requirements for trading systems, it does not engage with themes pertinent to any specific sector relating to AI. Consequently, the relevance to the sectors is minimal, with only slight applicability in the context of governance as it deals with procedural oversight.


Keywords (occurrence): automated (1)

Description: Urging the establishment of a United States Commission on Truth, Racial Healing, and Transformation.
Summary: The bill urges the establishment of a U.S. Commission on Truth, Racial Healing, and Transformation to acknowledge historical injustices, promote racial equality, and address enduring racial inequities.
Collection: Legislation
Status date: May 17, 2023
Status: Introduced
Primary sponsor: Barbara Lee (131 total sponsors)
Last action: Referred to the House Committee on the Judiciary. (May 17, 2023)

Category:
Societal Impact (see reasoning)

The text mentions the 'rapidly expanding use of artificial intelligence and social media', which indicates a recognition of AI's role in contemporary social dynamics and its implications for democracy. This mention establishes only a baseline of relevance across the categories. The primary focus of the text is on societal issues, especially racial healing and transformation, rather than directly on AI systems or their governance. Consequently, 'Social Impact' is the most relevant category given its focus on AI's societal implications, while 'Data Governance', 'System Integrity', and 'Robustness' have only indirect relevance based on the societal context surrounding AI rather than any explicit treatment of those governance factors.


Sector: None (see reasoning)

The text concerns a commission aimed at addressing historical injustices and fostering racial healing. There is a mention of AI's role, but the text does not detail the use of AI in sectors like politics, government, education, or healthcare, nor does it address its applications in those fields. While the mention of AI suggests some relevance to multiple sectors, it does not pertain to any of them in a substantial way. Each sector receives a score of 2 for slight relevance due to the mention of AI, but the document outlines no explicit connections or applications.


Keywords (occurrence): artificial intelligence (1)

Description: Microtransit rideshare pilot program established, microtransit rideshare account established, report required, and money appropriated.
Summary: The bill establishes a microtransit rideshare pilot program in Minnesota to improve mass transit access in underserved areas, utilizing algorithm-based software and potentially autonomous vehicles. It aims to enhance flexibility and efficiency in public transportation.
Collection: Legislation
Status date: Jan. 17, 2023
Status: Introduced
Primary sponsor: Duane Quam (sole sponsor)
Last action: Introduction and first reading, referred to Transportation Finance and Policy (Jan. 17, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The bill establishes a microtransit rideshare program that includes the usage of algorithm-based software and autonomous vehicles, which directly pertains to AI technologies. The focus on flexible routing based on real-time data and automated driving systems indicates the relevance of AI in transportation. The bill also addresses data management for user information and operational efficiencies, relevant for assessing the program's impact on public services and societal implications. However, it does not delve deeply into broader social implications beyond transportation efficiency and does not pose significant challenges to data governance or system integrity beyond standard operational concerns.


Sector:
Government Agencies and Public Services (see reasoning)

The text focuses on the use of AI in transportation through the microtransit rideshare program, and mentions entities such as the Department of Transportation and the University of Minnesota that will be involved in its implementation. The legislation addresses the use of autonomous vehicles and algorithm-based software which indicates a significant impact on government-funded transportation services and public administration. However, it does not extensively touch on issues such as electoral processes or health sectors.


Keywords (occurrence): automated (1) algorithm (5)

Description: Social media; Oklahoma Social Media Transparency Act of 2023; industry requirements; shadow banning; algorithms; effective date.
Summary: The Oklahoma Social Media Transparency Act of 2023 mandates social media companies to disclose and consistently apply content moderation standards, prohibit shadow banning of political candidates, and ensure user rights regarding censorship and algorithmic transparency.
Collection: Legislation
Status date: Feb. 6, 2023
Status: Introduced
Primary sponsor: Terry O'Donnell (sole sponsor)
Last action: Second Reading referred to Rules (Feb. 7, 2023)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text primarily addresses legislation related to social media regulation, particularly focusing on algorithmic transparency and user protections. It explicitly mentions algorithms used for post-prioritization and shadow banning, influencing how content is displayed on social media platforms. This strongly relates to the category of Social Impact, as the act's provisions aim to prevent discrimination and unfair treatment in the dissemination of information, thereby impacting users and society at large. Data Governance is also relevant due to the requirements for user notifications, data access for deplatformed users, and regulations regarding algorithms used by social media platforms. System Integrity is relevant as it mandates consistent application and transparency in algorithmic decision-making, ensuring that these systems can be audited and controlled. Robustness is less relevant; although the act discusses algorithm standards, it focuses more on transparency and user rights instead of developing compliance benchmarks or auditing processes for AI performance. Therefore, Social Impact and Data Governance are most relevant, followed by System Integrity.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation has a significant focus on social media platforms, which fits primarily into the Government Agencies and Public Services sector. It concerns how government regulations interact with digital platforms to ensure transparency and accountability, particularly involving the algorithms that govern content prioritization and censorship. The act's stipulations about user access to information and notifications directly relate to how public services engage with citizens through social media. While it touches on electoral processes through its provisions related to political candidates, it does not primarily center on the direct use of AI in politics, so relevance to the Politics and Elections sector is limited. Algorithm transparency and potential bias could become relevant to the Judicial System if they arise in legal contexts, but that is not a central focus of this legislation, which deals more broadly with social media than with judicial implications. Thus, the most relevant sector is Government Agencies and Public Services, given the act's aim to enhance transparency and accountability in social media.


Keywords (occurrence): algorithm (4)

Summary: The bill outlines minimum technical standards for money and credit handling in Class II gaming systems, ensuring secure and accurate credit acceptance, redemption, and data integrity throughout operations.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text primarily discusses technical standards for money and credit handling in Class II gaming systems and does not directly address the impact of AI on society, the management of data in AI systems, the integrity of AI systems, or the performance benchmarks for AI. While the mention of algorithms in the context of financial operations or electronic random number generation could be tangentially related to algorithmic fairness or security, the focus remains distinctly on regulatory standards specific to financial handling processes. The connection to AI is minimal because there is no reference to AI technologies or their social implications. Therefore, each category receives low relevance scores.


Sector: None (see reasoning)

This text does not specifically address the use of AI in any sector such as politics, healthcare, or any public services. It focuses solely on technical standards for gaming systems, credit handling, and data integrity in financial transactions. Consequently, each sector receives low relevance scores as the legislation does not pertain to AI applications nor regulations in the identified sectors.


Keywords (occurrence): algorithm (1)

Summary: The bill establishes definitions and designations for National Market System (NMS) securities, detailing terms related to trading orders, quotations, and market data to enhance transparency and efficiency in securities transactions.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text outlines various definitions and regulations pertaining to the National Market System (NMS) but does not explicitly address the impact of AI on society. It focuses on the technical terms and procedural aspects of automated trading and market systems rather than societal implications of AI. Therefore, it is not relevant to the Social Impact category. The text lacks discussions related to data management practices particularly in AI contexts, and thus, the Data Governance category is also rated low. Regarding System Integrity, while the text discusses automated traders and their interactions, it does not delve into security or transparency concerns that would fit within the mandate of this category. The Robustness category is deemed irrelevant as the text does not mention performance benchmarks or auditing for AI systems at all. Overall, none of the categories apply significantly to the content of this legislation.


Sector: None (see reasoning)

The text concerns the designation of NMS securities and various definitions related to the operation of automated trading centers. It does not specifically mention any political or electoral processes, nor does it touch upon government services, healthcare, or any employment context. The legislation has no relevance to the judicial system, academic institutions, or international cooperation since it strictly relates to market securities and automated trading processes. Consequently, it does not fit neatly into any predefined sector, leading to a low score across all sectors.


Keywords (occurrence): automated (11) algorithm (1)

Summary: This bill updates regulations governing funds transfers, ensuring compensation through interest payments, defining key terms, and clarifying rights and responsibilities of banks, senders, and beneficiaries in these transactions.
Collection: Code of Federal Regulations
Status date: Jan. 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text predominantly focuses on the provisions and definitions surrounding funds transfers under the Uniform Commercial Code, specifically Article 4A. There is minimal mention of AI-related terms or concepts. The only relevant mention pertains to 'algorithms' within the context of security procedures. However, this mention does not delve into the implications of AI; rather, it treats algorithms as part of security mechanisms without insight into broader societal or ethical considerations of AI's impact, governance, or integrity. The emphasis on financial transactions and regulatory frameworks does not align with AI-centric issues, and the single mention of algorithms does not engage significantly with the implications of AI technology for social impact, data governance, system integrity, or robustness, limiting the relevance of every category.


Sector: None (see reasoning)

The text does not specifically cover sectors like politics, healthcare, or government services. Instead, it addresses the regulation of funds-transfer mechanisms, focusing on banking processes and financial transactions. While the mention of 'algorithms' might suggest notions of system verification related to financial integrity, it does not translate into clear applications or concerns within any specific sector. As such, the overall relevance scores reflect that, while the text touches on these themes, it does not entail substantive engagement with or regulation of AI in the relevant sectors.


Keywords (occurrence): automated (1)

Description: An act to amend Sections 1798.90.5, 1798.90.51, 1798.90.52, 1798.90.53, and 1798.90.55 of, and to add Section 1798.90.56 to, the Civil Code, relating to personal information.
Summary: Assembly Bill No. 1463 mandates stricter retention and access rules for automated license plate recognition (ALPR) data, aiming to protect privacy and limit data use, especially against federal immigration enforcement.
Collection: Legislation
Status date: June 1, 2023
Status: Engrossed
Primary sponsor: Josh Lowenthal (2 total sponsors)
Last action: In committee: Set, second hearing. Hearing canceled at the request of author. (July 2, 2024)

Category:
Societal Impact
Data Governance
System Integrity (see reasoning)

The text discusses automated license plate recognition (ALPR) systems and their regulation concerning the retention and use of personal information. While the focus is largely on data management and privacy policies, it also touches on broader social implications, including how the misuse of ALPR data may harm individuals (particularly marginalized communities) and raise concerns about privacy and surveillance. Thus, it has relevance to Social Impact. Data Governance is very relevant due to the emphasis on policies for data retention, security, and compliance with privacy laws. System Integrity is moderately relevant as it addresses the security procedures and practices surrounding the collection and management of ALPR data. Robustness is slightly relevant since the text references audits and compliance mechanisms, though it does not explicitly discuss performance benchmarks for AI systems.


Sector:
Government Agencies and Public Services (see reasoning)

The legislation primarily pertains to law enforcement and public agency use of ALPR technology, indicating a strong relevance for Government Agencies and Public Services. There are implications for privacy and civil liberties that could connect to the Judicial System, but the text does not specifically address the judicial use of ALPR data. There is limited direct relevance to sectors like Healthcare, Private Enterprises, or Academic and Research Institutions, as they are not the primary focus of the legislation. Overall, the relevant sectors are primarily Government Agencies and Public Services, with less weight for Judicial System.


Keywords (occurrence): automated (7)

Description: A resolution expressing the sense of the Senate that the United States should negotiate strong, inclusive, and forward-looking rules on digital trade and the digital economy with like-minded countries as part of its broader trade and economic strategy in order to ensure that the United States values of democracy, rule of law, freedom of speech, human and worker rights, privacy, and a free and open internet are at the very core of digital governance.
Summary: The bill urges the U.S. Senate to negotiate comprehensive digital trade rules with allied countries, emphasizing democracy, human rights, and internet freedom in digital governance.
Collection: Legislation
Status date: March 30, 2023
Status: Introduced
Primary sponsor: Todd Young (6 total sponsors)
Last action: Referred to the Committee on Finance. (text: CR S1105-1106) (March 30, 2023)

Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)

The text addresses the need for strong rules on digital trade and governance, which incorporates aspects relevant to AI, especially regarding regulations for emerging technologies that include artificial intelligence. The emphasis on privacy protections, worker rights, and the inclusive development of digital trade suggests that AI systems should operate within frameworks that promote ethical considerations and safeguards against misuse. However, the text does not delve deeply into specifics about AI beyond mentioning it briefly as part of future technologies, which could dilute its direct relevance to the established categories. Therefore, the scores vary across categories based on the text's implications around social impact, governance, integrity, and robustness of AI systems.


Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment
International Cooperation and Standards
Hybrid, Emerging, and Unclassified (see reasoning)

The resolution broadly addresses how digital trade intersects with various sectors, such as the economy, worker rights, and trade negotiations. However, it does not explicitly focus on how AI operates within each of these sectors, leading to moderate relevance across them. The resolution hints at how emerging technologies like AI affect trade and governance, but it lacks specific mention or regulation of these technologies within the sectors listed. The scores reflect this moderate level of relevance.


Keywords (occurrence): artificial intelligence (1)

Summary: The bill establishes regulations for high permeability hemodialysis systems, classifying them and their accessories for treating renal failure, managing fluid overload, and ensuring safe blood purification processes.
Collection: Code of Federal Regulations
Status date: April 1, 2023
Status: Issued
Source: Office of the Federal Register

Category: None (see reasoning)

The text focuses primarily on the technical specifications and classifications of hemodialysis systems, which do not explicitly address issues related to AI technology or its implications. While there are mentions of automated processes in relation to the dialysate delivery system and controls, these do not delve into broader issues concerning social impacts or legal governance associated with AI. Thus, relevance to the categories of Social Impact, Data Governance, System Integrity, and Robustness is limited. The text lacks explicit references to AI, algorithms, or automated decision-making that directly resonate with the defined categories.


Sector:
Healthcare (see reasoning)

The text is centered around the classification and regulatory concerns of medical devices, specifically concerning high permeability hemodialysis systems. There is no direct mention of AI application within healthcare; rather, the focus remains on traditional medical device regulation. Although the delivery system encompasses some control mechanisms which may be automated, they are not indicative of AI usage. Thus, the sectors of Politics and Elections, Government Agencies and Public Services, Judicial System, Healthcare, Private Enterprises, Labor and Employment, Academic and Research Institutions, International Cooperation and Standards, Nonprofits and NGOs, and Hybrid, Emerging, and Unclassified receive low relevance scores. AI's role, if suggested, is minimal and does not warrant inclusion in any specific sector categorization.


Keywords (occurrence): automated (1)