4826 results:
Description: For legislation relative to the use of artificial intelligence and other software tools in healthcare decision-making. Advanced Information Technology, the Internet and Cybersecurity.
Summary: The bill establishes regulations for the use of artificial intelligence in healthcare decision-making, ensuring patient data protection, non-discrimination, and that decisions remain under human oversight by qualified professionals.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Michael Moore
(2 total sponsors)
Last action: House concurred (Feb. 27, 2025)
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text explicitly discusses the use and regulation of artificial intelligence in healthcare decision-making processes, focusing on the implications for patient data, algorithmic decision-making, compliance with legal standards, anti-discrimination measures, and oversight of AI tools in healthcare. It outlines mandates for the use of AI in utilization review and requires that these systems comply with state and federal laws, address accuracy, and provide accountability. Therefore, the legislation is considerably relevant to Social Impact due to its potential influence on healthcare quality and patient equity, to Data Governance due to the emphasis on patient data handling and algorithm bias, to System Integrity due to the requirements for oversight and compliance monitoring, and to Robustness due to the need for accuracy and reliability in AI tools.
Sector:
Government Agencies and Public Services
Healthcare (see reasoning)
The text is centered on the use of artificial intelligence in the healthcare sector, focusing on healthcare providers and utilization review organizations. It highlights the role of AI in decision-making within this sector, including the management of health services and the frameworks necessary to govern the application of AI in healthcare. This directly aligns the text with the Healthcare sector and indirectly touches on Government Agencies and Public Services through the oversight mechanisms involved. Given its focused application within healthcare, the relevance is highest for this sector.
Keywords (occurrence): artificial intelligence (14) algorithm (15)
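The "Keywords (occurrence)" field in each record reports how many times each tracked phrase appears in the bill text. As a minimal, hypothetical sketch of how such tallies could be produced, assuming simple case-insensitive whole-phrase matching (the tracker's actual keyword list and matching rules are not documented here):

import re

# Hypothetical keyword list mirroring the phrases shown in these listings; the
# real tracker's list and matching rules are assumptions, not documented here.
KEYWORDS = [
    "artificial intelligence", "machine learning", "neural network",
    "deep learning", "automated", "deepfake", "synthetic media",
    "large language model", "foundation model", "chatbot",
    "recommendation system", "algorithm", "autonomous vehicle",
]

def keyword_occurrences(text):
    """Count case-insensitive, whole-phrase occurrences of each keyword."""
    counts = {}
    for phrase in KEYWORDS:
        pattern = r"\b" + re.escape(phrase) + r"\b"
        hits = len(re.findall(pattern, text, flags=re.IGNORECASE))
        if hits:
            counts[phrase] = hits
    return counts

# Example: render a tally in the listing's "keyword (count)" format.
sample = "This act regulates artificial intelligence and any algorithm used in utilization review."
print(" ".join(f"{k} ({v})" for k, v in keyword_occurrences(sample).items()))
# -> artificial intelligence (1) algorithm (1)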
Description: Making improvements to transparency and accountability in the prior authorization determination process.
Summary: The bill aims to enhance transparency and accountability in the prior authorization process for health care services and prescription drugs by establishing strict timelines, requiring human oversight in decision-making, and detailing the use of artificial intelligence.
Collection: Legislation
Status date: Jan. 21, 2025
Status: Introduced
Primary sponsor: Tina Orwall
(6 total sponsors)
Last action: Executive session scheduled, but no action was taken in the Senate Committee on Ways & Means at 1:30 PM. (Feb. 28, 2025)
Societal Impact
Data Governance (see reasoning)
The text extensively discusses the role of artificial intelligence in the prior authorization process for healthcare services and insurance. It highlights the need for transparency, accountability, and human oversight in AI decision-making. Given this emphasis on societal implications, particularly concerning healthcare outcomes and the responsible use of AI technology, the relevance to the 'Social Impact' category is significant. The legislation seeks to protect patients from the potential harms of algorithm-driven decisions made without oversight, clearly aligning with the principles of this category. The updates and requirements also relate to the ethical use of AI, fairness, and discrimination, enhancing its relevance. In terms of 'Data Governance', the text's discussion of compliance with state and federal laws, accuracy in decision-making, and ensuring AI systems do not discriminate also offers substantial connections to how data is managed and used ethically. However, 'System Integrity' and 'Robustness' are less evident, as the text does not focus on security, performance benchmarks, or auditing. The primary focus remains on ethical decision-making and transparency in the healthcare context.
Sector:
Healthcare (see reasoning)
The text is largely relevant to the 'Healthcare' sector as it specifically addresses prior authorization processes within health insurance practices using AI technologies. It discusses the implications of AI in clinical decision-making and the necessary human oversight, thereby ensuring medical professionals remain central to these processes. The provisions for utilizing AI and algorithms in health-related decisions are specified, marking this legislation as pertinent to the healthcare industry. There are no explicit connections to the other sectors identified, such as politics, judicial systems, or academic institutions. The focus is solely on the healthcare setting and its relevant regulatory frameworks.
Keywords (occurrence): artificial intelligence (41) machine learning (6) automated (3) foundation model (3)
Description: For legislation to establish the Massachusetts Information Privacy and Security Act. Economic Development and Emerging Technologies.
Summary: The bill, titled "Massachusetts Information Privacy and Security Act," aims to enhance economic development in Massachusetts through strengthened data privacy protections for individuals, regulating the collection and sale of personal information.
Collection: Legislation
Status date: Feb. 27, 2025
Status: Introduced
Primary sponsor: Barry Finegold
(sole sponsor)
Last action: House concurred (Feb. 27, 2025)
Data Governance
System Integrity (see reasoning)
The legislation primarily addresses data privacy and security, particularly in the context of personal information. While it does not explicitly mention AI, it implies the presence of AI technologies through references to automated processing (Section 154) and profiling (Section 170). The management and protection of personal data are crucial as AI systems often rely on large datasets. Data privacy is inherently linked to how AI models are trained and utilized, making it relevant for Data Governance. However, there's less explicit focus on the social implications and effectiveness of AI systems. Therefore, the scores reflect strong ties to Data Governance but moderate relevance overall to other categories.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text contains significant references related to the management and protection of personal data, essential for sectors like Government Agencies and Public Services, where data privacy is paramount. There is also a possible connection to Private Enterprises as businesses are affected by these regulations in handling consumer data. However, it doesn't directly discuss the application of AI in specific government or private settings, resulting in lower scores for the specific sectors.
Keywords (occurrence): machine learning (1) automated (4)
Description: Establishing the Privacy Protection and Enforcement Unit within the Division of Consumer Protection in the Office of the Attorney General; establishing a data broker registry; requiring certain data brokers to register each year with the Comptroller; and imposing a tax on the gross income of certain data brokers for taxable years beginning after December 31, 2026.
Summary: The "Building Information Guardrails Data Act of 2025" establishes a data broker registry and imposes a gross income tax on certain data brokers while creating a Privacy Protection Unit to enforce consumer rights.
Collection: Legislation
Status date: Feb. 3, 2025
Status: Introduced
Primary sponsor: Katie Hester
(6 total sponsors)
Last action: Hearing 3/05 at 1:00 p.m. (Budget and Taxation) (Feb. 5, 2025)
Societal Impact
Data Governance (see reasoning)
The text primarily establishes measures to enhance privacy protection related to data brokers and outlines frameworks for the registry and taxation of such entities. The presence of phrases such as 'Artificial Intelligence' and 'cybersecurity' indicates a consideration of the implications of AI systems for consumer data privacy and security, so the text is relevant to the Social Impact and Data Governance categories. However, it does not directly address the security measures of AI systems or standards for their robustness, making System Integrity and Robustness less relevant.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The text refers largely to data brokers and consumer protection activities within the context of data collection and privacy. This relates particularly to Government Agencies and Public Services due to the involvement of the Attorney General and the state office overseeing enforcement of these laws. It does not specifically mention sectors like healthcare or politics, hence the lower relevance scores for those areas. Other sectors like Private Enterprises, Labor, and Employment could be linked but are not directly mentioned, leading to moderate relevance. Connections to the Judicial System and other sectors are weak, as the text does not explicitly address how AI affects those realms.
Keywords (occurrence): artificial intelligence (1) automated (1)
Description: Relating to use of artificial intelligence in utilization review conducted for health benefit plans.
Summary: This bill regulates the use of artificial intelligence in health benefit plan utilization reviews, ensuring it complements, rather than replaces, physician decision-making and protects patient rights.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Suleman Lalani
(sole sponsor)
Last action: Filed (March 7, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text primarily addresses the use of artificial intelligence in healthcare, particularly in the context of utilization reviews for health benefit plans. It outlines specific requirements for how AI algorithms must be designed and used, ensuring fairness, transparency, and the avoidance of discrimination, which falls under the category of 'Social Impact.' Additionally, the legislation describes the requirements for accuracy, review, and oversight of AI systems, linking to 'Data Governance' and 'System Integrity.' Therefore, these categories are highly relevant as the text emphasizes accountability and safety concerning AI's role in healthcare processes.
Sector:
Healthcare (see reasoning)
The text is directly related to the healthcare sector, as it specifically defines how AI should be utilized by utilization review agents within health benefit plans, assessing clinical cases based on AI algorithms. The focus is on regulatory compliance and the ethical use of AI in healthcare settings. Given the explicit mention of healthcare applications and requirements in the context of AI utilization, a high score in this sector is warranted.
Keywords (occurrence): artificial intelligence (3) algorithm (12)
Description: Establishes the crime of aggravated harassment by means of electronic or digital communication; provides for a private right of action for the unlawful dissemination or publication of deep fakes, which are digitized images which are altered to incorporate a person's face or their identifiable body part onto an image and such image depicts a pornographic or lewd sex act or graphic violence.
Summary: The bill establishes aggravated harassment via electronic communication and allows individuals to take legal action against unauthorized dissemination of deep fakes, protecting personal privacy and reducing harmful online behavior.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Jessica Scarcella-Spanton
(sole sponsor)
Last action: REFERRED TO CODES (March 7, 2025)
Societal Impact
Data Governance (see reasoning)
The text explicitly defines and addresses the creation and dissemination of deep fakes, focusing on the implications of these technologies for harassment and privacy violations. This indicates clear relevance to the Social Impact category, as it discusses potential harms (psychological, material) caused by AI-driven technologies like deepfakes. The mention of 'machine learning' and 'artificial intelligence' in the context of digitization also suggests implications for Data Governance regarding how data related to individuals can be manipulated or misused. However, the text does not engage deeply with the integrity of AI systems or with benchmarks for robustness. Thus, while there is some discussion of integrity in relation to harassment through misinformation, the core focus remains on social harms and data rules, leading to higher scores in those categories than in the others.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The text specifically highlights the use of AI technologies in the production of deep fakes, which can have significant implications for privacy and individual rights. Various sections discuss the legal actions individuals can take regarding unauthorized image use, indicating a concern for protecting individuals in digital communication contexts. The mentions of machine learning and artificial intelligence suggest relevance to the Government Agencies and Public Services sector, as they imply the regulatory frameworks needed to protect citizens. Other sectors have only marginal relevance, as the text does not directly discuss judicial systems, healthcare, or specific applications in those sectors. Hence, the scores reflect strong relevance to sectors dealing with individual rights and government regulatory oversight.
Keywords (occurrence): artificial intelligence (1) machine learning (2) deepfake (15)
Description: The purpose of this bill is to include in the high-technology property valuation statute the hosting and processing of electronic data as part of a data center operation and high-performance data computing to process data and perform complex computation and solve algorithms at high speeds in connection with digital, blockchain, and/or artificial intelligence technologies.
Summary: This bill amends property valuation laws in West Virginia to classify certain electronic data processing and high-technology properties, such as data centers, under specialized valuation rules to assess their tax value based on salvage value.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Daniel Linville
(3 total sponsors)
Last action: To House Finance (March 7, 2025)
Societal Impact
System Integrity (see reasoning)
The text explicitly mentions artificial intelligence technologies in the context of property valuation for assets used in high-performance data computing, particularly for electronic data processing services. This ties directly to the potential social ramifications of how AI technologies and high-tech property are valued and taxed, especially as these technologies affect economies, job markets, and industry dynamics. These references support a strong connection to the Social Impact category. The text also refers to algorithms and complex computations, which relate to System Integrity in the context of technology oversight and safety. However, the text is less concerned with data governance, robustness, or specific security measures, given its focus on valuation and taxation rather than operational integrity, benchmarks, or auditing procedures. Hence, while some aspects touch on robustness, the overall focus aligns more closely with social implications, making that connection only moderately relevant.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation discusses high-technology property in the context of taxation related to electronic data processing, relevant to the private sector but not specifically targeted at politics, public services, healthcare, or other defined sectors. The bill does hint at applications involving AI, which could indirectly affect public services or private enterprises, particularly in the taxation and valuation of technology assets, but it lacks direct applicability to any particular sector beyond general private enterprise considerations. Therefore, while the relevance to private enterprises is moderately significant, it is not strong enough to classify it as directly pertinent to specific sectors like healthcare or government services.
Keywords (occurrence): artificial intelligence (2)
Description: The purpose of this bill is to require law enforcement officers and political subdivision officials from irresponsibly utilizing certain surveillance technologies and artificial intelligence facial recognition technologies, setting forth legislative findings, providing definitions, establishing parameters for the responsible and constitutional use of these technologies.
Summary: The bill establishes the "Responsible Use of Facial Recognition Act" in West Virginia, regulating the use of surveillance and facial recognition technologies by law enforcement to protect citizens' constitutional rights against unreasonable searches. It mandates responsible use, oversight, and American-developed technology to ensure civil liberties are safeguarded.
Collection: Legislation
Status date: March 4, 2025
Status: Introduced
Primary sponsor: Chris Rose
(4 total sponsors)
Last action: To Government Organization (March 4, 2025)
Societal Impact
System Integrity (see reasoning)
The text explicitly discusses the use of artificial intelligence facial recognition technologies in law enforcement and the importance of regulating these technologies to protect constitutional rights, indicating a strong focus on social implications of AI. It highlights the need for accountability and guidelines to mitigate potential misuse by government officials, which directly aligns with the Social Impact category. The focus on ensuring secure and appropriate use of AI in law enforcement also suggests relevance to System Integrity, as it establishes parameters for a responsible framework regarding the operation of AI technologies in a sensitive context. However, while aspects of data management and accuracy are covered, the primary focus is not on data governance or robustness, thus those categories are less relevant.
Sector:
Government Agencies and Public Services (see reasoning)
This legislation has a direct impact on law enforcement, specifying how AI technologies should be managed within this sector. It is particularly relevant to government operations, as it discusses protocols for the use of facial recognition technologies, thus heavily involving Government Agencies and Public Services. While there could be indirect implications for privacy and civil liberties regarding the Judicial System, the primary focus on law enforcement and not on judicial processes weakens its relevance. It does not touch on healthcare, private enterprises, academic institutions, international cooperation, or NGOs, as it strictly pertains to law enforcement practices. Therefore, the primary score is for Government Agencies and Public Services.
Keywords (occurrence): artificial intelligence (10) algorithm (1)
Description: An Act To Criminalize The Unlawful Dissemination Or Publication Of An Intimate Or Nonintimate Image Or Audio Created Or Altered By Digitization Where The Image Or Audio Is Disseminated Or Published With Intent To Cause Harm To The Emotional, Financial Or Physical Welfare Of Another Person And The Actor Knew Or Reasonably Should Have Known That The Person Depicted Did Not Consent To Such Dissemination Or Publication; To Define Terms; To Provide That The Crimes Include The Use Of Images Or Audi...
Summary: The bill criminalizes the unauthorized dissemination of digitally altered intimate images or audio intended to harm an individual's welfare without their consent. It establishes penalties for offenders and outlines specific exceptions.
Collection: Legislation
Status date: March 4, 2025
Status: Other
Primary sponsor: Chris Johnson
(3 total sponsors)
Last action: Died In Committee (March 4, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
The key AI-related term in the text is 'artificial intelligence,' which is mentioned in the context of altering images and audio through computational means. Given that the bill criminalizes unauthorized digital alterations possibly facilitated by AI technologies, it has major implications for social issues like consent, confidentiality, and emotional harm, which underpin the Social Impact category. There is also a connection to System Integrity, since the legislation implicitly suggests the need for integrity in AI-generated media, highlighting the potential for harm in disseminating altered content. Data Governance may be relevant due to privacy considerations associated with data used in AI systems for image and audio processing. However, the emphasis on societal harm strongly supports Social Impact as the most relevant category here. Robustness does not apply, as this legislation does not focus on benchmarks or performance standards for AI systems.
Sector:
Government Agencies and Public Services
Private Enterprises, Labor, and Employment (see reasoning)
The legislation primarily addresses issues related to digital media, privacy violations, and consent in disseminating altered images or audio. Therefore, it is relevant to the Private Enterprises, Labor, and Employment sector, as media businesses may face legal consequences for the use of AI-generated media content. There is also minor relevance to Government Agencies and Public Services, since the regulations set forth may guide public institutions in handling cases of non-consensual media publication. The content does not touch on healthcare, judicial system uses of AI, or other designated sectors significantly enough to merit a higher score. Additionally, the act carries broader implications for all sectors regarding the ethical and legal use of AI, though it does not legislate for them directly.
Keywords (occurrence): artificial intelligence (1)
Description: Amend The South Carolina Code Of Laws By Adding Section 38-59-23 So As To Require A Licensed Physician To Supervise And Review Healthcare Coverage Decisions Derived From The Use Of An Automated-decision Making Tool.
Summary: The bill mandates that licensed physicians supervise and review healthcare coverage decisions made by automated decision-making tools using AI, ensuring human oversight in healthcare service authorization.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Ronnie Sabb
(sole sponsor)
Last action: Referred to Committee on Banking and Insurance (March 11, 2025)
Societal Impact
System Integrity (see reasoning)
The text primarily revolves around the regulation of AI in healthcare decision-making processes. It mentions the necessity for licensed physicians to supervise AI-derived healthcare coverage decisions, which directly relates to the societal implications of AI's role in the healthcare sector, the responsibilities of professionals in ensuring human oversight, and the accountability for AI decisions. Given that the text addresses the impact on individuals interacting with AI in a healthcare context, it fits squarely within the 'Social Impact' category. It also touches on 'System Integrity' by imposing oversight and review requirements on the decision-making processes involved, ensuring that automated tools don't operate in isolation from human judgment. The prevention of actions based solely on AI outputs further reinforces the need for human involvement, resonating with both the societal implications and system oversight that are crucial when discussing the integrity of AI systems. However, the text does not specifically address data governance issues like bias, security, or data usage, nor does it provide benchmarks for the robustness of AI systems. Consequently, the 'Robustness' category is less relevant.
Sector:
Healthcare (see reasoning)
The bill is directly aimed at the healthcare sector as it concerns the supervision of automated decision-making tools used for healthcare coverage decisions. It mandates that healthcare professionals oversee AI systems to ensure responsible use, thereby directly impacting healthcare practices and patient care quality. Given that it involves the ethical and practical implications of AI in healthcare, it is highly relevant to the 'Healthcare' sector. While there may be indirect applications to other sectors such as 'Government Agencies and Public Services' due to the public nature of healthcare, the focus of the bill remains squarely within healthcare.
Keywords (occurrence): artificial intelligence (1) automated (4)
Description: Imposes liability for damages caused by a chatbot impersonating licensed professionals or giving any medical or psychological advice.
Summary: The bill imposes liability on chatbot proprietors for damages caused by chatbots impersonating licensed professionals or giving medical/psychological advice, ensuring consumer protection and accountability.
Collection: Legislation
Status date: March 6, 2025
Status: Introduced
Primary sponsor: John Zaccaro
(sole sponsor)
Last action: referred to consumer affairs and protection (March 6, 2025)
Societal Impact
Data Governance
System Integrity (see reasoning)
This text is very relevant to the Social Impact category because it addresses the implications of AI chatbots impersonating licensed professionals and providing potentially harmful advice. It holds system developers accountable for the outputs of chatbots, which ties directly to consumer protection and the psychological safety of individuals who might seek medical or psychological advice. It outlines specifics like liability for damages, which is crucial for public trust and fairness in AI interactions. The measures proposed aim to prevent harm resulting from misinformation and misuse of AI, aligning with the goals of protecting society from AI-driven risks. In terms of Data Governance, the text doesn’t directly address data management or collection processes, but it does imply the need for accurate identity representation in chatbot interactions, thus warranting a moderate score. System Integrity is moderately relevant as it discusses liability and the responsibilities of chatbot proprietors, implying standards for chatbot operations, although it doesn’t specifically cover security mechanisms or human oversight. Lastly, the relevance to Robustness is limited; while it discusses regulation, it doesn’t engage directly with performance benchmarks or auditing processes, placing it on the lower end of the scale for this category.
Sector:
Healthcare
Private Enterprises, Labor, and Employment (see reasoning)
The legislation is highly relevant to the Healthcare sector, as it explicitly prohibits chatbots from providing medical or psychological advice, which directly impacts how AI can be used in healthcare settings. This aligns closely with regulations around AI in healthcare and the management of health-related data. In the context of Private Enterprises, Labor, and Employment, the legislation is also relevant, as businesses utilizing chatbots will need to adapt their practices to comply with this liability regulation, impacting their operations and employment practices regarding who is responsible for chatbot management. Although there is a general relevance of AI across sectors, the primary focus remains on healthcare and business implications. It doesn’t specifically address political implications, judicial applications, or educational contexts, leading to lower scores for those sectors.
Keywords (occurrence): artificial intelligence (1) chatbot (11)
Description: Establishes the crime of aggravated harassment by means of electronic or digital communication; provides for a private right of action for the unlawful dissemination or publication of deep fakes, which are digitized images which are altered to incorporate a person's face or their identifiable body part onto an image and such image depicts a pornographic or lewd sex act or graphic violence.
Summary: The bill establishes aggravated harassment through electronic communication, targeting unlawful deep fake dissemination. It enables victims to seek legal action and damages against offenders for privacy violations.
Collection: Legislation
Status date: March 3, 2025
Status: Introduced
Primary sponsor: Steven Otis
(2 total sponsors)
Last action: referred to science and technology (March 3, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text is focused on defining and criminalizing the use of deep fakes, which are a direct manifestation of artificial intelligence technology. It explicitly mentions AI and machine learning in the context of digitization, indicating its relevance to the use of AI in creating manipulated images. The legislation discusses accountability and harm caused by such misuse, which directly aligns with societal implications and ethical considerations associated with AI.
Sector:
Politics and Elections
Government Agencies and Public Services
Judicial system
Hybrid, Emerging, and Unclassified (see reasoning)
The legislation addresses the regulation of AI technologies (specifically deep fakes), which can significantly impact various sectors. It primarily concerns Politics and Elections through the potential for misuse in campaigns and Government Agencies and Public Services through public safety and digital communication regulations, and it potentially influences the Judicial System through the legal ramifications of harassment. Overall, the text shapes the landscape of AI regulation across various sectors.
Keywords (occurrence): artificial intelligence (1) machine learning (2) deepfake (15)
Description: Tells state agencies to buy common software through an online portal. Allows the agencies to buy the software through normal buying processes. Tells a state agency to start a program to give grants and loans to other state agencies so that they can replace their old computers and software, and for other purposes. Says that the agencies must pay back the moneys from cost savings. (Flesch Readability Score: 64.2). Requires contracting agencies to purchase common off-the-shelf software or other ...
Summary: Senate Bill 1089 establishes a framework for Oregon state agencies to purchase standardized software through a centralized online portal and creates a fund for grants and loans to modernize IT systems.
Collection: Legislation
Status date: Feb. 25, 2025
Status: Introduced
Primary sponsor: Aaron Woods
(sole sponsor)
Last action: Referred to Information Management and Technology. (Feb. 25, 2025)
Societal Impact
System Integrity
Data Robustness (see reasoning)
The text primarily outlines requirements for state agencies to purchase common software and improve information technology systems. It includes provisions for grants and loans aimed at replacing outdated technology and facilitating the adoption of artificial intelligence (AI) in state operations. The introduction of a program to develop or implement AI signals its importance to the legislation. However, the text is largely administrative in nature, concerning procurement processes, and does not delve deeply into broader societal impacts, data governance, system integrity, or performance benchmarks. Thus, it holds varying relevance across the categories.
Sector:
Government Agencies and Public Services
Hybrid, Emerging, and Unclassified (see reasoning)
The text involves government operations focusing on the procurement of information technology products and incorporates AI into state agencies' operations. The creation of grants to support modernization, including AI projects, highlights a significant governmental adaptation to technology. However, it does not engage with more nuanced aspects of AI in politics, judiciary, healthcare, or other sectors; it is largely centered on administrative agency operations. Consequently, it bears moderate relevance to a few sectors but less so to others.
Keywords (occurrence): artificial intelligence (1) machine learning (1)