4673 results:
Description: Safeguard Health Ins. Utilization Reviews
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Gale Adcock
(sole sponsor)
Last action: Filed (March 13, 2025)
Description: Relating to the use of an automated employment decision tool by an employer to assess a job applicant's fitness for a position.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Salman Bhojani
(sole sponsor)
Last action: Filed (March 13, 2025)
Description: Relating to the use of an automated employment decision tool by a state agency to assess a job applicant's fitness for a position.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Jose Menendez
(sole sponsor)
Last action: Filed (March 13, 2025)
Description: Prohibits the provision of an artificial intelligence companion to a user unless such artificial intelligence companion contains a protocol for addressing possible suicidal ideation or self-harm expressed by a user, possible physical harm to others expressed by a user, and possible financial harm to others expressed by a user; requires certain notifications to certain users regarding crisis service providers and the non-human nature of such companion models.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: Clyde Vanel
(sole sponsor)
Last action: referred to consumer affairs and protection (March 13, 2025)
Description: A RESOLUTION creating the Senate Study Committee on Artificial Intelligence and Digital Currency; and for other purposes.
Collection: Legislation
Status date: March 13, 2025
Status: Introduced
Primary sponsor: John Albers
(6 total sponsors)
Last action: Senate Hopper (March 13, 2025)
Description: Altering the selection of the membership and chair of the Maryland Cybersecurity Council.
Collection: Legislation
Status date: March 12, 2025
Status: Engrossed
Primary sponsor: Brian Feldman
(7 total sponsors)
Last action: Third Reading Passed (47-0) (March 12, 2025)
Description: Camera usage prohibited for traffic safety enforcement, and previous appropriation cancelled.
Collection: Legislation
Status date: March 12, 2025
Status: Introduced
Primary sponsor: Drew Roach
(6 total sponsors)
Last action: Introduction and first reading, referred to Transportation Finance and Policy (March 12, 2025)
Description: Relating to the use of artificial intelligence by health care providers.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Salman Bhojani
(sole sponsor)
Last action: Filed (March 11, 2025)
Description: Relating to an automated artificial intelligence review of library material purchased by public schools; providing an administrative penalty.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Hillary Hickland
(sole sponsor)
Last action: Filed (March 11, 2025)
Description: Amend The South Carolina Code Of Laws By Adding Section 38-59-23 So As To Require A Licensed Physician To Supervise And Review Healthcare Coverage Decisions Derived From The Use Of An Automated-decision Making Tool.
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Ronnie Sabb
(sole sponsor)
Last action: Referred to Committee on Banking and Insurance (March 11, 2025)
Description: AN ACT relating to health; requiring a public school to provide certain information relating to mental health to pupils; prohibiting certain uses of artificial intelligence in public schools; requiring that a pupil be allowed credit or promotion to the next higher grade despite absences from school in certain circumstances; deeming certain absences from school to be approved absences; imposing certain restrictions relating to the marketing and programming of artificial intelligence systems; p...
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Jovan Jackson
(2 total sponsors)
Last action: Read first time. Referred to Committee on Education. To printer. (March 11, 2025)
Description: AI/Ban Deceptive Ads
Collection: Legislation
Status date: March 11, 2025
Status: Introduced
Primary sponsor: Harry Warren
(6 total sponsors)
Last action: Filed (March 11, 2025)
Description: High-risk artificial intelligence; development, deployment, and use; civil penalties. Creates requirements for the development, deployment, and use of high-risk artificial intelligence systems, defined in the bill, and civil penalties for noncompliance, to be enforced by the Attorney General. The bill has a delayed effective date of July 1, 2026.
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Michelle Maldonado
(24 total sponsors)
Last action: Fiscal Impact Statement from Department of Planning and Budget (HB2094) (March 7, 2025)
Description: COMMERCIAL LAW -- GENERAL REGULATORY PROVISIONS -- ARTIFICIAL INTELLIGENCE ACT - Establishes regulations to ensure the ethical development, integration, and deployment of high-risk AI systems, particularly those influencing consequential decisions.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Louis Dipalma
(7 total sponsors)
Last action: Introduced, referred to Senate Artificial Intelligence & Emerging Technol (March 7, 2025)
Category:
Societal Impact
Data Governance
System Integrity
Data Robustness (see reasoning)
The text heavily emphasizes regulating high-risk AI systems and mitigating algorithmic discrimination, and its attention to fairness and bias places it squarely under Societal Impact. Data Governance is also prominent, as the bill addresses the management of data used in AI systems, including transparency measures for consumers. System Integrity is relevant because the text stresses human oversight and responsible deployment by developers and integrators. Data Robustness is present to an extent through calls for performance measures and assessments of high-risk AI systems, though it is less emphasized than the other three categories. Overall, the text strongly integrates AI concerns into societal, governance, and systemic contexts.
Sector:
Government Agencies and Public Services
Judicial system
Private Enterprises, Labor, and Employment (see reasoning)
The text predominantly affects Government Agencies and Public Services because of its regulatory nature, which directly shapes how the government will oversee AI implementations. It also touches on Private Enterprises, Labor, and Employment through its focus on consequential decisions in hiring practices and the associated risk of employment discrimination. The Judicial System is slightly implicated by the mention of legal repercussions tied to AI outputs affecting consumers. Overall, however, the text remains grounded in an overarching governmental context of AI oversight.
Keywords (occurrence): artificial intelligence (154) automated (2) algorithm (2)
Description: Autonomous driving systems; regulation; work group; report. Directs the Chair of the Innovations Subcommittee of the House Committee on Transportation to convene a work group of relevant stakeholders to develop draft legislation governing the regulation of autonomous driving systems and to report its findings to the General Assembly no later than November 30, 2025.
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Jackie Glass
(sole sponsor)
Last action: Fiscal Impact Statement from Department of Planning and Budget (HB2627) (March 7, 2025)
Description: The purpose of this bill is to include in the high-technology property valuation statute the hosting and processing of electronic data as part of a data center operation and high-performance data computing to process data and perform complex computation and solve algorithms at high speeds in connection with digital, blockchain, and/or artificial intelligence technologies.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Daniel Linville
(3 total sponsors)
Last action: To House Finance (March 7, 2025)
Category:
Societal Impact
System Integrity (see reasoning)
The text explicitly mentions artificial intelligence technologies in the context of property valuation for assets used in high-performance data computing, particularly electronic data processing services. This ties directly to the social ramifications of how AI technologies and high-tech property are valued and taxed, especially as these technologies affect economies, job markets, and industry dynamics, supporting a strong connection to the Societal Impact category. The references to algorithms and complex computations relate to System Integrity in the context of technology oversight and safety. However, the text is less concerned with data governance, robustness, or specific security measures, given its focus on valuation and taxation rather than operational integrity, benchmarks, or auditing procedures. While some aspects touch on robustness, the overall focus aligns most closely with social implications, making the text moderately relevant in this context.
Sector:
Private Enterprises, Labor, and Employment (see reasoning)
The legislation addresses high-technology property in the context of taxation of electronic data processing, which is relevant to the private sector but not specifically targeted at politics, public services, healthcare, or other defined sectors. The bill hints at applications involving AI, which could indirectly affect public services or private enterprises through the taxation and valuation of technology assets, but it lacks direct applicability to any particular sector beyond general private enterprise considerations. Therefore, while the relevance to private enterprises is moderate, it is not strong enough to classify the bill as directly pertinent to specific sectors such as healthcare or government services.
Keywords (occurrence): artificial intelligence (2)
Description: Revise election laws regarding disclosure requirements for the use of AI in elections
Collection: Legislation
Status date: March 7, 2025
Status: Engrossed
Primary sponsor: Janet Ellis
(sole sponsor)
Last action: (S) Transmitted to House (March 7, 2025)
Description: Relating to use of artificial intelligence in utilization review conducted for health benefit plans.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Suleman Lalani
(sole sponsor)
Last action: Filed (March 7, 2025)
Category:
Societal Impact
Data Governance
System Integrity (see reasoning)
This text primarily addresses the use of artificial intelligence in healthcare, particularly in utilization reviews for health benefit plans. It sets out specific requirements for how AI algorithms must be designed and used, ensuring fairness, transparency, and the avoidance of discrimination, which falls under Societal Impact. The legislation also specifies requirements for accuracy, review, and oversight of AI systems, linking it to Data Governance and System Integrity. These categories are therefore highly relevant, as the text emphasizes accountability and safety concerning AI's role in healthcare processes.
Sector:
Healthcare (see reasoning)
The text is directly related to the healthcare sector, as it specifically defines how AI should be utilized by utilization review agents within health benefit plans, assessing clinical cases based on AI algorithms. The focus is on regulatory compliance and the ethical use of AI in healthcare settings. Given the explicit mention of healthcare applications and requirements in the context of AI utilization, a high score in this sector is warranted.
Keywords (occurrence): artificial intelligence (3) algorithm (12)
Description: Elections; political campaign advertisements; synthetic media; penalty. Prohibits electioneering communications containing synthetic media, as those terms are defined in the bill, from being published or broadcast without containing the following conspicuously displayed statement: "This message contains synthetic media that has been altered from its original source or artificially generated and may present conduct or speech that did not occur." The bill creates a civil penalty not to exceed $...
Collection: Legislation
Status date: March 7, 2025
Status: Enrolled
Primary sponsor: Scott Surovell
(2 total sponsors)
Last action: Bill text as passed Senate and House (SB775ER) (March 7, 2025)
Category:
Societal Impact
Data Governance
System Integrity (see reasoning)
The text explicitly addresses the dissemination of artificial audio and visual media in elections, indicating a significant societal impact by seeking to mitigate misinformation and deception in political campaigns. It establishes penalties for misleading uses of synthetic media, directly tied to accountability measures and consumer protections that aim to keep voters from being misled by AI-generated content, making it highly relevant to the Societal Impact category. Data Governance is relevant because the bill regulates disclosure of AI-generated media, ensuring that voters receive accurate information about the nature of the media they encounter; however, it does not delve deeply into data management, which limits its relevance. The System Integrity category pertains to the required disclosures and the rules for online platforms regarding synthetic media, focusing on the integrity of media presented in political contexts rather than on inherent security or control issues. The Data Robustness category is less relevant, since the text says little about benchmarks, auditing, or performance measures for AI systems.
Sector:
Politics and Elections
Government Agencies and Public Services (see reasoning)
The text is highly relevant to the Politics and Elections sector, as it specifically regulates the use of AI-generated content in electoral contexts and aims to combat misinformation and maintain electoral integrity through the regulation of synthetic media. The Government Agencies and Public Services sector is moderately relevant, since governmental bodies would regulate and enforce these media requirements, though the bill's focus remains on political campaigns. The Judicial System sector does not directly apply, as the text addresses legislation rather than its application in legal proceedings. The remaining sectors (Healthcare; Private Enterprises, Labor, and Employment; Academic and Research Institutions; International Cooperation and Standards; Nonprofits and NGOs; and Hybrid, Emerging, and Unclassified) do not relate to the text, which does not address the use or regulation of AI outside the electoral context.
Keywords (occurrence): synthetic media (5)
Description: Establishes the crime of aggravated harassment by means of electronic or digital communication; provides for a private right of action for the unlawful dissemination or publication of deep fakes, which are digitized images which are altered to incorporate a person's face or their identifiable body part onto an image and such image depicts a pornographic or lewd sex act or graphic violence.
Collection: Legislation
Status date: March 7, 2025
Status: Introduced
Primary sponsor: Jessica Scarcella-Spanton
(sole sponsor)
Last action: REFERRED TO CODES (March 7, 2025)
Category:
Societal Impact
Data Governance (see reasoning)
The text explicitly defines and addresses the creation and dissemination of deepfakes, focusing on the implications of these technologies for harassment and privacy violations. This indicates clear relevance to the Societal Impact category, as the bill addresses potential psychological and material harms caused by AI-driven technologies such as deepfakes. The mention of 'machine learning' and 'artificial intelligence' in the context of digitization also suggests implications for Data Governance, concerning how data related to individuals can be manipulated or misused. However, the text does not deeply engage with the integrity of AI systems or with benchmarks for robustness. Thus, while there is some discussion of integrity as it relates to harassment through misinformation, the core focus on social harms and data rules remains paramount, leading to higher scores in those categories than in the others.
Sector:
Government Agencies and Public Services
Judicial system (see reasoning)
The text specifically highlights the use of AI technologies in producing deepfakes, which carries significant implications for privacy and individual rights. Its provisions on legal actions individuals can take over unauthorized image use, including a private right of action and a new criminal offense, indicate a concern for protecting individuals in digital communication contexts and implicate the Judicial System. The references to machine learning and artificial intelligence also suggest relevance to the Government Agencies and Public Services sector, since regulatory frameworks would be needed to protect citizens. Other sectors are only marginally relevant, as the text does not address healthcare or other specific applications. The scores therefore reflect strong relevance to the sectors concerned with individual rights and government regulatory oversight.
Keywords (occurrence): artificial intelligence (1) machine learning (2) deepfake (15)