By Yoel Molina, Esq., Owner and Operator of the Law Office of Yoel Molina, P.A.
Unlike the EU's single AI Act, the U.S. uses a layered, sectoral approach: federal agencies set policy through executive orders, OMB memoranda, NIST guidance, and enforcement by regulators like the FTC, while states pass their own AI laws on chatbots, deepfakes, employment tools, and likeness rights. Here's a practical explainer of the U.S. rule stack, the key 2025 updates, and a no-nonsense checklist for Florida SMBs that use or buy AI.
The federal rule stack (what applies nationally)
White House & OMB: agency playbook (2025). In April 2025, the White House issued OMB Memorandum M-25-21, ordering federal agencies to accelerate responsible AI adoption, with inventories of use cases, risk controls, and public reporting. While directed at government, it's a bellwether for procurement standards (logging, bias testing, safety assessments) that vendors will increasingly meet. (White House)
NIST AI Risk Management Framework (AI RMF) + Generative AI Profile. NIST's AI RMF (2023) and the Generative AI Profile (NIST AI 600-1, July 2024) have become the default how-to for AI governance in U.S. organizations, covering data quality, testing, documentation, monitoring, and incident handling. Courts and agencies aren't bound by it, but it's the most widely referenced U.S. standard. (NIST)
FTC enforcement: truth-in-AI claims. The FTC is aggressively policing deceptive AI marketing (e.g., tools promising guaranteed growth or "detection" capabilities without evidence). In August 2025 it sued "Air AI" over allegedly misleading growth and refund claims aimed at small businesses, signaling that unsupported AI claims are squarely in the agency's sights. (Federal Trade Commission)
National policy direction (mid-2025). The administration's America's AI Action Plan (July 2025) consolidates federal AI priorities (R&D, standards, workforce, national security) and frames a pro-innovation posture alongside safety. Expect continued reliance on agency rulemaking and standards over one sweeping statute. (White House)
The state layer (where most prescriptive rules live)
Colorado AI Act: delayed but still the template. Colorado passed the first comprehensive U.S. "high-risk AI" law (2024), requiring developers and deployers to use reasonable care to prevent algorithmic discrimination and to keep records, notices, and impact assessments. In August 2025, Colorado postponed the law's start date to June 30, 2026, but its structure is already influencing vendors and policymakers nationwide. (Colorado General Assembly)
California SB 243: chatbot disclosure & safeguards (Oct. 13, 2025). California enacted first-in-the-nation chatbot safeguards, requiring clear disclosure when users are talking to AI (not a human) and adding safety features (including mental-health escalation and reporting for certain systems). Even non-California firms with consumer-facing bots will feel the pull of this standard. (The Verge)
Right-of-publicity & voice cloning (Tennessee "ELVIS Act"). Tennessee expanded its right-of-publicity law in 2024 to cover AI voice and likeness misuse, with criminal and civil penalties, spurring copycat bills elsewhere through 2025. If your marketing uses synthetic voices or images, lock down consent and indemnities. (Armstrong Teasdale LLP)
Deepfake crimes & civil remedies (e.g., New Jersey 2025). States continue to criminalize harmful deepfakes and create private rights of action (e.g., New Jersey, 2025). Expect more election- and child-safety-focused statutes, plus notice/labeling duties. (AP News)
The bigger picture: states are busy, Congress is fragmented. A late-2025 survey shows hundreds of AI-related state bills, with only a small fraction enacted, which is still enough to create real compliance spread. Preemption efforts have stalled; companies should plan for multi-state differences in HR, chatbots, and content rules. (Future of Privacy Forum)
What this means for Florida SMBs (and how to act)
Even without a single federal AI law, you can meet regulators more than halfway by adopting controls that mirror NIST/OMB and anticipate Colorado/California-style rules.
1) Hiring & HR tech
- Human-in-the-loop for material decisions. No fully automated hiring, promotion, or termination.
- Bias testing & documentation. Run adverse-impact tests each hiring cycle and keep your methodology and results (a minimal sketch follows this list).
- Candidate notice & appeal. Tell applicants an automated tool assists screening and provide a human review path.
- Vendor diligence. Require validation summaries and a bias-testing cadence from HR tech providers (Colorado's "reasonable care" playbook is a good proxy). (Colorado General Assembly)
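For teams wondering what the bias-testing bullet looks like in practice, here is a minimal sketch of the EEOC-style "four-fifths" adverse-impact screen, a common first-pass test. The group labels and counts below are hypothetical, and the four-fifths ratio is only a rough screen; run real analyses with counsel and a qualified analyst.

```python
# Minimal adverse-impact screen (EEOC "four-fifths" rule of thumb):
# a group's selection rate below 80% of the highest group's rate is a
# common red flag worth documenting and investigating further.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants); totals must be > 0."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """True for any group whose rate falls below 80% of the top rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < 0.8 for group, rate in rates.items()}

# Illustrative numbers only -- keep your real methodology and results on file.
cycle = {"group_a": (48, 120), "group_b": (30, 110)}
print(selection_rates(cycle))    # {'group_a': 0.4, 'group_b': 0.2727...}
print(four_fifths_flags(cycle))  # {'group_a': False, 'group_b': True}
```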
2) Marketing, chatbots & sales enablement
- Disclose "You're chatting with AI." Add conspicuous labels at first contact and in headers; California now requires it, and others will follow. Provide one-click escalation to a human. (The Verge)
- Substantiate claims. If your bot or tool "detects" fraud, "prevents" chargebacks, or "guarantees" leads, keep rigorous test data, or don't say it. The FTC is watching. (Federal Trade Commission)
- Safety routing. For consumer bots, set keyword triggers (self-harm, abuse) to route users to resources and log interventions, an SB 243 theme (see the sketch after this list). (The Verge)
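As a concrete illustration of the safety-routing bullet, here is a minimal sketch of keyword-triggered escalation with logging. The keyword list, crisis message, and function names are assumptions for illustration, not SB 243's text; tune them with counsel and your vendor.

```python
# Sketch of keyword-triggered safety routing for a consumer chatbot:
# flagged messages divert to human/crisis resources, and each
# intervention is logged so you can evidence compliance later.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety_routing")

SAFETY_KEYWORDS = {"suicide", "self-harm", "hurt myself", "abuse"}  # illustrative list
CRISIS_MESSAGE = (
    "You're chatting with an AI, not a person. It sounds like you may need "
    "human support. In the U.S. you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)

def handle_message(user_id: str, text: str) -> str:
    if any(keyword in text.lower() for keyword in SAFETY_KEYWORDS):
        # Log the intervention (who/when) for your compliance records.
        log.info("safety_route user=%s at=%s", user_id,
                 datetime.now(timezone.utc).isoformat())
        return CRISIS_MESSAGE
    return bot_reply(text)  # normal path: your model or vendor call goes here

def bot_reply(text: str) -> str:  # placeholder for the real bot
    return "Thanks for your message! Our AI assistant will follow up."

print(handle_message("u1", "I want to hurt myself"))  # routed + logged
print(handle_message("u2", "What are your hours?"))   # normal reply
```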
3) Content, IP & likeness
- Consent for real voices and likenesses. If marketing uses synthetic voices or images of real people, get written consent and keep it on file (the Tennessee ELVIS Act is the model).
- Indemnities from content tools and vendors. Require copyright and likeness indemnities in contracts.
- Content QA before publishing. Run similarity checks, confirm talent consents, and keep a rollback plan.
4) Privacy, data & security
- Adopt NIST AI RMF controls. Data quality checks, red-team tests, access controls, and incident playbooks for prompt injection and misuse. (NIST)
- No training on customer data (by default). Put it in contracts; log model/version, prompts, files, and approvers for material outputs (see the sketch after this list).
- Retention discipline. Keep AI logs 90–180 days unless contract or law requires longer.
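To show what the logging and retention bullets can look like in practice, here is a minimal sketch of an AI-usage log record with a purge routine. The field names and the 180-day window are assumptions; set the window by contract and policy.

```python
# Sketch of the logging/retention discipline above: record model/version,
# prompt, files, and approver for material outputs, then purge records
# older than the retention window unless contract or law requires longer.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 180  # assumption: pick 90-180 per your policy

@dataclass
class AIUsageRecord:
    model: str            # e.g., vendor model name
    model_version: str
    prompt: str
    files: list[str]      # input files attached to the request
    approver: str         # human who signed off on a material output
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def purge_expired(records: list[AIUsageRecord]) -> list[AIUsageRecord]:
    """Keep only records newer than the retention cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r.created_at >= cutoff]

usage_log = [AIUsageRecord("vendor-model", "2025-06", "Draft a refund email", [], "jdoe")]
usage_log = purge_expired(usage_log)  # fresh records survive; stale ones are dropped
```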
5) Contracts & procurement (what to add now)
- AI use addendum (employees/contractors): no confidential data in unapproved tools; output-handling rules.
- Vendor terms: no training on customer data, security attestations (SOC 2/ISO 27001), sub-processor lists, breach-notice windows, copyright/likeness indemnities, and human-review checkpoints for high-risk uses.
- High-risk readiness: ask vendors to meet a Colorado-style "reasonable care" standard now (impact assessments, notices, logging). (Colorado General Assembly)
A quick U.S. AI compliance checklist (Q4 2025)
- Name an AI Lead + Privacy/Security Lead and publish approval gates for HR, finance, legal, and customer comms.
- Inventory AI use cases and tag HR/credit/pricing/eligibility decisions as high-risk; require human approval and logs.
- Update privacy notices to disclose AI assistance and offer a human contact path for meaningful decisions.
- Deploy chatbot disclosures and escalation to a human; log safety-keyword routes. (The Verge)
- Stand up content QA (similarity checks, talent consents, rollback plan).
- Refresh vendor contracts with AI-specific warranties, data-use limits, and indemnities.
FAQs we hear from clients
Do we need to follow Colorado if we're not in Colorado? Not legally, yet, but vendors and large customers are adopting its impact-assessment, notice, and logging patterns. Treat it as the U.S. "floor" for high-risk uses. (Colorado General Assembly)
Is the NIST RMF mandatory? No, but it's the most referenced U.S. standard for AI governance and an excellent way to show regulators, partners, and courts you acted reasonably. (NIST)
Will federal law preempt the states soon? Unclear. In July 2025, the Senate removed a sweeping state-preemption provision from a budget megabill, so state activity will continue for now. (Reuters)
Contact Us
For legal help navigating U.S. AI adoption, including contracts and vendor terms, privacy/compliance, employment-law guardrails, IP/content risk, and policy design, contact Attorney Yoel Molina at admin@molawoffice.com, call (305) 548-5020 (Option 1), or message via WhatsApp at (305) 349-3637.