Artificial intelligence (AI), machine learning (ML), and the algorithms that often underpin them have generated significant policymaker interest. As these technologies advance, policymakers' understanding of how they work is vital to responsible policymaking. Our member companies are committed to responsible AI development and use. TechNet advocates for a federal AI framework that applies uniformly to all Americans regardless of where they live, encourages innovation, and protects consumers. TechNet therefore supports the following principles:

  • Comprehensive, interoperable data privacy laws should precede AI regulations.
  • Avoid blanket prohibitions on artificial intelligence, machine learning, or other forms of automated decision-making. Reserve restrictions for specific, identified use cases that present a clearly demonstrated risk of unacceptable harm, and narrowly tailor those requirements to the harms identified.
  • Do not force developers or deployers of AI/ML to publicly share proprietary or protected information, and do not require an AI registry.
  • Protect the safety and security of information by ensuring that data retention requirements are appropriately scoped to need and clearly defined in law.
  • Leverage existing authorities under state law that already provide substantive legal protections, and create new authorities specific to artificial intelligence, machine learning, and similar technologies only where existing authorities are demonstrably inadequate.
  • Ensure any requirements on automated decision tools focus on high-risk uses, defined as uses reasonably likely to result in the loss of life or liberty or to produce legal effects, and on decisions made solely by automated means.
  • Regulation should encourage clear disclosure of AI systems — e.g., use of simulated personas like chatbots should be clearly identified.
  • Avoid overly broad designations that create uncertainty about who and what is affected, such as statutory language that reads “including but not limited to …”.
  • Limit enforcement to the relevant state agencies and avoid private rights of action. Ensure any enforcement actions limit damage awards to clearly cognizable, actual, and demonstrated harms directly resulting from violations of the law.
  • Provide safe harbors for companies that test for and mitigate any bias or other issues found in AI systems, as well as a reasonable right-to-cure period upon notice.
  • Allow sensitive data, with appropriate cybersecurity protections, to be used for internal testing and foundation model training so that algorithms work inclusively and as developers intend.
  • Ensure any requirements are clearly allocated to specific roles in the artificial intelligence value chain. Recognize the different roles and responsibilities of “developers” and “deployers” of AI, including their technical limitations, and regulate them distinctly as appropriate.
  • Avoid a one-size-fits-all policy approach and support a risk-based framework that ensures that comparable AI use cases are subject to consistent oversight and regulation across sectors. However, some sector-specific requirements may be appropriate for specialized uses.
  • Rely on self-certification mechanisms wherever possible, and avoid mandating external or third-party audits of impact or risk assessments. Instead, identify the audit or assessment requirements and goals, and allow companies to determine whether they can conduct the audit themselves or must seek third-party support.
  • Rely on established national and international standards and frameworks, including the NIST AI Risk Management Framework and ISO standards, to ensure interoperability and avoid a patchwork of inconsistent regulations.
