March 12, 2020


TechNet Comments on Artificial Intelligence Guidance for Federal Agencies

Washington, D.C. — TechNet submitted comments to the Office of Management and Budget regarding “Guidance for Regulation of Artificial Intelligence Applications,” the draft memorandum for federal agencies to use when considering future regulatory and non-regulatory approaches toward artificial intelligence (AI).

TechNet’s comments, in part, emphasize building public trust in AI, supporting the use of flexible and non-regulatory approaches for adapting to rapid changes in AI, and highlighting the critical importance of mitigating bias.

“At a time when other nations are putting forward competing visions of AI regulation — including the European Union’s recent proposal that we believe will stifle innovation in this transformative technology — U.S. global leadership on this issue has never been more essential,” wrote TechNet president and CEO Linda Moore.  “This draft OMB memo is timely and necessary, and we strongly support the process it lays out.  We support its comprehensive approach to addressing both federal and state regulation, its ten principles for the stewardship of AI applications, and its emphasis on establishing voluntary consensus standards.

“We believe effective government approaches to AI will help eliminate unnecessary barriers to innovation; provide predictable and sustainable regulatory and legal environments for innovators and businesses of all sizes; protect public safety; and build public trust in the technology.”

The full letter is below and a PDF is available here.

TechNet’s artificial intelligence principles are available here.

RE: Office of Management and Budget Memorandum — Guidance for Regulation of Artificial Intelligence Applications

Dear Director Vought:

TechNet appreciates the Trump Administration’s efforts to promote U.S. global leadership in emerging technologies, including artificial intelligence (AI) as outlined in Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence.”  We welcome this opportunity to share our perspective on the draft memorandum put forward by the Office of Management and Budget (OMB) — “Guidance for Regulation of Artificial Intelligence Applications” — for federal agencies to use when considering future regulatory and non-regulatory approaches toward AI.

TechNet is the national, bipartisan network of technology CEOs and senior executives that promotes the growth of the innovation economy by advocating for a targeted policy agenda.  Our diverse membership includes dynamic American businesses ranging from startups to the nation’s leading technology companies and represents more than three million employees and countless customers in the fields of information technology, e-commerce, the sharing and gig economies, advanced energy, cybersecurity, venture capital, and finance.

Simply put, we believe AI is transformational and can revolutionize the way we live and work — to defend our country against cyberattacks, deliver high-quality health care solutions, manage our farms, assist persons with disabilities, and train workers, among many other applications.

To that end, in February 2019, TechNet released our first-ever AI federal policy principles to guide our work and future engagement with policymakers.  Since then, we have supported the following AI-related efforts in Congress:

  • S. 1558, the Artificial Intelligence Initiative Act;
  • H.R. 2202, the GrAITR Act;
  • H.R. 2575 and S. 1363, the Artificial Intelligence (AI) in Government Act;
  • H.Res.153, supporting the development of guidelines for ethical development of artificial intelligence; and
  • Non-legislative Congressional efforts encouraging standards-setting on ethical AI by the National Institute of Standards and Technology (NIST).  

Most recently, on February 28, we urged Congress to support the Administration’s appropriations request to double research and development (R&D) spending in nondefense AI from approximately $973 million to almost $2 billion by FY 2022.

At a time when other nations are putting forward competing visions of AI regulation — including the European Union’s recent proposal that we believe will stifle innovation in this transformative technology — U.S. global leadership on this issue has never been more essential.

This draft OMB memo is timely and necessary, and we strongly support the process it lays out, as well as the three primary goals U.S. Chief Technology Officer Michael Kratsios outlined in his January 7, 2020, Bloomberg op-ed: “Ensure public engagement, limit regulatory overreach and promote trustworthy technology.”

We support its comprehensive approach to addressing both federal and state regulation, its ten principles for the stewardship of AI applications, and its emphasis on establishing voluntary consensus standards.

As you finalize this Memorandum, below are additional comments and suggestions for your consideration:

Scope: Although the guidance memo OMB ultimately issues will not be directed at independent agencies, it should nonetheless encourage them to abide by it in principle and align their efforts to the best extent possible.

Encouraging Innovation and Growth in AI: We believe effective government approaches to AI will help eliminate unnecessary barriers to innovation; provide predictable and sustainable regulatory and legal environments for innovators and businesses of all sizes; protect public safety; and build public trust in the technology.

The draft memorandum instructs agencies to consider the effect of federal regulation on state and local governments and states that federal “agencies may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”

One of the fundamental challenges facing the legislative and executive branches at all levels is ensuring that our policies keep pace with the speed of innovation.  The development and implementation of national standards and best practices — notably voluntary standards and, where appropriate, targeted national regulatory standards to serve specific public policy goals (such as the protection of human life and public safety) — are significant issues for TechNet and our member companies.

As state and local governments consider policies and pass laws that attempt to regulate the development and use of specific AI technologies, there is a greater risk of a patchwork approach across numerous industries and sectors.  This would discourage innovation and investment, undermine new business development, and fuel consumer uncertainty and distrust.  

As federal agencies review their regulatory and non-regulatory approaches to AI going forward, they should also submit to OMB their recommendations identifying potential changes to federal law that this administration and future ones can propose to Congress.

Principles for the Stewardship of AI Applications

1. Public Trust in AI: TechNet agrees that public trust in AI is critical to unlocking its benefits.  In fact, our policy principles state that, “AI should be architected and deployed with trust and other safeguards built into the design.  This means that requirements for privacy, transparency, and security have equal weight.”

Therefore, building public trust in the technology should be a primary goal of agencies as they implement the actions prescribed in this memorandum.

2. Public Participation: No comments.

3. Scientific Integrity and Information Quality: The draft memorandum includes a list of best practices for agencies to follow as they develop regulatory approaches to AI.  We believe this list should also include “articulation of a clear public policy need when proposing a regulatory approach to AI” as a best practice.

4. Risk Assessment and Management: TechNet supports standards-setting and regulatory approaches to AI that are risk-based, as well as ensuring that any regulatory approaches being considered or proposed are linked to specific public policy goals that are clearly in the national interest.

OMB should provide clear guidelines for agencies to use when assessing the level of risk posed by AI applications.  Such guidance would help agencies better distinguish between appropriate and unacceptable risks, and prevent wide disparities in the levels of risk different agencies are willing to accept.

With regard to bias, TechNet has enshrined the following objective in our policy principles: “Throughout its lifecycle, AI must reflect human values and ensure that its performance is appropriately monitored and evaluated.  AI must not perpetuate illegal bias and discrimination, especially against protected classes.”

It is important to acknowledge both what is meant by bias and the reality that all models inevitably have some degree of it.  While we aspire to objective technology, the reality is that it will ultimately be influenced by the people who build it and the data that feeds it.

The draft memorandum rightly states that the goal should be to “mitigate” bias, given that completely “eliminating” or “removing” bias from models is not technically possible.  Accordingly, the better approach is to mitigate, and where possible prevent, specific outcomes associated with bias.  To mitigate bias-driven outcomes, agencies should consider requiring safeguards and post-deployment monitoring where appropriate.

5. Benefits and Costs: TechNet supports using a cost-benefit analysis when agencies consider whether to regulate AI applications.  As noted in the draft memorandum, it is important that AI systems be compared to non-AI systems currently in place when such an analysis is performed.  This will allow agencies to determine whether an AI system will make conditions better or worse than the status quo.

6. Flexibility: AI is evolving every day, which is why TechNet supports the use of flexible regulatory and non-regulatory approaches that can adapt to rapid changes and updates to AI applications.

If agencies determine that regulation is necessary, they should also account for the fact that a regulation specifically developed to address what a technology looks like on a given day is likely to quickly become obsolete given how rapidly AI technology evolves.  

TechNet believes the best approach is for government to work with industry and other AI stakeholders to focus on governance of how the technology is used, addressing the issues that arise in specific uses and applications of AI, especially when they impact human life and public safety.  Focusing on the applications of the technology, outcomes, and governance approaches — rather than on the underlying technology itself — will enable the kind of flexibility needed.

7. Fairness and Non-Discrimination: As previously noted, while we aspire to objective technology that is completely free of bias, we also recognize that it will inevitably be influenced by the people who build it and the data it is provided.  And even that data will reflect some degree of pre-existing social and cultural bias.  Therefore, TechNet supports the draft memorandum’s focus on fairness and non-discrimination.

8. Disclosure and Transparency: TechNet believes that establishing trust and transparency should prioritize two key factors: understandability and interpretability.

Understandability helps individuals without significant technical expertise better understand how algorithms work, how their data is being used, and how their actions can generate new predictions.

Interpretability allows a technical expert, such as an AI/machine learning practitioner, to better understand why an algorithm made a given decision.  It would also allow governments to know how their models will behave in the “real world.”

Interpretability has been a major focus of what organizations refer to as “explainable AI.”  For example, the Defense Advanced Research Projects Agency (DARPA) defines this as the ability of machines to: 1) explain their rationale; 2) characterize the strengths and weaknesses of their decision-making process; and 3) convey a sense of how they will operate in the future.

R&D on interpretability is a priority for many of TechNet’s members who are working to achieve broadly accepted technical solutions.

9. Safety and Security: TechNet supports the draft memorandum’s statement that “agencies should … encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.”  We believe that a best practice when it comes to deploying a safe, secure, and fair AI system is the development and execution of internal governance models.

In addition to implementing internal governance models, organizations should be expected to increase investments in the security of their AI systems.  Many organizations’ current security investments are dedicated to securing the hardware and software attack surface, including patching vulnerabilities, static and dynamic analysis of production code, and hardening operating systems.  While these are important measures, they do not account for the risk of AI models themselves being the target of adversarial attacks.  Thus, organizations’ security budgets should reflect this risk.

To that end, TechNet supports efforts by NIST to establish a Taxonomy and Terminology of Adversarial Machine Learning (ML) as a baseline to inform future standards and best practices for assessing and managing the security of ML.  

Non-Regulatory Approaches to AI

Pilot Programs and Experiments: TechNet supports the draft memorandum’s proposal allowing agencies the flexibility to grant waivers and exemptions from regulations, as well as the use of pilot programs that provide safe harbors for specific AI applications.  In doing so, agencies can encourage innovation and growth in their sectors.

Voluntary Consensus Standards: TechNet supports the preference given in the draft memorandum to voluntary consensus standards.  These can help create and safeguard trust at the heart of AI-driven systems and business models, and permit the flexibility for innovation, allowing codes to develop with the technology.  Examples of these standards include the Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in AI and Autonomous Systems (AS) and its nine pipeline standards on Ethically Aligned Design.

These multi-stakeholder initiatives can help harmonize industry standards and practices by providing equal access to AI resources for businesses of all sizes, and for new entrants and well-established companies alike.  They can also help identify gaps in existing standards and certifications, which stakeholders can then work to fill.  One-size-fits-all solutions should be avoided, as applications vary across sectors and industries.

Reducing Barriers to the Deployment and Use of AI

Agency Participation in the Development and Use of Voluntary Consensus Standards and Conformity Assessment Activities: TechNet supports federal agency engagement with the private sector on the development of voluntary consensus standards. TechNet agrees that such engagement would “help agencies develop expertise in AI and identify practical standards for use in regulation.”

Appendix A: Technical Guidance on Rulemaking

Regulatory Impact Analysis: The draft memorandum includes guidance for agencies on how to conduct cost-benefit analyses of potential regulatory approaches and states, “When quantification of a particular benefit or cost is not possible, it should be described qualitatively.”

TechNet believes that all cost-benefit analyses of AI applications should include consideration of both quantitative and qualitative metrics.  

Qualitative metrics provide important context and should not be used only when quantitative measurements are unavailable.  In fact, qualitative analysis is often needed to interpret quantitative measurements.  For example, there are multiple ways to quantitatively measure the “fairness” of a particular AI application, and determining which of those measurements is most appropriate often requires context and qualitative analysis.

Assessing Risk: The draft memorandum states, “Agencies should also consider that an AI application could be deployed in a manner that yields anticompetitive effects that favors incumbents at the expense of new market entrants, competitors, or up-stream or down-stream business partners.”

Given the early stage of AI’s development and implementation, agencies should not assume that this technology will necessarily yield anticompetitive effects.  Indeed, AI could be deployed in a manner that yields pro-competitive effects.  By increasing efficiencies and disrupting incumbents, AI could give competitors an advantage over more well-established companies.

Agencies have a key role to play in helping realize the pro-competitive benefits of AI by opening up data to help drive innovation from startups and smaller enterprises that, unlike large corporations, might not have the resources to accumulate a critical mass of data.  The most critical levers to help small enterprises take advantage of AI are access to data, technology, and people.

The federal government should lead by example by sharing public-sector data sets through the creation of public data platforms that small enterprises can freely access.  Additionally, government can encourage the private sector and scientific and research institutions to share data and collaborate, which can help support the development of vibrant AI ecosystems.

In closing, we thank you again for considering our perspective and recommendations to improve the draft OMB memorandum before it is finalized.
