The future of American democracy depends on ethical AI


Earlier this summer, the National Artificial Intelligence Research Resources (NAIRR) Task Force launched a request for information (RFI) on how to build an implementation roadmap for a shared AI research infrastructure. Along with requests for ideas on how to build and implement this agenda, the Task Force sought guidance on how to better ensure that privacy, civil liberties, and civil rights are protected in the future. To achieve this goal, values-based ethical reasoning education and training resources must be at the center of the Task Force’s strategy.

What is at stake

Congressional approval of the National Defense Authorization Act for fiscal year 2021, which directed the Biden White House to create the NAIRR Task Force, could prove as consequential to America’s democratic ideals as the numerous wars, policies, and civil rights movements in our past.

While the NAIRR Task Force’s first public announcements do not explicitly refer to foreign governments, make no mistake that geopolitical competition with China, Russia, and other nation-states looms large in the urgency of its mission.

Not since the Manhattan Project and the race to develop the atomic bomb has a technology been as important in its potential to reshape the balance of power between Western democracy and what the Stanford Institute for Human-Centered Artificial Intelligence calls “digital authoritarianism.” As with the nuclear arms race, the path the United States takes to develop and deploy this technology will determine the scope of freedom and quality of life for billions of people on Earth. The stakes are that high.

The precedents are clear

While the NAIRR Task Force’s report and roadmap are not due until November 2022, it is important to note that embedding ethics in AI in a way that upholds America’s values is a lengthy process, and one fundamental to the American identity. Fortunately, the precedent for an ethical and inclusive roadmap is written in our history, and we can look to the military, medical, and legal professions for examples of how to do it successfully.

The military. On July 26, 1948, President Harry Truman issued Executive Order 9981 to initiate the desegregation of the military. This led to the establishment of the President’s Committee on Equality of Treatment and Opportunity in the Armed Services and one of the most important ethics and values reports in the history of the United States. But it’s worth noting that it wasn’t until January 2021 that retired four-star General Lloyd Austin III was appointed as the first Black Secretary of Defense. Embedding American ethics and values in disciplines related to artificial intelligence will require the same sustained and relentless effort.

The field of medicine. The American Medical Association (AMA) Code of Medical Ethics is considered the gold standard for ethics and values in a professional discipline, dating back to the 5th century BC and the ideals of the Greek physician Hippocrates of “alleviating suffering and promoting well-being in a faithful relationship with the patient.” Despite this deep and rich history of ethics at the core of medicine, it took until 1977 for Johns Hopkins to become the first medical school in the country to implement a required course in medical ethics in its core curriculum.

The law. Bar associations began introducing codes of ethics for attorneys and judges in the United States in the early 1800s, but it was not until the early 1900s, with the widespread adoption of the Harvard case method in law schools, that legal ethics was tied to professional responsibility and a clear set of moral duties to society was embedded in legal education and the profession.

The road ahead

The disciplines related to AI (computer science, engineering, and design) lag far behind other professions in ethical requirements, education, and training. However, there are dozens of promising technology-ethics organizations and initiatives working to promote and unify education and training in ethical reasoning in AI.

Higher education. Integrating ethics and values training into the core curriculum of every college-educated engineer, designer, and computer scientist should be a central goal of any national AI strategy.

To that end, the Markkula Center for Applied Ethics at Santa Clara University is one of the most prolific producers of hands-on technology ethics curricula, case studies, and decision-making training for students and professionals. Likewise, MIT has begun to develop a dedicated AI ethics curriculum for this purpose and should also be consulted during the implementation planning process. What’s more, AI ethics institutes are being created around the world and represent fertile ground for resources the NAIRR Task Force can draw on.

While most of these efforts focus on higher education and current professionals, the Task Force also has the opportunity to begin sharing ethics resources with the large STEM-focused high school programs that are emerging across the country. The STEM Education Committee of the National Science and Technology Council has highlighted the need for greater ethics education at all levels of STEM education, and the NAIRR Task Force has the opportunity to distribute and unify those resources.

Public-private partnerships and consortia. Leading public, private, and professional organizations are creating best-in-class offerings that train AI professionals in methods for building ethically sound AI. Consulting with these external groups will be essential as the NAIRR Task Force advances its national AI strategy.

For example, the World Economic Forum (WEF) Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning Platform is having a significant impact on governments and corporations around the world through consulting, publicly available research, white papers, ethical toolkits, and case studies. These products can help accelerate the benefits and mitigate the risks of artificial intelligence and machine learning.

Similarly, the Responsible AI Institute (RAI) has created the first independent, accredited certification program for responsible AI. In fact, RAI has already been leveraged by the Joint Artificial Intelligence Center (JAIC) of the United States Department of Defense to incorporate responsible AI guardrails into its procurement practices.

Looking ahead, it will take years to incorporate ethics and values into AI-related professional disciplines, but it is possible. As the NAIRR Task Force builds its roadmap, the team should draw on our history, provide resources for ethics training in college settings, scale that training to high school STEM programs, and work with professional organizations to deliver best-in-class materials that upskill those currently in the industry. If we are to win the AI innovation race while preserving our democratic principles, we must start here, and we must start now.

Will Griffin is the chief ethics officer for Hypergiant, an artificial intelligence company based in Austin, Texas. He received the IEEE 2020 Award for Distinguished Ethical Practices and created Hypergiant’s Top of Mind Ethics (TOME) framework, which won the Communitas Award for Excellence in Artificial Intelligence Ethics.
