Artificial Intelligence
Tuesday, June 12, 2018
Emerging technologies offer many opportunities to enhance services, but what happens when we take humans out of the equation? Some may assume that because technology is not human, it cannot be unethical. However, that is not the case.

While artificial intelligence can vastly expand our access to knowledge, it has also been known to proliferate bias. The internet of things brings convenience to our everyday lives but raises issues around privacy. Virtual reality can educate us through immersion but can also be addictive. In the face of this reality, it may be tempting to avoid new technology altogether to circumvent ethical issues. However, if government organizations are at the forefront of testing these technologies and setting the standards for their use, they will help to ensure the protection of our citizens and institutions, promote positive outcomes, and realize the benefits of emerging technology. Here are a few things government can do to address ethics in technology.

Develop roles and teams dedicated to the ethical use of technology

Clear accountability will help drive the ethical technology agenda in government. The government already has groups, such as the National Institute of Standards and Technology, the Office of Science and Technology Policy (OSTP), and the Office of American Innovation, that are focused on IT modernization and could be leveraged to lead this effort. When DJ Patil served as the US Chief Data Scientist, he saw it as part of his role to “work carefully and thoughtfully to ensure data science policy protects privacy and considers societal, ethical, and moral consequences.” This is a role that the future OSTP director could assume as well. Governments could also consider creating new teams. The European Union, recognizing challenges with ethics in robotics and artificial intelligence, called for the creation of “a new European Agency for robotics to supply public authorities with technical, ethical and regulatory expertise and a voluntary ethical Code of Conduct to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.” Aside from cross-government groups, individual agencies and departments should consider their use of and role in emerging technology, and designate who is responsible for ensuring its ethical application.

Create or update policies, regulations, and standards that guide technology ethics but also allow for innovation

Consortia of private sector organizations, like the Partnership on AI to Benefit People and Society, have begun to consider standards around the ethics of technology, but the private sector does not have the broad view of government. It is through this lens that government can help to ensure the ethical development and use of technology. However, current policies and regulations may not address these issues or keep pace with change, and in some cases may even hinder the ethical or efficient use of technology. As Vivek Wadhwa noted in the MIT Technology Review, effective laws and standards of ethics are “guidelines accepted by members of a society,” and these require the development of a social consensus. While the pace of technological change is rapid, the pace of social consensus is much slower. Nonetheless, there are precedents for regulators and policymakers to reference, such as foreign policy guiding the ethics of war and state data security laws, not to mention the newly enforced General Data Protection Regulation in Europe.

Research, test, and collaborate with experts and stakeholders to better understand emerging technology and its ethical implications

Ultimately, in order to create the policies, regulations, and standards for the ethical use of technology, government organizations must understand the technology by conducting or funding research, testing outcomes, and engaging experts and stakeholders both in and out of government. The US Defense Advanced Research Projects Agency (DARPA) describes its responsibility as twofold: “First, the Agency must be fearless about exploring new technologies and their capabilities; this is DARPA’s core function, and the Nation is best served if DARPA pushes critical frontiers ahead of its adversaries. At the same time, DARPA is committed to addressing the broader societal questions raised by its work and engaging those in relevant communities of expertise to provide context and perspective for consideration. DARPA works rigorously within the law and regulations and with appropriate organizations where legal and policy frameworks already exist. In new and uncharted territory, the Agency engages a variety of experts and stakeholders with varying points of view—both to hear what they and their professional communities of practice have to say and to help convey to those communities DARPA’s insights about what technology can and cannot do.” One example is DARPA’s work on increasing trust in AI: the agency awarded a $6.5 million grant to computer science professors at Oregon State University to help humans better understand and communicate with AI systems.

Educate and enable government employees to understand and reduce ethical risks

The final component is ensuring that government employees and constituents are educated and have the tools and resources to ensure the ethical use of technology. The Office of Government Ethics published 14 general principles outlining what is considered ethical behavior for government employees, which could be updated to include the ethical use of technology and data. In addition, tools such as mobile apps could make it easier for employees to understand technology and its ethical implications. For example, in 2017, the US Department of Agriculture launched a mobile app to provide its employees with answers to ethical questions on the go.

While it may seem easier to take a wait-and-see approach to avoid the risks associated with emerging technology, these tools are already deeply embedded in our everyday lives. Citizens are looking to government to provide protections that the market cannot: to understand the technology and be proactive in addressing ethical concerns. This requires government organizations to become nimbler as changes occur more rapidly and unforeseen consequences arise, to provide a framework for ethical modernization, and to be the voice of the human in an increasingly humanless landscape.