How Accountability Practices Are Pursued by AI Engineers in the Federal Government

By John P. Desmond, AI Trends Editor
Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event, held virtually and in person this week in Alexandria, Virginia.
Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others.
And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make faster use of emerging commercial technologies, described work in his unit to translate the principles of AI development into terms an engineer can apply.
Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, discussed an AI accountability framework he helped develop by convening a forum of experts from government, industry, and nonprofit organizations, along with federal inspector general officials and AI experts.
“We are adopting an auditor’s perspective on the AI accountability framework,” Ariga said. “GAO is in the business of verification.”
The effort to produce a formal framework began in September 2020 and included a group that was 60 percent women and 40 percent underrepresented minorities, convened over two days of discussion. The effort was spurred by a desire to ground the AI accountability framework in the reality of an engineer’s day-to-day work. The resulting framework was first published in June as what Ariga described as “version 1.0.”
Seeking to Bring a “High-Altitude Posture” Down to Earth
“We found that the AI accountability framework had a very high-altitude posture,” Ariga said. “These are laudable ideals and aspirations, but what do they mean to the day-to-day AI practitioner? There is a gap, while we see AI proliferating across the government.”
“We landed on a lifecycle approach,” which steps through the stages of design, development, deployment, and continuous monitoring. The development effort stands on four “pillars”: governance, data, monitoring, and performance.
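As a way to visualize that structure, here is a minimal sketch in Python, assuming (hypothetically) that an audit records one finding per pillar/stage pair; the names and layout below are illustrative, not GAO’s published artifact.

```python
# Illustrative sketch only: the four pillars of the GAO framework assessed
# across the lifecycle stages Ariga describes. The idea of one finding per
# (pillar, stage) pair is an assumption, not GAO's published structure.
LIFECYCLE_STAGES = ["design", "development", "deployment", "continuous monitoring"]
PILLARS = ["governance", "data", "monitoring", "performance"]

# An auditor could record one finding per pillar/stage combination.
audit_findings = {(p, s): None for p in PILLARS for s in LIFECYCLE_STAGES}
print(len(audit_findings))  # 16 combinations to review over the system's life
```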
Governance reviews what the organization has put in place to oversee its AI efforts. “A chief AI officer might be in place, but what does that mean? Can the person make changes? Is it multidisciplinary?” At the system level within this pillar, the team will review individual AI models to see whether they were “purposely deliberated.”
For the data pillar, his team will examine how the training data was evaluated, how representative it is, and whether it is functioning as intended.
For the performance pillar, the team will consider the “societal impact” the AI system will have in deployment, including whether it risks a violation of the Civil Rights Act. “Auditors have a long-standing track record of evaluating equity. We grounded the evaluation of AI to a proven system,” Ariga said.
Emphasizing the importance of continuous monitoring, he said, “AI is not a technology you deploy and forget. We are preparing to continually monitor for model drift and the fragility of algorithms, and we are scaling the AI appropriately.” The evaluations will determine whether the AI system continues to meet the need “or whether a sunset is more appropriate,” Ariga said.
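To make “monitor for model drift” concrete, here is a minimal, hypothetical sketch using the population stability index (PSI), a common drift statistic; the framework does not prescribe this particular metric, and the data and the 0.2 threshold below are illustrative only.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline (training-time) sample and a current (production) sample."""
    # Bin edges are taken from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)
    # Floor the fractions to avoid log(0) when a bin is empty.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)    # model scores at validation time
production_scores = rng.normal(0.4, 1.2, 10_000)  # scores observed after deployment

psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # a common rule of thumb for meaningful drift
    print(f"PSI={psi:.3f}: drift detected; re-evaluate the model, or consider a sunset")
```

In the framework’s terms, repeated alerts like this would feed the decision of whether the system still meets the need or should be retired.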
He is part of the discussion with NIST on an overall government AI accountability framework. “We don’t want an ecosystem of confusion,” Ariga said. “We want a whole-of-government approach. We feel this is a useful first step in pushing high-level ideas down to an altitude meaningful to AI practitioners.”
DIU Assesses Whether Proposed Projects Meet Ethical AI Guidelines
At DIU, Goodman is involved in a similar effort to develop guidelines for developers of AI projects within the government.
Projects Goodman has been involved with include the implementation of AI for humanitarian assistance and disaster response, predictive maintenance, counter-disinformation, and predictive health. He heads the Responsible AI Working Group. He is a faculty member of Singularity University, has a wide range of consulting clients inside and outside the government, and holds a PhD in AI and philosophy from the University of Oxford.
In February 2020, the Department of Defense adopted five areas of ethical principles for AI after 15 months of consulting with AI experts in commercial industry, government academia, and the American public. The areas are: responsible, equitable, traceable, reliable, and governable.
“Those are well-conceived, but it’s not obvious to an engineer how to translate them into specific project requirements,” Goodman said in a presentation on responsible AI guidelines at the AI World Government event. “That’s the gap we are trying to fill.”
Before DIU even considers a project, the team runs through the ethical principles to see whether it passes muster. Not all projects do. “There needs to be an option to say the technology is not there yet, or the problem is not compatible with AI,” he said.
All project stakeholders, including commercial vendors and those within the government, need to be able to test and validate, and to go beyond minimum legal requirements, to meet the principles. “The law is not moving as fast as AI, which is why these principles are important,” he said.
Also, collaboration is going on across the government to ensure values are being preserved and maintained. “Our intent with these guidelines is not to try to achieve perfection, but to avoid catastrophic consequences,” Goodman said. “It can be difficult to get a group to agree on what the best outcome is, but it’s easier to get the group to agree on what the worst-case outcome is.”
Goodman said the DIU guidelines, along with case studies and supplemental materials, will be published on the DIU website “soon,” to help others leverage the experience.
Here Are the Questions DIU Asks Before Development Starts
The first step in the guidelines is to define the task. “That’s the single most important question,” he said. “Only if there is an advantage should you use AI.”
Next is a benchmark, which needs to be set up front so the team can tell whether the project has delivered.
Next, the team evaluates ownership of the candidate data. “Data is critical to the AI system, and it is the place where many problems can exist,” Goodman said. “We need a certain contract on who owns the data. If that is ambiguous, it can lead to problems.”
Next, Goodman’s team wants a sample of the data to evaluate. Then they need to know how and why the information was collected. “If consent was given for one purpose, we cannot use the data for another purpose without re-obtaining consent,” he said.
Next, the team asks whether the responsible stakeholders are identified, such as pilots who could be affected if a component fails.
Next, the responsible mission-holders must be identified. “We need a single individual for this,” Goodman said. “Often there is a tradeoff between the performance of an algorithm and its explainability, and we may have to decide between the two. Those kinds of decisions have an ethical component and an operational component, so we need someone who is accountable for them, consistent with the chain of command in the Department of Defense.”
Finally, the DIU team requires a fallback plan in case things go wrong. “We need to be cautious about abandoning the previous system,” he said.
Once all these questions are answered satisfactorily, the team moves on to the development phase.
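As a way to see how these questions could function as a single go/no-go gate, here is a minimal sketch; the `ProjectIntake` class and its field names are hypothetical paraphrases of the questions above, not DIU’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class ProjectIntake:
    """Hypothetical encoding of DIU's pre-development questions as a gate."""
    task_defined: bool             # Is the task defined, and does AI offer an advantage?
    benchmark_set: bool            # Is a success benchmark established up front?
    data_ownership_clear: bool     # Is there a clear contract on who owns the data?
    data_sample_reviewed: bool     # Has a sample of the data been evaluated?
    consent_covers_use: bool       # Was consent obtained for this specific use?
    stakeholders_identified: bool  # Are affected stakeholders (e.g., pilots) identified?
    accountable_owner_named: bool  # Is a single accountable mission-holder named?
    fallback_plan_exists: bool     # Can the previous system be restored if needed?

    def ready_for_development(self) -> bool:
        # Development starts only when every question is answered satisfactorily.
        return all(vars(self).values())

intake = ProjectIntake(True, True, True, True, True, True, True, False)
print(intake.ready_for_development())  # False: no fallback plan yet
```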
Among the lessons learned, Goodman said, “Metrics are key. Simply measuring accuracy may not be adequate. We need to be able to measure success.”
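Goodman’s caution that accuracy alone may not be adequate is easy to demonstrate on an imbalanced dataset; the numbers below are invented purely for illustration.

```python
# A degenerate model that always predicts "negative" on a 95%-negative dataset
# scores 95% accuracy while finding zero true positives.
y_true = [0] * 95 + [1] * 5  # 5% positive class
y_pred = [0] * 100           # model always predicts negative

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_positives / sum(y_true)  # fraction of real positives found

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")  # accuracy=0.95, recall=0.00
```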
Also, fit the technology to the task. “High-risk applications require low-risk technology. And when potential harm is significant, we need high confidence in the technology.”
Another lesson learned is to set expectations with commercial vendors. “We need vendors to be transparent,” he said. “When someone says they have a proprietary algorithm they cannot tell us about, we are very wary. We view the relationship as a collaboration. It’s the only way we can ensure the AI is developed responsibly.”
Lastly, “AI is not magic. It will not solve everything. It should only be used when necessary, and only when we can prove it will provide an advantage.”
Learn more at AI World Government, at the Government Accountability Office, at the AI Accountability Framework, and at the Defense Innovation Unit site.