The Ethics Of AI Is Receiving A Lot Of Attention. This Seems Like A Good Place To Start.

This appeared last week:

AI systems should be accountable, explainable, and unbiased, says EU

The European Union has published new guidelines on developing ethical AI

By James Vincent
The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.
These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy, moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology.
So, for example, if an AI system diagnoses you with cancer sometime in the future, the EU’s guidelines would want to make sure that a number of things take place: that the software wasn’t biased by your race or gender, that it didn’t override the objections of a human doctor, and that it gave the patient the option to have their diagnosis explained to them.
So, yes, these guidelines are about stopping AI from running amok, but on the level of admin and bureaucracy, not Asimov-style murder mysteries.
To help with this goal, the EU convened a group of 52 experts who came up with seven requirements they think future AI systems should meet. They are as follows:
  • Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  • Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
  • Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
  • Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
  • Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  • Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
  • Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
You’ll notice that some of these requirements are pretty abstract and would be hard to assess in an objective sense. (Definitions of “positive social change,” for example, vary hugely from person to person and country to country.) But others are more straightforward and could be tested via government oversight. Sharing the data used to train government AI systems, for example, could be a good way to fight against biased algorithms.
Lots more here:
As high-level principles, these seem to me to capture what we want to see from AI.
I note the CSIRO in Australia is pursuing similar work.

CSIRO promotes ethical use of AI in Australia's future guidelines

For Australia to realise the benefits of artificial intelligence, CSIRO said it's important for citizens to have trust in how AI is being designed, developed, and used by business and government.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted a need for development of artificial intelligence (AI) in Australia to be wrapped with a sufficient framework to ensure nothing is set onto citizens without appropriate ethical consideration.
Data61, CSIRO's digital innovation arm, has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia.
Highlighted by CSIRO are eight core principles that will guide the framework: That it generates net-benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, boasts fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and that there is an accountability trail.
"Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.
CSIRO said that while transparency in AI is a complex issue, the ultimate goal of transparency measures is to achieve accountability, but that the inner workings of some AI technologies are not easy to explain.
"Even in these cases, it is still possible to keep the developers and users of algorithms accountable," it added. "On the other hand, AI 'black boxes' in which the inner workings of an AI are shrouded in secrecy are not acceptable when public interest is at stake."
Conceding that there is no one-size-fits-all solution to the range of legal and ethical issues related to AI, CSIRO has identified nine tools it says can be used to assess risk and ensure compliance and oversight.
These include impact assessments, reviews, risk assessments, best practice guidelines, industry standards, collaboration, monitoring and improvement mechanisms, recourse mechanisms, and consultation.
More here:
The good thing is that these various initiatives seem to be broadly aligned, and I believe that can only help.
Worth browsing the fuller reports.
David.