Freedom to Tinker, hosted by Princeton's Center for Information Technology Policy (CITP), has outlined seven common traps that discussions of AI ethics tend to fall into, particularly when those discussions fail to include ethicists or philosophers.
1: The Reductionism Trap - "Doing the moral thing is essentially the same as acting in an X way, so ethics is the same as Xness. If we're being X, we're being ethical."
2: The Simplicity Trap - "We need to distill our moral framework into a compliance checklist so that it's user-friendly, practical, and action-guiding. Once we've decided on a path of action, we'll go through the checklist to make sure we're being ethical."
3: The Relativism Trap - "We all disagree about what is morally valuable: there is nothing objectively morally good. Things can only be morally good relative to a person's individual value framework."
4: The Value Alignment Trap - "If relativism is wrong, there must be one morally right answer and we have to make sure everyone in our organisation acts in alignment with it. If our ethical reasoning leads to moral disagreement, then we have failed!"
5: The Dichotomy Trap - "The goal of ethical reasoning is to 'become ethical'."
6: The Myopia Trap - "The ethical trade-offs we identify in one context will be the same as in other contexts, in terms of both the scope and the nature of those trade-offs."
7: The Rule of Law Trap - "Ethics = rule of law. Ethics is a good substitute for the lack of appropriate legal categories for the governance of AI; when we have those legal frameworks, then we don't have to worry about ethics anymore."
Read here.
