
The Future of AI: Language, Ethics, and Technology

Summary Notes of Event 

Event Description:

Hosted by the Centre for the Humanities and Social Change’s research project ‘Giving Voice to Digital Democracies’. Funded by the Humanities and Social Change International Foundation.

 

25/03/19 @ the Centre for Research in the Arts, Social Sciences, and Humanities (CRASSH), The University of Cambridge

 

Speakers: 

  • Baroness Grender MBE (House of Lords Select Committee on AI): ‘AI Ready, Willing and Able? What Can the Government Do?'

 

  • Dr. Melanie Smallman (University College London/Alan Turing Institute): 'Fair, Diverse and Equitable Technologies: The Need for Multiscale-Ethics'

 

  • Dr. Adrian Weller (University of Cambridge/Alan Turing Institute/The Centre for Data Ethics and Innovation): 'Can We Trust AI Systems?'

 

  • Dr. Marcus Tomalin (University of Cambridge): 'The Ethics of Language and Algorithmic Decision-making'

 

  • Professor Emily M. Bender (University of Washington): 'A Typology of Ethical Risks in Language Technology with an Eye Towards Where Transparent Documentation Can Help'

 

  • Dr. Margaret Mitchell (Google): 'Bias in the Vision and Language of Artificial Intelligence'

Baroness Grender - AI Ready, Willing and Able?

  • Helped write the Select Committee's review on AI, which came up with 5 ‘commandments’:

    • AI should be developed for the common good and benefit of humanity; 

    • It should operate on principles of intelligibility (technical transparency and explanation of its operation) and fairness; 

    • It should not be used to diminish the data rights or privacy of individuals, families or communities; 

    • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI; 

    • And AI should never be given the autonomous power to hurt, destroy or deceive human beings.


  • Quote from Google CEO: AI will prove a more profound change than fire or electricity.

  • The issue is that AI is already out there and in play; Matt Hancock said it was like trying to boil the ocean.

  • Analogy with the HFEA (Human Fertilisation and Embryology Authority) + Warnock Committee -> advancing technology raised ethical questions and the law needed to catch up.


  • Select Committee recommendation: International norms need to be established and pushed by the UK.

  • The UK is set to be a world leader in AI – it has a beneficial regulatory environment (as seen with Fintech), AI companies of notable scale and scope, and depth of research.

    • Need to identify the UK's USP (unique selling point)

      • On the ethical side of AI.

        • Put it at the centre of AI development and use.

          • This would benefit the UK and cause us to lead internationally, rather than passively accept international norms/policy.

  • Resources:

    • AI Ethical Framework

    • High Level Expert Group on AI – 5 principles for socially good AI across EU

    • Communications Select Report – Regulating in a Digital World – 10 principles

  • Need an ethical approach to AI in order to reassure the public.

  • AI Council, Office for AI, Innovate UK, Centre for Data Ethics + Innovation … are all examples of political leadership in this area.

  • Need to equip people for future

    • Retraining

    • Education to foster digital understanding 

    • Add ethical design and use of AI to the curriculum

  • More resources:

    • CDEI bias review, which uses the Equality Act as its legislative basis.

    • DCMS report ‘Disinformation and “Fake News”’ (on data ethics and Cambridge Analytica)

    • CDEI targeting review

    • Data Ethics Framework

  • Publisher v Content platform debate

  • The Electoral Commission needs new powers for the digital world; it is currently not equipped to deal with misinformation, ads, and online campaigning.


Q&A

 

  • How do we balance keeping a competitive edge with maintaining strong ethics, particularly when competing with China?

    • Example of TfL data being made open, which meant that smaller companies could make use of it and grow the economy as a result.

  • Transparency can lead to underperformance, e.g. neural networks designed to be interpretable can perform worse.

    • The UK government agrees in its response to the report.

      • But we need trust and consent for AI/Big Data -> transparency and ethics are paramount.

  • Difficult to be ethical when AI can be piecemeal, e.g. Project Maven, where a company is asked to do object recognition that is later combined by the Pentagon into an autonomous weapons system without the AI company knowing.

  • How are we to understand the notion that UK’s USP is on the ethical side of AI?

    • It has an ethical/regulatory environment that is conducive to innovation.

      • This response strikes me as viewing ethics in a particular way, perhaps ethics as regulation, rather than ethics as morality.

Dr. Melanie Smallman - Why We Need ‘Multiscale’ Ethics:

  • At UCL, they teach engineers about ethics.

  • Car analogy: the car has shaped society and cities, changed the way we live our lives. It’s not just a technology that gets us from A to B.

 

  • Digital Tech is driving inequality

  • This affects attitudes to tech and the state of the world

  • This is built into the tech

  • We need ethics to include ‘shape of the world’ factors.

 

  • The growth of the economy is decoupled from wages.

  • Case study: Google received state aid from Ireland but didn't pay taxes there – resulting in an EU fine.

    • While cases like this carry on, there is a correlation with protests about cuts and austerity.

 

  • “Automating Inequality”, a book by Virginia Eubanks, is highly recommended.

  • Automated systems e.g. in policing and fraud detection, in their most invasive and punitive forms are aimed at the poor.

  • Tech abuse via the Internet of Things and surveillance is increasingly common.

  • Public perception that new tech benefits big business not people

    • People feel out of control

 

  • Ethical risks and frameworks are all about use, consent and security

    • Not much is currently about how the world or services will change / how it will look.

    • Also nothing about inequality.

 

  • ‘Multiscale Ethics’ (2019) by Smallman

 

  • Need different voices and perspectives on the development of AI -> not just big business or technologists or ethicists.

Dr. Adrian Weller - Can We Trust AI Systems?

  • Need to make sure that trust is not misused

    • Trustworthiness markers are needed

  • AI itself is not responsible – deployers of AI are.

  • Machine Learning, Computer Vision, and Voice-activated tech are all advancing   

    • Limited by datasets

    • Algorithms can fail 

    • Lack common sense and human reasoning 

 

Deep learning:

  • Data hungry

  • Compute intensive

  • Bad at representing uncertainty

  • Black boxes

  • Subject to adversarial attacks and examples

    • Black-and-white noise/static can be subtly superimposed on a picture to fool classification software, e.g. mistaking a panda for a gibbon (see the sketch below).

      • Or tape can be put on a ‘Stop’ sign in such a way that an autonomous car reads it as a “45 mph” speed-limit sign. 
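
A minimal sketch of this kind of adversarial attack, using the Fast Gradient Sign Method (FGSM). This is purely illustrative – the talk did not specify which attack was shown – and it assumes a PyTorch image classifier; the names in the usage comment are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Fast Gradient Sign Method: add a tiny, nearly invisible perturbation
    in the direction that most increases the classifier's loss, which can be
    enough to flip the prediction (e.g. panda -> gibbon)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model currently is
    loss.backward()                              # gradient of the loss w.r.t. each pixel
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

# Hypothetical usage (classifier, image batch, and label tensor are placeholders):
# adv_image = fgsm_perturb(classifier, panda_batch, panda_label)
```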

  • Need to ensure we can trust AI before deploying it into the world and scaling up in real life.

    • Need to know how it works before we know it’s trustworthy.

  • Transparency: for the developer?

    • So they can e.g. see if it’s working

  • For the user?

    • E.g. credit score rating, so they can ask ‘why did I get this score?’

  • For an expert?

    • E.g. what caused the car to crash? Lets them work backwards and find out what happened.

 

2 themes in transparency of AI

  • (1) restrict model class so it’s easy to understand

  • (2) use the best model possible but use other tools to help understand the model.

    • (1) decision tree model?

      • Can quickly become complex and even humans can disagree or be confused by them

    • (2) saliency approach?

      • Can pick out irrelevant factors, based on incidental features of the training data.

        • A tool can show what led to the decision, which ultimately leads to an improvement to the system by adapting the dataset (see the sketch below).
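
As a concrete, purely illustrative instance of approach (2), a gradient-based saliency map highlights which input pixels most affect the model's score for a class. The talk did not name a specific tool, so the method and names here are assumptions.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Return a per-pixel importance map: |d(class score) / d(pixel)|.
    Large values mark pixels the model relies on; if those pixels are
    incidental (e.g. background), that points to a dataset problem."""
    image = image.clone().detach().requires_grad_(True)  # shape (1, C, H, W)
    score = model(image)[0, target_class]                # score for the class of interest
    score.backward()
    # Collapse the colour channels so each pixel gets a single importance value.
    return image.grad.abs().max(dim=1).values[0]         # shape (H, W)
```

Inspecting such a map is one way a developer can see what a decision relied on and then adapt the dataset, as noted above.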

  • Explanation can be abused to manipulate the system.

    • In humans, explanations for our actions are often ad hoc – a form of rationalisation.

      • There is a similar risk in AI – creating a smokescreen, hiding the real reason for a decision.

  • Explanations can lead to users gaming the system.

  • We should see transparency as a means to an end, not just an end in itself.

    • No good having fully transparent autonomous cars that continually crash and kill people.

 

Fairness in decision-making:

  • E.g. loans, criminal justice, recruitment, even soap dispensers.

    • Google’s image captioning system misidentified black people as gorillas.

      • This was ‘fixed’ by removing gorillas from the labelling tech.

  • Bias in, bias out?

    • Not quite true, it’s a starting point.

 

  • Need measures of trustworthiness

    • Reliable performance

    • Fairness

    • Appropriate privacy and transparency

    • Control over influence

 

  • AI-powered influence is new and can be dangerous

    • This needs to be mitigated.

Dr. Marcus Tomalin - The Ethics of Language and Algorithmic Decision-Making:

  • Classic theories of ethics are 

    • Action focussed

    • Agent oriented

    • Anthropocentric

  • Ethics and language are linked.

    • E.g. speech acts (Austin) such as promises, judgements etc.

    • Utterances can have good or bad consequences; some kinds of utterance are banned in various societies, e.g. hate speech.

  • Ethical linguistic norms vary widely

    • Depending on the language used, the context of the utterance, and its tone/register/genre/style.

    • Which of these should be adhered to by AI?

 

  • We need a new philosophical framework that is ontocentric (object-oriented) + patient-oriented.

  • AICT (Artificially Intelligent Communications Technologies) are artificial agents

    • Their utterances are morally qualifiable – have impact (harm/offence).

  • Does the dataset determine the moral character of the utterances?

 

  • Example of NMT (Neural Machine Translation):

    • Often sexist by having masculine words as default.

      • Result of training data bias

    • But the data doesn’t decide; it’s inert – it merely provides the opportunity for decisions to be made.

 

  • Decision Theory

    • Philosophy branch to understand how humans decide

      • Humans have preferences, beliefs and desires with which we make decisions.

    • Do machines follow the same lines?         

      • Where and when do they make the decisions?

  • In NMT, decoders use a search strategy to find the best translation using probability values

    • Where does this come from?

      • Weights + bias vectors of encoder

      • Number-mapping of source sentence

      • The encoder’s conversion of the source sentence into a fixed-dimension summary vector

        • None of these do the deciding(!)

  • We should accept that datasets are biased, but modify the output so that it is appropriate to the target culture

    • Known as post-processing

  • This can be used to quarantine output and alert the user to a potentially offensive translation (see the sketch below).
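
A minimal sketch of the post-processing idea, not the speaker's actual system: the pronoun list, function name, and the Turkish example are illustrative assumptions, and a real check would need proper morphological analysis of the target language.

```python
# Illustrative post-processing/quarantine step for an English-target translation:
# if the source sentence is gender-neutral but the output has silently chosen a
# gendered pronoun, hold the translation back and warn the user.
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def flag_gender_default(source_is_gender_neutral, translation):
    tokens = {tok.strip(".,;:!?").lower() for tok in translation.split()}
    if source_is_gender_neutral and tokens & GENDERED_PRONOUNS:
        return ("Quarantined: the source sentence is gender-neutral, "
                "but this translation defaults to a gender.")
    return None  # nothing to flag

# Hypothetical usage: Turkish 'o bir doktor' uses a gender-neutral pronoun.
print(flag_gender_default(True, "He is a doctor."))
```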

 

Q&A

  • Hits home that human decision-making uses factors that aren’t easily quantifiable

    • Pulls in lots of relevant elements that don’t reduce down neatly.

      • [Interestingly, we seek the same rates of success for, e.g., diagnostic AI and doctors, but we want to counteract and mitigate bias, overcoming/surpassing the bias in training data in e.g. NMT. The former's purpose is to free up time, so we don't mind if it's just as bad. The latter strikes us as perpetuating problems and a missed opportunity to rectify wrongs.]

Professor Emily M. Bender - A Typology of Ethical Risks: 

(NLP = Natural Language Processing)


  • There are Direct and Indirect stakeholders, who either choose or don't choose to interact with NLP systems.

  • Direct + by choice:

    • Spell-check

    • Voice assistant

  • Direct + not by choice:

    • Encounter GPT-2 generated text online

  • Indirect

    • Subject of Query

      • A search for my name returns ads based on my ethnicity

        • Employment prospects could be potentially harmed

      • Facebook status translated incorrectly

        • E.g. the Palestinian man whose ‘Good Morning’ post was translated as ‘Attack them’.

    • Contributor to broad corpus

      • ASR (automatic speech recognition) doesn’t caption their words as well

      • Language systems don’t recognise their accents

    • Subject of stereotypes

      • Voice assistants are given female voices and are ordered around

      • Systems using webtext to understand words reflect stereotypes

        • E.g. automated analysis of Yelp reviews of Mexican restaurants.

  • Solution?

    • Transparent documentation and training data

      • State what the system is trained on

        • Therefore, users can know what may be underrepresented in the dataset

    • i.e. foreground the data.

    • Responsibility would be spread across society:

      • E.g. NLP researchers and developers

        • To test broadly

      • Procurers

        • To choose the system based upon their needs using the data statement

      • Consumers

        • Question whether a system is appropriate for them.

      • Public

        • Advocate for others; be aware of the process

      • Policy makers

        • Require transparency from companies, developers etc.

  • Data statements will help mitigate the negative impacts of NLP (a sketch of such a statement follows below).
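
A minimal sketch of what such transparent documentation could look like as structured metadata. The field names loosely follow Bender & Friedman's (2018) data statement schema, and the example values are invented placeholders rather than a real dataset's statement.

```python
from dataclasses import dataclass

@dataclass
class DataStatement:
    """Transparent documentation for an NLP training dataset."""
    curation_rationale: str        # why these texts were collected
    language_variety: str          # e.g. language tag plus dialect description
    speaker_demographics: str      # who produced the language in the data
    annotator_demographics: str    # who labelled it
    speech_situation: str          # modality, time/place, intended audience
    text_characteristics: str      # genre, topics, structure

# Invented placeholder example – the kind of statement a procurer could read
# before deciding whether a system fits their needs:
example = DataStatement(
    curation_rationale="Movie reviews collected for sentiment analysis research",
    language_variety="en-US, informal written English",
    speaker_demographics="Largely unknown; site users skew young and US-based",
    annotator_demographics="Crowdworkers; demographics not collected",
    speech_situation="Asynchronous written reviews for a public audience",
    text_characteristics="Short, opinionated prose about films",
)
print(example.language_variety)
```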

Dr. Margaret Mitchell - Bias in the Vision and Language of AI:

(Un)fairness in AI vision and language

 

  • Prototype theory

    • Yellow bananas

      • People don’t describe bananas as yellow – yellow is prototypical of bananas, so it goes unmentioned

  • Human reporting bias

    • Frequency of things deemed worth mentioning

      • Can affect dataset

  • Biases in data

    • In collection of data

    • In interpretation of data

  • Human bias enters at every stage, causing a feedback loop as output becomes training data.

  • Bias network effect aka ‘bias laundering’

  • Bias can be good, bad, or neutral.

  • Bias in statistics and machine learning

  • Algorithmic bias

  • ML systems can amplify injustice

    • Criminality of the face

      • New physiognomy 

    • Homosexuality from the face

      • Differences in culture, not facial structure, are the relevant factors picked out

        • Using just glasses and head tilt in the photos leads to similar classification rates

  • Model cards for model reporting

    • Report how well a model works

      • How and why it works

    • E.g. primary intended use; groups likely to be affected; how well it performs across groups – whether there is parity (see the sketch below)
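
A small sketch of the kind of disaggregated evaluation a model card might report: per-group accuracy, which makes parity gaps visible at a glance. The group names and records below are made up purely for illustration.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, prediction, true_label) tuples.
    Returns accuracy per group so parity (or the lack of it) is easy to see."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        correct[group] += int(prediction == label)
    return {group: correct[group] / total[group] for group in total}

# Made-up evaluation records: (group, prediction, true_label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
print(accuracy_by_group(records))  # group_a: 2/3, group_b: 1/3 – a parity gap to report
```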


www.ml-fairness.com
