AI Myths, Realities, and Challenges:

Summary Notes of Event 

Event Description

Hosted by IRMA (Information Risk Management and Assurance) Specialist Group of the BCS (British Computer Society).

 

12/03/19 @ The Chartered Institute for IT, London

 

Speaker: Mike Small, Senior Analyst at KuppingerCole

Topics of talk:

  • What AI is

  • Limitations of AI today

  • AI in practice today

  • Areas where AI could be successfully applied

  • Ethical issues of AI and potential approaches

What Is AI:

  • Is the attempt to simulate human abilities. 

    • For this we need knowledge. Historically understood via epistemology, mathematics, and science.

  • Involves robotics and decision making. 

    • Mike will focus mainly on decision making.

  • Is not new. History stretching back to the 1940s with various attempts to imitate human abilities e.g. chess, speech, etc.

    • What has changed is the invention of the Cloud. It brings the tech from the 1980s into the modern era by storing more data and making it easily accessible.

  • Works using rules. These same rules can be misapplied or misinterpreted by people: AI systems can help. AI often seen as less fallible and more trusted.

    • Anecdote: Salford University has an autonomous car which students regularly and intentionally step in front of in order to make it brake. They would not do so if a fellow student were driving.

  • E.g. social security advice at DHSS – standardising advice by offering consistent suggestions.

  • E.g. container dock layout planner in Hong Kong – economising on space and efficiently organising the mooring of ships.

  • Is the business of data analytics. It builds up from statistical systems using algorithms into neural networks and results in features such as computer vision and natural language processing.

 

Machine Learning: “Iterative convergence on best match to training data”

  • One trains the system to analyse data with a training data set; subsequently, new data is fed in to generate output.

  • Perceptron as example of an algorithm: a piece of maths used to turn input data into output, using weights and thresholds.

 

  • AI is more general than Machine Learning, but the terms are often used interchangeably. It used to be the case that rules-based systems were called AI, then systems based on LISP; now it is Machine Learning.

  • Mike’s joke/point: systems are called solutions when finished and working; they’re called AI or Machine Learning when unsure – they need a bit of fairy dust to sound more interesting/sellable.
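The perceptron mentioned above can be sketched in a few lines. This is an illustrative example, not from the talk’s slides: it shows the “iterative convergence on best match to training data” idea by learning the logical AND function with the classic perceptron learning rule.

```python
# Minimal perceptron sketch (illustrative): weights and a threshold turn
# inputs into an output; training nudges the weights towards each error.

def predict(weights, bias, x):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Classic perceptron learning rule: adjust weights on every mistake."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four training examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron is guaranteed to converge here; on data it cannot separate with a straight line (e.g. XOR), a single perceptron never converges, which is part of what motivated multi-layer networks.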

 

 

Deep Learning (e.g. convolutional networks) – layers of processing
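A toy illustration of the “layers of processing” idea, under my own simplifying assumptions (1D signal, hand-picked kernels rather than learned ones): each convolution slides a small kernel over its input, and stacking layers with a non-linearity in between builds up progressively more abstract features.

```python
# Illustrative sketch: two stacked 1D convolution layers with a ReLU between.

def conv1d(signal, kernel):
    """Valid 1D convolution (strictly cross-correlation, as in most DL libraries)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Simple non-linearity applied between layers."""
    return [max(0.0, x) for x in xs]

# Layer 1 responds to rising edges; layer 2 smooths the detected features.
signal = [0, 0, 1, 1, 1, 0, 0]
layer1 = relu(conv1d(signal, [-1, 1]))   # → [0, 1, 0, 0, 0, 0]
layer2 = conv1d(layer1, [0.5, 0.5])      # → [0.5, 0.5, 0, 0, 0]
```

In a real deep network the kernels are learned from training data rather than chosen by hand, and there are many channels and layers, but the mechanics are the same.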

Limitations of AI:

Lack of common sense

  • Neural networks cannot explain themselves

  • Insufficient training data + bias

  • Machine Learning can be duped

    • E.g. facial recognition can be fooled with specially patterned glasses (adversarial misclassification).

AI in Practice:

  • Security Analytics

    • Identifies anomalies, analyses events, and supports less skilled analysts.

    • AI looks not just at the data set but also at books, websites, social media, etc., to output suggestions to the analyst that take recent developments and news into account.

      • Aside: AI can be used to augment more junior and less skilled roles – it demands less of them, offers suggestions, and helps them spot aspects/issues that they would previously have missed.

  • Manufacturing

    • Trained to spot defects in HD TVs by scanning the surface for pixel damage.

  • Regulatory Compliance (Finance)

    • Helps sift through regulation, particularly new regulation, and relates obligations to the company.

    • Identifies any adjustments needed, especially in light of regulatory change.

    • Also used to identify insider trading, since communications between traders are monitored.

      • E.g. identify buzzwords and code phrases that can be linked to trader behaviour, e.g. “it’s going to rain in Chicago”, after which the trader sells shares in Chicago.

  • Autonomous database

    • Suggestions to organise data differently and thereby improve performance.
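The “identify anomalies” point under Security Analytics can be sketched very simply. This is a hypothetical illustration of the principle only (the data, threshold, and function name are my own assumptions, not any real product’s logic): events far from the historical baseline get flagged for an analyst to review.

```python
# Hypothetical anomaly-detection sketch: flag events more than `threshold`
# standard deviations away from the historical baseline.

from statistics import mean, stdev

def find_anomalies(history, new_events, threshold=3.0):
    """Return the new events that deviate strongly from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return [e for e in new_events if abs(e - mu) > threshold * sigma]

# Baseline: typical daily login counts; the spike stands out as anomalous.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(find_anomalies(baseline, [101, 250, 99]))  # → [250]
```

Real security analytics systems are far richer (many features, learned models, enrichment from external sources as noted above), but the core move – comparing new events against a learned notion of “normal” – is the same.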

Future of AI:

  • Narrow AI (emerging)

    • Single tasks, single domain: very accurate, very fast.

  • Broad AI (disruptive)

    • Multi-task, multi-domain: explainable; transfers skills and tasks across domains.

  • General AI (revolutionary)

    • Cross domain learning and reasoning; autonomy.

 

  • The challenge is public acceptance.

    • GM crops/Golden Rice as case study of innovative technology curtailed by protests.

    • Public perception of computing may flip from benign to malign – avoiding some risks, but also losing out on potential gains and benefits.

 

  • Surveillance Capitalism: the focus has changed from money to data. This could potentially face a backlash.

Ethics:

  • Involves looking at the chain: context → consequences → justification.

  • Popular case study: Trolley problem

    • Different cultures have different answers.

    • You can’t sue a car – so who do you sue? The designer, developer, builder, or owner?

    • In court, would need to defend the decision made by the car. Would this be clear?

  • Need to consider:

    • Benefit v harmfulness

    • Inclusion v exclusion 

      • Diversity

      • Disability

      • Bias

    • Bias v unfairness

      • Bias towards the culture of the data set

      • Not all bias is bad; unfair or unintended bias is.

    • Good v bad behaviour

    • Responsibility

    • Economic impact

  • How do you convince an AI that it has done something wrong?

    • Film example: ‘Dark Star’ bomb.

©2019 by AITHICS