ExplainAI

Introduction

The concept

The concept is to deconstruct popular AI algorithms into their simplest, or at least simpler, forms. This is done either in a domain-specific manner, solving the problem with field experts (category 5 below), or in a non-domain-specific manner (categories 1-4), where the algorithms themselves are investigated. The objective is to rank AI algorithms by whether they can be (a minimal code sketch of this ranking follows the list):

  1. Explained (best)
  2. Interpreted
  3. Decomposed
  4. Mirrored 
  5. Used in parallel with existing solutions (acceptable)
  6. None of the above (risky)
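As an illustration of how this ranking could be encoded, the minimal Python sketch below assigns each algorithm one of the six levels and sorts candidates by it. The Explainability class and rank function are hypothetical names for illustration, not part of any standard.

    from enum import IntEnum

    class Explainability(IntEnum):
        # Illustrative encoding of the six-level ranking above
        # (a hypothetical sketch, not a standardised scale). Lower is better.
        EXPLAINED = 1    # best: the model's reasoning can be stated outright
        INTERPRETED = 2  # behaviour can be understood after the fact
        DECOMPOSED = 3   # can be broken into understandable parts
        MIRRORED = 4     # approximated by a simpler surrogate model
        PARALLEL = 5     # acceptable: runs alongside an existing solution
        NONE = 6         # risky: none of the above applies

    def rank(algorithms):
        # Order (name, level) pairs from most to least explainable.
        return sorted(algorithms, key=lambda pair: pair[1])

    print(rank([("deep net", Explainability.NONE),
                ("decision tree", Explainability.EXPLAINED),
                ("random forest", Explainability.DECOMPOSED)]))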

The approach is to utilise RI.SE SICS’s 40+ years of experience with Swedish and European industry to select AI algorithms under consideration for their AI-worthiness. Since domain-specific solutions and AI methods address the same problems, we will compare them using ongoing and recently completed projects. We will judge where domain-specific solutions could be used (#5 above); where they cannot, an AI solution will be used to improve the existing one.

The envisioned area

5G & cloud; however, companies are transforming through digitalisation, and we want to include them in the big data domain as well.
Standards work is very relevant and timely. Why? Industry, whilst watching the standards bodies, is pressing ahead with its own solutions.
Your own “Standards watch” highlights important activities, but it is an effort to follow these, as well as the plethora of others. Humane AI works with CLAIRE, a European network of AI research laboratories. We are a member of the BDVA association (see KPIs) as well as HLEG, but following these takes time, personal effort, and cost. We see three gaps:

a) People gaps: hiring developers who understand data and models. This is exacerbated by the chronic shortage of ML engineers and data scientists.
b) Bad practices: focussing on percentage improvements rather than the holistic picture (see worst practices).
c) Technological depth: needed to 1) understand the algorithms, and 2) reduce them to something that is at least interpretable.

Expertise

My education is in Maths & Physics (B.Sc.), Comp. Sci. (M.Sc.), and EE (Ph.D.), followed by Information Theory & Diff. Equations (post-doc), and continues today with statistical processes, Bayesian models, and time series. With some funding, RI.SE can pass on this information, explain it, and make recommendations within existing projects. This seems a logical way to bridge the gap between standards and industry. RI.SE is the national non-profit research organisation, of which SICS is a part (it was incorrectly identified as an SME in the previous evaluation). I have worked with Swedish industry for 20+ years.
I am responsible for the SVI, part of the European Investment Fund, and for the BDVA and CLAIRE participation mentioned in Nature. A site dedicated to AI-related activities within transport is at BADA. We are actively working with Ericsson on load classification using ExplainAI concepts (new K-means algorithms extended to streaming data sources; a sketch follows).
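The Ericsson collaboration itself is not detailed here, but as a hedged sketch of what “K-means extended to streaming data sources” can mean, the code below implements the classic MacQueen-style sequential update, where each arriving sample moves its nearest centroid by a decaying step. The function name and the synthetic stream are illustrative assumptions, not the project’s actual code.

    import numpy as np

    def streaming_kmeans_update(centroids, counts, x):
        # One online K-means step: assign the new sample to its nearest
        # centroid, then move that centroid towards the sample with a
        # 1/n step, so the centroid tracks the running mean of its cluster.
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
        return j

    # Usage: seed the k centroids from the first samples, then consume the stream.
    rng = np.random.default_rng(0)
    stream = rng.normal(size=(1000, 3))  # stand-in for a live data source
    centroids = stream[:4].copy()        # k = 4
    counts = np.ones(4)
    for x in stream[4:]:
        streaming_kmeans_update(centroids, counts, x)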

Impact

In 2019 we will bring a new focus to the algorithms and data processing we are doing. Initially we will implement these ideas through papers and seminars (which we run anyway) with Swedish industry. By linking with such conferences and partners, with BDVA, and with an EU proposal on explainable AI, we will bring to the attention of European industries the risks of deploying AI and how to mitigate them. As yet, many of the algorithms are being written and tested with real data but not deployed, so we have a window of opportunity. With support from StandICT and the European Investment Fund we will take ExplainAI to the European level. We recently held a session at the European Conference on Data Analysis where we presented algorithms from an interpretability point of view.
Furthermore, since we are working with larger Swedish industries, which all operate in the EU (of the ones listed, all operate in 15+ countries), a European perspective is brought in automatically.

Standard Group(s)

  • ISO / IEEE

Other Standard Group(s)?

Indicate what is the added value of the proposed activity to targeted SDO & ICT standardisation processes

As indicated, our added value is to help European industries that are jumping into AI solutions which they neither understand nor can maintain. Our role will simply be to educate industries in the advantages and pitfalls of introducing AI into their organisations.

For the standardisation process we will provide an assessment of which AI algorithms have explainability value. Today, algorithms are ranked or judged in terms of accuracy (RMSE), computation, and memory usage, but not in terms of how transparent they are. Something akin to a TRL for algorithms is what we envisage in around two years; a sketch of such a scorecard follows below. Clearly, initiatives such as CLAIRE and BDVA will be easier to influence in the first stages. Later, with the weight of the larger companies, larger organisations, and the EU itself, we will add value in the technical aspects of the algorithms.
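To make the gap concrete, here is a minimal sketch: today’s comparisons report accuracy figures such as RMSE, and the hypothetical scorecard below simply pairs that number with a transparency level on the 1 (explained) to 6 (none) scale from the Introduction. The record format is an assumption, indicating what a TRL-like rating for algorithms could report.

    import numpy as np

    def rmse(y_true, y_pred):
        # Root-mean-square error, the accuracy measure named above.
        diff = np.asarray(y_true) - np.asarray(y_pred)
        return float(np.sqrt(np.mean(diff ** 2)))

    def scorecard(name, y_true, y_pred, transparency):
        # Hypothetical record pairing accuracy with a transparency level
        # (1 = explained ... 6 = none, as ranked in the Introduction).
        return {"algorithm": name,
                "rmse": round(rmse(y_true, y_pred), 3),
                "transparency": transparency}

    # Two models with similar accuracy can differ sharply in transparency.
    y = [1.0, 2.0, 3.0, 4.0]
    print(scorecard("linear model", y, [1.1, 1.9, 3.2, 3.8], transparency=1))
    print(scorecard("deep net",     y, [1.0, 2.1, 3.1, 4.1], transparency=6))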

Implementation

A1. 6 months: A BDVA-approved paper presented to the groups above
A2. 12 months: Release a paper on algorithmic explainability (ECDA)
B1. 18 months: Suggestions to focussed EU groups (CLAIRE, HLEG, BDVA)
B2. 24 months: Input to standardisation with industry (Ericsson and ABB onboard) for ISO/IEC JTC 1/SC 42, WG 11, Compressed Representation of Neural Network Functions. We will also contribute to ISO TC204 Intelligent Transport Systems (with Scania and Traton).

The document (Section 8.3.2) actually states TBD, which is where we could contribute too.
C1. 30 months: Technical report on deep learning and transparency (with standards and industry feedback)
C2. 36 months: Assess and report on the standards situation in Europe in 2022 regarding algorithmic explainability. ISO/IEC JTC 1 work is ongoing, and through membership we can contribute on computational methods and trustworthiness.

Operational mode

RI.SE (the Research Institutes of Sweden) has nearly 3500 employees and hosts the national standards agency (www.sp.se). We are well accustomed to hosting standards bodies, as well as de-facto ones, either on our own or with the larger actors in Sweden. This includes patent applications, and as a non-profit governmental organisation we must be transparent. Furthermore, as some of the money is from governmental sources, we are expected to release data, software, and results.
The type of membership for an AI-type activity is not yet defined, but technical or affiliate membership is acceptable, in order to state the problems and propose fixes where needed. RI.SE can also act as a clearing house for companies worried about the new technology (one large city authority currently does this with us).

Measurable KPIs

1. A BDVA-approved paper presented to the groups above
2. Release a paper on algorithmic explainability (at ECDA 2019)
3. Suggestions to focussed EU groups (CLAIRE, HLEG, BDVA)
4. Input to standardisation as in the workplan 
5. Technical report on deep learning and transparency (with standards and industry feedback)
6. Assess and report on the standards situation in Europe 2022 regarding algorithmic explainability (basically where are we?)

Resources

The funding required for the long term includes support for travel and meetings; the financial support will fund attendance at the meetings listed below. For the first year, 2019, the following have been identified; we will then reapply to implement the 3-year plan above.

EU expert group meetings

  • Big Data Value Association 2019, invited to attend and present at their board meetings.
  • High-Level Expert Group (HLEG) on AI
  • AI Alliance meeting
  • CLAIRE symposium (1K Euro)

Standards access

  • Membership of ISO Sweden (SIS)

  • IEEE access for 2 years (2019 & 2020)
  • P7006 and ALGB (algorithmic bias of data) working group proposals

Conferences: it is important to attend state of the art conferences and workshops.

  • ICML conference, Jun. 2019
  • NeurIPS conference, Dec. 2019