Exposing the inner workings of machine learning allows us to take full advantage of artificial intelligence
It has been called the ‘dark heart’ of artificial intelligence (AI) – the complicated ‘black box’ of hidden machine learning algorithms that many would have us believe will allow AI to take our jobs and run our lives.
But before that can happen, AI must be integrated into our everyday systems and protocols – including regulation. Product users and stakeholders must also trust AI and machine learning – otherwise they simply won’t use it.
New interpretability techniques are now making it possible to lift the lid on the black box.
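To make this concrete, here is a minimal sketch of one such technique – permutation importance, computed with scikit-learn. The dataset and model are illustrative assumptions, not drawn from any particular deployment; the idea is to ask which inputs a trained ‘black box’ actually relies on by shuffling each feature in turn and measuring how much predictive accuracy drops.

```python
# A minimal sketch of one interpretability technique: permutation importance.
# The dataset and model below are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a standard dataset and train an opaque ensemble model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# large drops mark the features the "black box" genuinely depends on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean in sorted(zip(X.columns, result.importances_mean),
                         key=lambda p: -p[1])[:5]:
    print(f"{name}: {mean:.3f}")
```

Techniques like this do not open the box itself; they probe it from the outside, which is often enough to spot a model leaning on an input it should not be using.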
Overcoming the “Why should I trust you?” scepticism about AI and machine learning is perhaps the biggest challenge businesses face in winning the trust of their stakeholders – customers, employees, shareholders, regulators and broader society.
This is particularly important in applications where predictions carry societal implications – for example, criminal justice, healthcare diagnostics, or financial lending. Transparency is a tool for detecting bias in machine learning models. Increased interpretability is also critical for meeting regulatory requirements such as the General Data Protection Regulation (GDPR) by making models auditable.
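One common route to auditability – sketched below under the same illustrative assumptions about the dataset and model – is a ‘global surrogate’: a small, human-readable model trained to mimic the black box’s predictions, whose rules an auditor or regulator can inspect directly.

```python
# Hypothetical sketch: approximate a black-box model with an interpretable
# "global surrogate" decision tree whose rules can be read and audited.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to reproduce the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate track the black box's decisions?
print(f"Fidelity to black box: {surrogate.score(X, black_box.predict(X)):.2%}")

# The surrogate's rules are a human-readable approximation of the model's logic.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The surrogate is only an approximation, so its fidelity should be reported alongside its rules; a low-fidelity surrogate tells an auditor little about the real model.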