AI – unlocking the black box

Exposing the inner workings of machine learning allows us to take full advantage of artificial intelligence

It has been called the ‘dark heart’ of artificial intelligence (AI) – the complicated ‘black box’ of hidden machine learning algorithms that many would have us believe will allow AI to take our jobs and run our lives.

But before that can happen, AI must be integrated into our everyday systems and protocols – including regulation. Product users and stakeholders must also trust AI and machine learning – otherwise they simply won’t use it.

New interpretability techniques are now making it possible to lift the lid on the black box.

Increased transparency equals more trust
Overcoming the “Why should I trust you?” scepticism about AI and machine learning is perhaps the biggest challenge businesses must master to win the trust of their stakeholders – customers, employees, shareholders, regulators and broader society.

This is particularly important in applications where predictions carry societal implications – for example, criminal justice, healthcare diagnostics, or financial lending. Transparency is a tool for detecting bias in machine learning models. Increased interpretability is also critical for meeting regulatory requirements such as the General Data Protection Regulation (GDPR) by making models auditable.
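To make this concrete, here is a minimal sketch of one widely used interpretability technique: permutation importance, which measures how much a model's test accuracy drops when each input feature is randomly shuffled. The dataset and model below are illustrative choices, not a prescription; the same approach works for any fitted scikit-learn estimator.

```python
# A sketch of permutation importance: shuffle one feature at a time
# and measure the drop in held-out accuracy. Features whose shuffling
# hurts accuracy most are the ones the "black box" actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model choice.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the mean accuracy drop.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

An auditor or regulator can read this ranking without any access to the model's internals, which is precisely the kind of transparency that makes a model reviewable under regimes such as GDPR.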
