
AI ethics and governance: the challenge of balance and bias

That artificial intelligence should be developed and deployed within robust ethical frameworks has become a truth almost universally acknowledged. The implications of undetected bias (whether in the data sets used to train AI or in the minds or group make-up of the authors of the code) are well documented and present clear risk factors for any ethical framework. This risk landscape is increasingly well understood. Yet what a successful model for navigating that landscape might look like remains a challenging question, as demonstrated by the recent disbanding of Google’s “Advanced Technology External Advisory Council” (ATEAC) following controversy surrounding its members.

It is timely that, in the same week, the High-Level Expert Group on AI (an independent committee established by the European Commission in June 2018) has published the final version of its “Ethics Guidelines for Trustworthy AI”. The guidelines are among the first of their kind and aim to provide businesses with a voluntary set of practical principles which can be operationalised. EU officials have stated that there are no plans to initiate specific legislation in respect of AI; the hope instead is that early adoption of high ethical standards will afford European businesses a competitive advantage as other jurisdictions catch up. The non-binding guidelines set out seven requirements for trustworthy AI:

  • Human agency and oversight;
  • Technical robustness and safety;
  • Privacy and data governance;
  • Transparency;
  • Diversity, non-discrimination and fairness;
  • Environmental and societal well-being; and
  • Accountability.

The European Commission’s guidelines have been welcomed by many. Yet even where a clear set of principles exists, businesses still face the challenge of creating a governance structure which enables those principles to be embedded within their organisations. Many, including Google, have landed on the creation of ethics committees, boards or councils for this purpose, with the aim of benefiting from external or independent advice and viewpoints.

Google’s answer was ATEAC, which was intended to complement its internal governance structure by advising on the implementation of Google’s AI Principles (Google’s ethical charter for the responsible development and use of AI within its research and products), including complex ethical and philosophical questions of how and where Google should utilise its AI. The intention was to allow Google to benefit from the wisdom of a council of independent experts representing a broad spectrum of views and interests. However, two appointments to ATEAC proved controversial: Kay Coles James, President of the Heritage Foundation (a conservative, Washington-based think tank), and Dyan Gibbens, CEO of drone company Trumbull Unmanned. These appointments prompted a backlash amongst Google employees (with parallels to the now familiar no-platforming debates which have surfaced in recent years), based largely on Ms James’ views on LGBTQ rights and immigration. An online petition, reported to have been signed by over 2,000 Google employees, also attracted support from a substantial number of academics. Finally, on 4 April, Google announced: “it’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board.”

The difficulty Google experienced with ATEAC is a clear example of the challenge facing businesses seeking to establish robust, fair and representative ethical frameworks. Businesses need to inspire confidence in their own governance and reassure their employees and other stakeholders that they have established bodies capable of ensuring ethical principles are implemented. Establishing a balanced committee, with board-level representation, to oversee the use and development of AI ought to be high on the agenda of any organisation whose activities touch on AI, but achieving a balanced representation of viewpoints and values on such a committee is not necessarily a straightforward task. Businesses will need to be mindful of the public relations angle to any appointment. They must also ensure that any advisory body operates under clear terms of reference and a strong set of principles which align with the business’ own brand, governance and organisational values. Due diligence on individual appointments – assessing appointees’ values and views in the context of the business’ own values and aims – will become increasingly important.
Tags

commercial, esg, blog, ai