Welcome to Sustainable AI. Across the planet, children's lives are being shaped by the unseen work of algorithms. For many of these young people, social media, gaming, and the apparently limitless attractions of the connected world are now powerful forces, and AI is relentlessly guiding them to their next video clip, news story, suggested friend, or influencer. And of course, the pervasive effect of AI does not stop with children. Throughout society, there is growing concern that we are simply not in control of a technology with the potential to do immense harm. Rogue algorithms, biased facial recognition, and deepfake videos may be just the tip of a troubling iceberg. But if boundaries are needed, where and how do we go about drawing them?

For Maria Axente, the first guest in our second season of Future Says interviews, this debate is central to her professional life and work. As Responsible AI and AI for Good Lead at PwC, the multinational professional services firm, she is clear and convincing on the need for frameworks as a precursor to the deployment of AI, not an afterthought. Before doing anything with the technology, organizations and enterprises need to consider the implications and mitigate the risks.

Given that her roles also include serving as an advisory board member on the UK's All-Party Parliamentary Group on AI, it is not surprising that Axente offers real insight into the bigger issues at stake for humanity. But in her role at PwC, she is equally accustomed to delivering practical proposals for clients from industry, academia, government, and civil society.

In doing so, Axente draws on the responsible AI toolkit she helped to create, and on a set of guiding principles that are the product of an extensive investigation into the world's leading academic studies and best practices. That rigor has been essential in securing the trust and confidence of all those who now refer to them. Equally, the thoroughness of the research has not produced an unwieldy, complex array of standards. In fact, the team at PwC's AI Center of Excellence identified just nine universal principles covering the responsible use of the technology. Within them, any type of organization or enterprise will find the guidance it needs to start building an effective framework for AI.

More broadly, Axente welcomes the EU Commission's proposal in April 2021 for the world's first set of dedicated AI regulations. Any potential weaknesses are very much outweighed by the fact that it provides Europe, and the wider world, with a starting point. While the details can and should be debated, the bigger picture is compelling: we cannot afford a situation in which unaccountable developers have completely free rein to create and deploy algorithms, regardless of the consequences.

Axente's support for effective regulation certainly does not reflect a pessimistic outlook. She highlights the groundbreaking work of UNICEF's AI for Children initiative in shaping public policy worldwide. There's also obvious enthusiasm for the benefits that AI can deliver.

Ultimately, Axente believes positive outcomes with AI will demand balance and bravery. The freedom for innovation to flourish needs to be balanced with protection against the potential for harm. What's more, we need to be brave enough to address the darker side of human nature, and our inevitable tendency to use technology for malevolent purposes. It may not be easy, but if we can get both these things right, the future for Sustainable AI is looking bright.

Click here to find the interview with Axente, catch up on all five episodes of season one, and sign up to be notified of new Future Says episodes.
