How AI Can Revamp a Business

Matt Kuperholz, Partner and Chief Data Scientist, PwC

"With great power comes great responsibility." First stated over 200 years ago, and restated famously by Winston Churchill and more recently by Ben Parker (Spider-Man's uncle), this quote is relevant when considering Artificial Intelligence (AI).

I have been using AI to solve my clients' problems for over twenty years, and I have seen how it frequently and increasingly empowers businesses across all industries. But what about the increased responsibility that comes with this increased power?

When businesses use AI to improve productivity, reduce risk and operating costs, and drive rapid innovation, do they adequately consider the risks associated with its implementation?

Risk mitigation requires that AI be developed, deployed, governed, operated, and maintained in a responsible fashion. This article discusses what we mean when we talk about Responsible AI. Enabled by exponential trends in data collection and computing power, AI is changing the world at a similarly exponential rate and is the technology on every business's radar. AI applications are already pervasive as we move towards increasingly AI-enabled businesses.

I am extremely optimistic about the net positive value that AI will provide to society. However, a new level of attention will be needed for the development and deployment of AI-powered solutions: their pervasive success makes them a potential point of failure. We surveyed executives and found that 85 percent believe their companies are taking sufficient measures to protect against AI's risks. Yet when our research probed deeper, the findings suggested an underappreciation of the true challenges and the level of effort needed to responsibly capitalize on AI. When it comes to implementing controls around AI, there is still a long way to go.

AI introduces novel risks: not only technology risks relating to its performance, security, and control, but also non-technical risks, including misalignment with organizational ethical principles, as well as social and economic risks (reputation, liability profiles, and job displacement).

Our research found that only one-third of respondents have fully engaged with risks related to data, AI models, outputs, and reporting. In the face of growing public concern over issues such as bias in algorithms, this is cause for concern. AI is not just increasingly prevalent; it is often invisible, hidden in everyday business processes and vendor-supplied solutions. Businesses are developing machine-based decision-making and learning systems virtually unencumbered by AI-specific laws, legislation or regulation, particularly in the APAC region. Rigorous AI risk management is therefore increasingly critical.

The key role of the Chief Information Officer (CIO) is to set and lead an organization's information technology (IT) strategy, ensuring that the enterprise-level IT roadmap, infrastructure, and resources are aligned. The CIO must also provide an executive-level interface between the technology department and the rest of the business. Similarly, the Chief Risk Officer is the executive accountable for enabling the efficient and effective governance of significant risks, which must include AI. This covers not only the AI developed by the business but also the AI integral to third-party services the business uses. Controlling how AI solutions are built, bought or rented, implemented, and governed is typically the responsibility of the data scientists, data engineers, developers, and the business owner. In a traditional risk management framework, there are often too few AI-specific controls.

Few organizations have detailed, actionable, AI-specific ethics frameworks. AI audit criteria and ethics are often discussed but rarely integrated, actioned, or enforced. The significant disconnect between those planning, building, and deploying AI and those ultimately accountable for its risks in the eyes of regulators, customers, and society is concerning.

Ensuring Responsible AI, including integrating risk mitigation and attending to any ethical concerns, is essential for building trust in its application. It includes using explainable or auditable algorithms, embedding controls to catch undesired data and algorithm biases, considering the impact on privacy and supporting global standards of practice. Responsible AI, and in fact the responsible use of advanced analytics and data, must consider the complete ecosystem—from collecting and sourcing data, to the development and deployment of algorithms, to the actions that result from their outputs.
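To make "embedding controls to catch undesired data and algorithm biases" concrete, the sketch below computes one common fairness measure, the demographic parity gap, for a binary classifier. It is a minimal illustration only: the arrays, the binary protected attribute, and the 0.1 review threshold are assumptions for the example, and a real deployment would use several complementary metrics agreed with the governance function.

```python
import numpy as np

def demographic_parity_gap(y_pred, protected):
    """Difference in positive-prediction rates between the group where
    protected == 1 and the reference group where protected == 0."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()

# Illustrative data: model decisions and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, protected)
# The 0.1 tolerance is an assumed, organization-specific threshold.
if abs(gap) > 0.1:
    print(f"Potential disparate impact: parity gap = {gap:.2f}")
```

A check like this can run automatically whenever a model is retrained, flagging results for human review before the model reaches production.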

Our solutions for ensuring Responsible AI are underpinned by five pillars:

1. Is AI unbiased? Is it fair? There are many types of bias, and fairness is difficult to define, measure and control. While it may be difficult, it is not impossible.

2. Is AI interpretable? Is it transparent and explainable? Can it be proved that it does what it is supposed to do? The degree to which an AI must be interpretable will depend on its purpose and, increasingly, on legislation and on customer and citizen expectations (a simple, model-agnostic probe is sketched after this list).

3. Is AI robust and secure? Is it safe? Working with this aspect of Responsible AI will require rigorous validation, monitoring and maintenance.

4. Is AI appropriately governed? Ensuring this will begin at design and influence its operating models and processes.

5. Is AI legal, ethical and moral? These are challenging questions whose answers will depend on jurisdiction, company roles and evolving societal norms. They also require developing, deploying and governing effective and actionable ethical frameworks for AI and other advanced uses of data.
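As a concrete illustration of pillar 2, the sketch below implements permutation importance, a simple, model-agnostic way to see which inputs a model actually relies on. It is a sketch under assumptions: `model` is any fitted object exposing a `predict` method, `metric` is a score where higher is better (such as accuracy), and all names are illustrative rather than a prescribed interpretability standard.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate how much each feature matters by measuring the drop in
    the metric when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Illustrative usage with an assumed fitted model and held-out data:
# accuracy = lambda y_true, y_hat: np.mean(y_true == y_hat)
# scores = permutation_importance(model, X_test, y_test, accuracy)
# Features with large drops are the ones the model leans on heavily;
# these should be explainable to the business owner and, where
# required, to regulators.
```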

The takeaways from this article for the CIO are:

• As the use of AI increases in breadth and speed to market, businesses should prioritize its governance in accordance with tailored ethical frameworks;

• Establishing a governing body, best-practice playbooks and risk documentation will reduce risk and evaluation time; and

• Good AI governance will allow companies to use AI efficiently and safely to realize their ambitions.
