How Can Artificial Intelligence Create Human Value Without the Cost?

Artificial Intelligence (AI) has brought a great deal of change to modern businesses, governments and the general public all over the world. People expect AI applications to transform the way business is done by creating new business models, shoring up efficiencies and increasing profits. Anticipation around AI is enormous, and companies and governments alike are showing interest and confidence in applying it to their operations.

But this is not the first time people have been enthusiastic about AI. During its early days, from the 1950s to the 1970s, the initial hype gave way to what is now known as an “AI Winter”: funding for AI projects and research dried up for quite some time, and public interest faded. Something similar happened a decade later, when renewed interest between 1980 and 1987 again died down and research funding became scarce. This history is essential to remember, as organizations must keep in mind that AI may face another “winter”. As such, they must take steps to ensure their AI investments are smart enough to deliver long-term value, and they must understand the risks attached to those investments so that AI can build human value without extracting human cost.

The first step in safeguarding AI investments is to understand what AI technologies are capable of doing right now. A corollary is to know the cost of a mistake. Finally, it is essential not to get carried away by the enthusiasm, as the technology is still in its early stages.

What AI technologies are capable of doing today

Rapid early advances in AI research led researchers such as AI pioneer Marvin Minsky to claim that machines might soon surpass humans in general intelligence. That did not come about, for a variety of reasons: computing power was limited and costly, digital data was scarce, and storage systems were expensive to maintain. These constraints led to the AI winter, when funding was no longer available. Those technical issues have since been resolved, yet AI experts remain shy of claiming that AGI (artificial general intelligence) will be attained in the foreseeable future. There have been impressive breakthroughs, yes, but in the words of Oren Etzioni, professor at the University of Washington and CEO of the Allen Institute for AI, “We’re so far away from…even six-year-old level of intelligence, let alone full general human intelligence…”


While AGI may remain a long-term goal for researchers, the current area of interest is ANI, or artificial narrow intelligence.

Artificial narrow intelligence

Sophisticated and affordable computing power, vast amounts of digital data and deep learning (Geoffrey Hinton’s breakthrough) have resulted in a proliferation of ANI applications. Such applications perform single, well-specified tasks efficiently and consistently, often better than humans. These ANI-based applications create human value and are widely used in digital voice assistants, product recommendation and cancer detection; they have found uses even in astronomy and genetics, where they draw insights from human genetic data. The commercial potential of such ANI applications is truly vast, which is why this round of optimism about AI is likely to be sustained rather than end in another AI Winter.

Mistakes and what they cost

Although current technologies can offer solutions to a vast number of problems, it is essential to take the risk profiles of those problems into account when seeking solutions through ANI.

ANI systems are used extensively to take advantage of the enormous amounts of digital data available. There are real benefits in efficiency and productivity, but there are risks attached, and ANI’s shortcomings and the danger of unintended human cost require scrutiny. ANI algorithms learn from their training data; if that data carries social and cognitive biases, those biases will be carried forward, because the algorithms cannot reason beyond what they were trained on. An error of this kind can have severe and far-reaching effects in situations where an algorithm’s results affect human lives and fates.

At other times, algorithmic errors are mere inconveniences that do not meaningfully diminish an application’s usefulness. Simple mistakes by digital voice assistants do not undermine their value, and the use of DVAs (digital voice assistants) continues to grow. At the other end of the spectrum, however, algorithmic errors such as the accidental deaths involving self-driving cars can shatter consumer confidence in AI-based applications.

In crucial areas like education and job recruitment, mistakes such as racial and gender bias can exact a heavy cost in human value. Repeated mistakes can trigger a crisis of confidence in the technology itself, leaving this class of ANI applications to be eclipsed by another AI Winter.

AI technology has traditionally aimed to copy, or even surpass, human cognitive and physical capabilities. Problems arise when algorithms are applied in contexts they were not designed for, so businesses and governments must assess the risk profiles of the algorithms they use. An essential guide for such an assessment is to know whether an application is ANI-T (artificial narrow intelligence, transactional), ANI-C (artificial narrow intelligence, consequential), AGI (artificial general intelligence) or ASI (artificial superintelligence). The risk profile of ANI-T (e.g. digital voice assistants: a single task in a limited context) is low, whereas that of ANI-C (self-driving cars: a single task in a dynamic context) is quite high. For AGI (multiple tasks performed proactively) and ASI (surpassing all human capability), the risk profiles are entirely unknown so far.
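
To make the distinction concrete, the taxonomy can be written down as a simple lookup. The sketch below is purely illustrative: the names (RISK_PROFILES, assess) and the structure are assumptions of ours, not part of any standard framework; it only encodes the four categories and the broad risk levels described above.

```python
# Minimal, hypothetical sketch of the ANI-T / ANI-C / AGI / ASI risk taxonomy.
# Category names and example applications follow the article; everything else
# (dictionary layout, function name) is illustrative only.

RISK_PROFILES = {
    "ANI-T": {"description": "single task, limited context (e.g. digital voice assistants)",
              "risk": "low"},
    "ANI-C": {"description": "single task, dynamic context (e.g. self-driving cars)",
              "risk": "high"},
    "AGI":   {"description": "multiple tasks, proactive behavior",
              "risk": "unknown"},
    "ASI":   {"description": "surpasses all human capability",
              "risk": "unknown"},
}

def assess(category: str) -> str:
    """Return a one-line risk summary for a given AI application category."""
    profile = RISK_PROFILES.get(category.upper())
    if profile is None:
        return f"{category}: not a recognised category"
    return f"{category}: {profile['risk']} risk ({profile['description']})"

if __name__ == "__main__":
    for cat in ("ANI-T", "ANI-C", "AGI", "ASI"):
        print(assess(cat))
```

In practice such a table would be one input to a broader risk checklist, but even in this simple form it makes clear that the same “narrow” label can hide very different risk profiles depending on the context in which the task is performed.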

Based on such risk assessments, businesses can make critical decisions about AI investments for the successful digital transformation of their enterprises. New research, in areas such as ethical AI and tools for auditing algorithms for bias, aims to address ANI’s technological limitations and to contain and significantly reduce the undesirable consequences of algorithmic errors; this will go a long way toward fostering trust in the technology. Ultimately, the usefulness of AI depends on industry leaders and governments putting human value uppermost when developing any and all forms of AI solutions.

A time for cautious enthusiasm

AI is undoubtedly exciting, but unbridled enthusiasm does not serve the purpose. People are captivated by a particular vision of what AI is all about: a human clone, an artificially created being with human emotions, intelligence and behavior. Breakthroughs are seen as steps toward that vision, while the missteps, mistakes and hidden flaws are overlooked. As a result, disappointment follows when errors occur, and AI technology itself is deemed untrustworthy. As of now, AI is not advanced enough to make decisions on its own in human contexts; human interaction and judgement are still required.

To avoid another AI Winter, it is crucial to keep expectations realistic and to close the gap between present reality and future hopes.

Businesses and governments with substantial investments in ANI must work together to reduce the chances of another AI Winter. A public discourse with pared-down expectations, mindfulness of ANI’s flaws, careful management of customer expectations, and thoughtful, ethical ANI applications that foster trust will go a long way toward averting one.

ANI, even with its faults, is useful today and promises to improve quality of life significantly in the coming years. It may not be close to AGI yet, but business leaders and governments would do well to use it appropriately and safely to deliver substantial human value in the years to come.

