Is AI liberating or debilitating?

All of a sudden, it seems, Artificial Intelligence (AI) is everywhere. I don’t mean everywhere as in its use and application (although it’s getting there), but everywhere in the press.

AI is exciting, liberating and innovative – AI is concerning, disruptive and debilitating; it all depends on your point of view.

When the greatest minds of our time – I’m talking about Stephen Hawking in particular here – have doubts about the impact AI will have, we should all sit back and take stock. “The ethical dilemma of bestowing moral responsibilities on robots calls for rigorous safety and preventative measures that are fail-safe, or the threats are too significant to risk,” he (and others) said in an open letter quoted on the Mission Critical Systems Forum at http://bit.ly/1PiWLwM.

On the face of it, AI is emancipating; freeing workers from the drudgery of repetitive actions so they can focus on the creative, innovative, pioneering parts of their lives that require original thought. So why the worry? Well, it’s simply that some are concerned that these intelligent machines could take over parts of our society that we didn’t intend; those creative, innovative and pioneering roles. Only last month, a group of techno-scientists created an AI programme that could… could… make judges redundant. “An artificial intelligence system has correctly predicted the verdicts of cases heard at the European Court of Human Rights, with a 79% accuracy,” reports BBC News (http://bbc.in/2eZPf1s). The implication is that, with the criteria established, such decision-making simply comes down to algorithms.

As Computerworld explained: “Artificial intelligence is expected to transform a wide range of industries as simple tasks are automated and carried out by machines. The IT sector is no different, with machine learning algorithms increasingly being targeted at automating and improving data centre operations.” (http://bit.ly/2eBfit8)

Morgan Stanley, using data from an Oxford University study, predicted that nearly half of US jobs will be replaced by robots over the next two decades, said Roger Attick on the forum (http://bit.ly/2eKCZOL). That’s just one example of the many times AI has been discussed on the Mission Critical Systems Forum (http://bit.ly/2exu59y).

This ability of machines to ‘learn’ from their actions and reactions, and to ‘think’, assess and react accordingly, creates endless possibilities for their use. AI is already here and in widespread use, according to Dann Albright (at http://bit.ly/2exYypw), in areas such as virtual personal assistants, video games, smart cars, purchase prediction, fraud detection, online customer support, news generation, security surveillance, music and movie recommendation services and smart home devices.

In fact, the artificial intelligence market is estimated to grow from just under $420m in 2014 to more than $5bn by 2020, at a CAGR in excess of 53% from 2015 to 2020. The White House says AI holds the potential to be a major driver of economic growth and social progress (http://bit.ly/2efbPTl).

The question Prof Hawking and others are asking is how do we limit the ability of AI machines? How can we retain the upper hand? (Artificial intelligence: the best thing yet, or a disaster in the making? http://bit.ly/2eNhVsx)

Where will AI go from here? How will it affect you and your work? And how concerned should we be that fast-learning, fast-thinking machines could pose more problems than they solve?

To discuss this and other articles please visit the Mission Critical Systems Forum group on LinkedIn.
