Peter Thomas, chief operating officer of the Leasing Foundation, gives an overview of new technologies that will impact the leasing industry. This month’s focus is artificial intelligence

What is it?

Like Big Data, the subject of a previous column in this series, artificial intelligence, or AI, is one of those phrases that is now ubiquitous.

Like the ‘synthetics’ in the TV show Humans, we are used to thinking about AI as ‘computers that mimic people.’
But AI has a long history, a complex present, and an exciting future, so before we look at the present and the future of AI, a little background will be useful.

The idea of AI has been around for a long time. Automata – machines that copy what people do, such as writing, playing an instrument or playing games – have been built by every civilisation and in every period, from the early Greeks, through the Renaissance, to the golden age of automata in Europe in the late 19th century.

But the challenge that AI really wanted to take on was not just to create machines that acted like people, but ones that thought like people.

The idea of AI as ‘thinking machines’ started with the English mathematician Alan Turing. Amongst his other accomplishments, including breaking German ciphers at Bletchley Park, Turing proposed a way of testing whether a machine was indistinguishable from a human. His Turing Test describes a person and a computer asking and answering questions using typed messages. If the human could not tell whether the responses came from a human or a computer, the computer was, to all intents and purposes, intelligent.

In 1966, scientist Joseph Weizenbaum created a program that seemed to pass the Turing Test. ‘Eliza’ mimicked a psychotherapist responding to a patient. It worked by looking for keywords in a human’s typed messages and inserting them into stored response templates, such as: “Tell me more about your mother,” where “mother” was a keyword taken from the person’s message. Eliza was successful in fooling people into believing that they were talking to a real therapist.
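The keyword-substitution trick is simple enough to sketch in a few lines. This is not Weizenbaum’s actual program – the keywords and templates below are invented for illustration – but it captures the idea: find a keyword in the input, slot it into a canned reply, and fall back to a neutral prompt otherwise.

```python
import re

# Eliza-style rules: a pattern that captures a keyword, and a response
# template the captured word is slotted into. (Illustrative rules only.)
RULES = [
    (re.compile(r"\bmy (mother|father|family)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bI feel (\w+)\b", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (\w+)\b", re.I), "How long have you been {0}?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Insert the captured keyword into the stored response
            return template.format(match.group(1).lower())
    return "Please go on."  # fallback when no keyword matches

print(respond("I feel anxious about work"))  # Why do you feel anxious?
print(respond("my mother is kind"))         # Tell me more about your mother.
```

There is no understanding here at all – just pattern matching – which is exactly why Eliza’s success in fooling people was so striking.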

The modern age of AI started in the 1950s with the development of more powerful computers, more memory storage and more complex programs. Scientists realised that computers could be used to do complex tasks such as playing chess, and AI researchers then started to look at other complex problems such as how to produce human language.

It became obvious, though, that thinking cannot be reduced to simple steps in an algorithm. Being intelligent – and behaving intelligently – is a difficult thing to do.

For example, even children can respond to the phrase: “Can you get me that thing from just over there behind the green thing that we bought yesterday, please?”, but computers, however powerful, could not. Humans can because they employ complex reasoning strategies, use memory, ask questions, draw on past experience and tolerate ambiguity in ways that computers, working step by step through algorithms, find hard to match.

How does it work?

This history is important to place modern AI into perspective. Using massive amounts of data, and using complex adaptive algorithms that learn as they acquire new data, AI is starting to have a huge impact on business and society.

In 2016 the market for AI-related products, hardware and software reached more than $8bn, and it is expected to grow to tens of billions of dollars over the next decade, spanning areas of our lives from transport, utilities and healthcare to manufacturing, the environment and finance. This is because AI has abandoned the idea that the goal is to make computers intelligent in just the way people are, and instead aims to create computers that are intelligent enough to help people do things better.

The algorithms that help AI systems solve scheduling problems, recognise speech or understand financial market conditions, for example, are now based on approaches such as ‘deep learning’ that collect and use huge amounts of data to build more and more sophisticated representations of a problem. One such problem might be identifying people’s faces in images: the more images that are processed, the deeper the learning, until the AI-based face-recognition system that Facebook uses is almost 100% accurate.
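The general pattern – accuracy improving as more data is seen – can be illustrated without any deep learning at all. The toy below (entirely invented for illustration, and far simpler than a real neural network) classifies synthetic two-dimensional points by learning the average position of each class, and is typically more accurate on held-out data the more training examples it has seen.

```python
import random

random.seed(0)

def sample(label):
    """Generate a synthetic point near (0,0) for class 0, (2,2) for class 1."""
    centre = 0.0 if label == 0 else 2.0
    return (centre + random.gauss(0, 1), centre + random.gauss(0, 1)), label

def train(n):
    """'Learn' each class by averaging n training examples into centroids."""
    data = [sample(i % 2) for i in range(n)]
    centroids = {}
    for lbl in (0, 1):
        pts = [p for p, l in data if l == lbl]
        centroids[lbl] = (sum(x for x, _ in pts) / len(pts),
                          sum(y for _, y in pts) / len(pts))
    return centroids

def accuracy(centroids, test):
    """Predict the nearest centroid; report the fraction predicted correctly."""
    def predict(p):
        return min(centroids, key=lambda l: (p[0] - centroids[l][0]) ** 2
                                            + (p[1] - centroids[l][1]) ** 2)
    return sum(predict(p) == l for p, l in test) / len(test)

test = [sample(i % 2) for i in range(200)]
for n in (4, 40, 400):
    print(n, "training examples ->", round(accuracy(train(n), test), 2))
```

Deep learning applies the same more-data-better-model principle, but with representations vastly richer than two averaged coordinates.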

Another example, of many, comes from home security company Cocoon. Its system uses the motion sensors and cameras usual in home security setups, but also analyses sounds.

Every house has a unique sound fingerprint – pets running around, the central heating going on and off – and deep-learning software can raise the alarm when the fingerprint changes in ways it has learned suggest the house is being broken into (opening doors or smashing glass), while ignoring harmless changes.
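The underlying idea is anomaly detection: learn what ‘normal’ looks like, then flag readings that deviate from it. Cocoon’s actual system uses learned deep-learning models; the sketch below is a deliberately simple stand-in, using invented decibel readings and a basic statistical threshold, just to show the learn-a-baseline-then-flag-deviations pattern.

```python
from statistics import mean, stdev

def learn_baseline(samples):
    """Summarise a home's normal sound levels (pets, heating) as mean/stdev."""
    return mean(samples), stdev(samples)

def is_anomalous(level, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(level - mu) > threshold * sigma

# Invented readings: a week of ordinary household noise levels (decibels)
normal = [40, 42, 41, 39, 43, 40, 41, 42]
baseline = learn_baseline(normal)

print(is_anomalous(41, baseline))  # everyday noise: False
print(is_anomalous(95, baseline))  # sudden loud event, e.g. glass: True
```

A deep-learning system replaces the single mean/stdev summary with a learned model of the whole sound fingerprint, but the decision it makes is the same: does this sound fit the pattern the house normally produces?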

Take Apple’s Siri: it is getting more intelligent as its algorithms learn from people’s searches. Or Google’s DeepMind, which is using ‘neural networks’ that mimic how the human brain works to solve problems with uncertain outcomes or incomplete knowledge – such as how to plan radiotherapy treatment for hard-to-treat cancers. The results are as good as, and in some cases better than, those humans can achieve, and can help doctors make better treatment decisions.

In finance, companies like Sybenetix are using AI to help compliance officers detect and investigate suspicious trading patterns, while others use AI to help banks improve risk controls and maintain sufficient capital so that they do not become unprofitable because of regulatory implementation costs.

There are many other examples of AI in finance: robo-trading, robo-advisers, and the fintech company Kensho, whose AI system can take questions like: “What happens to car firms’ share prices if oil drops by $5 a barrel?”, retrieve financial reports, company filings and historical market data, and return an answer.

AI harnesses the power of Big Data and uses smart algorithms to do something useful with it. It can take the human work out of mundane, repetitive tasks, or automate them so that only a few require human intervention. Law firms are investing in AI to automate the simple tasks undertaken by lawyers, leaving them to focus on higher-value work, and AI is being used to make investment decisions – BlackRock, Bridgewater and Schroders are all investing in AI that can outperform human financial decision-making.

What impact will it have?

In finance, AI is one of the technologies to watch over the next decade, and regulators are already struggling to ensure market integrity as less sophisticated firms try to compete against those employing AI.

We can expect to see customer expectations change to a point where AI-based systems are trusted more than human ones, and interaction with AI systems is preferred over human interaction.

Companies that do not invest in AI – or at least include it in their strategic vision and technology roadmaps – will start to lag, and will find themselves struggling in a market against fintechs that are built on AI.

For leasing and asset finance – not an industry known for being at the forefront of technology – companies will need to learn about what AI can bring and build this into their products and processes.