What is safe artificial intelligence in mortgage lending?

If you have people making decisions from data, you probably need artificial intelligence – but not the kind that’s making headlines right now.

The sensational headlines this week are about generative chatbots – programs like ChatGPT that carry on natural-sounding conversations in (written) English. They’re amazingly lifelike and seem to be thinking for themselves. But the things they say are often false, and even when telling the truth, they can’t tell you where they got their information. They’re working from large tables of how words are commonly used, not information about the outside world. So despite the “wow” factor, they’re not, by themselves, the right tool for anything in mortgage lending that I can see.

Chatbots do have their uses. You might want to have a web page that takes customers’ questions in plain English and answers them. Generative technology can be useful on the input side, for recognizing different ways of wording a question, but the answers have to be controlled. When a customer asks for his loan balance, the chatbot must actually look up the balance, not just make up something that uses words in a plausible way. Even if the computer misunderstands the question, it must not spout falsehoods.
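To make that concrete, here is a minimal sketch in Python. The customer ID, balance table, and keyword-based intent matcher are all invented stand-ins; in a real system a generative model might do the interpreting, but it should never compose the answer itself.

```python
# Controlled chatbot sketch: language technology interprets the question,
# but every answer comes from a real data lookup, never from generation.

LOAN_BALANCES = {"C-1001": 187_342.55}  # stand-in for the servicing database

def classify_intent(question: str) -> str:
    """Map free-form wording onto a known intent (hypothetical logic;
    a generative model could do this step in practice)."""
    q = question.lower()
    if "balance" in q or "how much do i owe" in q:
        return "get_balance"
    return "unknown"

def answer(question: str, customer_id: str) -> str:
    intent = classify_intent(question)
    if intent == "get_balance":
        balance = LOAN_BALANCES[customer_id]  # look it up, don't make it up
        return f"Your current loan balance is ${balance:,.2f}."
    # If we don't understand, we say so -- never generate a plausible guess.
    return "I'm sorry, I didn't understand that. Could you rephrase it?"

print(answer("How much do I owe on my mortgage?", "C-1001"))
```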

But chatbots are just one tiny part of AI. They are one application of machine learning, and machine learning in turn is not the whole of AI. Let’s look at machine learning next.

Machine learning means getting software to recognize patterns and train itself from data. It is very useful for finding statistical regularities and estimating probabilities; at bottom, it is statistical regression, greatly expanded into many dimensions. Neural networks are one kind of machine learning, and despite the name, they are multi-layer statistical models, not models of brains.

The results of machine learning are only probable, not certain, so you have to be ready to live with inaccuracy. Fortunately, people recognize that the answers aren’t coming from a conscious human mind, which makes it easier for them to treat those answers with caution. Machine learning will tell you whether a borrower is probably a good risk; it will not tell you for certain exactly what that borrower will do. That is easy to understand, and useful.
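To illustrate, here is a sketch using synthetic data and invented feature names. The model is a logistic regression, ordinary statistical regression in the machine-learning family, and what it produces is a probability, not a verdict.

```python
# Sketch: a statistical model trained on synthetic borrower data.
# Everything here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: columns are credit score and debt-to-income ratio.
X = np.column_stack([
    rng.normal(690, 60, 1000),    # credit scores
    rng.uniform(0.1, 0.6, 1000),  # debt-to-income ratios
])
# Synthetic outcome (True = repaid), loosely tied to the features plus noise.
y = ((X[:, 0] - 600) / 200 - X[:, 1] + rng.normal(0, 0.3, 1000)) > 0

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = [[720, 0.35]]
p_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of repayment: {p_repay:.0%}")
# The model says "probably," not "certainly"; a human still decides
# how much risk the business will accept.
```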

Apart from inaccuracy, the big risk with machine learning is that it will learn the wrong things – specifically, discriminatory decisioning. If you tell a computer to find patterns, it will find them, whether or not they are patterns society wants to perpetuate. If the data used to train a machine learning model reflects historic racial bias, the model may discover that bias and perpetuate it in its predictions. It has no way to know you don’t want it to use that knowledge. It might even detect race indirectly, from location (old-fashioned illegal redlining), or choice of hairdressers, or anything else.

How strongly you guard against this depends on what you are using machine learning for. If you’re just plotting an advertising strategy or making predictions internally, the prejudiced computer may not violate laws or regulations – but if it’s making decisions about people, it certainly will. The cure is twofold: block inappropriate information from being used, so the machine learns only from data you’re entitled to use, and test the results to see whether the system is in fact biased. You usually cannot look inside a machine learning system to find out what it learned, because the patterns are hidden in matrices of numbers.
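One simplified version of such a test, with invented numbers: compare approval rates across groups that the model is forbidden to use, applying the four-fifths rule of thumb from fair-lending analysis. The group labels are used only for auditing, never as model inputs.

```python
# Simplified disparate-impact check (one of many possible fairness tests).
# Decisions and group labels below are invented for illustration.
def approval_rates(decisions, groups):
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    return {g: approved / total for g, (approved, total) in counts.items()}

decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]  # model's approve (1) / deny (0)
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: possible disparate impact; investigate before deploying.")
```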

But even that isn’t all of AI. Traditionally, AI comprises all uses of computers that are based on the study of human thought. That includes some technologies that are not in today’s limelight but are very applicable to finance. They revolve around knowledge-based systems and explicit rules for reasoning.

One time-honored method is knowledge engineering: get a human expert, such as a loan underwriter, to work through a lot of examples and tell you how to analyze them. Then write a computer program that does the same thing, and refine it with help both from the human expert and from statistical tests. The result is likely to be a rule-based, knowledge-based system, using well-established techniques to reason from explicit knowledge. It may well be more accurate and reliable than the human expert, because it never forgets anything. On the other hand, unlike the human expert, it knows nothing that was not built into it.
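A toy illustration of what such a system looks like (the rules and thresholds below are invented for the example, not real underwriting guidelines): explicit, inspectable rules that give their reasons along with the decision.

```python
# Sketch of a rule-based, knowledge-based system: underwriting knowledge
# captured from a human expert as explicit, inspectable rules.
def underwrite(application: dict) -> tuple[str, list[str]]:
    reasons = []
    if application["credit_score"] < 620:
        reasons.append("credit score below 620")
    if application["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    if application["loan_to_value"] > 0.97:
        reasons.append("loan-to-value ratio above 97%")
    decision = "refer to human underwriter" if reasons else "approve"
    return decision, reasons

decision, reasons = underwrite(
    {"credit_score": 640, "debt_to_income": 0.48, "loan_to_value": 0.90}
)
print(decision, reasons)
# Unlike a black-box model, every decision comes with its reasons,
# and the rules never forget or tire; but they know only what was built in.
```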

Knowledge engineering mixes well with machine learning approaches that output understandable rules, such as decision trees.  There are also ways to probe a machine learning system to extract explicit knowledge from it; this is called explainable AI (XAI).
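For instance, a small decision tree can be trained and then printed as readable if/then rules that a human expert can review. This sketch reuses the synthetic-data idea from above and is illustrative only.

```python
# A machine-learning model that outputs understandable rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(690, 60, 500), rng.uniform(0.1, 0.6, 500)])
y = ((X[:, 0] - 600) / 200 - X[:, 1]) > 0  # synthetic "good risk" label

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["credit_score", "debt_to_income"]))
# The printout reads like expert rules, so the knowledge engineer and
# the learned model can check each other's work.
```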

Of course, knowledge-based systems face a pitfall of their own that we recognized long ago: “As soon as it works reliably, it’s no longer called AI!” But we’re in business to make good decisions, not to impress people with magic.

This article by Dr. Michael Covington originally appeared in National Mortgage News.
