Imagine that a landlord owns a terraced house and is wondering whether it’s more profitable to sell the house as-is or to split it into two flats.

The landlord can get a valuation for the house. In this example, I used AccuVal to value a 5-room, 950 sq ft house near Waterloo Station and got about £1.251m.

The trick, however, lies in valuing the imaginary flats that don’t actually exist. No traditional AVM on the market can do it, and even experienced valuers will likely struggle!

This is a case where a Machine Learning AVM can help. I assumed that two flats exist at the same location: one with 3 rooms and 500 sq ft, the other with 2 rooms and 400 sq ft (50 sq ft of usable floor area would be lost to the modifications). Using AccuVal, the two imaginary flats were valued at £699K and £495K respectively.

The combined value of the two flats (£1.194m) turned out to be less than that of the house, even before factoring in development costs. As such, splitting this house wouldn’t be a good idea.
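The comparison boils down to simple arithmetic. Here is a minimal sketch using the valuations quoted above (a real appraisal would also subtract development costs, which I’ve left out, as in the post):

```python
# Valuations quoted above (AccuVal estimates)
house_value = 1_251_000   # 5 rooms, 950 sq ft
flat_a = 699_000          # 3 rooms, 500 sq ft
flat_b = 495_000          # 2 rooms, 400 sq ft

combined = flat_a + flat_b            # 1,194,000
uplift = combined - house_value       # negative => splitting destroys value

print(f"Combined flats: £{combined:,}")
print(f"Uplift before development costs: £{uplift:,}")
```

Since the uplift is already negative before development costs, the split can be ruled out without estimating those costs at all.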

Local authorities and Registered Providers of social housing (RPs) can now use AccuVal to obtain the EUV-SH (Existing Use Value – Social Housing) straight away.

Get in touch if you need to do bulk valuations. For individual valuations, you can use AccuVal free of charge.

Traditional AVMs rely heavily on tracking the price change of each individual property. This approach works well when the property has been sold recently at a fair market value (which negates the need to re-value it in the first place).

However, this approach suffers from a serious flaw: if a property was sold at a price lower than its fair market value, the AVM will continue to undervalue it, and vice versa.

AccuVal follows a fundamentally different approach. It doesn’t track individual properties at all. Instead, it learns the fair market value from a mix of property and location data. As a result, it’s far more resilient than even the best traditional AVMs on the market. The example here is a flat in Greater London that recently sold at a fraction of its market value: Zoopla/Hometrack continues to undervalue it, but AccuVal managed to produce a proper valuation.
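The difference between the two approaches can be illustrated with a toy example (all figures below are invented, and the feature-based model is a deliberately crude stand-in for a trained ML model):

```python
# Toy illustration of the two approaches; all figures are invented.
last_sale_price = 300_000        # property sold well below its true value
regional_index_growth = 1.10     # 10% regional price growth since the sale

# Index-tracking AVM: scales the stale sale price forward, so the
# original below-market discount is carried forward indefinitely.
index_based_estimate = round(last_sale_price * regional_index_growth)

# Feature-based AVM (AccuVal-style, heavily simplified): predicts from
# property and location features and never sees the sale price at all.
def feature_based_estimate(floor_area_sqft, local_price_per_sqft):
    # A real model would learn a far richer mapping from many features.
    return floor_area_sqft * local_price_per_sqft

ml_estimate = feature_based_estimate(800, local_price_per_sqft=500)

print(index_based_estimate)  # still anchored to the cheap sale
print(ml_estimate)           # driven by the property itself
```

The index-based estimate inherits the discount of the original sale, while the feature-based estimate is unaffected by it, which is the resilience described above.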

It puzzles me why most lenders in 2021 still rely on obsolete tech when better alternatives are already available!

In a previous post, I wondered how close AI has come to passing the Turing test in property valuation. Someone said the Turing test was a “high bar”. I agree.

If we asked a human expert to value a property solely from its basic information (postcode, type, number of rooms, age and EPC rating), they would say it’s impossible.

However, with AI, strength comes in numbers: in this case, the large number of transactions computers can learn from.

So, using this basic information, plus other relevant data about the location, I picked ALL properties sold since January 2021 (about 250K), valued them with AccuVal, and compared the valuations against the actual sale prices. The results were extremely interesting:

– 30% of properties valued with over 95% accuracy
– 52% over 90% accuracy
– 68% over 85% accuracy
– 78% over 80% accuracy
– 86% over 75% accuracy

The remaining 14%, valued with less than 75% accuracy, are mostly outliers and invalid transactions, which I opted to keep in the dataset.
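The bucketed results above could be computed along these lines. This is a sketch with synthetic data standing in for the real transactions, and it assumes accuracy means one minus the relative error:

```python
import random

random.seed(0)

# Synthetic (valuation, actual_price) pairs standing in for the real
# experiment, which used ~250K actual transactions.
prices = [random.uniform(100_000, 1_000_000) for _ in range(10_000)]
pairs = [(p * random.uniform(0.7, 1.3), p) for p in prices]

def accuracy(valuation, price):
    # 1 - relative error, floored at 0 for wildly off valuations
    return max(0.0, 1 - abs(valuation - price) / price)

accs = [accuracy(v, p) for v, p in pairs]
for threshold in (0.95, 0.90, 0.85, 0.80, 0.75):
    share = sum(a >= threshold for a in accs) / len(accs)
    print(f"{share:.0%} of valuations were at least {threshold:.0%} accurate")
```

Note that the shares are cumulative by construction: every valuation counted at the 95% threshold is also counted at the lower ones, which matches how the figures above grow from 30% to 86%.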

Speed-wise, the computer I used (a 7th Gen Intel i7) was able to value 2,100 properties per second.

In conclusion, I don’t know if AI has passed the Turing test, but the potential should be obvious by now.

In a previous post, I promised to elaborate on why a “safety mechanism” is essential when Machine Learning (e.g. Deep Learning) is used to automate decision making.

With classic software, the computer executes code written by a programmer in a chosen programming language, so the software behaves in a very predictable manner. With Deep Learning, however, the decision is not made by a programmer’s explicit code. Instead, it is made by a pre-trained model.

One might ask: why not “debug” the model when things go wrong? It’s computer code at the end of the day!

Unfortunately, while Deep Learning might appear simple on paper (see the image on the left), that is just a simplified abstraction. The reality is very different (see the image on the right). It should be clear by now why AI is a black box and why no one can really tell where an error may lie.

For this reason, it makes sense to test the output against well-known “good” limits and take action when the prediction is out of range: for example, by overriding the prediction or notifying the user.
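A minimal sketch of such a safety mechanism might look like this (the function name and limits are hypothetical; in practice the limits would come from comparable sales, a rule-based model, or expert-set bounds):

```python
def guarded_valuation(model_prediction, lower_limit, upper_limit):
    """Check an ML prediction against known-good limits.

    Returns (value, flagged). If the prediction falls outside the
    limits, it is overridden by the nearest limit and flagged so the
    user can be notified (both actions mentioned in the post).
    """
    if model_prediction < lower_limit:
        return lower_limit, True    # override low outlier, notify user
    if model_prediction > upper_limit:
        return upper_limit, True    # override high outlier, notify user
    return model_prediction, False  # prediction within expected range

# Hypothetical usage: the model badly over-values a property,
# so the guardrail caps the output and flags it for human review.
value, flagged = guarded_valuation(2_500_000, 400_000, 900_000)
```

The key design point is that the guardrail sits outside the model: it needs no access to the model’s internals, which is exactly what makes it useful when the model itself cannot be debugged.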