Since 2006, Zillow has been buying, selling, and renting real estate across the United States. What started as a modest real estate business eventually became a behemoth with a market cap that hovered around $23.5 billion at its height. However, as it turned out, home sellers were flocking to Zillow because it was overpaying. The resulting fallout cost the company a $304 million write-down, shattered its stock price, led to the layoff of 2,000 employees, and damaged the finances and reputation of both the company and its leadership.
How did this happen? Was their AI properly managed?
The story began with a reasonable mission. The goal was to minimize labor costs and increase customer satisfaction, translating to a more profitable and sustainable business model. To achieve this, Zillow turned to the ultimate goal of any process: automation. The premise was simple: instead of an agent dealing with the customer immediately, an Artificial Intelligence algorithm would attempt to price the home and make an initial offer, known as the “Zestimate”. This deep learning model would collect all kinds of data, some scraped from online databases, some provided by the seller. Square footage, recent renovations, appliances, nearby listings, and many other variables were weighed against each other to reach the end result, the “Zestimate”, which would serve as the baseline for future negotiations.
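To make the idea concrete, here is a minimal toy sketch of what a feature-based home price estimate looks like. This is purely illustrative: the weights, features, and function below are hypothetical, and Zillow's actual Zestimate model is proprietary and far more complex than a linear combination.

```python
# Hypothetical weights a simple hedonic pricing model might learn from sale data.
WEIGHTS = {
    "sqft": 150.0,          # dollars per square foot
    "renovated": 25_000.0,  # premium for a recent renovation
    "comp_median": 0.5,     # pull toward nearby comparable listings
}

def toy_estimate(sqft: float, renovated: bool, comp_median: float) -> float:
    """Combine property features into a single baseline price."""
    return (WEIGHTS["sqft"] * sqft
            + WEIGHTS["renovated"] * renovated
            + WEIGHTS["comp_median"] * comp_median)

# A 1,800 sqft renovated home with comparable listings around $320,000:
print(toy_estimate(1_800, True, 320_000))  # -> 455000.0
```

The real model learns such weights from millions of transactions; the point here is only that many noisy inputs get collapsed into one number that then anchors the whole negotiation.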
In theory, the AI algorithm would make an offer in Zillow’s favor, and if the customer agreed, they could close the deal from the comfort of their own home on short notice, with minimal interaction with an agent. Zillow saves on commission and labor, and the customer gains convenience. A win-win situation, at least on paper.
Unfortunately, Zillow proved a bit too confident in its reliance on Explainable AI, which Zillow itself called “an exciting new advancement” in a 2019 article touting the technology. It turned out that the “Zestimate” was often much higher than the initial offer an agent would have made, and often above market value. At the same time, staff placed too much confidence in the Zestimate’s output, and the result was many homes bought at a loss: $304 million in real estate losses in one quarter.
The financial costs and staff losses were equally harmful to the company’s reputation. After the news of the write-down and layoffs broke, Zillow Group’s share price dropped from $105 to $63 over the course of the next few days, cutting its market cap by roughly $10 billion. A significant loss for Zillow, but an important lesson for others: AI must be properly managed, and too much reliance on traditional post-hoc Explainable AI without proper Model Risk Management can cause significant financial and reputational harm.
MIT Technology Review article: Why asking an AI to explain itself can make things worse
Preventing future mishaps
Several discussion threads on the Internet seem to suggest that this was a business process failure: the model was actually quite good, but the business chose to tweak it and allow up to 7% overpaying. On the other hand, according to Agus Sudjianto, a rockstar in the Model Risk Management world, releasing the AI to production without proper model validation was also a likely culprit.
For those unfamiliar, proper Model Risk Management must include independent model validation to determine whether the model was fit for purpose and conceptually sound.
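One basic fit-for-purpose check in such a validation is a backtest for systematic bias: compare model estimates against realized sale prices on a holdout set and fail the model if it overprices on average. The function, data, and tolerance below are all hypothetical, a minimal sketch of the idea rather than any real validation framework.

```python
def validate_pricing_model(estimates, realized_prices, bias_tolerance=0.02):
    """Return (passed, mean_bias). Fail the model if it overprices on
    average by more than bias_tolerance (as a fraction of realized price)."""
    biases = [(e - p) / p for e, p in zip(estimates, realized_prices)]
    mean_bias = sum(biases) / len(biases)
    return mean_bias <= bias_tolerance, mean_bias

# Toy holdout set: the model consistently estimates ~5% above realized prices.
est = [315_000, 262_500, 420_000]
real = [300_000, 250_000, 400_000]
passed, bias = validate_pricing_model(est, real)
print(passed, round(bias, 3))  # -> False 0.05
```

An independent validation team would run checks like this (and many others) before the model touches production money, precisely to catch a model that systematically pays above market.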
Zillow claims to have used the Zestimate the same way its customers did, as a baseline to negotiate from. If this is true, it means their AI failed model validation 101: models should be independently validated to make sure they are fit for purpose. In this case, it seems they were using the Zestimate to estimate pricing for both buying and selling, which carry exactly opposite model risks and purposes. Whether human or AI, one obviously wants to buy low but sell high!
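The buy-side risk can be illustrated with a small adverse-selection simulation. Suppose the model is unbiased, its estimate being the true value plus symmetric noise, and the company offers the estimate as its purchase price. Sellers, who know what their home is worth, accept only offers at or above true value, so the accepted deals are precisely the overestimates. The numbers below are made up for illustration.

```python
import random

random.seed(0)

overpayments = []
for _ in range(10_000):
    true_value = random.uniform(200_000, 500_000)   # hypothetical market value
    # Unbiased estimator: symmetric noise around the true value.
    estimate = true_value + random.gauss(0, 20_000)
    # Sellers accept only offers at or above what the home is worth,
    # so the closed deals skew toward the overestimates.
    if estimate >= true_value:
        overpayments.append(estimate - true_value)

avg_overpay = sum(overpayments) / len(overpayments)
print(f"Deals closed: {len(overpayments)}, avg overpayment: ${avg_overpay:,.0f}")
```

Even with a perfectly unbiased model, the average overpayment on closed purchases is strictly positive, which is why a buy-side pricing model needs its own validation and a deliberate margin below the estimate, not the same baseline a seller would use.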
Zillow’s failure has made one thing clear: proper investment in AI model risk management resources, including people, processes, and technology, is essential for any organization looking to operationalize AI.
Credit: Thanks to Agus’ LinkedIn Post for inspiring this blog post.
About Fairly AI
Fairly AI’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance SaaS solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.