We know that real estate markets are highly stratified, fairly illiquid, and not overly transparent. Many asset classes lack large numbers of buyers, sellers, and transactions. And even where there are sufficient transactions, the uniqueness of each parcel of real estate doesn’t always allow for instant clarity. Many other financial assets have greater volume and more transparency, letting market participants gain an understanding of the workings of their markets, but even those markets represent a challenge to a rational person.
In real estate, we are often dealing with the evidence of a very few sales or with related transactions that need substantial additional analysis to understand. While statistics and other quantitative methods are useful tools, they are often of little help with the small data sets that are typical of the field.
So, to real estate experts, every sale in a submarket means something. A market analyst needs to find out what the sale represented to the participants in the transaction and how the transaction fits into the broader market.
Appraisers, brokers, bankers, and many other market participants rely heavily on aggregated data, i.e., surveys and interpretations of surveys, to gain a better understanding of the larger market in which individual transactions take place. The overview data helps establish a context for the transactional activity within a market or submarket.
There is a huge amount of data available for analysis, generated by myriad sources ranging from brokerages (subregional to national) to survey services and data services. Is it all correct? What can we rely upon in the process of market analysis? What is appropriate for the problem to be solved? Can too much data lead to processor freeze-up?
Common misapplications of aggregated data and survey results abound. A couple of examples: applying national data to local markets without adjustment; using investor-specific survey data to draw conclusions about another investor class altogether.
Often it’s a matter of deciding which data is relevant and how to present a convincing case for its use in a particular context. The art and science of using secondary-source data lies in applying it at the right level of the market and making comparisons with the appropriate asset class. For instance, national lodging data probably won’t convince anyone reading a market analysis of an independent roadside motel that it is appropriate to that asset without a lot of explanation, and even then, someone might be well justified in saying, “The view from 30,000 feet makes for interesting reading, but where’s the local data to support your assumptions?”
Clients are much better informed than they once were. They have access to much of the same data you do. They are looking to you to do something with the data to justify your existence (and possibly your engagement). As the time-worn phrase goes, you have to bring something to the table, some value added.
The big picture may not tell the whole story. If necessary, be prepared to do some original, independent research.
As the saying goes, we are “drowning in data, but thirsting for knowledge.” The data is out there; use it wisely, and use it appropriately.
Bill Pastuszek, MAI, ASA, MRA, heads Shepherd Associates, Newton, Mass.