The Experts Said Property Would Crash. Here's What Happened to Investors Who Listened.
We analysed 88 directional forecasts from the RBA, Big 4 banks, and independent economists over 15 years. Their direction accuracy: 60%. A dummy model that says 'prices will rise' every year: 73%. The experts lost to a one-line rule.
By Luke Metcalfe · Microburbs Research · February 2026
The call that should have made every investor question everything
Cast your mind back to early 2023. Rates had risen aggressively. Prices had already fallen through 2022. Westpac, ANZ, NAB and the broader consensus all forecast another 5 to 8% decline in national property prices.
The logic was sound. Affordability was stretched. The RBA was not done. Household budgets were under real pressure.
An investor in Parramatta, Geelong or Brisbane reading the financial press would have thought: now is not the time to buy. Some would have sold.
Instead, national prices rose +8.1% in 2023. The consensus had predicted −5%. That is a 13.1 percentage point error.
For a leveraged investor at 80% LVR, following the experts and sitting on the sidelines cost them 38.5 percentage points of equity return. In a single year.
2023 in numbers: Consensus forecast: −5%. Actual outcome: +8.1%. Net error: 13.1 percentage points. For a leveraged investor at 80% LVR who exited to cash: −38.5pp equity return versus staying invested.
This was not a freak event. The same pattern played out in 2019 when the election outcome reversed every bearish model. And again in 2020 when emergency stimulus caught everyone off guard. Three times in five years, the consensus was not just wrong. It was wrong in a way that materially hurt investors who followed it.
So why does everyone say experts are '81% accurate'?
The short answer: because they measure the right thing in the wrong way.
We built a dataset of every major property forecast from 2010 to 2024. That is 99 individual calls from the RBA, Big 4 banks, SQM Research, and independent economists. Removing neutral calls leaves 88 directional forecasts. We matched each one against the CoreLogic national actual.
At the consensus level, experts got the direction right in 13 of 15 years. That sounds impressive. But the unit of analysis is the 88 individual directional forecasts, not 15 consensus years, and any accuracy figure is meaningless without a baseline.

Australian property rose in 12 of those 15 years. A dummy model that says 'prices will rise' every single year scores 73% across those same 88 forecasts. No model required. No research. Just say 'up' every year.
The experts? 60%. They scored 13 percentage points worse than the dummy model. In bull markets, they wasted credibility on bearish calls. In bear markets, they added some value, but not enough to close the gap.
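The scoring rule behind these percentages is simple enough to show. A minimal sketch; the three-year inputs here are toy numbers for illustration, not the real 88-forecast dataset:

```python
def direction_accuracy(forecasts: list[float], actuals: list[float]) -> float:
    """Share of directional (non-neutral) forecasts whose sign
    matches the sign of the actual price move."""
    pairs = [(f, a) for f, a in zip(forecasts, actuals) if f != 0]
    hits = sum((f > 0) == (a > 0) for f, a in pairs)
    return hits / len(pairs)

# Toy illustration: three forecasts against three actual outcomes.
experts = direction_accuracy([4, -6, -15], [5.0, 2.3, 3.0])   # 1 of 3 right
always_up = direction_accuracy([1, 1, 1], [5.0, 2.3, 3.0])    # 3 of 3 right
```

The always-bull model's score is just the base rate of up-years, which is why it is so hard to beat in a market that rises most of the time.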
15-year scorecard: expert consensus forecast vs actual outcome, 2010 to 2024. The dashed red line shows what experts predicted. The solid green line shows what happened.
A bootstrap analysis across 50,000 resampled datasets confirmed this. The experts beat the always-bull model in only 12% of iterations. In 88% of simulations, saying 'up' every year was the better strategy.
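The bootstrap itself is straightforward to sketch. The records below are synthetic, constructed only to mirror the headline rates (roughly 73% of outcomes up, experts right on roughly 60% of calls), not the real forecast panel:

```python
import random

def expert_win_rate(records, n_iter=50_000, seed=1):
    """Resample (forecast_up, actual_up) pairs with replacement and count
    how often the experts beat an always-up baseline on direction accuracy."""
    rng = random.Random(seed)
    n = len(records)
    wins = 0
    for _ in range(n_iter):
        sample = [records[rng.randrange(n)] for _ in range(n)]
        expert_hits = sum(pred == act for pred, act in sample)
        baseline_hits = sum(act for _, act in sample)  # always predicts up
        wins += expert_hits > baseline_hits
    return wins / n_iter

# Synthetic 88-forecast panel: 64 up-years, experts correct on 53 calls.
records = ([(True, True)] * 40 + [(False, True)] * 24
           + [(False, False)] * 13 + [(True, False)] * 11)
```

With a gap that wide between 60% and the 73% base rate, the experts win only a small minority of resamples; the exact fraction depends on the composition of the underlying dataset.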
The pattern is clear in the data. In up-years (12 of 15), always-bull scores 100%. Experts score less, because some of them called bearish. In down-years (3 of 15), experts add genuine value. But three good years cannot offset twelve years of unnecessary bearish calls.
'Across 88 directional forecasts, experts scored 60%. A one-line model that says prices will rise scored 73%. The experts did not beat the base rate. They lost to it.'
The full 15-year scorecard
| Year | Forecast | Actual | Error | Note |
|---|---|---|---|---|
| 2010 | +4% | +5.0% | 1.0pp | Stimulus tailwind |
| 2011 | +4% | −3.8% | 7.8pp | GFC aftershock |
| 2012 | +2% | +0.3% | 1.7pp | Slow recovery |
| 2013 | +4% | +9.8% | 5.8pp | Credit boom |
| 2014 | +6% | +7.9% | 1.9pp | Sustained growth |
| 2015 | +5% | +8.0% | 3.0pp | Pre-APRA frenzy |
| 2016 | +4% | +5.6% | 1.6pp | Moderate growth |
| 2017 | +4% | +4.2% | 0.2pp | APRA tightening starts |
| 2018 | −1% | −6.5% | 5.5pp | APRA squeeze |
| 2019 | −6% | +2.3% | 8.3pp | Election reversal |
| 2020 | −15% | +3.0% | 18.0pp | Emergency stimulus |
| 2021 | +7% | +22.1% | 15.1pp | TFF boom |
| 2022 | −17% | −5.3% | 11.7pp | Rate rises |
| 2023 | −5% | +8.1% | 13.1pp | 510k migration |
| 2024 | +5% | +4.9% | 0.1pp | Most accurate year |
The two failure modes that cost investors money
The headline accuracy hides two distinct ways expert forecasts mislead investors. Both are systematic. Both cost real money.
Failure Mode 1: Bearish calls that blow up
There were 7 bearish calls over 15 years. Only 4 were correct. That is 57% accuracy. Barely better than a coin flip.
All 3 wrong calls (2019, 2020, 2023) shared the same flaw. They correctly identified the underlying economic stress. But they did not anticipate the policy response. The mechanism was right. The government pivot was not in the model.
In 2019, the expected credit crunch was reversed by a surprise election result and immediate APRA easing. In 2020, nobody modelled HomeBuilder, emergency rate cuts, and quantitative easing arriving simultaneously. In 2023, nobody predicted 510,000 net migration.
The pattern is clear. Bearish calls fail precisely when the downside scenario is most likely to drive you to act on them. You exit the market. The government pivots. You miss the recovery. And you pay transaction costs both ways.
Failure Mode 2: Systematic magnitude undershoot
Even when direction is right, the numbers are wrong. In 8 of 9 bullish years where experts correctly called a rise, they understated the actual gain. Average undershoot: 3.6 percentage points.
The 2013 credit boom: consensus +4%, actual +9.8%. The pre-APRA frenzy of 2015: consensus +5%, actual +8%. The TFF-fuelled 2021 boom: consensus +7%, actual +22.1%.
This matters because magnitude determines position sizing. An investor expecting 7% growth behaves very differently from one expecting 22%. Different leverage decisions. Different hold-or-upgrade decisions. Different portfolio allocation decisions. And the expert forecast pointed at the wrong answer in almost every year.
The wealth simulation: 15 years of following the experts
We ran a simple simulation. An always-hold investor versus one who followed the consensus each year. Own property when consensus was bullish. Sit in cash when consensus was bearish. Both start with $100 in 2009.
The always-hold investor turns $100 into $233. The consensus-follower turns $100 into $241.
The advantage: $8 over 15 years. That is 3.4% total. Roughly 0.2% per annum of outperformance. Before transaction costs.
Now add stamp duty and agent fees for the 2019 and 2023 exits and re-entries. The consensus-follower actually underperforms.
The clearest finding: Simply holding Australian property for 15 years, ignoring all expert forecasts, delivered essentially identical returns to a strategy of actively following the consensus. With far less stress. And far lower transaction costs.
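The mechanics of the simulation are simple enough to sketch. This version compounds the capital-growth figures from the scorecard only, ignoring cash interest, rental income, and transaction costs, so the exact totals differ from the figures above; the qualitative finding of near-identical outcomes is the same:

```python
def grow(start, returns):
    """Compound a starting balance through a sequence of annual returns (%)."""
    for r in returns:
        start *= 1 + r / 100
    return start

# Consensus forecast and actual CoreLogic outcome, 2010-2024 (from the scorecard).
forecast = [4, 4, 2, 4, 6, 5, 4, 4, -1, -6, -15, 7, -17, -5, 5]
actual = [5.0, -3.8, 0.3, 9.8, 7.9, 8.0, 5.6, 4.2, -6.5,
          2.3, 3.0, 22.1, -5.3, 8.1, 4.9]

hold = grow(100, actual)
# Consensus follower: in the market when the forecast is bullish, else in cash at 0%.
follow = grow(100, [a if f > 0 else 0 for f, a in zip(forecast, actual)])
```

On capital growth alone, the two paths end within a couple of dollars of each other per $100 invested, before the follower pays a single dollar of stamp duty or agent fees.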
With leverage, the wrong calls become devastating. At 80% LVR, a price move is multiplied roughly 5x on equity, so the 8.1% rise of 2023 translates to about 40.5% of forgone equity growth (8.1% × 5); net of what the sidelined cash earned, the wrong bearish call cost around 38.5 percentage points of equity return. An investor who sold because the experts said 'crash' then watched prices surge 8.1% while sitting in cash. That is the actual cost of following a wrong bearish consensus.
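The leverage arithmetic is worth making explicit: at a given loan-to-value ratio, a price move is amplified on equity by a factor of 1 / (1 − LVR). A quick check, ignoring interest costs and the return earned on cash:

```python
def equity_return(price_move, lvr):
    """Return on equity for a given price move at loan-to-value ratio `lvr`.
    The leverage factor is 1 / (1 - lvr): at 80% LVR, that is 5x."""
    return price_move / (1 - lvr)

missed = equity_return(0.081, 0.80)  # 2023's +8.1% at 80% LVR -> 40.5% on equity
```

The same multiplier works in both directions, which is why a correctly timed exit in a genuine downturn is so valuable, and a wrongly timed one so costly.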
Not all forecasters are equal
The aggregate accuracy masks big differences between forecaster categories. Understanding the incentives behind each forecast matters as much as the forecast itself.
The Big 4 banks: structurally bullish. Banks make money when mortgages are written. Their economics teams have an institutional incentive to publish optimistic outlooks. This shows up clearly in the data: the Big 4 are consistently the most bullish forecasters in the dataset. Useful as a directional signal. Not useful for position sizing or bearish calls.
The RBA: widely treated as the most authoritative voice. In practice, its property market guidance has historically tracked consensus rather than led it. The RBA's accuracy is similar to the broader average. Not materially better, despite privileged access to economic data and mortgage flow information.
Independent economists: a slight bearish tilt compared to other categories, and a mixed track record. Independent economists are not systematically better or worse than the banks. They carry less institutional bias toward bullish calls. But that has not translated into better accuracy on bearish ones.
Regulators and government bodies: should not be used as investment forecasters. Full stop. This is a category error. Regulators and government voices signal policy intent, not market prediction. Treating their statements as investment forecasts produces the worst accuracy in the entire dataset.
SQM Research: publishes bull/base/bear scenario ranges instead of single point estimates. A format that is both more intellectually honest and more practically useful. The actual outcome was inside their scenario range in non-shock years. Prefer forecasters who acknowledge uncertainty over those who publish false-precision point estimates.
The Three-Tier Trust Framework
Based on 15 years of data, here is a practical framework for deciding which expert signals to act on and which to filter out.
| Tier | Environment | Trust | Discard |
|---|---|---|---|
| Tier 1: High confidence | Stable macro environment. No pending policy change. | Trust direction. Experts reliably call 'up' in rising or neutral environments. | Discard magnitude estimates. Systematically understated by 3.6pp on average. |
| Tier 2: Conditional | Confirmed credit-tightening cycle underway. | Trust bearish calls during confirmed APRA tightening (2011, 2018, 2022 precedent). | Discard bearish calls that depend on a single policy scenario holding. This is the pattern that failed in 2019, 2020, and 2023. |
| Tier 3: Best practice | Any environment. | Prefer scenario-range forecasters like SQM Research. Use the range to stress-test your plan. | Discard single-number point forecasts from large institutions. False precision masks genuine uncertainty. |
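As a sketch, the framework reduces to a small filter. The function below is a hypothetical encoding of the three tiers for illustration, not part of the research method; the inputs and verdict strings are invented:

```python
def filter_forecast(direction, is_point_estimate, macro_stable, tightening_confirmed):
    """Apply the three-tier trust framework to a single forecast.
    Hypothetical encoding of the rules in the table above."""
    if is_point_estimate:
        # Tier 3: single-number magnitudes are false precision.
        magnitude_note = "discard magnitude (historic ~3.6pp undershoot)"
    else:
        magnitude_note = "use scenario range to stress-test the plan"
    if direction == "up" and macro_stable:
        return "trust direction; " + magnitude_note               # Tier 1
    if direction == "down" and tightening_confirmed:
        return "conditionally trust direction; " + magnitude_note  # Tier 2
    if direction == "down":
        return "discard: bearish call exposed to policy reversal"
    return "low confidence; " + magnitude_note
```

Run against the historical record, this filter would have accepted the bearish calls of 2018 and 2022 and rejected the ones that blew up in 2019, 2020, and 2023.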
The real question is not what the experts think
Here is what the industry does not talk about.
National-level analysis, the thing the experts forecast, explains only 24% of individual property price variance. Suburb-level analysis does better, but is still noisy: 61%. Street-level analysis, where the signal actually is, explains 89%. The full breakdown is in our Market Cohesion research.
The macro consensus is one variable. And a weak one. It accounts for less than a quarter of what drives your property's actual performance.
Investors fixated on expert forecasts are watching the wrong screen. The question is not 'will the national market rise 5% or fall 3%?' The question is 'what are the specific fundamentals of this street, in this pocket, in this suburb?' That is exactly what our suburb reports and property reports are built to answer.
'National forecasts explain 24% of individual property variance. Street-level fundamentals explain 89%. Most investors spend most of their research time on the variable that matters least.'
The bottom line
Don't let a bearish consensus call override a long-term investment decision unless the credit-tightening cycle is firmly underway, unambiguous, and not at risk of policy reversal.
Three times in five years, investors who did exactly that watched prices surge while they sat on the sidelines.
'Macro forecasting is conditionally reliable on direction in stable environments. It is unreliable when policy risk is elevated. And it is systematically useless on magnitude. Adjust your use of it accordingly.'
About this research: Analysis by Luke Metcalfe, Microburbs Research. Dataset covers CoreLogic national dwelling value index vs consensus forecasts, 2010 to 2024. Full method available in the whitepaper. This is research commentary, not financial advice.
Find suburbs the experts aren't talking about
Microburbs analyses street-level fundamentals. Not the same consensus data everyone else reads.