OPINION: The AI debate in real estate is asking the wrong question

by David Ursino

The views expressed in this column are solely those of the author.

 

A recent piece in REM framed artificial intelligence as a two-sided debate: doomers who believe AI will crash global property values within two years, and industry voices who counter that adoption is the only thing separating tomorrow’s winners from today’s stragglers. Both camps are wrong. They have accepted the same false premise: that AI adoption is the variable that determines which agents survive the next decade. It is not. Competence is. AI just exposes it faster. Everything that follows is the evidence for that claim.

As of early 2026, observed use of Claude — one of the most capable large language models available — covers just 33 per cent of tasks in the most AI-exposed profession in the economy: computer programming. The theoretical ceiling is 94 per cent, so actual use runs at roughly a third of what the model could do. This is the profession everyone agrees AI should be eating first.

It is not.

That gap is the most important data point in a conversation the real estate industry is currently having badly. AI is a multiplier on the competence that already exists. It is not a substitute for competence that does not.

 

The false binary

 

The doomer case relies on a category error. Real estate is not programming, customer service or data entry — the three most AI-exposed occupations in the Anthropic Economic Index, and even there observed use sits far below theoretical capability. Real estate agents do not appear on the most-exposed list at all, because the core of the job — counselling a family through the largest financial decision of their lives, reading a room in a multiple-offer negotiation, knowing when a seller is six months from a divorce and therefore not actually motivated — is judgement work performed in the presence of another human being. A language model can draft the listing description. It cannot hold the hand of a first-time buyer whose pre-approval just evaporated.

The booster case is softer but in some ways more damaging. The dominant narrative in trade media over the past year has been that AI adoption is the defining skill of the next-generation agent. Agents who use AI will dominate. Agents who do not will be replaced. It has a pleasing symmetry. It is also incomplete. AI is a multiplier, not a substitute. Multiply zero by anything and the answer is still zero. An agent producing lifestyle content in place of market knowledge in 2024 is not rescued by producing 10 times as much AI-generated lifestyle content in 2026. The output is just louder.

The competence ceiling

 

This is where the debate connects to a problem the industry already has. As of Dec. 31, 2025, 52.7 per cent of the 69,728 agents at the Toronto Regional Real Estate Board (TRREB) closed zero transactions in 2025, according to TRREB data compiled by Redatum. Another 36.6 per cent closed between one and four. Combined, 89.3 per cent of the registrant pool completed four or fewer deals in a calendar year. That number did not happen because those agents lacked AI. It happened because a 25-year bull market rewarded presence over precision, and the skills that worked for two decades became obsolete when the market shifted in 2022.

Layer AI on top of that pool and the gap gets worse, not better. The agent who did not know how to price a home in 2021 is not suddenly a pricing expert because Claude can generate a comparative market analysis in 30 seconds. They are an agent who cannot audit the output. Anthropic’s March 2026 report on the labour market impacts of AI makes the underlying point: in the most exposed occupations in the economy, observed AI use covers a fraction of theoretical capability — 33 per cent in computer and math roles, and lower elsewhere. The gap is not because the technology cannot do the work. The gap is because the work, done well, still requires a human to decide what to ask, how to verify the answer and when to override the machine. That last skill — judgement about when to trust the output — is the one you cannot buy with a subscription.

An agent using AI badly is not really using AI at all. They are using it to produce the appearance of productivity — more posts, more emails, more automated follow-ups, none of it anchored in the market knowledge a client is actually paying for. Sixty AI-generated market-update reels a week is not a business. It is a content mill with a real estate agent attached to it.

 

Where AI actually earns its keep

 

None of this is an argument against AI in the business. It is an argument against using AI to scale the wrong part of the business. The agent population that will thrive over the next five years will use AI the way a good carpenter uses a nail gun — to remove the part of the job that is pure repetition so that more hours land in the part of the job that is pure judgement. That means AI handling first-draft listing descriptions, showing-feedback summaries, comparable-sale research, disclosure review, FINTRAC documentation workflows and the 40-odd micro-tasks that eat the back half of every working day. Every hour AI claws back from administrative work is an hour returned to the only thing clients actually pay an agent for: a human being who is fully present, properly prepared and capable of rendering judgement in the room.

 

The compliance tail nobody is talking about

 

There is a second problem the adoption-first narrative skips, and it is the one managing brokers are going to spend the next 24 months cleaning up. Every AI-generated output an agent produces is still the agent’s legal work product, and by extension the brokerage’s. An AI-generated comparative market analysis that hallucinates a sale price is a pricing misrepresentation. An AI-enhanced listing photo that adds a window the property does not have is a misrepresentation under the Trust in Real Estate Services Act (TRESA) and a disclosure obligation the Real Estate Council of Ontario (RECO) is now actively examining. An AI-drafted Financial Transactions and Reports Analysis Centre of Canada (FINTRAC) client identification record that misses a politically exposed person flag is a brokerage-level compliance failure, not an agent-level one.

The common thread: you cannot audit an AI output if you do not have the underlying knowledge the AI is supposed to be assisting with. The agent who does not know the neighbourhood cannot spot the bad comp. AI does not lower the competence floor for compliance work. It raises it, because the speed of output compresses the time available to catch errors. Brokerages carrying thousands of under-producing agents already have a monitoring problem. Adding AI-generated work product to that pool without a corresponding investment in agent competence is how a compliance department goes from stretched to overwhelmed.

 

Actionable takeaways

 

For agents: Stop asking whether to adopt AI and start asking which 30 per cent of the week is actually client-facing judgement. Point AI at the other 70 per cent. Adopting AI to produce more content is not a strategy. Adopting AI to buy back hours for clients is.

For brokerages: Write the AI policy now, not after the first RECO complaint. Define what agents can and cannot delegate to AI, require human sign-off on client-facing outputs and treat AI-generated work product the same way the brokerage treats any other document that carries a registrant’s name. The liability is already sitting in the file cabinet.

For associations and educators: The continuing education curriculum is three cycles behind the technology. An agent who cannot price a home without AI should not be registered to price a home with AI. Competence is the prerequisite. Tools are the multiplier. The order matters.

 

The reframe

 

So, is AI the next big short for real estate? No. The industry does not have an AI bubble. It has a competence bubble — 25 years of easy-mode market conditions produced an agent population most of whom cannot do the job to modern standards, and the trade press has spent the past year selling them a tool that is supposed to fix the problem. It will not. AI is not the short. It is the stress test. The agents who were always going to be in the top 10 per cent will use it to get better. The agents who were always going to wash out will use it to look busier on the way out. Nothing about that is a crash. It is the reckoning 2022 started, arriving on a faster timeline.

 

David’s data notes: TRREB agent and transaction data sourced from TRREB via Redatum, covering Jan. 1, 2025 to Dec. 31, 2025. Membership denominator (69,728) is the 2025 annual average derived from monthly TRREB statistics. Zero-deal agent count includes appraisers, managers and non-trading members. Team transactions reported under team leader names are excluded from individual agent counts. Data covers MLS resales only; excludes pre-construction, exclusive listings, leases and commercial transactions. AI usage and task coverage data sourced from Anthropic, “Labour market impacts of AI: A new measure and early evidence” (March 5, 2026). Regulatory references: TRESA; RECO; FINTRAC. This article is industry commentary and does not constitute legal advice.

 

The post OPINION: The AI debate in real estate is asking the wrong question appeared first on REM.
