As artificial intelligence becomes more embedded in the property sector, PropTech provider Reapit is advising estate and letting agencies to be cautious when using or offering services based on so-called ‘black box’ AI — systems that make decisions without clear or explainable logic.
Reapit says it is committed to using ‘explainable’ AI in its own platforms and has raised concerns about the growing reliance on opaque AI systems, particularly those based on deep learning models. These systems can be difficult, or even impossible, to fully understand or audit — a problem that has implications for legal compliance and customer trust.
IBM and other technology experts have highlighted that some AI models are so complex that their decision-making processes cannot be easily interpreted, even by their creators. In practical terms, this means businesses using such systems may not be able to justify how a particular decision was made — a growing issue as AI tools are used for tasks like automated valuations, property listings, and tenant communication.
This lack of transparency carries legal and operational risks. Under the UK’s Data Use and Access Act 2025 and existing consumer protection regulations, businesses must inform individuals when AI is used to make significant decisions. Legal guidance from Freshfields notes that people have the right to receive an explanation, request human review, and challenge decisions — particularly in sensitive areas such as tenant applications or service complaints.
Furthermore, the Digital Markets, Competition and Consumers Act 2024 places stricter controls on marketing practices. If AI tools produce misleading property descriptions, altered images, or inaccurate valuations, and this is not properly disclosed or reviewed, agencies could face penalties.
Reapit’s warning comes at a time when the property industry is rapidly adopting AI-driven tools. Agencies are encouraged to ensure any AI used in their operations is transparent, auditable, and compliant with current regulation.
A recent case involving AI-rendered property photos prompted Sam Richardson, deputy editor of Consumer Magazine, to comment: “Finding the right home to buy or rent can be tricky enough without having to worry about AI or edited images. With home buyers and renters likely needing to view several properties, this could waste their time and money travelling to viewings of properties that look nothing like they do online.”
Despite such high-profile cases, McKinsey forecasts that AI adoption in real estate will grow by over 40% globally by 2026, with PropTech investment expected to exceed €10bn (£8.7bn) annually. The firm says success will depend on using proprietary data, securing executive buy-in, and establishing safeguards against bias and hallucination.
“The question is not whether agents will use AI, it’s whether they’ll use the right AI,” said Matt McGown, chief product officer at Reapit. “Generic tools might save time, but they can also introduce risk. If your AI can’t show how it reached a decision, or how much it edited a photo, what information it used to draft a property description, or why it approved a tenant or prospective buyer for a viewing, you’re risking fines and your hard-won reputation.”
According to a Reapit survey of 624 UK property professionals in July 2025:
+ 62% believe AI can make decisions and learn independently
+ 29% see AI primarily as a form of automation
+ 79% have encountered tools marketed as AI that were in fact basic scripting
Read the original article: https://propertyindustryeye.com/estate-agents-warned-ai-misuse-brings-legal-and-reputational-risks/