Richard Sutcliffe

Your Board Is Making AI Governance Decisions Whether It Knows It Or Not

Every organisation deploying AI is making governance decisions. The question is whether those decisions are made deliberately, in advance, or by default, under pressure, after something has gone wrong. Speed is not the issue. Sequence is.

In the space of a few weeks earlier this year, ThinkTribal built and shipped two production-grade AI products into the UK social housing sector. We have no team of engineers, no QA department, no dedicated security function. Both products are tested, independently security-audited, and ready for pilot deployment. Neither skipped a single governance step to get there.

This post is about what made it possible, and what it means for any executive thinking seriously about where AI fits in their organisation.

The two products, briefly

TenantShield is an AI repairs and rights companion for social housing tenants. A tenant describes their issue. The product identifies the relevant legislation, walks them through their options, and generates a formal letter with the correct statutory references included. It tracks the complaint against the legal deadlines a landlord is required to meet and helps the tenant build the evidence record an Ombudsman investigation requires. The legislation it draws on is a fixed, version-controlled set including Awaab's Law and the Housing Ombudsman Complaint Handling Code. Tenant access is free. Housing providers licence the platform to demonstrate proactive compliance with consumer standards.
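The deadline tracking alone is simple enough to sketch. A minimal illustration in Python follows, with made-up timescales: the real statutory deadlines come from the version-controlled legislation set, not from constants in code.

```python
# Illustrative only: actual statutory deadlines are defined in the
# version-controlled legislation set and counted in working days,
# not hard-coded calendar offsets like these.
from datetime import date, timedelta

ILLUSTRATIVE_MILESTONES = {
    "acknowledgement": timedelta(days=5),
    "stage_1_response": timedelta(days=10),
}

def milestone_dates(complaint_logged: date) -> dict[str, date]:
    # Each deadline the landlord must meet, counted from the date the
    # complaint was logged, so overdue steps are visible at a glance.
    return {
        name: complaint_logged + offset
        for name, offset in ILLUSTRATIVE_MILESTONES.items()
    }
```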

The Housing Ombudsman Case Intelligence Tool is for sector professionals. It works through every published Ombudsman determination since 2018 and makes thousands of cases searchable in plain English. A compliance manager searches by landlord, complaint type, or outcome. A board preparing for a consumer standards inspection sees where their organisation sits against the sector pattern.
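A hedged sketch of what "searchable in plain English" typically means under the hood: embed the query and each determination summary, then rank by similarity. The embedding function and case fields here are assumptions for illustration, not the tool's actual implementation.

```python
# A sketch of plain-English search, assuming each case record carries a
# precomputed, unit-length embedding of its determination summary.
import numpy as np

def search(query: str, embed, cases: list[dict], top_k: int = 5) -> list[dict]:
    q = embed(query)  # embed() maps text to a unit-length vector
    # Rank every case by cosine similarity to the query, highest first.
    ranked = sorted(
        cases,
        key=lambda case: float(np.dot(q, case["embedding"])),
        reverse=True,
    )
    return ranked[:top_k]
```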

The speed challenges an assumption

The pace tests an assumption still common at board level: that responsible AI development has to be slow AI development. It does not. The arithmetic has changed. A small team using AI as the primary build partner now produces working software at a pace that previously required a much larger engineering team.

Speed is only useful if what you produce is trustworthy. Trustworthiness in AI products, particularly in a regulated sector serving vulnerable users, depends almost entirely on the governance decisions made before the first user arrives. The discipline was not about slowing down. It was about getting the order right.

The governance you do not get to defer

Security was designed in, not bolted on. Before the AI layer existed, the data layer was secured at the database level rather than the application level. If there is a bug in the application, the database still does not return data to a user without authorisation. One tenant's record is not visible to another. One housing provider's dashboard shows only their own properties. An independent security audit was completed and significant findings remediated before either product went near a deployment environment.
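As a concrete illustration of database-level rather than application-level enforcement, here is a minimal sketch assuming PostgreSQL row-level security, driven from Python via psycopg. The table, column, and setting names are invented for the example, not ThinkTribal's actual schema.

```python
# A sketch of database-level tenant isolation, assuming PostgreSQL
# row-level security. Names are illustrative.
import psycopg

with psycopg.connect("dbname=housing") as conn:
    conn.execute("ALTER TABLE complaints ENABLE ROW LEVEL SECURITY")
    # The policy lives in the database: even a buggy or missing WHERE
    # clause in application code cannot return another tenant's rows.
    conn.execute("""
        CREATE POLICY tenant_isolation ON complaints
        USING (tenant_id = current_setting('app.current_tenant')::uuid)
    """)
    # Each request sets the caller's identity; every subsequent query
    # is filtered by the database itself, not by the application.
    conn.execute(
        "SELECT set_config('app.current_tenant', %s, false)",
        ("2f1d3c9a-7b44-4e1a-9c35-0d8e6a51b2f0",),
    )
```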

The AI does not go beyond its brief. The legislation TenantShield draws on is a fixed, controlled document set. The model does not search the open web. It does not speculate about what the law might say. When a question falls outside the documents it has been given, it says so and directs the user to Shelter or Citizens Advice. An AI giving a vulnerable tenant incorrect legal guidance is not a helpful product. It is a liability.
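The pattern is a gate in front of the model, not a prompt alone. A minimal sketch follows, assuming a retriever over the fixed corpus and an abstract LLM client; the threshold, interfaces, and wording are illustrative.

```python
# A sketch of the "stay within the brief" pattern: no grounding in the
# controlled document set means a refusal with signposting, not a guess.
REFERRAL = (
    "This falls outside the legislation I can reliably answer from. "
    "Shelter or Citizens Advice can advise on this."
)

def answer(question: str, retriever, llm, min_score: float = 0.75) -> str:
    passages = retriever.search(question)  # searches only the fixed corpus
    if not passages or max(p.score for p in passages) < min_score:
        # Outside the documents it has been given: say so and signpost,
        # rather than speculate about what the law might say.
        return REFERRAL
    context = "\n\n".join(p.text for p in passages)
    return llm.complete(
        "Answer only from the legislation below. If it does not cover "
        f"the question, say so.\n\n{context}\n\nQuestion: {question}"
    )
```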

No sensitive data is held unnecessarily. TenantShield does not retain a transcript of the tenant's conversation with the product. The conversation collects structured information about the complaint, and once the structured record exists, the conversation itself is discarded. A tenant's words, including their description of a landlord's behaviour and its impact on their family, are not sitting in a database for someone to read later.
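In code, the pattern looks something like the sketch below: the extraction step returns a structured record and the transcript is never persisted. The field names and extractor interface are assumptions for illustration.

```python
# A sketch of the retain-the-record, discard-the-conversation pattern.
from dataclasses import dataclass

@dataclass
class ComplaintRecord:
    issue_category: str      # e.g. "damp and mould"
    first_reported: str      # ISO date the tenant says they reported it
    landlord_response: str   # summarised, not verbatim

def close_conversation(transcript: list[str], extractor) -> ComplaintRecord:
    # Pull out only the structured facts the complaint process needs.
    record = extractor.extract(transcript, schema=ComplaintRecord)
    # The transcript is never written to storage; once the structured
    # record exists, the tenant's own words are discarded.
    transcript.clear()
    return record
```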

The product was tested as users experience it. Every significant user journey was tested end-to-end against the real system, not a simplified version designed to make the tests easier to pass. A test passing on a simplified version tells you the code works in controlled conditions. A test passing on the real system tells you the product works for the user.
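A minimal sketch of what "tested as users experience it" can mean in practice, assuming an HTTP API exercised with pytest and httpx against a real deployed environment. The endpoints, payloads, and assertions are illustrative.

```python
# A sketch of an end-to-end journey test against the real system.
import httpx

BASE = "https://staging.tenantshield.example"  # a real stack, not a mock

def test_complaint_journey_end_to_end():
    with httpx.Client(base_url=BASE) as client:
        # Step 1: the tenant describes the issue through the real intake API.
        r = client.post(
            "/complaints",
            json={"description": "Mould in the bedroom for six weeks"},
        )
        assert r.status_code == 201
        complaint = r.json()
        # Step 2: the real system, not a stub, generates the formal letter.
        r = client.get(f"/complaints/{complaint['id']}/letter")
        assert r.status_code == 200
        assert "Awaab" in r.text  # illustrative check for a statutory reference
```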

A Data Protection Impact Assessment was completed before deployment. The advertising strategy, which uses display advertising for early-stage cost recovery, was reviewed to exclude predatory financial services and debt consolidation products from appearing alongside content aimed at people in housing difficulty. The exclusion is built into the activation process, not added afterwards.
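The "built into activation, not added afterwards" point is a fail-closed design choice, sketched below. The platform interface and category names are hypothetical; real ad networks use their own taxonomies.

```python
# A sketch of exclusions applied inside the activation step itself.
EXCLUDED_CATEGORIES = {
    "predatory_lending",
    "debt_consolidation",
    "high_cost_credit",
}

def activate_ad_unit(platform, unit_id: str) -> None:
    # Fail closed: the unit cannot go live without the exclusions
    # applied first, so there is no window where they are missing.
    platform.set_category_blocks(unit_id, sorted(EXCLUDED_CATEGORIES))
    platform.activate(unit_id)
```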

Every meaningful product decision is documented with the reasoning, the alternatives considered, and the trade-offs accepted. When the product is handed to a future team, reviewed by a regulator, or revisited in eighteen months, the reasoning is available.

What this looks like at board level

The capability to build AI products quickly now exists at a scale and cost not available two years ago. The governance requirements have not shifted with the economics. If anything, faster development makes governance more important, because the faster you build, the faster you can ship something that causes harm.

The questions a board should ask about any AI programme are not primarily technical. What is the AI permitted to do, and what is it explicitly prevented from doing? How would we know if it stepped outside those boundaries? What data does the product hold, and what happens to it? What was tested, by whom, against what standard? Who reviewed the security before users arrived?

These are governance questions, not development questions. The organisations getting the most from AI over the next three years will be the ones with the discipline to move quickly without cutting the corners worth keeping. The difference will show up in the products they deploy and the trust those products earn.