Seventy-four percent of HR leaders say their function is adopting AI faster than any other department in their organisation. That statistic is usually presented as a success story. It shouldn't be.

Speed is only an advantage when the thing moving fast has somewhere specific to go. In most mid-sized companies right now, HR is moving fast on AI while governance ownership sits in the wrong place, procurement happens before anyone with risk awareness sees the product, and an entire layer of informal AI use operates with no visibility at all. The result isn't innovation at pace — it's liability accumulation at pace, with the invoice arriving later.

For many companies with 20 to 500 employees, the structural gap between AI adoption and governance infrastructure already exists. The question is whether it gets addressed deliberately or under pressure.

---

The Structural Gap No Policy Document Closes

SHRM's 2026 data shows that in 37% of organisations, AI governance is primarily owned by legal and compliance. In another 29%, it sits with a cross-functional task force. That leaves a small minority of organisations where the function doing the most AI adoption — HR — has clear, named ownership of the governance structures that apply to its own tools.

The immediate response is often "HR should work with legal and IT." That's true but insufficient. Legal and compliance typically learn about AI deployments after the procurement decision has already been made. They review the data processing agreement, flag standard contractual risks, and sign off. What they almost never review — because nobody puts it on the checklist — is the model's bias audit history, the composition of the training data, or whether the vendor's compliance posture maps to the regulatory regime the deploying company is actually operating under.

Task forces produce the same diffusion problem by different means. When accountability is shared across HR, IT, legal, and a senior stakeholder, it is effectively owned by nobody. When something goes wrong — a candidate complaint, an EEOC inquiry, or a performance review an employee successfully contests — the question of who was responsible produces a genuinely unclear answer. That ambiguity is the governance gap.

Policy documents don't close it, because policy documents describe principles; they don't assign accountability. The questions that matter are concrete: who has veto power over tool deployment, who has ongoing visibility into model outputs, and who is responsible when a decision influenced by an AI system produces an indefensible outcome. Until each of those questions has a named answer with real authority behind it, governance frameworks are largely decorative.

---

Procurement Is Where Governance Fails First

The most common pattern in companies under 500 employees follows a predictable sequence: an HR director sees a compelling demo of an AI recruiting or performance management tool, secures budget through a standard SaaS approval path, legal reviews the DPA, IT checks the security posture, and the contract is signed. Six months later, the tool is embedded in the workflow, the vendor relationship is established, and switching costs are real.

What didn't happen anywhere in that sequence: nobody asked the vendor for its bias audit history. Nobody reviewed what demographic data the model was trained on, or whether that training data reflected the workforce being assessed. Nobody verified whether the tool's compliance posture covered applicable jurisdictions, including regulations the company might not currently operate under but could be subject to within 12 months.

The IAPP has documented this pattern precisely: organisations face practical governance challenges when AI procurement decisions occur without early legal involvement. "Early" means before the demo impresses anyone, not merely before the contract is signed; by that point the purchase has already closed psychologically.

Governance frameworks built after procurement decisions are retrospective risk management. They document what's already in place and create policies around tools that are already live. That's not governance; it's paperwork that gives the appearance of governance. The actual intervention point is the procurement workflow itself.

Before any HR AI tool contract is signed, three questions should be non-negotiable:

1. What did the bias audit actually find? Not the existence of an "ethical AI" policy, but documented results from a third-party audit, including any areas where the model underperformed across demographic groups. A minimal sketch of the core calculation such audits report follows this list.

2. What is the training data composition, and has the model been validated against relevant workforce demographics? A model calibrated on a different sector's workforce or a different geographic labour pool may behave inconsistently against your candidate or employee population in ways that aren't visible until they produce a complaint.

3. Which specific regulatory frameworks is the tool compliant with, and does that coverage extend to every jurisdiction the company operates in or is likely to expand into within two years? Vendors routinely cite GDPR and CCPA. Those are the floor. NYC Local Law 144, the EU AI Act's high-risk classification for employment tools, and emerging state-level regulations each impose distinct requirements that generic compliance postures don't automatically satisfy.
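
By "documented results," think numbers, not statements. Here is a minimal sketch, in Python, of the impact-ratio calculation that bias audits under NYC Local Law 144 report: each group's selection rate divided by the highest group's rate. The column names, example data, and the 0.8 flag (the EEOC four-fifths rule of thumb, which LL144 does not itself impose) are illustrative assumptions, not a prescribed audit format.

```python
# Minimal sketch: per-group selection rates and impact ratios, the core
# numbers an LL144-style bias audit reports. Column names are assumptions.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    out = pd.DataFrame({"selection_rate": rates, "impact_ratio": rates / rates.max()})
    # 0.8 is the EEOC four-fifths rule of thumb, used here only as a flag;
    # LL144 requires reporting the ratios, not meeting a cutoff.
    out["flagged"] = out["impact_ratio"] < 0.8
    return out

# Hypothetical screening outcomes; in a real audit, per-category sample
# sizes and intersectional breakdowns matter as much as the ratios.
candidates = pd.DataFrame({
    "race_ethnicity": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected":       [1,   1,   0,   1,   0,   0,   0,   1],
})
print(impact_ratios(candidates, "race_ethnicity", "selected"))
```

A vendor's audit summary should contain figures of roughly this shape for every demographic category it tested. If it doesn't, that absence is itself the answer to question one.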

These aren't complex technical questions. They're due diligence questions that most HR teams simply haven't been taught to ask, because AI procurement has been treated as equivalent to any other SaaS purchase. It isn't.

---

The Risks Most Teams Are Systematically Underweighting

Performance AI Gets Less Scrutiny Than Hiring AI — and Carries Comparable Exposure

There is significant industry awareness around bias in hiring algorithms. NYC's Local Law 144 has made bias audits a legal requirement for AI tools used in employment decisions, and that regulatory model is being adopted by other jurisdictions. Hiring AI is, appropriately, under scrutiny.

Performance evaluation AI is not — or not to the same degree. Tools that inform performance ratings, flag employees for development or exit planning, or score productivity receive markedly less governance attention at deployment than recruiting tools do. The exposure is comparable. In some respects it is greater, because the affected population is your existing workforce — people with institutional relationships, protected characteristics, and legal rights — not external candidates who can walk away from a process they distrust.

A concrete example of what this looks like in practice: a productivity monitoring tool flags remote employees for low engagement based on login patterns and document activity. If the model was trained predominantly on in-office behaviour patterns (and many were, given the pre-2020 composition of most training datasets), it may systematically score certain work styles, role types, or caregiving schedules as underperformance. That output then informs a manager's performance review. Nobody reviews the model's scoring logic. The employee receives a development plan. If that employee is in a protected class, and the pattern holds across similarly situated employees, the company has a disparate impact problem it has no mechanism to detect.

Transparency and contestability requirements for performance AI are directionally where regulation is heading. Companies that treat performance monitoring tools as lower-risk than hiring tools are misallocating their governance attention.

The Informal AI Layer Is the Largest Unmanaged Risk

The governance discussion in most companies focuses on AI platforms procured through official channels. But a significant and growing share of HR AI use is informal: HR professionals using ChatGPT, Claude, or similar tools to draft job descriptions, write performance improvement plans, summarise engagement survey data, or generate interview questions. This is happening at scale, largely without visibility, and it carries risks that vendor tool governance doesn't touch.

Sensitive employee data — compensation details, performance notes, disciplinary records — is being pasted into consumer LLMs with no data classification controls and no organisational visibility. The outputs are being used to inform HR decisions without any review cadence or accountability structure. This isn't a hypothetical risk profile; it's the current operating reality in most organisations that haven't explicitly addressed it.

Consider what this looks like at a practical level: an HR manager uses ChatGPT to draft a performance improvement plan, pastes in several months of performance notes, the employee's compensation history, and a summary of prior disciplinary conversations. That data leaves the organisation's systems entirely, with no retention control, no audit trail, and no way to verify what the model did with it. The PIP goes out. The employee escalates. Legal asks where the language came from. Nobody has an answer.

Vendor tool governance is important, but it addresses the visible portion of the risk. The informal use layer is larger, less structured, and harder to govern precisely because it doesn't appear in any procurement record.
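
To make "data classification controls" concrete, here is an illustrative sketch of the kind of lightweight pre-submission check an informal-use policy could be backed by: scan a draft for barred data categories before it leaves the organisation's systems. The categories and regex patterns below are assumptions for illustration, not a complete rule set; in practice this check usually lives in a browser extension or DLP layer rather than a standalone script.

```python
# Illustrative pre-submission check for an informal-use policy: scan a
# draft for barred data categories before it goes to a consumer LLM.
# Categories and patterns are assumptions, not a complete rule set.
import re

BLOCKED_PATTERNS = {
    "national_id":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN shape
    "compensation": re.compile(r"[$€£]\s?\d{1,3}(,\d{3})+\b"),
    "disciplinary": re.compile(r"\b(written warning|disciplinary|PIP)\b", re.I),
}

def classify(text: str) -> list[str]:
    """Return the sensitive categories detected in `text`."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

draft = "Context: base salary $85,000; prior written warning issued in March."
if hits := classify(draft):
    print(f"Hold: remove {', '.join(hits)} before using an external tool.")
```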

Model Drift Goes Unmonitored

AI tools degrade. Their accuracy and fairness can erode as workforce composition changes, role requirements evolve, or market conditions shift the relevance of training data. A hiring model calibrated to your 2022 workforce may be producing systematically different outcomes against your 2025 candidate pool in ways nobody has reviewed.

Most mid-market companies deploy AI tools and then apply no ongoing monitoring cadence. There is no named owner reviewing model outputs periodically for anomalies, no feedback loop for flagging outcomes that look inconsistent with manager expectations, and no threshold that triggers a vendor review. The result is a slow divergence between what the tool was evaluated on and what it's actually doing — invisible until it surfaces in a decision someone contests.
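
What a minimal monitoring cadence can look like in practice: capture a sample of the tool's scores at deployment, then compare each quarter's scores against that baseline with a standard drift statistic. The sketch below uses the Population Stability Index; the data, cadence, and thresholds are illustrative assumptions, and the common 0.10/0.25 reading of PSI is an industry rule of thumb, not a regulatory standard.

```python
# Minimal sketch of a quarterly drift check: Population Stability Index
# between scores sampled at deployment and this quarter's scores.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI over quantile bins of the baseline score distribution."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range scores
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)         # avoid log(0)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - base_pct) * np.log(cur_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.60, 0.10, 2000)       # captured at deployment
current_scores = rng.normal(0.52, 0.12, 500)         # this quarter's outputs
# Common rule-of-thumb reading: < 0.10 stable, 0.10-0.25 review with the
# vendor, > 0.25 escalate. These thresholds are convention, not regulation.
print(f"PSI = {psi(baseline_scores, current_scores):.3f}")
```

The statistic matters less than the structure around it: a stored baseline, a named reviewer, and a pre-agreed threshold that triggers a vendor conversation.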

Governance that covers only deployment is incomplete governance. It addresses the risk of the initial tool choice while ignoring the risk that accumulates from that point forward.

---

What Structural Governance Actually Looks Like

One useful architectural pattern comes from the retail sector, documented in the HR Certification Institute's research on frontline hiring transformation. A retail organisation under pressure to reduce time-to-offer for frontline managers made a deliberate structural choice: the governance question — specifically, "how do we hire faster without increasing compliance exposure?" — was defined before tool selection began. Cross-functional alignment across IT, legal, compliance, procurement, and operations was established as a precondition for evaluating any vendor. The governance framework shaped the tool selection criteria, not the other way around.

This is the inverse of the default sequence most organisations follow. It sounds obvious stated plainly. It is not, in practice, how AI procurement works in most HR functions — because the default path of enthusiasm, budget approval, and retrospective governance is structurally easier in the short term and more expensive over time.

A second pattern worth translating comes from federal guidance on Chief AI Officer mandates. Federal agencies are now required to designate named individuals with clear authority over AI risk and adoption decisions. Most mid-sized private companies don't need a dedicated Chief AI Officer. But the principle translates directly: governance requires a named individual with real authority and visibility, not a task force with shared ownership. Shared ownership of risk is a reliable mechanism for ensuring nobody owns it.

For a company with 20 to 500 employees, minimum viable governance in HR AI looks like four checkpoints, each with a named owner:

Procurement review — No HR AI tool proceeds past vendor selection without documented answers to the three questions above. Owner: HR lead and legal, before contract. Timeline: required before any demo advances to a shortlist.

Deployment sign-off — No tool goes live without a basic data flow map showing where employee data goes, how long it's retained, and who has access (an illustrative sketch follows this list). Owner: IT or engineering. Timeline: required before any employee data enters the system.

Informal use policy — Explicit written guidance on which employee data categories cannot be processed through consumer AI tools, reviewed annually or when a new tool category becomes widely used. Owner: HR lead with legal input. Timeline: in place before any informal use guidance is provided to the team.

Ongoing monitoring cadence — A named person reviews AI-assisted decision outputs quarterly for anomalies or consistency failures. Not a full audit — a structured check against pre-defined flags. Owner: whoever has day-to-day accountability for the tool. Timeline: first review within 90 days of deployment, quarterly thereafter.
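
To make the deployment sign-off concrete, here is an illustrative sketch of what a "basic data flow map" can look like as a structured record. Every field and value below is an assumption about the minimum worth capturing, not a prescribed schema; a spreadsheet with the same columns works just as well.

```python
# Illustrative shape for the deployment sign-off's data flow map; fields
# and values are assumptions about the minimum worth recording per data
# category, not a prescribed schema.
DATA_FLOW_MAP = [
    {
        "data_category": "candidate resumes",
        "source": "applicant tracking system",
        "destination": "vendor scoring API",
        "retention": "12 months after requisition closes",
        "access": ["HR lead", "recruiting team", "vendor support (logged)"],
        "leaves_org": True,
    },
    {
        "data_category": "performance notes",
        "source": "HRIS",
        "destination": "vendor model (training use excluded per contract)",
        "retention": "employment + 24 months",
        "access": ["HR lead", "direct manager"],
        "leaves_org": True,
    },
]

# A sign-off question the map makes answerable at a glance:
external = [r["data_category"] for r in DATA_FLOW_MAP if r["leaves_org"]]
print("Employee data leaving the organisation:", ", ".join(external))
```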

None of these require dedicated headcount or enterprise governance infrastructure. They require decisions about ownership that most companies haven't made.

---

The Specific Action Worth Taking This Week

Pull the last three HR AI tools your team has purchased or is actively evaluating. For each one, identify whether anyone in your organisation can answer these questions today: What did the bias audit find, and was it conducted by a third party? Where did the training data come from, and has the model been validated against your workforce context? Who is responsible for reviewing the tool's outputs on an ongoing basis, and when did that last happen?

If those questions don't have clear answers — or if the answers reveal that no audit history was reviewed before signing, that no monitoring cadence exists, or that informal use of consumer AI tools is happening with no data classification guidance — you have a precise map of where your governance gap actually sits.

That's not a reason to slow down AI adoption. It's a reason to make the next procurement decision differently: governance framework first, tool selection second, and a named owner on each checkpoint before the contract is signed rather than after the first complaint arrives.