
Gong Research: 58% of Enterprise AI Projects Stalled by Trust Deficit

Gong and Censuswide find 58% of enterprises have stalled AI projects due to trust deficits in data security, explainability, and model transparency.

Published 2026-04-20

What happened

Gong published new research on April 15, 2026, revealing that 58% of enterprises have stalled AI projects — not because of budget constraints, but because of a fundamental deficit in trust. The study, conducted by Censuswide among 2,056 business leaders at medium and large businesses across the United States and United Kingdom, found that data security concerns, lack of explainability, and insufficient model transparency are the primary barriers preventing organizations from moving AI initiatives from pilot to production.

The trust gap is costing organizations real money. According to the findings, 46% of planned AI investments are currently paused — 44% in the US and 47% in the UK. Three-quarters of leaders surveyed said their organizations lag behind in realizing AI benefits, with US leaders feeling this more acutely (80%) than their UK counterparts (70%). The survey data was collected January 6–9, 2026.

To complement the survey, Gong Labs analyzed over 25 million sales interactions from January through December 2025, finding that one in four customer calls now explicitly references security concerns. Uncertainty over AI's foundational data sources and learning mechanisms ranked as the most commonly discussed topic in those conversations — a signal that trust concerns are surfacing directly in revenue conversations, not just in IT procurement reviews.

Why it matters for practitioners

For competitive intelligence and revenue intelligence practitioners, Gong's research reframes the AI adoption conversation. The conventional narrative — that AI adoption is primarily gated by budget, skills, or organizational readiness — is contradicted by the data. Trust is the bottleneck, and the specific trust concerns cited by respondents map directly to how CI and enablement teams evaluate, adopt, and champion AI-powered tools.

1. Data security is the dominant concern, but it's more nuanced than "we don't trust AI." At 34% overall (36% US, 31% UK), data privacy and security ranked as the top barrier. For sales enablement teams evaluating AI tools, this translates to concrete questions: Where does customer conversation data go? Who can access it? How is it retained? CI teams recommending AI-powered tools to their organizations need to be prepared to answer these questions — or risk having their recommendations stall alongside the 58% of projects already frozen.

2. Explainability is now a deal requirement, not a differentiator. Thirty percent of respondents cited lack of explainability as a trust barrier, and 26% said explainability of AI outputs was the single most important assurance they needed before proceeding. For practitioners building competitive intelligence programs for sales teams, this has direct implications: AI-generated battlecards, competitive alerts, and deal recommendations need to show their reasoning. Black-box outputs that surface a recommendation without showing the underlying signals will face increasing resistance from the very stakeholders CI teams are trying to serve.

3. Transparency creates a measurable competitive advantage for vendors. Chris Peake, Gong's Chief Trust Officer, framed the findings bluntly: "Security and AI trust are no longer back-office conversations; they are revenue conversations." For CI practitioners evaluating platforms like Crayon and other AI-powered competitive tools, the implication is that vendor transparency around data handling, model training, and output explainability should be weighted as a selection criterion alongside feature sets and pricing. Vendors that proactively address trust concerns will close faster; those that treat trust as a compliance checkbox will stall alongside their customers' internal projects.

4. The trust gap varies by geography — and so should adoption strategies. US organizations report higher rates of stalled projects (63% vs. 52% in the UK), yet UK organizations have paused a slightly larger share of investments (47% vs. 44%). For CI teams operating in multinational organizations, this suggests that AI adoption messaging and change management strategies should be tailored by region, with US stakeholders needing more assurance on security and UK stakeholders needing more clarity on tangible ROI before unlocking paused budgets.

Key details

  • Report: "Unlocking the Trust Barrier for Enterprise AI"
  • Publisher: Gong, conducted by Censuswide
  • Published: April 15, 2026
  • Survey respondents: 2,056 business leaders at medium and large businesses (US and UK)
  • Data collection period: January 6–9, 2026
  • Supplemental data: Gong Labs analysis of 25+ million sales interactions (Jan–Dec 2025)
  • Stalled projects: 58% overall (US: 63%, UK: 52%)
  • Paused investments: 46% of planned AI spend (US: 44%, UK: 47%)
  • Top barriers: Data privacy/security (34%), explainability (30%), model transparency (28%), regulatory uncertainty (27%)
  • Top assurances needed: Explainability of outputs (26%), clear articulation of data guardrails (25%), built-in security guarantees (23%), third-party audits (23%)
  • Conversation signal: 1 in 4 sales calls reference security concerns

Market implications

Gong's trust barrier research arrives at a pivotal moment for the revenue intelligence and competitive intelligence markets. As AI-native features become table stakes across CI platforms, the research suggests that the next wave of competitive differentiation will not be about who has the most advanced AI — it will be about who can demonstrate that their AI is trustworthy.

This has specific implications for the CI tool market. Vendors that invest in transparency infrastructure — explainable outputs, clear data provenance, SOC 2 attestations, and third-party audits — will have a structural advantage in enterprise sales cycles. Vendors that treat trust as a future roadmap item will face the same stalling dynamic their customers are experiencing in the 58% of projects that never reach production.

For CI practitioners, the research validates an increasingly common experience: recommending an AI-powered tool internally and having the evaluation stall not because the tool lacks features, but because security review, legal review, or executive skepticism about AI's decision-making process creates months of delay. The data gives practitioners a concrete framework for preemptively addressing these concerns — surfacing vendor certifications, requesting explainability documentation, and framing AI adoption in terms of the revenue cost of inaction rather than the feature benefits of adoption.

The finding that one in four sales conversations now reference security also has implications for how CI teams monitor competitive positioning. When prospects raise security concerns in calls, they are often comparing vendor approaches to trust and transparency. CI teams that track and analyze these trust signals — and feed them back into competitive battlecards and objection-handling guides — will help their sales organizations navigate a buying environment where trust has become the primary gating factor.

Related resources

  • Revenue Intelligence — the discipline at the center of Gong's AI and trust research
  • Sales Enablement — how trust barriers affect AI-powered enablement adoption and rollout
  • CI for Sales Teams — practical guide to building CI programs that address sales team adoption challenges
  • Crayon Alternatives — evaluating CI platforms with transparency and trust as selection criteria