By Collins Van Liew | Conintento Consulting
One statistic is difficult to reconcile with the scale of investment. According to McKinsey’s January 2025 “Superagency” report, 92% of companies plan to increase their AI spending over the next several years, yet only 1% of C-suite respondents describe their organizations’ generative AI deployments as “mature,” meaning AI has been fully integrated into workflows and is driving substantial business outcomes.^1
In healthcare, this gap between expenditure and impact is especially stark. Health systems have deployed ambient documentation scribes, revenue cycle automation platforms, clinical decision support tools, patient communication chatbots, prior authorization accelerators, and coding assistants at an unprecedented pace. Yet the measurable enterprise-wide impact of these investments remains limited. The question is no longer whether healthcare organizations should adopt AI. It is why so many have invested heavily without reaching maturity, and what structural decisions separate the 1% from the rest.
Our work with healthcare organizations across the operational improvement spectrum reveals four consistent patterns that prevent health systems from translating AI experimentation into enterprise-wide value. These are not technology failures. They are architectural and organizational choices that compound over time.
The most common misstep is treating AI adoption as a procurement exercise rather than an architectural decision. Departments independently purchase point solutions that address narrow functional needs: a scribe tool for physicians, a denial management accelerator for the revenue cycle team, a scheduling optimizer for operations, a chatbot for patient access. Each tool may perform well within its silo, but collectively they produce what analysts describe as “walled gardens” of trapped functionality.
The consequences are predictable. Data does not flow between systems. The AI in the documentation platform has no awareness of the AI within the scheduling application, which has no connectivity to the AI powering claims management. Employees must learn different interfaces for each tool and craft platform-specific prompts, all while manually bridging information gaps between systems. The organization has adopted AI in multiple functions while achieving maturity in none.
McKinsey’s broader research confirms this dynamic at the enterprise level. While 88% of organizations report using AI in at least one business function, the majority remain in experimentation or piloting stages, with only approximately one-third reporting that they have begun to scale their AI programs.^2
Most healthcare AI tools operate reactively. They wait for a user to open an application, locate the relevant tool, enter a prompt, or initiate a query. This model places the burden of engagement entirely on the end user, who must first recognize that an AI tool exists, then determine which tool is relevant, then navigate to the correct interface, and then construct an effective prompt.
For clinicians and administrative staff already managing cognitive overload across 950 or more enterprise applications,^3 this friction is prohibitive. The result is that many AI tools are deployed but underutilized, generating adoption metrics that mask a fundamental engagement deficit. A tool that sits unused in a workflow is not an AI investment. It is a subscription cost.
The organizations approaching maturity have recognized that AI must be proactive rather than reactive. Instead of waiting for users to seek assistance, mature AI implementations surface contextual recommendations within the applications employees already occupy, without requiring behavioral change, additional navigation, or prompt engineering expertise.
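The distinction can be made concrete with a minimal sketch (the event names, rule structure, and healthcare scenario below are illustrative assumptions, not any vendor's actual API): rather than waiting for a user to open a tool and write a prompt, a proactive layer subscribes to workflow events and matches them against rules that yield ready-made suggestions.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowEvent:
    """Something the user is already doing, e.g. drafting a payer appeal."""
    app: str
    action: str
    context: dict = field(default_factory=dict)

class ProactiveAssistant:
    """Reactive tools wait for a prompt; a proactive layer watches events instead."""
    def __init__(self):
        self.rules = []  # (predicate, suggestion-builder) pairs

    def register(self, predicate, build_suggestion):
        self.rules.append((predicate, build_suggestion))

    def observe(self, event):
        """Return suggestions triggered by the event -- no user prompt required."""
        return [build(event) for pred, build in self.rules if pred(event)]

# Hypothetical rule: when a draft appeal is opened, surface the payer's denial history.
assistant = ProactiveAssistant()
assistant.register(
    lambda e: e.app == "ehr" and e.action == "draft_appeal",
    lambda e: f"Attach denial history for payer {e.context['payer']}",
)

suggestions = assistant.observe(
    WorkflowEvent(app="ehr", action="draft_appeal", context={"payer": "Acme Health"})
)
print(suggestions)
```

The point of the pattern is that the user's existing action is the trigger; no new interface, navigation, or prompt-writing skill is required.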
Healthcare’s regulatory complexity makes AI governance uniquely challenging, yet most organizations have treated it as a secondary consideration rather than a foundational requirement. The result is what industry observers have termed “shadow AI,” where clinical and administrative staff adopt consumer-grade AI tools without organizational oversight, creating unmanaged data exposure and compliance risk.^4
When AI tools are procured independently across departments, each vendor introduces its own security posture, data handling policies, compliance certifications, and model training practices. IT and compliance teams are left attempting to govern a portfolio of disconnected solutions, each with its own approach to data storage, access controls, and retention. This distributed governance model is difficult to sustain under normal circumstances. Under HIPAA and SOC 2 requirements alongside the emerging patchwork of state-level AI regulations, it becomes untenable.
The 1% of organizations that have achieved maturity share a common characteristic: they centralized AI governance from the outset. Rather than managing security policies across a dozen vendors independently, these organizations adopted platforms that embed governance into the architecture itself, with permission-aware data integrations and unified audit logging alongside centralized controls for AI agent behavior.
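What "governance embedded in the architecture" means can be sketched in a few lines (this is a toy illustration of the general pattern, not Superhuman's implementation; all names here are assumptions): every agent data request passes through one choke point that checks the requester's permissions and writes to a single audit log, rather than each vendor enforcing its own rules.

```python
import datetime

class GovernedDataLayer:
    """Single choke point: every data request is permission-checked and audit-logged."""
    def __init__(self, permissions):
        self.permissions = permissions  # user -> set of resources they may read
        self.audit_log = []             # one unified log instead of one per vendor

    def fetch(self, user, resource, records):
        allowed = resource in self.permissions.get(user, set())
        # Log the attempt either way, so denied requests are auditable too.
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "resource": resource,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{user} may not read {resource}")
        return records[resource]

records = {"denials": ["claim-102", "claim-417"]}
layer = GovernedDataLayer({"rev_cycle_mgr": {"denials"}})

print(layer.fetch("rev_cycle_mgr", "denials", records))  # permitted and logged
try:
    layer.fetch("scheduler", "denials", records)          # denied, still logged
except PermissionError as err:
    print(err)
```

Because both the allowed and the denied request land in the same log, compliance review happens in one place instead of across a dozen vendor consoles.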
The current generation of AI assistants was designed primarily for individual use. A clinician interacts with a scribe tool. An analyst queries a chatbot. A manager prompts a writing assistant. A scheduler consults an optimization engine. Each interaction is isolated, generating value for a single user without compounding across the team or the organization.
This design limitation explains why healthcare organizations can report high individual AI usage while experiencing minimal enterprise-wide impact. The AI does not learn from organizational context, nor does it connect insights across departments. It does not facilitate collaborative workflows between team members and AI agents working toward shared objectives.
Maturity requires AI that functions as a team multiplier, not merely an individual productivity aid. This means platforms that provide shared workspaces where teams and AI collaborate with access to enterprise-wide intelligence and context, rather than standalone tools that serve one user at a time in isolation.
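The contrast between isolated sessions and a team multiplier can be sketched as a shared context store that human members and AI agents both read and write (the roles and facts below are illustrative assumptions, not a real product API):

```python
class SharedWorkspace:
    """One context store per team: humans and agents append to, and read from, the same state."""
    def __init__(self):
        self.context = []  # accumulated facts visible to every member and agent

    def contribute(self, author, fact):
        self.context.append((author, fact))

    def brief(self):
        """Any member or agent can retrieve everything the team knows so far."""
        return [f"{author}: {fact}" for author, fact in self.context]

ws = SharedWorkspace()
ws.contribute("quality_nurse", "readmission rate up 2% in cardiology")
ws.contribute("analytics_agent", "spike correlates with weekend discharges")

# A later contributor builds on both facts instead of starting from scratch.
print(ws.brief())
```

In the per-user model, the nurse's observation and the agent's analysis live in separate sessions and never compound; in the shared model, each contribution becomes context for the next one.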
Superhuman’s AI-native productivity suite is designed to address each of the structural patterns described above. Rather than adding another point solution to an already fragmented stack, Superhuman provides an integrated platform that unifies proactive AI, collaborative workspaces, AI-native email, and a growing ecosystem of specialized agents within a single governance framework.
Addressing fragmentation through Superhuman Go. Go serves as a proactive AI layer that embeds across more than one million web, desktop, and mobile applications. Instead of requiring healthcare workers to navigate to a separate AI tool, Go surfaces contextual assistance within the EHR, billing platform, email client, project management workspace, or any other application where work occurs. Go’s context layer connects to enterprise data systems, including databases, CRMs, data warehouses, and business intelligence platforms, enabling AI to draw on organizational knowledge rather than operating within a single application’s silo. For healthcare organizations running hundreds of disconnected applications, Go transforms fragmented AI experiences into a unified, context-aware layer that operates wherever employees already work.
Replacing reactive AI with proactive intelligence. Go continuously identifies opportunities to assist without requiring prompts or explicit user requests. It delivers ready-to-apply recommendations based on the content a user is viewing and the communication they are drafting, drawing on organizational context to inform each suggestion. In a healthcare context, this means a compliance officer reviewing a policy document receives relevant regulatory updates without searching for them. A revenue cycle manager drafting a payer appeal receives contextual data points drawn from organizational systems. The behavioral change required from end users is effectively zero, which directly addresses the adoption gap that prevents most healthcare AI deployments from reaching scale.
Centralizing governance by design. The Superhuman platform operates within a unified security architecture. All products maintain SOC 2 Type 2 and GDPR compliance, with HIPAA compliance available through Coda and Grammarly. Data integrations are permission-aware, ensuring employees access only information they are authorized to view. The platform commits to no data storage without explicit organizational permission, no AI training on customer data, end-to-end encryption, and BYOK encryption options. AI agent usage is governed through centralized logging and auditing alongside comprehensive usage reporting. For health systems navigating HIPAA alongside an increasingly complex landscape of state-level AI legislation, this consolidated governance posture eliminates the vendor-by-vendor compliance burden that characterizes fragmented AI portfolios.
Enabling team-level AI through Coda and collaborative agents. Coda provides an all-in-one collaborative workspace with more than 800 integrations, combining documents, databases, tracking, and application logic in a single environment. Healthcare teams can build shared operational hubs, collaborative decision documents, quality improvement trackers, and cross-departmental dashboards that function as living systems rather than static files. The Superhuman Agent Marketplace extends this collaborative model through specialized agents for writing, sales, product, and enterprise functions, with integrations to platforms such as Salesforce, Jira, GitHub, Google Drive, Box, and Outlook. The Go Agent Builder and SDK enable organizations to create custom agents tailored to healthcare-specific workflows, such as compliance monitoring, credentialing support, patient outreach automation, or internal policy enforcement. These agents inherit Go’s proactive capabilities and operate within the platform’s unified governance layer.
Superhuman Mail rounds out the suite by addressing one of healthcare’s highest-volume communication channels. The platform automatically prioritizes inbox content, drafts contextual replies, manages follow-ups, schedules meetings, and enables team collaboration on email threads before messages are sent. Reported outcomes include responding one day sooner on average and handling 2.35 times more emails, while recovering four or more hours per week. For healthcare leadership teams, physician liaisons, and administrative staff managing high-stakes correspondence with payers, regulators, referring providers, and partner organizations, these efficiency gains translate directly into faster decision cycles and reduced communication bottlenecks.
The distance between the 99% and the 1% is not measured in the number of AI tools deployed. It is measured in the degree to which AI is architecturally integrated into how work is performed across the enterprise. Health systems that continue to accumulate disconnected point solutions will continue to report AI adoption without AI maturity. The investment will grow. The impact will not.
The path forward requires healthcare leaders to make four deliberate shifts. First, move from procurement-driven AI adoption to architecture-driven platform strategy. Second, replace reactive, prompt-dependent AI tools with proactive systems that surface intelligence within existing workflows. Third, centralize AI governance within a unified security and compliance framework rather than distributing it across a portfolio of independent vendors. Fourth, evolve from individual-user AI tools to collaborative platforms where teams and AI agents share context and intelligence within a framework of unified accountability.
These shifts are not incremental improvements. They represent a fundamental change in how healthcare organizations conceptualize and deploy artificial intelligence. The 1% of organizations that have reached maturity understood this distinction early. For the other 99%, the window to close the gap remains open, but it is narrowing as the competitive and operational consequences of AI fragmentation continue to compound.
1. McKinsey & Company. Superagency in the workplace: empowering people to unlock AI’s full potential at work. Published January 28, 2025. Accessed April 12, 2026. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
2. McKinsey & Company. The state of AI: how organizations are rewiring to capture value. Published March 2025. Accessed April 12, 2026. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
3. McKinsey Global Institute. The social economy: unlocking value and productivity through social technologies. Published July 2012. Accessed April 12, 2026. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-social-economy
4. Wolters Kluwer Health. 2026 healthcare AI trends: insights from experts. Published December 15, 2025. Accessed April 12, 2026. https://www.wolterskluwer.com/en/expert-insights/2026-healthcare-ai-trends-insights-from-experts
5. Gartner Inc. Gartner predicts 40% of enterprise apps will feature task-specific AI agents by 2026, up from less than 5% in 2025. Published August 26, 2025. Accessed April 12, 2026. https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025
6. Poon EG, Lemak CH, Rojas JC, et al. Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges. J Am Med Inform Assoc. 2025;32(7):1093-1100. doi:10.1093/jamia/ocaf065
*Collins Van Liew is the Director of Strategic Enablement at Conintento Consulting, a business process improvement consultancy specializing in healthcare operational excellence. For more information, contact Collins@conintento.com.*