In corporate boardrooms across America, artificial intelligence has become the ultimate buzzword, with executives painting vivid pictures of transformational change and revolutionary business improvements. Jamie Dimon, the influential CEO of JPMorgan Chase, recently proclaimed that his financial institution has identified an impressive 450 distinct use cases for AI technology. Meanwhile, Yum! Brands, the corporate parent behind beloved fast-food chains KFC and Taco Bell, has boldly declared that “AI will become the new operating system of restaurants.” The enthusiasm extends across industries, with the owner of Booking.com confidently asserting that AI will “play an important role in improving the traveller experience.”
This executive-level excitement has reached a fever pitch, with AI becoming a dominant theme in corporate communications. During the first quarter of this year alone, executives at 44% of S&P 500 companies made artificial intelligence a focal point of their earnings calls, promising imminent transformation and competitive advantage. The message from the C-suite is clear and consistent: AI represents the future of business, and companies that fail to embrace it risk being left behind.
Beneath this veneer of corporate enthusiasm, however, lies a stark and puzzling reality that challenges the narrative of rapid AI transformation. Despite the grandiose promises from executive leadership, artificial intelligence is changing business operations far more slowly than the rhetoric suggests, creating a significant disconnect between boardroom proclamations and ground-level implementation.
The Reality Check: Sluggish Adoption Rates
The gap between executive enthusiasm and actual implementation becomes starkly apparent when examining rigorous data on AI adoption rates. A comprehensive, high-quality survey conducted by America’s Census Bureau reveals that a mere 10% of firms are currently using artificial intelligence in any meaningful way. This statistic represents a sobering reality check for an industry that has been predicting rapid, widespread transformation.
The disappointment extends beyond simple adoption statistics to broader market performance indicators. UBS, a major international bank, recently published research noting that “enterprise adoption has disappointed,” highlighting the growing disconnect between expectations and reality. Even more telling, Goldman Sachs has been tracking companies that their analysts believe have “the largest estimated potential change to baseline earnings from AI adoption.” Rather than outperforming the market as one might expect from companies positioned to benefit most from AI, these firms’ share prices have actually underperformed broader market indices in recent months.
This performance gap suggests that even companies with the greatest theoretical potential to benefit from AI are struggling to translate that potential into tangible business results that investors can recognize and reward. The metaphor of hundred-dollar bills lying on the street becomes particularly apt—if AI truly represents such obvious value creation opportunities, why aren’t companies successfully capitalizing on them?
The Evolution of Disappointed Expectations
The timeline of AI adoption predictions reveals a pattern of consistently overoptimistic forecasts. Industry analysts at Morgan Stanley once confidently declared that 2024 would be “the year of the adopters,” suggesting that widespread AI implementation would finally take off. That prediction went largely unfulfilled, with adoption rates remaining stubbornly low across most industries.
Undeterred by the previous year’s disappointments, industry watchers shifted their focus to 2025, initially branding it “the year of agents”—a reference to autonomous AI systems that can perform complex tasks based on data analysis and predefined operational rules. These agent-based systems were expected to represent the next evolution in AI capabilities, moving beyond simple automation to more sophisticated decision-making processes.
However, according to recent UBS research, 2025 has instead become “the year of agent evaluation,” with companies taking a much more cautious approach than initially anticipated. Rather than implementing these advanced AI systems at scale, most organizations are merely conducting preliminary assessments and small-scale trials, essentially dipping their toes in the water rather than diving in headfirst.
This pattern of delayed and more conservative adoption suggests that there are fundamental challenges beyond simple technical hurdles or integration difficulties. While some delays were expected due to practical implementation challenges—such as datasets that aren’t properly integrated into cloud infrastructure—the actual adoption rates have disappointed even these more modest expectations.
The Economics of Internal Resistance
To understand why AI adoption has proceeded so slowly despite obvious potential benefits, it’s helpful to examine the internal dynamics of large organizations through an economic lens. Economists who study “public choice” theory have long argued that government officials often behave in ways that maximize their personal interests rather than serving the broader public good. For example, bureaucrats might resist implementing necessary job cuts if doing so would negatively impact their friends or allies within the organization.
Large corporations face remarkably similar internal dynamics. In the 1990s, prominent economists Philippe Aghion of the London School of Economics and Jean Tirole of Toulouse 1 Capitole University introduced an important distinction between “formal” and “real” authority within organizations. While a chief executive may have the formal power on paper to mandate large-scale organizational changes, the middle managers who understand operational details and control day-to-day project implementation often hold the real authority to shape, delay, or even effectively veto changes requested from senior leadership.
This dynamic becomes particularly relevant when companies consider adopting transformative new technologies like AI. Joel Mokyr of Northwestern University has observed that “Throughout history technological progress has run into [a] powerful foe: the purposeful self-interested resistance to new technology.” This resistance isn’t necessarily born of malice or ignorance, but rather from rational self-interest as individuals assess how new technologies might impact their roles, responsibilities, and job security.
Frederick Taylor, the engineer credited with introducing systematic management techniques to American industry in the late 19th century, complained extensively about how power struggles within firms often jeopardized the adoption of beneficial new technologies. His observations remain remarkably relevant today, as organizations grapple with similar internal conflicts over AI implementation.
Contemporary Evidence of Internal Conflicts
Recent academic research provides compelling evidence that these internal organizational conflicts remain as relevant today as they were over a century ago. In 2015, David Atkin of the Massachusetts Institute of Technology, along with several colleagues, published a fascinating paper examining football manufacturing factories in Pakistan. The researchers studied the fate of a new technology designed to reduce material wastage—a clear operational improvement that should have been welcomed by management.
After 15 months of observation, the researchers found that adoption of this beneficial technology remained “puzzlingly low.” The explanation lay in the technology’s impact on different groups of workers. While the new technology improved overall efficiency and reduced waste, it slowed down certain employees’ work processes. These affected workers actively stood in the way of technological progress, “including by misinforming owners about the value of the technology.”
Similar patterns have been documented in other contexts. Research by Yuqian Xu of the University of North Carolina, Chapel Hill, and Lingjiong Zhu of Florida State University found comparable battles between workers and managers in an Asian bank attempting to automate various operational activities. These studies suggest that internal resistance to beneficial technology adoption is not an anomaly but rather a predictable and widespread phenomenon.
The AI-Specific Resistance Landscape
While few economists have yet conducted detailed examinations of intra-company battles specifically related to AI adoption, the potential for fierce internal conflicts seems highly likely given the technology’s broad implications for organizational structure and employment. The modern corporation in wealthy countries has become extraordinarily bureaucratized, creating multiple layers of potential resistance to technological change.
Consider the legal department’s role in this dynamic. American companies now employ approximately 430,000 in-house lawyers, representing a significant increase from 340,000 just a decade ago—a growth rate that far exceeds overall employment growth. These legal professionals’ primary function often involves preventing risky activities and ensuring compliance with various regulations and standards.
When it comes to AI adoption, legal teams face unprecedented uncertainty. With little to no established case law governing AI implementation, questions of liability loom large. Who bears responsibility if an AI model makes an error that causes financial losses or other damages? This uncertainty naturally leads to conservative approaches and extensive deliberation processes that can significantly slow adoption timelines.
The UBS surveys referenced earlier found that close to half of respondents identified “compliance and regulatory concerns” as one of the primary challenges for AI adoption within their organizations. Beyond liability concerns, legal departments must also grapple with complex issues surrounding data privacy, potential discrimination in algorithmic decision-making, and various industry-specific regulatory requirements that may not have been written with AI systems in mind.
Human Resources and Middle Management Concerns
Human resources departments represent another potential source of organizational resistance to AI adoption. The number of HR professionals in America has swollen by 40% over the past decade, creating a substantial bureaucratic layer with its own interests and concerns. HR staff naturally worry about the impact of AI on employment levels and may consequently create roadblocks for adoption programs, either through formal policy objections or informal resistance mechanisms.
Steve Hsu, a physicist at Michigan State University who has also founded AI startups, argues that many organizational actors behave similarly to the Pakistani football makers in the academic study mentioned earlier. Middle managers, in particular, face a complex set of incentives when considering AI adoption. While they may recognize the technology’s potential benefits for overall organizational efficiency, they also worry about the long-term consequences for their own career prospects.
As Hsu explains, “If they use it to automate jobs one rung below them, they worry that their jobs will be next.” This creates a rational but counterproductive incentive structure where the very people responsible for implementing AI solutions have personal reasons to resist or delay their deployment.
This middle management resistance can take many forms, from raising procedural objections and requesting additional studies to simply deprioritizing AI projects in favor of other initiatives. Since middle managers often control the day-to-day allocation of resources and attention within their departments, their resistance can effectively stall AI adoption even when senior leadership strongly supports it.
The Market Solution and Its Timeline
Economic theory suggests that market forces should eventually resolve these internal conflicts and drive broader AI adoption. Companies that successfully implement AI should gain competitive advantages that allow them to outcompete organizations that resist technological change. This process mirrors the historical adoption patterns of previous transformative technologies, such as tractors in agriculture and personal computers in office environments.
In both of those cases, innovative firms that embraced new technology eventually forced their competitors to adapt or face obsolescence. The same dynamic should theoretically apply to AI, with early adopters gaining efficiency advantages that translate into better products, lower costs, or superior customer experiences.
However, this market-driven adoption process typically unfolds over years or even decades, rather than the months or quarters that AI investors and technology companies might prefer. The irony of labor-saving technology is that the people asked to implement it are often the very workers whose labor it would save, giving them the strongest incentives to resist or delay its rollout.
For major AI companies that have invested billions of dollars in data centers, computing infrastructure, and research and development, this extended timeline presents a significant challenge. These companies need to generate substantial profits from their investments relatively quickly to justify their massive capital expenditures and satisfy investor expectations.
The Path Forward
The disconnect between executive enthusiasm and ground-level implementation suggests that successful AI adoption requires more than just technological capability and senior leadership support. Organizations must thoughtfully address the legitimate concerns of various stakeholder groups while creating incentive structures that align individual interests with broader organizational goals.
This might involve retraining programs that help workers transition to new roles rather than face displacement, clear communication about how AI implementation will affect different departments, and perhaps most importantly, ensuring that the benefits of increased efficiency are shared broadly rather than concentrated among senior executives and shareholders.
The current situation represents a classic example of how technological potential must navigate the complex realities of organizational behavior, regulatory uncertainty, and individual self-interest before it can translate into the transformation that executives keep promising.
Acknowledgment: This article was written with the help of AI, which assisted in research, drafting, editing, and formatting.