Comment by ChatGPT on the essay “From the BCM Model to Hybrid HCAI – Part I” from an economic perspective
ChatGPT – Introduction: Digitalization without productivity?
Despite significant investments in digital technologies, particularly in information and communication technologies and artificial intelligence (AI), productivity gains in many industrialized countries have been comparatively low for years. This phenomenon is often referred to as the “productivity paradox” (Solow, 1987; Brynjolfsson, 2017). While technological capabilities, data availability, and automation possibilities have increased significantly, macroeconomic indicators such as labor productivity or total factor productivity (TFP) growth show only moderate increases (Statista, 2023).
Friedrich Reinhard Schieck interprets this discrepancy not primarily as a technological deficit, but as the result of a structural mismatch between technological possibilities and organizational realities. He refers to this discrepancy as the “adaptation gap” – a systemic asynchrony between the dynamics of technical innovation and the inertia of organizational, cultural, and governance-related structures. From an economic perspective, this diagnosis can be interpreted as an indication of insufficient complementary investment in organizational and human capital.
ChatGPT – “Adaptation Gap” as a diagnosis of economic inefficiency
Schieck coins the term “adaptation gap” to describe the divide between technological possibilities and organizational reality (Schieck, 2023; Schieck, 2024). Economically, this adaptation gap can be interpreted as a combination of three inefficiencies:
First: rising coordination and transaction costs. When new systems encounter old governance logics, coordination effort, approval loops, documentation requirements, and interface conflicts all increase. This is not just a “feeling of bureaucracy,” but real opportunity costs: time is reallocated from value creation to coordination.
Second: information asymmetries and diffusion of responsibility. Digital systems generate more data, but not automatically better decisions. When responsibilities remain unclear, the familiar phenomenon arises where many actors are informed, but no one is responsible. Economically, this is a liability and incentive problem: wrong decisions cannot be clearly attributed, learning processes flatten, and error costs rise.
Third: devaluation of local intelligence. Schieck’s criticism of technocratic implementations targets the tendency to reduce employees to “operators” instead of letting them act as decentralized problem solvers. This is economically significant because modern value creation is increasingly knowledge- and context-intensive. Where local judgment is systematically disempowered, adaptability and innovation rates decline – precisely the factors that determine competitiveness in volatile markets.
In summary, the adaptation gap describes a situation in which digitalization does not unleash productivity but rather increases complexity. This is the core economic criticism: it is not technology that fails, but rather the organization as a distribution and decision-making mechanism.
ChatGPT – BCM as an early response to the economics of complexity
Schieck reconstructs his BCM model (Schieck, 1996; Schieck & Tauber, 1998; Schieck, 2003) as an early form of an “operating system” for organizations that relies on roles, time logics, and information flows to organize responsibility in a decentralized yet binding manner. From today’s economic perspective, BCM can be read as an anticipation of modern complexity economics: when environmental volatility increases, centralized control becomes expensive and slow. Efficiency then arises less through control than through robust rules, clear roles, and rapid feedback cycles.
What is economically interesting here is not the historical narrative, but the mechanism: BCM aims to reduce coordination costs through structured self-organization. This ties in with the insight that organizations are not just production facilities, but above all information and decision-making machines – and that their performance in complex environments determines value creation.
ChatGPT – Hybrid HCAI: From AI as a tool to AI as a coordination architecture
The central proposal of the essay is hybrid HCAI: a three-stage cooperation architecture consisting of human judgment, symbolic AI (rules, roles, explainability), and subsymbolic AI (pattern recognition, scaling). Schieck formulates this as a design principle: “Subsymbolism scales, symbolism regulates – humans decide” (Schieck, 2025). From an economic perspective, this is not a technical claim, but an institutional one.
- The human level: value setting, liability, strategic decision-making
In economic terms, the human level takes on the functions that cannot be delegated: goal definition, prioritization, normative trade-offs, responsibility. This is also an anchor for liability and legitimacy. In markets with regulatory and reputational risks, this level becomes a competitive factor: those who cannot give a comprehensible account of their decisions lose trust, capital, and room for maneuver.
- The symbolic level: governance as “policy capital”
The symbolic level—rules, role models, policy-as-code, audit rights—is particularly interesting from an economic perspective because it operationalizes governance. Here, governance does not appear as a compliance add-on, but as productive regulatory capital that lowers transaction costs, reduces error costs, and enables learning. In this interpretation, symbolic AI is less about “intelligence” and more about an infrastructure for reliability: it makes decisions verifiable, correctable, and institutionalizable.
- The subsymbolic level: scaling without normativity
The subsymbolic level provides the lever for efficiency: pattern recognition, forecasting, generation, automation. Schieck’s economically astute point is not to declare this level sovereign. It scales information processing – but it does not generate legitimacy. This is precisely why it needs to be coupled with symbolic rules and human responsibility, as emphasized in the debate on human-centered AI (Shneiderman, 2022; Marcus, 2023; Marcus, 2024).
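To make this division of labor tangible, the following minimal Python sketch illustrates one possible three-level decision path; all names, thresholds, and rules are hypothetical assumptions for illustration, not taken from Schieck’s essay. The subsymbolic level proposes, the symbolic level checks explicit rules and supplies a reason, and anything the rules do not clear is escalated to an accountable human.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str          # what the subsymbolic model suggests
    confidence: float    # model confidence, not a measure of legitimacy

def subsymbolic_propose(case: dict) -> Proposal:
    """Subsymbolic level: scales pattern recognition (here a stand-in heuristic)."""
    score = 0.9 if case.get("history") == "clean" else 0.55
    return Proposal(action="approve", confidence=score)

def symbolic_check(case: dict, proposal: Proposal) -> tuple[bool, str]:
    """Symbolic level: explicit, auditable rules regulate what may be automated."""
    if case.get("amount", 0) > 10_000:
        return False, "rule R1: amounts above 10,000 require human sign-off"
    if proposal.confidence < 0.8:
        return False, "rule R2: low model confidence requires human review"
    return True, "all rules satisfied"

def human_decide(case: dict, proposal: Proposal, reason: str) -> str:
    """Human level: bears responsibility; simulated here by a console message."""
    print(f"Escalated case {case['id']}: {proposal.action} ({reason})")
    return "approve"   # placeholder for an accountable human decision

def decide(case: dict) -> str:
    proposal = subsymbolic_propose(case)          # scales
    ok, reason = symbolic_check(case, proposal)   # regulates
    if ok:
        return proposal.action
    return human_decide(case, proposal, reason)   # decides

print(decide({"id": "A-17", "amount": 25_000, "history": "clean"}))
```

The design choice worth noting is that the model’s confidence never overrides a rule: scaling and regulating remain separate concerns, and the human remains the decision point of last resort.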
ChatGPT – Governance, feedback, accountability: Productivity is created through control loops
Schieck anchors hybrid HCAI in three principles: transparency, feedback, role-based accountability. From an economic perspective, these are not ethical add-ons, but conditions for efficiency.
Transparency reduces information asymmetries and enables internal markets for criticism: decisions become debatable, not sacrosanct. This reduces error persistence and reputation risks.
Feedback is a mechanism for reducing correction costs. The faster deviations can be detected and rules adjusted, the lower the costs of wrong decisions. Schieck here draws on second-order ideas from cybernetics and systems theory (Foerster, 1984; Luhmann, 1997): stability does not arise from rigid structures, but from stable rules of change.
Role-based accountability addresses a central economic problem of digital systems: “diffuse responsibility.” Where no one is responsible, learning becomes a matter of chance. Role logic makes responsibility assignable – and thus performance controllable.
In summary, hybrid HCAI is an architecture that does not primarily maximize automation, but institutionalizes the ability to correct. Economically, this marks a step up in maturity: from efficiency as output growth to efficiency as the ability to make reliable decisions under uncertainty.
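What an “institutionalized ability to correct” could mean in operational terms can be hinted at with a small, purely hypothetical Python sketch: every decision record names the rule applied and the responsible role, and a simple feedback check flags rules whose error rate exceeds a tolerance, so that revision is triggered by design rather than by chance. The log entries, roles, and threshold below are invented for the example.

```python
from collections import Counter

# Hypothetical decision log: each decision is attributable to a role,
# and each outcome can later be compared against the decision.
decisions = [
    {"rule": "R1", "role": "credit_officer", "decision": "approve", "outcome": "ok"},
    {"rule": "R1", "role": "credit_officer", "decision": "approve", "outcome": "default"},
    {"rule": "R2", "role": "risk_board",     "decision": "reject",  "outcome": "ok"},
    {"rule": "R1", "role": "credit_officer", "decision": "approve", "outcome": "default"},
]

def rules_needing_review(log: list[dict], max_error_rate: float = 0.34) -> list[str]:
    """Feedback loop: flag rules whose error rate exceeds a tolerance,
    so correction costs stay low and responsibility stays assignable."""
    totals, errors = Counter(), Counter()
    for entry in log:
        totals[entry["rule"]] += 1
        if entry["outcome"] != "ok":
            errors[entry["rule"]] += 1
    return [rule for rule in totals if errors[rule] / totals[rule] > max_error_rate]

print(rules_needing_review(decisions))   # -> ['R1']: rule R1 is due for revision
```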
ChatGPT – Humans as value architects – an economic counterpoint to the logic of automation
The essay implicitly contradicts a widespread economic narrative: that humans are a “cost factor” that should be replaced by AI. Schieck counters this by positioning humans as architects of rules and responsibilities. This is economically plausible because modern value creation is increasingly determined by context, interpretation, and normative trade-offs – precisely those services that subsymbolic systems cannot legitimately generate.
This shifts the focus of investment: not “AI instead of labor,” but “AI plus better organization.” The decisive return comes from complementarity: technology only generates productivity when organization, governance, and incentive systems grow along with it (Brynjolfsson, 2017; Acemoglu & Johnson, 2023). Schieck’s contribution lies in formulating this complementarity as architecture, not just as an abstract condition.
ChatGPT – Macroeconomic relevance: AI bubble, misallocation, and trust economy
Schieck’s warning about a possible AI bubble with consequences for the financial and real economy is particularly acute. Economically, the logic is understandable: when capital flows into AI projects that primarily automate existing control logics without structurally increasing productivity, a classic misallocation occurs. Expectations rise faster than realizable value creation, while at the same time governance risks (regulation, liability, reputation) accumulate.
In a trust-based economy – in which customers, employees, regulators, and capital markets demand transparency – governance failures can lead to abrupt value corrections. Hybrid HCAI can be seen here as offering stability: not as “more AI,” but as an institutional prerequisite for AI value creation to be capitalized on in a sustainable manner.
ChatGPT – Implications for corporate management and economic policy
- For companies: Reprioritize investments
Schieck’s approach suggests that AI programs without organizational architecture systematically squander returns. In practical terms, this means that budget and attention must be shifted from “tool rollout” to “rule and role design.” The decisive factors are not only model quality or token costs, but also:
- Responsibility design (who decides, who is liable, who is allowed to audit)
- Symbolic layers (policies, ontologies, process logic, explainability)
- Feedback cycles (measurement points, learning loops, escalation paths)
This makes governance a productive investment – rather than an afterthought.
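One way to read these three factors is as a declarative register that can be versioned, audited, and changed like code. The following Python sketch is a hypothetical illustration of such a rule and role design; the decision types, roles, rule names, and metrics are invented for the example and are not part of the essay.

```python
# A hypothetical "policy capital" register, expressed as data rather than prose.
# Each decision type names who decides, who may audit, where escalation goes,
# and which measurement points feed the learning loop.
DECISION_POLICIES = {
    "credit_limit_change": {
        "decider": "branch_manager",        # responsibility design: who decides
        "auditor": "internal_audit",        # who is allowed to audit
        "escalation": "risk_board",         # escalation path
        "rules": ["R1_amount_cap", "R2_confidence_floor"],   # symbolic layer
        "metrics": ["decision_lead_time", "default_rate"],   # feedback cycle
    },
    "marketing_copy_release": {
        "decider": "brand_owner",
        "auditor": "compliance",
        "escalation": "legal",
        "rules": ["R3_claims_check"],
        "metrics": ["complaint_rate"],
    },
}

def accountable_role(decision_type: str) -> str:
    """Look up the single role that is answerable for a given decision type."""
    return DECISION_POLICIES[decision_type]["decider"]

print(accountable_role("credit_limit_change"))   # -> branch_manager
```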
- For economic policy: Productivity needs institutions, not just subsidies
At the macroeconomic level, the essay implies that digital productivity strategies must focus more on institutional innovation capacity: standards, auditability, responsibility models, federated data spaces. Regulatory frameworks such as the EU AI Act or management system standards can provide guidance, but they are not enough on their own if the architecture of cooperation within organizations remains untouched (Schieck explicitly points here to the governance vacuum and the need for institutional anchoring).
ChatGPT – Critical appraisal: Strong as a blueprint, evidence as the task ahead
The greatest value of Schieck’s essay lies in its shift of perspective: it treats AI not as a technical disruption, but as an opportunity to redesign the organization itself as the basic engine of economic value creation. This is conceptually strong, readily connects to ongoing debates, and comes at a time when many AI initiatives are stuck in pilotitis, tool overload, or acceptance problems.
At the same time, hybrid HCAI remains primarily a normative model. A robust economic assessment requires empirical answers: Under what conditions does the architecture actually reduce coordination costs? How do role models affect throughput times, error costs, and innovation rates? Which governance mechanisms scale, and which overwhelm the organization? The essay asks the right questions – the next step is measurability.
ChatGPT – Conclusion: The next wave of productivity is institutional
From an economic perspective, the essay “From the BCM Model to Hybrid HCAI” can be read as a plea for a new productivity logic: in the age of AI, value creation does not arise automatically from models, but from the institutional coupling of scaling, rules, and responsibility. Schieck’s hybrid HCAI is thus less an AI concept than a theory of organizational value creation under complexity.
If this perspective is correct, then the central management task of the coming years will not be to “adopt” AI, but to design architectures that make AI productive, legitimate, and capable of learning. Or – in Schieck’s own condensation – as an economic principle:
Subsymbolism scales, symbolism regulates, humans decide.
The return on investment lies in the rules!