Author: Friedrich Reinhard Schieck; Published 2025-12-20 – Journal of Strategic Innovation and Sustainability, © North American Business Press, DOI: https://doi.org/10.33423/jsis.v20i4.8022
ABSTRACT
This article summarizes the central ideas from Friedrich Schieck’s new book “From the BCM Model to Hybrid HCAI (Part I) – The Story of an Idea Whose Time Has Come!” It reconstructs the development of an extraordinary model of thinking, ranging from the upheavals of the 1990s that led to the emergence of the Business Communication Management (BCM) model to the vision of a human-centered, hybrid artificial intelligence (Hybrid HCAI) by the year 2025. Based on the conviction that companies are not machines but living social systems, Schieck formulated principles early on that were far ahead of their time: transparency, feedback, decentralized responsibility, and role-based self-organization. While classic control models increasingly failed due to growing complexity, BCM already offered an alternative operating system for organizations in transition in the 1990s.
Three decades later, the economy and society are once again at a turning point. In many places, digital transformation has generated more bureaucracy than productive value creation – an “adaptation gap” between technological possibilities and organizational reality is evident. Schieck responds to this with a new architectural approach: Hybrid-HCAI. Here, three forms of intelligence – human judgment, symbolic AI (rules, explainability) and subsymbolic AI (pattern recognition, scaling) – cooperate in a transparent, federated, and responsible system. This book is not a technical manual, but rather a field report, a thought model, and an invitation to architectural design. It is aimed at executives, scientists, organizational developers, and anyone who believes in the future of a human-centered digital modernity.
“The real innovation lies not in the model, but in the set of rules: subsymbolism scales, symbolism regulates – humans decide.” Friedrich Reinhard Schieck – 10/2025
Keywords: strategic innovation, human-centered Artificial Intelligence, organizational architecture, socio-technical systems, self-organization, AI governance, digital transformation
INTRODUCTION AND MOTIVATION
“There is nothing more powerful than an idea whose time has come.” — Victor Hugo
The question of how organizations remain capable of acting under conditions of rapid change and increasing complexity has been with me since the early 1990s. In the aftermath of the collapse of the GDR and the reunification of Germany, I witnessed profound structural changes as a practitioner.
These experiences, complemented by my work in consulting and technology contexts, formed the basis for an initial concept of Business Communication Management (BCM): a model of structured self-organization that links roles, time logics, and information flows in such a way that responsibility can be reliably distributed and value creation can be made transparent (Schieck, 1996; Schieck, 1998; Schieck, 2003). Today, some three decades later, the challenges of that time are reappearing in a new form: under the auspices of a digital transformation that, despite technological progress, is often falling short of its social and economic promises (Brynjolfsson, 2017; Acemoglu & Johnson, 2023).
This article is not merely a review, but rather an essayistic invitation to reflect on key errors in thinking, underestimated potential, and necessary architectural changes in the design of organizations. Despite significant investments in information and communication technologies and AI applications, the productivity balance in many industrialized countries is sobering (Brynjolfsson, 2017; Statista, 2023).
I argue that much of the current inefficiency in transformation processes can be attributed to the structural disregard of a simple but effective principle: self-organization is not anarchic freedom, but a malleable system of goal orientation, time logic, and information availability—technically supported but socially anchored.
In this sense, the development toward hybrid HCAI (human-centered artificial intelligence in hybrid form) is not the result of a linear innovation process, but rather an expression of systemic learning—from irritations, disruptions, and frictions. The increasing spread of artificial intelligence (AI), especially in its generative or subsymbolic form, brings the hitherto unresolved tension between automation and autonomy, between efficiency and responsibility, to the agenda with new urgency (Schieck, 2025; Marcus, 2024; Shneiderman, 2022).
My approach to this topic is deliberately twofold: empirically, based on experience of organizational reality; and conceptually, based on the search for an operating system that not only manages complexity but also structures it in a way that adds value. The current challenges – from declining productivity (Statista, 2023) to change fatigue in companies (McKinsey, 2015) to the crisis of confidence in data-based systems – suggest that the debate on digitalization is less about technology than about architecture.
From Empirical Knowledge to System Architecture: The Emergence of a Conceptual Model
What is currently being discussed under the term hybrid HCAI is essentially the further development of an idea that I first formulated in the 1990s: Organizations are not machines, but living, adaptive systems – and therefore do not need a control system based on supervision, but one that aims at structured self-control. At that time, the BCM method linked organizational roles, time cycles, and information flows in a model that did not assign responsibility centrally, but made it systematically distributable (Schieck, 1996, 1998, 2003).
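The BCM linkage described above – roles, time cycles, and information flows, with responsibility attached to roles rather than persons – can be illustrated as a small data model. This is a hypothetical sketch for this article only; the class and field names are my illustrative assumptions, not Schieck's original notation:

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role carries responsibilities; people can change, the role persists."""
    name: str
    responsibilities: set = field(default_factory=set)

@dataclass
class InformationFlow:
    """A recurring flow of information between two roles (time logic via cycle)."""
    sender: Role
    receiver: Role
    cycle: str   # e.g. "weekly", "per order"
    content: str

class BCMModel:
    """Links roles, time cycles, and information flows so that responsibility
    stays systematically distributable and gaps stay visible."""

    def __init__(self):
        self.roles = {}
        self.flows = []

    def add_role(self, name: str) -> Role:
        return self.roles.setdefault(name, Role(name))

    def assign(self, role_name: str, responsibility: str) -> None:
        # Responsibility is assigned to roles, not to individuals.
        self.roles[role_name].responsibilities.add(responsibility)

    def connect(self, sender: str, receiver: str, cycle: str, content: str) -> None:
        self.flows.append(
            InformationFlow(self.roles[sender], self.roles[receiver], cycle, content)
        )

    def unassigned(self, required: set) -> set:
        # Transparency check: which required responsibilities are covered by no role?
        covered = set()
        for role in self.roles.values():
            covered |= role.responsibilities
        return required - covered
```

The point of the sketch is the last method: because responsibility is an explicit, role-bound attribute, uncovered responsibilities can be computed rather than discovered through failure.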
For many years, this understanding was overshadowed by the dominant paradigms of process optimization, top-down consulting, and centralized IT architectures—often with the consequence that change remained organizationally cumbersome, culturally contradictory, and technically fragmented. A large number of projects showed that technological systems were formally efficient but socially incompatible. People were not empowered, but disempowered – with all the familiar consequences: innovation blockages, loss of acceptance, demotivation (Gallup, 2023; Acemoglu, 2024).
The Renaissance of the Systemic Question in the Age of AI
With the advent of AI-supported systems and generative technologies in particular, a new situation is now emerging: the pressure to act is increasing, but so are the options for shaping the future. The central question is no longer “How can AI work more efficiently than humans?” but rather “How can AI augment rather than replace humans’ ability to form judgments, cooperate, and take responsibility?” (Shneiderman, 2022; Marcus, 2023).
Hybrid HCAI is my proposal for an architecture that responds to this: a three-layer model that systematically interconnects human, symbolic, and subsymbolic intelligence – while technically and organizationally securing roles, feedback, and responsibility. The goal is not technocratic overcontrol, but a responsible cooperation architecture in which humans and machines do not compete, but complement each other.
The basic assumption is that stability in digital transformation does not come from rigid control, but from the institutionalization of a set of rules that makes information flows, responsibilities, and decision paths transparent and reversible. In the language of systems theory, this is about second-order stability—that is, the stability of the rules according to which structures are allowed to change dynamically (Foerster, 1984; Luhmann, 1997).
Objective of the Contribution
This essay is intended as an invitation to interdisciplinary discussion. It outlines a line of thinking that shows how an organizational methodology model from the 1990s—BCM—can be used to build a conceptual bridge to current discourses on human-centered AI and governance innovation. At the same time, the article advocates a shift in perspective: from a focus on technology introductions to an architectural question. It is not what AI can do that determines progress, but how it is embedded in systems of responsibility, communication, and learning.
The rest of this article traces this path: from the emergence of BCM to reflections on the paradoxes of digital transformation, to a vision of a normative operating system that not only ethically constrains the use of AI but also makes it operationally productive.
LOOKING BACK ON A DIGITAL TRANSFORMATION THAT WASN’T
Between Technological Progress and Structural Stagnation
The waves of digitalization over the past two decades have led to profound technological changes in almost all areas of society and the economy: Cloud infrastructures, mobile devices, platform economies, and algorithmic decision-making systems now shape the operational reality of many organizations. And yet the results are sobering: despite massive investments in digital technologies, the hoped-for leap in productivity, innovation, and organizational resilience has failed to materialize in many cases (Schieck, 2023; Brynjolfsson, 2017; Acemoglu & Johnson, 2023).
This discrepancy highlights a central problem: digitalization was often viewed as a technology implementation project, rather than an architectural renewal. As a result, the deeper transformation of social and organizational structures largely failed to materialize. Adaptive, cooperative forms of organization were often replaced by digitally enhanced variants of classic control logics – with corresponding friction losses, path dependencies, and cultural tensions.
The Productivity Paradox Revisited
The so-called productivity paradox was first formulated back in the 1980s: “You can see the computer age everywhere but in the productivity statistics” (Solow, 1987). This paradox is experiencing a revival in the digital age. Despite exponential growth in computing power, networking, and data availability, productivity has been stagnating in many industrialized countries for years (Statista, 2023).
Brynjolfsson (2017) argues that this gap is less a technical deficit than an expression of an “implementation lag”: New technologies only unfold their social and economic benefits when they are embedded in complementary organizational structures. Acemoglu and Johnson (2023) note that technological innovations are increasingly characterized by “biased technological change” – that is, innovation paths that reinforce existing power and control structures rather than facilitating new, cooperative value creation models.
In many organizations, this contradiction manifested itself as a coexistence of digitized front ends and analog control cores. Processes were automated, interfaces modernized, and data volumes multiplied – while decision-making logic, role architectures, and responsibility structures remained largely unchanged. The result: a digitally shiny interface over a Taylorist control system.
The Adaptation Gap: An Organizational Sociological Finding
To define this structural deficit more precisely, I have proposed the term “adaptation gap.” By this I mean the growing divide between the dynamics of technological development and the inertia of organizational control, communication, and learning structures (Schieck, 2023; Schieck, 2024).
This adaptation gap has several dimensions:
- Structural dimension: Organizations are often structured according to hierarchical bureaucratic principles that emerged in the industrial modern era. These structures are designed for stability, control, and efficiency in relatively stable environments—not for cooperation in highly dynamic, complex environments.
- Cultural dimension: Digital technologies are embedded in social practices that are shaped by implicit norms, routines, and power relations (Orlikowski, 2007). These cultural patterns change much more slowly than technical infrastructures.
- Governance dimension: There is often a lack of a normative framework that links the use of digital technologies to clear mechanisms of responsibility, decision-making, and feedback. Technology decisions are made technocratically rather than negotiated cooperatively.
The adaptation gap is therefore not a short-term implementation backlog, but rather an expression of a systemic asynchrony between technology and organization. This asynchrony creates friction losses, demotivates employees, and limits the ability to deal with complex problems productively (Schieck, 2023; Schieck, 2024).
Misguided Control Due to Technocratic Approaches to Digitalization
In practice, this asynchrony was particularly evident in the digitization programs of the 2000s and 2010s. Instead of redesigning structures and processes through adaptive self-organization, many companies opted for large-scale, centrally controlled technology rollouts. ERP systems, workflow automations, and big data platforms were implemented without questioning the underlying logic of cooperation. These technocratic approaches implicitly followed a control model that viewed people as “operators” or “data suppliers” of technological systems – not as creative actors. With the advent of generative AI, this pattern threatens to intensify: instead of understanding AI as a partner in a cooperative architecture, it is widely used as a tool for further automating existing processes (Marcus, 2023).
The result is a paradoxical acceleration: technology is developing faster, but in many cases it is accelerating old patterns instead of enabling new forms of cooperation. The adaptation gap is widening – structurally, culturally, and normatively.
Consequences: From Technical Projects to Architectural Debates
The outcome of the waves of digitalization suggests that sustainable transformation cannot be achieved simply by introducing new technologies. What is needed is an architectural shift that addresses the structural, cultural, and normative foundations of organizations.
This is where the connection to the debate on human-centered artificial intelligence (HCAI) (Shneiderman, 2022) and its further development into hybrid models lies. AI should not be understood as a substitute, but as a complementary cooperation partner in a newly designed socio-technical architecture.
This means designing governance structures, role models, and feedback systems in a way that allows them to productively process complexity instead of suppressing it.
This shifts the discussion about AI into a discussion about architecture, focusing less on the performance of individual models and more on the question of how human, symbolic, and subsymbolic intelligence can be cooperatively interconnected in organizations, administrations, and social institutions.
This paves the way for the vision of a hybrid HCAI model, developed in the next section, which presents an architectural logic that takes the lessons learned from the adaptation gap seriously and systematically integrates self-organization, responsibility, and technological support.
HYBRID HCAI AS A NEW ARCHITECTURAL LOGIC
From AI Euphoria to Governance Vacuum
While recent years have been marked by a veritable surge of innovation in the field of artificial intelligence (AI), one crucial dimension has remained largely overlooked: the question of institutional and architectural embedding. Public and corporate debate has focused heavily on the performance metrics of individual models—accuracy, size, speed—but hardly at all on the conditions under which these technologies can actually become effective, legitimate, and controllable in complex social systems (Marcus, 2024; Shneiderman, 2022).
This asymmetry between technological dynamics and governance innovation creates a structural vacuum. While technical possibilities are expanding, there is often a lack of a coherent regulatory framework that ensures both accountability and collective capacity for shaping the future. This problem is particularly evident at the organizational level:
AI systems are often integrated on an ad hoc basis – for example, to automate individual tasks – without an overarching architecture that systematically links humans, symbolism, and subsymbolism. This creates “governance gaps”: decisions are increasingly influenced by non-transparent models without any traceable feedback loops, audit rights, or accountability structures being established.
The consequences are not only ethical in nature. They directly affect the productivity, innovative capacity, and resilience of organizations and societies. Without clear accountability structures, AI systems risk either becoming embedded in rigid control logics, which slows down their potential, or having unintended effects that are difficult to correct (Acemoglu, 2024). The current AI euphoria is thus also an expression of a governance vacuum: an architectural crisis that has less to do with technological development than with organizational design.
The Hybrid HCAI Model: Three-Layer Cooperation Architecture
Against this backdrop, the concept of hybrid HCAI can be understood as an architectural response to the aforementioned vacuum. Hybrid HCAI does not refer to a single technology, but rather to a structuring principle that brings three complementary forms of intelligence into a cooperative, feedback-capable order:
- The human layer forms the normative and contextual foundation. This is where goals, value judgments, ethical guidelines, contextual interpretations, and responsibilities arise. Humans remain sovereign over the system – not through permanent micro-control, but through the design of the rules and feedback mechanisms according to which AI operates (Shneiderman, 2022).
- The symbolic layer forms the explicit representation of knowledge: rules, role models, processes, decision-making logic, and ontologies. It acts as the “referee” of cooperation between humans and AI by ensuring explainability, auditability, and revisability. Symbolic systems anchor causality, normativity, and structures – they translate human governance into machine-readable form.
- The subsymbolic layer comprises AI models in the narrower sense – neural networks, generative models, statistical methods. It provides perception, pattern recognition, text generation, predictions, and suggestions. Its strength lies in the scaling of information processing, not in normative decision-making.
The central design principle of hybrid HCAI consists of architecturally coupling these three layers through clearly defined interfaces and continuous feedback. Subsymbolic systems operate within symbolically set parameters; symbolic systems, in turn, are dynamically anchored by human objectives, values, and organizational structures. The human level does not control every process, but it shapes the architecture in which decisions can be made.
This three-layer structure can be seen as an alternative to current centralized AI paradigms, which often practice one-way delegation: data in, decision out – without controllable spaces for interaction. Hybrid HCAI, on the other hand, views humans and AI as actors who cooperate based on a division of labor, connected by a transparent, federated, and adaptive architecture (Schieck, 2025; Helbing, 2025).
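The three-layer coupling can be sketched as a minimal decision pipeline: the subsymbolic layer proposes, the symbolic layer filters proposals against explicit, auditable rules, and the human layer decides over what remains. This is an illustrative sketch under my own assumptions (the function names, the scored-proposal format, and the example rule are hypothetical, not a specification from the book):

```python
from typing import Callable

def subsymbolic_propose(context: dict) -> list:
    # Stand-in for a statistical model: generates scored candidate actions.
    # A real system would call a trained model here.
    return [
        {"action": "approve_with_review", "score": 0.91},
        {"action": "auto_approve_no_review", "score": 0.97},
    ]

def no_bypass_of_human_review(proposal: dict) -> bool:
    # Symbolic rule: no action may skip the human review step.
    return proposal["action"] != "auto_approve_no_review"

def symbolic_filter(proposals: list, rules: list) -> tuple:
    # The "referee" layer: rules are explicit, and every rejection is
    # recorded with the name of the violated rule (explainability).
    allowed, audit_log = [], []
    for p in proposals:
        violated = [rule.__name__ for rule in rules if not rule(p)]
        if violated:
            audit_log.append({"proposal": p, "violated": violated})
        else:
            allowed.append(p)
    return allowed, audit_log

def human_decide(allowed: list, approve: Callable) -> list:
    # The human layer retains the final say over every proposal that
    # passed the symbolic layer.
    return [p for p in allowed if approve(p)]
```

Note the direction of control: the highest-scoring subsymbolic proposal is exactly the one the symbolic rule rejects, and the rejection is logged rather than silently dropped. Scaling happens below, regulation in the middle, decision on top.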
Governance, Feedback, and Adaptability
However, such an architecture can only function if it is supported by a systematically designed governance and feedback system. Governance is not understood here as an external set of rules that merely accompanies technology, but as an integral part of the system architecture. Three principles are central:
- Transparency: Decisions must be traceable across all three layers. This includes data provenance, rules, and machine suggestions, as well as human prioritizations. Transparency is a prerequisite for auditability and trust.
- Feedback: Decisions, results, and deviations must be fed into structured feedback cycles. Analogous to cybernetic concepts of “second-order stability” (Foerster, 1984), it is not a matter of rigid adherence to fixed structures, but of continuously adapting the rules according to which adjustments are made.
- Role-based accountability: Governance must be clearly assignable. Instead of diffuse responsibilities, role-based models define who bears which responsibilities – normative (human), formal (symbolic), and operational (technical).
These principles can be implemented technically via federated data and policy architectures, audit and explainability layers, and role-based access and decision-making models. However, it is crucial that governance is not understood as a control instrument against AI, but as a cooperative control logic that channels human, symbolic, and machine intelligence into productive paths.
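The three governance principles can be made concrete in a short policy-as-code sketch: a single traceable log (transparency), rule changes that are themselves logged and role-attributed (feedback, the stability of the rules for changing rules), and per-role queryability (role-based accountability). All names and the log format are my illustrative assumptions, not a prescribed implementation:

```python
from datetime import datetime, timezone

class GovernanceLog:
    """Single traceable log shared by all three layers (transparency)."""

    def __init__(self):
        self.entries = []

    def record(self, layer: str, role: str, event: str, detail: str) -> None:
        # Every layer writes to the same auditable trail.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "layer": layer,
            "role": role,
            "event": event,
            "detail": detail,
        })

    def trace(self, role: str) -> list:
        # Role-based accountability: every action is queryable per role.
        return [e for e in self.entries if e["role"] == role]

class RuleBook:
    """Rules may change, but only through logged, role-attributed
    amendments -- the feedback principle applied to the rules themselves."""

    def __init__(self, log: GovernanceLog):
        self.rules = {}
        self.log = log

    def amend(self, rule_id: str, text: str, role: str) -> None:
        self.rules[rule_id] = text
        self.log.record("symbolic", role, "rule_amended", rule_id)
```

The design choice worth noting is that the rulebook cannot be changed without leaving a trace: governance is part of the architecture, not a document alongside it.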
Application Scenarios and Implications
The strength of the hybrid HCAI approach lies not only in its theoretical elegance, but also in its practical applicability to different organizational contexts. Three application scenarios illustrate its range:
- Organizational development in companies and administration: In traditional organizations, central control structures are often overloaded, while local actors do not have the necessary tools to manage complex tasks in a self-organized manner. Hybrid HCAI can function here as an organizational operating system that systematically designs the division of labor between humans, symbolism, and AI. For example, decision-making processes that previously took place in hierarchical cascades can be decentralized via role-based ontologies, policy-as-code, and real-time feedback mechanisms. AI supports this process not as a substitute, but as a context enhancer: it provides suggestions, recognizes patterns, and simulates scenarios, while the symbolic level ensures governance and explainability, and the human level makes decisions and develops rules (Shneiderman, 2022).
- Network organizations and federated systems: In the context of decentralized networks— such as cross-sector collaborations, supply chains, or public infrastructures—hybrid HCAI enables distributed coordination without a central control authority. Through federated data architectures and shared symbolic reference frameworks (e.g., role models, shared ontologies), different organizational units can act autonomously while interacting within a shared governance system. This opens up new opportunities, especially for international collaborations, because values, responsibilities, and decision-making rights are architecturally anchored rather than merely contractually fixed (Helbing, 2025).
- Social systems and public governance: Finally, hybrid HCAI has the potential to shape social infrastructures beyond the corporate context. In areas such as labor market control, mobility, energy, or education, hybrid architectures can serve as the basis for participatory, data sovereign systems in which AI acts as a tool for social self-organization rather than an external control instrument. The normative and constitutional implications are particularly clear here: hybrid HCAI can be understood as a building block of a digital constitution that combines technological performance with democratic legitimacy (Acemoglu, 2024; Helbing, 2015).
What these scenarios have in common is that complexity is not “tamed” through centralization, but rather made productive through transparent roles, rules, and feedback. Hybrid HCAI provides the necessary architectural logic for this.
Research Perspectives and Outlook
Hybrid HCAI is not a finished model, but rather an open field of research and development that systematically intertwines technological, organizational, and societal perspectives. Three central perspectives arise for scientific debate:
- Theoretical connectivity: Hybrid HCAI ties in with various strands of research: organizational sociology (e.g., Luhmann’s role models and systems theory), cybernetic concepts of second-order stability (Foerster, 1984), AI research on neuro-symbolic integration (Marcus, 2024), and governance and democracy theories in the digital space (Helbing, 2025). A systematic synthesis of these approaches is still in its infancy, but it opens up considerable potential for interdisciplinary research.
- Empirical testing of hybrid architectures: To date, there are only a few real-world implementations that systematically link all three layers – human, symbolic, subsymbolic. Future research must show under what conditions hybrid HCAI actually has an impact in organizations: Which architectures are scalable? Which governance models are viable? Which forms of human-machine cooperation have been proven to increase productivity, participation, and resilience? Pilot projects in companies, administrations, or interorganizational networks are ideal for this purpose.
- Institutional and normative embedding: In the long term, the key question will be how hybrid HCAI can be integrated into legal, regulatory, and social systems. Standards such as the EU AI Act (European Union, 2024) or ISO/IEC 42001 (ISO, 2023) provide initial frameworks, but so far they have hardly addressed the architectural coupling of different forms of intelligence. The development of an information constitution that institutionally anchors transparency, auditability, participation, and responsibility will be one of the most important tasks in the coming years (Shneiderman, 2022; Helbing, 2025).
CONCLUSION
Hybrid HCAI represents a necessary architectural shift: from selective technology integration to a systemic coupling of human, symbolic, and machine intelligence. This shift is less a question of technical performance than one of governance, feedback, and institutional embedding.
This closes the circle, leading from criticism of waves of digitalization to the diagnosis of the adaptation gap to the outline of a new, human-centered architecture. Hybrid HCAI is not a panacea, but a structured framework in which innovation, responsibility, and participation can be systematically considered together. I believe that without a paradigm shift toward hybrid HCAI architectures, the AI bubble could burst, with far-reaching consequences for the financial and real economies and drastic losses in prosperity.
NOTE ON THE BOOK OF THE SAME NAME
This essay summarizes the core ideas of my book “From the BCM Model to Hybrid HCAI – Part I: The Story of an Idea Whose Time Has Come.”
The book is now available on Amazon at: https://www.amazon.com/dp/B0GHZDC7P1
METHODOLOGY AND ACKNOWLEDGEMENTS
I would like to thank Horst Tauber and Dr. Ingo Schrewe for the reflective exchange of ideas in the late 1990s and early 2000s, as well as Prof. Dirk Helbing for providing his publications and papers in 07/2025.
ChatGPT (OpenAI, version GPT-4o/5) was used to assist in the formulation of individual sections of text. This tool helped with spelling, grammar checking, sentence restructuring, and improving clarity. The generated content was critically reviewed and revised by the author, who is responsible for the final version. The actual ideas, arguments, and interpretations in this document are those of the author.
In this context, I found that ChatGPT either crashed or gave contradictory and illogical answers to complex questions. Only after a detailed dialogue with ChatGPT did I get conclusive results. In other words, ChatGPT learned from me in dialogue to understand and reproduce causal and logical connections.
Since I assume that other ChatGPT users have had similar experiences, I would like to thank all authors, named and unknown, who have consciously or unconsciously shared their knowledge with ChatGPT!
REFERENCES
- Acemoglu, D., & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. MIT Press.
- Brynjolfsson, E. (2017). Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics. NBER Working Paper.
- Gallup. (2023). Engagement Index Deutschland.
- Marcus, G. (2023). Rebooting AI. Pantheon Books.
- McKinsey & Company. (2015). Changing Change Management.
- Orlikowski, W. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28(9), 1435–1448.
- Schieck, F. (1996). Systemisches und ganzheitliches Management auf der Grundlage einer verhaltensgesteuerten Organisationsmethode.
- Schieck, F., & Tauber, H. (1998). Business Communication Management – Der Weg zum unternehmensweiten Wissensmanagement. OrgNews GfO.
- Schieck, F. (2003). Die Paradoxie der Beratung – Oder von rasantem Wandel und verspäteten Konzepten. Magazin für Digitalisierung, Vernetzung & Collaboration.
- Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
- Solow, R. (1987, July 12). We’d better watch out. New York Times Book Review.
- Statista. (2023). Arbeitsproduktivität in Deutschland 1991–2023.