Hybrid-HCAI: A thought experiment on the ultimate symbiosis of human and artificial intelligence

A positive future scenario for artificial intelligence in business and society

Abstract

The digital paradox – progress without social benefits: Despite massive investments in digitalization and artificial intelligence (AI), the Western world has neither significantly increased its productivity nor reduced social inequality or halted the erosion of democratic structures over the past 25 years. The so-called productivity paradox clearly shows that technological progress does not necessarily lead to economic or social prosperity. On the contrary, digital surveillance, algorithmic discrimination, and the dismantling of intermediary institutions have created new tensions.

The structural misalignment of current AI business models: Modern AI systems are often based on centralized data extraction and the use of third-party intellectual property. Their business models favor power concentration and digital dependency rather than promoting innovation and fairness. Even seemingly neutral subscription models often conceal the non-transparent exploitation of personal data. The underlying architectures are mostly proprietary and undermine both the data sovereignty of users and the fair participation of creators in value creation.

Scientific counter-models – Human-Centered AI: International experts are calling for a paradigm shift toward human-centered artificial intelligence (HCAI). The goal is to view technologies not as a replacement for human capabilities, but as an extension of them. Daron Acemoglu criticizes the current focus on automation and warns of an economic misstep without sustainable productivity gains. Gary Marcus, on the other hand, sees the combination of human logic and machine learning as the only viable model for the future – explainable, robust, and ethically responsible.

Hybrid HCAI: The vision of cooperative intelligence: At the heart of this vision is the idea of “trihybrid intelligence,” which combines symbolic AI (rules, logic), subsymbolic AI (neural networks), and human cognition (intuition, ethics). In this architecture, humans are not objects of automation, but an integral part – active designers rather than passive users. Symbolic AI takes on a mediating role: it regulates communication and ensures transparent, traceable, and ethically responsible decision-making processes. Biological and social systems serve as models: they function through decentralized interaction, continuous feedback, adaptability, and emergent structures. These principles could be translated into a symbolic set of rules that evolutionarily controls human-AI cooperation – self-organized, fair, and context-sensitive.

A concrete future scenario for companies: Companies of the future use hybrid HCAI platforms that dynamically adapt workplaces to tasks and contexts. Processes, rules, and feedback are continuously updated in a hybrid knowledge graph. Learning and change take place organically – without classic change processes. Employees actively participate in the further development of the system through dialogical interaction. The organization becomes a digital real-time twin that simulates and controls processes and develops them further together with people. The workplace thus becomes a digital reflection of the individual. Hybrid HCAI enables a new form of operational value creation: less bureaucracy, faster innovation, and structural resilience. At the same time, it strengthens cultural integrity through participatory decision-making processes, transparent rules, and fair remuneration for cognitive performance.

From AI product to social operating system: The vision culminates in the idea of an Open-HCAI – an ethically coded, decentralized, and publicly accessible AI platform. Similar to Bitcoin as a decentralized currency infrastructure, Open-HCAI could become the fundamental infrastructure for knowledge, innovation, and social fairness. Such a platform would not only be a technical solution, but also an expression of a new social grammar: collective intelligence, trust, and participation as the basis for productive value creation. Open-HCAI could also be a decisive step on the path to artificial general intelligence (AGI) – not as isolated superintelligence, but as a co-evolutionary symbiosis of humans and machines. Such AGI would not only be powerful, but also ethically anchored, transparent, and socially legitimized. 

(Friedrich Schieck / 07/2025)

Table of contents

  1. Developments in the digital age
  2. AI business models and structural problems of digital transformation
  3. Scientific perspectives on the future of AI
  4. AGI and common sense
  5. Different strengths of human and artificial intelligence
  6. The idea of “hybrid intelligence”
  7. Collective intelligence on a new level
  8. Human-in-the-loop and neuro-symbolic approaches
  9. The idea of a “hybrid HCAI” – a thought experiment
  10. A neuro-symbolic set of rules for ultimate symbiosis
  11. Hybrid HCAI in companies – a look into the future
  12. Potential benefits of a hybrid HCAI platform
  13. Impact of a hybrid HCAI on the economy and society
  14. Relationship to the concept of artificial general intelligence (AGI)
  15. Open HCAI – A hypothetical thought experiment
  16. Impact of an open HCAI on our future
  17. My preliminary conclusions
  18. My statement

1. Developments in the digital age

The past 25 years have revealed a paradoxical development in the Western world: despite unprecedented advances in information and communication technology and the emergence of artificial intelligence, key social, economic, and political problems have not been solved, but in some cases have even been exacerbated.

The persistent productivity paradox

The productivity paradox described by Robert Solow back in 1987 has persisted and even intensified in recent years. While companies and governments have invested hundreds of billions in digitalization, automation, and AI systems, the expected productivity growth has failed to materialize. Instead, productivity gains in many Western countries have stagnated at historically low levels.

The causes are complex: many digitization projects fail due to a lack of integration of existing systems, lead to excessive administration, or require the costly parallel operation of old and new infrastructures. At the same time, new forms of work are emerging that generate stress and distraction rather than real efficiency gains—permanent availability, multitasking, and digital surveillance burden employees without delivering the promised productivity gains.

Democratic regression and authoritarian tendencies

Particularly alarming is the erosion of democratic institutions in countries that have long been considered stable democracies. Digital technologies have not prevented this development, but have often accelerated it. Social media and its algorithms enable the targeted dissemination of disinformation and create parallel realities that undermine a common factual basis for democratic discourse.

Populist politicians use these platforms for emotional mobilization and social polarization. At the same time, digital surveillance technologies are giving rise to new forms of political control. What was originally developed for security purposes is increasingly being used to monitor political opposition and restrict civil liberties.

Traditionally important intermediary institutions such as independent media and established parties are losing influence, while tech companies are effectively taking on journalistic and political gatekeeping functions without corresponding democratic accountability.

Decline in equal opportunities and growing inequality

The digital revolution has not only failed to eliminate existing social inequalities, but has also created new forms of disadvantage. A digital divide runs through society: while privileged classes benefit from AI tools, automation, and digital business models, medium-skilled jobs are being systematically rationalized away.

Access to high-quality digital education and infrastructure increasingly determines life chances. The emerging platform capitalism concentrates profits among a few tech companies and their shareholders, while users, whose data creates the actual value, are not adequately compensated.

Algorithmic decision-making systems make important decisions about lending, job opportunities, and social benefits, but often perpetuate existing biases and create new, less transparent forms of discrimination. The gig economy has pushed millions of workers into precarious employment relationships – formally self-employed but de facto dependent – without traditional employee rights or social security.

Real loss of prosperity despite digital progress

Paradoxically, many people in Western societies have suffered real losses in prosperity despite the digital revolution. Although digital services often appear to be free, they are financed through data extraction, advertising, and psychological manipulation, the economic costs of which are externalized.

The financialization of the economy, reinforced by algorithmic trading and speculative tech valuations, has redistributed wealth away from the productive real economy to those who are already wealthy. Geographically, extreme imbalances have emerged: technology hubs such as Silicon Valley concentrate enormous wealth, while traditional industrial regions have been left behind—a spatial polarization that fuels political tensions.

Stagnating real incomes have been partially offset by readily available consumer credit via fintech platforms, leading to rising household debt.

Systemic reinforcement of problems

These three problem areas—the productivity paradox, democratic regression, and growing inequality—reinforce each other in a vicious circle. Lack of productivity growth creates economic pressure and makes authoritarian “efficiency solutions” politically attractive. Growing economic inequality undermines people’s trust in democratic institutions and their ability to reform. Authoritarian tendencies, in turn, hinder the structural reforms needed to solve fundamental economic problems, as they often prioritize short-term power retention over long-term problem solving.

The irony of this development is that technologies that were originally intended to increase efficiency, create transparency, and promote democratic participation have in some cases had the opposite effect. This realization raises fundamental questions about social governance and democratic control of technological change and makes it clear that technological progress alone does not automatically lead to social progress.

2. AI business models and structural problems of digital transformation

The paradoxical developments of the past 25 years—stagnating productivity despite massive investments in technology, democratic regression despite improved communication technology, and growing inequality in the digital age—cannot be explained solely by inefficient implementation. Rather, they point to fundamental structural problems in both the technological architecture approaches and the business models of the dominant technology companies.

The business models of the AI giants

Current AI providers pursue different but structurally similar business models, all of which rely on centralized control and data extraction. OpenAI, with ChatGPT, has developed a hybrid model based on subscription fees and API access that now generates around $10 billion in annualized revenue, with a target of $12.7 billion for 2025. Despite these enormous revenues, the company is not yet profitable due to its massive training and infrastructure costs.

Critics such as Gary Marcus describe OpenAI’s business model more drastically: In his article “OpenAI Cries Foul” dated January 9, 2025, he characterizes it as a company that has made a name for itself by “chewing up and recombining shredded pieces of intellectual property in a statistically probable manner without adequate compensation” [1]. This assessment points to a fundamental problem in the AI industry: the massive use of copyrighted content for training models without compensating the original creators.

Google strategically integrates its Gemini AI technology into its established advertising-based business model and uses AI services to make search results and ads more accurate, while Google Cloud has already reached an annualized revenue run rate of $36 billion. Elon Musk’s xAI with Grok does not yet pursue a clear monetization model and primarily functions as a strategic asset for the X ecosystem.

Structural continuity instead of innovation

These business models reproduce and reinforce the problematic structures of the platform economy instead of breaking them down. Even the seemingly “cleaner” subscription models continue to rely on data extraction for model improvement, create new dependencies through proprietary ecosystems, and remain opaque about training methods and data usage.

Particularly problematic is the systematic appropriation of intellectual property: AI companies train their models using copyrighted texts, images, and other content without compensating the authors or even asking them. This practice effectively constitutes an expropriation of creative work, redistributing value from the original creators to the AI companies.

The technological architecture follows a centralized control paradigm: all data processing takes place in proprietary cloud systems, users have no control over their data or its use, and the systems function as opaque “black boxes” without traceable or correctable decision-making logic.

The causes of the paradox

The cause of these paradoxical developments lies in the combination of two reinforcing factors: methodological and architectural deficits and problematic business models. Current technological approaches continue to focus on externally organized automation, surveillance, and control, whether in the corporate or public sector, rather than empowering people or promoting democratic participation.

At the same time, platform companies collect users’ personal data without explicit consent and sell it to the highest bidder for advertising purposes. AI companies go one step further: they systematically appropriate the creative and intellectual work of millions of authors without compensating them and monetize this content via their AI services. This combination of data extraction and intellectual expropriation creates systems that are technically advanced but socially regressive, exacerbating existing power imbalances.

Alternative development paths

There are certainly alternative approaches that would be more democratic and user-oriented. Decentralized AI architectures could enable models that run locally on user devices or operate in federated networks without central control. Open-source development would promote transparent, community-driven AI development instead of proprietary systems. Data sovereignty concepts could create architectures in which users retain control over their data and determine how it is used. Cooperative business models for AI infrastructure could offer an alternative to the current monopoly structures.

Systematic concentration of power

The core problem is that current AI developments are not only failing to break down the existing power structures of the platform economy, but are actually reinforcing them. Instead of creating tools that empower people and strengthen democratic processes, systems are emerging that create new dependencies and reinforce existing inequalities. So far, the AI revolution has followed the same pattern as previous waves of digitalization: technological innovation is channeled and cannibalized by business models that primarily serve to concentrate capital and power in the hands of a few players.

Conclusion:

As long as AI systems are developed according to established patterns, they will reinforce rather than resolve the paradoxical effects of the last 25 years: AI companies not only extract user data but also use the intellectual property of millions of creators for their models. This represents a new form of expropriation that goes beyond the familiar problems of the platform economy, further exacerbates the concentration of power, and can limit or even nullify the equal opportunity and distributive justice of a free market economy and liberal democracy.

This raises the first key question:

What could a fundamental paradigm shift in the methodological and architectural approaches of today's AI models look like that rewards the cognitive performance of the user and promotes, rather than restricts, equal opportunity and distributive justice in a free market economy and liberal democracy?

3. Scientific perspectives on the future of AI

A group of 26 international experts published the results of their study “Six Human-Centered Artificial Intelligence Grand Challenges” on January 2, 2023.

The widespread use of artificial intelligence (AI) technologies has a significant impact on human life, the extent of which is not yet fully understood. There are numerous negative unintended consequences, including the perpetuation and exacerbation of social inequalities and divisions through algorithmic decision-making.

The authors present six major challenges for the scientific community to develop AI technologies that are human-centered, i.e., ethically acceptable and fair, and improve human existence.

These major challenges are the result of international collaboration between science, industry, and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI).

Essentially, these challenges call for a human-centered approach to AI that:

  1. focuses on human well-being,
  2. is designed responsibly,
  3. respects privacy,
  4. follows human-centered design principles,
  5. is subject to appropriate governance and oversight, and
  6. interacts with individuals while respecting human cognitive abilities.

The authors hope that these challenges and the associated research directions will serve as a call to action for AI research and development that acts as a multiplier for fairer, more equitable, and more sustainable societies. [2]

Daron Acemoglu warns against current AI developments

In his article “The World Needs a Pro-Human AI Agenda” dated November 29, 2024, Daron Acemoglu warns that artificial intelligence (AI) in its current technological context could lead to a world in which workers are displaced and people are influenced by manipulation and misinformation – without any significant gains in productivity. [4]

Although some industry experts predict a rapid breakthrough toward artificial general intelligence (AGI), Acemoglu points out that there is neither clear evidence nor real productivity gains to support this. Instead of using AI primarily for automation and to replace human labor, it should rather be used to specifically support people and expand their capabilities.

However, in order to promote a “pro-human” agenda, politics, society, and the media would have to push harder for AI to be developed specifically to strengthen human skills. Ultimately, a rethink is needed that moves away from the pursuit of AGI and sees AI primarily as a tool for improving and complementing human work. [4]

Peter Dizikes sums this up in his article “Daron Acemoglu: What do we know about the economics of AI?” dated December 6, 2024, as follows: Despite all the talk about artificial intelligence turning the world upside down, its economic impact remains uncertain. There is massive investment in AI, but little clarity about what it will produce.

“Where will the new tasks for humans with generative AI come from?” asks Acemoglu. “I don’t think we know, and that’s the problem. Which apps will really change the way we do things?” [5]

In his article “Will we squander the AI opportunity?” dated February 19, 2025, Daron Acemoglu points out that for more than 200,000 years, humans have been building solutions to new challenges they face and sharing knowledge with each other. AI could continue this trend by complementing human capabilities and enabling us to reach our full potential, but the technology is evolving in a different direction. [6]

Gary Marcus’ criticism of current AI developments

Gary Marcus (professor emeritus of psychology and neuroscience at New York University), one of the most prominent critics of current developments in the field of AI, warns against them in his book “Taming Silicon Valley: How We Can Ensure That AI Works for Us” and in numerous articles on his Substack blog:

Generative AI is often touted as the next big breakthrough. Estimates predict gigantic markets, even though revenues have only been in the range of a few hundred million so far. Important sources of revenue include automatic code writing and marketing texts. However, exaggerated expectations could lead to a massive bubble, as the technology currently often falls short of its promises.

A central problem with generative AI is so-called hallucinations – the systems invent facts or provide inaccurate information. There is also a lack of consistency and “common sense.” Many cases show that AI models recognize statistical patterns but do not have a real understanding of the real world.

The problem of teaching machines everyday human logic has been known since the 1950s and remains unsolved. Models such as ChatGPT or image AI systems such as Sora suffer from the fact that, although they produce impressive results, they often fail to understand basic physical or social contexts correctly. They lack common sense!

Current technology can show rapid and spectacular progress, but it is economically and technically unstable. Studies show that generative AI has major gaps in formal reasoning and abstraction ability. Experts such as Gary Marcus advocate a neuro-symbolic approach that combines statistical methods with symbolic logic. Only in this way can AI be developed in the long term that is truly reliable and widely applicable. [7]

In this context, Gary Marcus writes in his blog post “Deep Learning Is Hitting a Wall” from February 9, 2025: “With all the challenges in ethics and computation, and the knowledge needed from fields such as linguistics, psychology, anthropology, and neuroscience, not just mathematics and computer science, it will take a village to raise an AI. We should never forget that the human brain is perhaps the most complicated system in the known universe; if we want to build something that is roughly its equal, open-hearted collaboration is key.” [8]

In his article “A knockout blow for LLMs?” dated June 8, 2025, Gary Marcus writes: “The vision of AGI that I have always had is one that combines the strengths of humans with the strengths of machines and overcomes the weaknesses of humans.” [9]

4. AGI and common sense

Artificial General Intelligence (AGI) is the intelligence of a hypothetical computer program that has the ability to understand or learn any intellectual task that a human can perform. An alternative definition describes AGI as a highly autonomous AI system that surpasses human capabilities in solving most economically significant intellectual tasks. [10]

In this sense, common sense tells me that the gigantic investments in AI models and AI infrastructures that are supposed to replicate or surpass human intelligence cannot be economically viable, since human intelligence is available in sufficient numbers and at low cost worldwide!

A calculation example to illustrate this point

With this in mind, I asked ChatGPT-4o the hypothetical question: How much would it cost in terms of investment and energy consumption to replicate the brain computing power of one million people using today’s supercomputers such as the NVIDIA H100 GPU?

  • ChatGPT-4o's answer – summary:

  Category                                Value
  --------------------------------------  -------------------------------------
  FLOPs per brain                         10¹⁶
  Total computing power                   10 ZettaFLOPs
  GPUs required (H100)                    167 million
  Computing power per GPU                 60 TFLOPs (FP64, realistic)
  GPU costs (hardware only)               ~$5 trillion
  Total cost with infrastructure          ~$5.15 trillion
  Power consumption                       167 GW
  Annual energy requirement               approx. 1,460,000 GWh
  Annual energy requirement of Germany    approx. 449,800 GWh (for comparison)
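
For transparency, the arithmetic behind these figures can be reproduced in a few lines. The per-GPU price (~$30,000) and the per-GPU power draw (~1 kW including cooling and data-center overhead) are my own assumptions, chosen so that the result matches the table above:

```python
# Back-of-the-envelope check of the table above.
FLOPS_PER_BRAIN = 1e16      # assumed FLOPs equivalent of one human brain
PEOPLE = 1e6
FLOPS_PER_GPU = 60e12       # H100 at FP64, realistic sustained rate
PRICE_PER_GPU = 30_000      # USD, assumption
WATTS_PER_GPU = 1_000       # assumption, incl. cooling and infrastructure

total_flops = FLOPS_PER_BRAIN * PEOPLE        # 1e22 FLOPs = 10 ZettaFLOPs
gpus = total_flops / FLOPS_PER_GPU            # ~167 million GPUs
cost_usd = gpus * PRICE_PER_GPU               # ~$5 trillion (hardware only)
power_gw = gpus * WATTS_PER_GPU / 1e9         # ~167 GW
energy_gwh = power_gw * 8760                  # ~1,460,000 GWh per year

print(f"{gpus:.3g} GPUs, ${cost_usd:.3g}, {power_gw:.0f} GW, {energy_gwh:,.0f} GWh/yr")
```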

Even if these figures are highly hypothetical, they fundamentally call into question past and, above all, future investments in the further development and training of GenAI/LLM models and in the necessary AI infrastructures. I tend to believe that the public discussion about the possibility of developing such an AGI system serves primarily as marketing, intended to persuade investors to commit further capital.

I think AI researchers and scientists should be more concerned with how to incorporate the cognitive performance of human brains into future AI models!

5. Different strengths of human and artificial intelligence

Human intelligence is biological and shaped by evolutionary adaptations and social experiences. It manifests itself in creative, emotional, and intuitive abilities that enable humans to apply knowledge in context, develop new ideas, and learn from mistakes. In addition, humans possess self-awareness and the ability to make moral decisions. Their cognitive abilities—such as perception, memory, attention, creativity, and emotional intelligence—are unique. These allow them to make informed decisions even in uncertain or incomplete situations. While AI analyzes large amounts of data to identify patterns, humans can act efficiently through intuition and experience, even when information is limited.

In contrast, artificial intelligence is based on mathematical algorithms and data models that run on machine hardware. It surpasses humans in speed, precision, and the processing of large amounts of data. AI can recognize highly complex relationships in data that are invisible to humans and work simultaneously on many levels to generate content. Nevertheless, it lacks true creativity, awareness, and the ability to develop entirely new concepts from context. While humans act flexibly based on experience and intuition, AI is usually limited to specific tasks and has no understanding or awareness of its own. It appears creative, but it is not in the true sense of the word!

6. The idea of “hybrid intelligence”

An increasingly discussed approach to combining the strengths of humans and AI in a meaningful way is the concept of hybrid intelligence [12]. It involves close interaction between human and machine capabilities, in which the two sides do not merely hand tasks off to each other but genuinely complement one another. For example, AI takes on data-intensive, repetitive, or statistical pattern-based tasks, while humans devote themselves to creative, strategic, value-adding, and empathetic dimensions.

The goal is for both humans and machines to play to their respective strengths. In this approach, humans retain control over decision-making and design processes, while AI provides support, acceleration, and expansion. The technology is not intended to replace humans, but rather to create space for them to develop their creative and critical potential. In this way, a more intelligent whole can emerge than either side could ever achieve alone.

The concept of hybrid intelligence aims to combine the respective strengths: AI provides structured information, performs analyses, processes large amounts of data, and offers forecasts. Humans critically question these results, put them into context, and develop creative solutions. In addition, collaboration in collective exchange enables higher-quality decisions, as each actor—human or machine—contributes precisely those aspects in which they are particularly effective. [12]

7. Collective intelligence on a new level

The symbiosis of human and artificial intelligence takes the principle of collective intelligence to a new level. While each person contributes individual skills and knowledge, AI supports the entire system in real time by identifying gaps in knowledge and highlighting new opportunities for cooperation. In this understanding, each person is simultaneously a store of data, a source of experience, a creative catalyst, and a designer. Decisions remain an interactive process in which human judgment, ethical considerations, and economic aspects play a central role.

A methodological stumbling block of current AI approaches is that they are often designed to replace humans rather than integrate and intelligently network them. There is a lack of architectural models that are explicitly designed for cooperation between humans and AI. However, a true symbiosis could fundamentally change the way humans learn, work, and solve problems, overcoming the so-called productivity paradox of information technology by combining the strengths of humans and machines.

8. Human-in-the-loop and neuro-symbolic approaches

A central principle of hybrid intelligence is the integration of humans into the AI-supported decision-making process (human-in-the-loop) [11]. Humans remain responsible for qualitative judgments, ethical considerations, and creative problem solving, while AI provides data-driven analyses and suggestions. This combination enables greater transparency and control, as humans can intervene directly and make corrections.
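
As an illustration of this human-in-the-loop pattern, the following minimal sketch routes low-confidence model outputs to a human reviewer and records the correction as future training data; all names and the confidence threshold are illustrative assumptions, not an existing API:

```python
from dataclasses import dataclass
from typing import Callable

feedback_log: list = []    # human corrections collected as future training data

@dataclass
class Decision:
    label: str
    confidence: float
    source: str            # "model" or "human"

def decide(x, model: Callable, ask_human: Callable, threshold: float = 0.85) -> Decision:
    """Route low-confidence predictions to a human reviewer (human-in-the-loop)."""
    label, confidence = model(x)
    if confidence >= threshold:
        return Decision(label, confidence, source="model")
    corrected = ask_human(x, suggestion=label)     # the human sees the AI's suggestion
    feedback_log.append((x, corrected))            # the correction feeds back into training
    return Decision(corrected, 1.0, source="human")
```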

A core technological element of hybrid intelligence is neuro-symbolic approaches, which combine neural networks (deep learning) with symbolic AI (knowledge graphs, rule-based systems). While neural networks are valued for their pattern recognition capabilities, symbolic systems offer better explainability. This combination enables efficient generalization of knowledge, integration of domain-specific rules, and transparent interaction with humans.
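
A minimal sketch of this division of labor, with plain Python predicates standing in for a real rule engine: the neural component proposes, the symbolic rules veto and explain. The lending rules shown are invented purely for illustration:

```python
# A neural scorer proposes; simple predicates stand in for a symbolic rule engine.
RULES = [
    ("income must be verified", lambda c: c["income_verified"]),
    ("amount must not exceed 5x annual income", lambda c: c["amount"] <= 5 * c["income"]),
]

def approve(candidate: dict, neural_score: float) -> tuple[bool, list[str]]:
    """Symbolic rules can veto a neural suggestion and explain why."""
    violations = [name for name, holds in RULES if not holds(candidate)]
    if violations:
        return False, violations                  # transparent, rule-based refusal
    return neural_score > 0.5, ["all symbolic checks passed"]

print(approve({"income_verified": True, "income": 40_000, "amount": 150_000}, 0.9))
# -> (True, ['all symbolic checks passed'])
```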

The development of hybrid systems is associated with numerous challenges. One of the biggest technological hurdles is the quality of the data on which AI models are based. Similarly, real-time collaboration between humans and machines requires efficient synchronization mechanisms to avoid miscommunication.

Another important starting point is the development of algorithms that enable continuous learning and dynamic adaptation. They should be able to process human feedback in real time and adjust their performance accordingly. At the same time, ethical frameworks must be integrated to ensure that the systems are not only efficient, but also trustworthy and morally acceptable.

In addition, human-centered user interfaces are necessary to enable intuitive and transparent interaction between humans and AI. Only through user-friendly design can ongoing collaboration develop in which humans and machines learn from each other.

9. The idea of a hybrid HCAI – a thought experiment

While sub-symbolic AI offers speed, statistical pattern recognition, scalability, and consistency, human intelligence contributes cognition, consciousness, depth of meaning, empathy, moral responsibility, and genuine creativity. Therefore, the two systems are most effective when they complement each other: AI scales routine and data-heavy processes, while humans contribute meaning, context, and responsibility. The goal should be to enable a symbiosis between artificial and human intelligence, in which the strengths of one compensate for the weaknesses of the other, thus creating the necessary conditions for true AGI.

In my view, symbolic methods are required to achieve this ultimate symbiosis, i.e., classical knowledge-based AI in the form of logic, rules, causality, and ontologies. These enable the integration of human intelligence as a third component in the form of neuro-symbolic AI [13] (hybrid AI). This would lead to a so-called “trihybrid AI” that combines symbolic, subsymbolic, and human intelligence. For a better understanding, however, I would call this “trihybrid AI approach” “hybrid HCAI,” as it directly involves humans and is geared toward their needs.

It is crucial that symbolic AI takes on a mediating and regulating role between subsymbolic AI and human intelligence. However, the actual cognitive performance would remain with the humans involved. The symbolic AI algorithm would act as a kind of self-control mechanism at the meta level, ensuring that information, communication, and interaction processes between humans and machines, as well as between humans themselves, are efficient, transparent, fair, and ethically acceptable.

Symbolic AI would not only be a higher-level set of rules, but also a bridge that ensures the understanding, explainability, and trustworthiness of AI systems. In such a hybrid architecture, it would be an essential component for optimally combining the strengths of subsymbolic approaches and human intelligence.

In other words, only the combination of subsymbolic and human intelligence by means of an overarching set of rules for symbolic AI can help to achieve productive value creation at an economically reasonable cost while at the same time meeting the ethical requirements of human-centered AI (HCAI) [14].

Such an overarching set of rules should consist of technical testing, emotional, intrinsic, and economic incentive structures, as well as decentralized control, and should be firmly anchored in the code of a neuro-symbolic AI architecture to make it effectively immutable.

A hybrid HCAI system based on a trihybrid architecture integrates subsymbolic AI (learning systems, pattern recognition), symbolic control (rule-based, explainable, controllable), and human intelligence (judgment, values, emotions, feedback) into an overall system. The set of rules serves, in a figurative sense, as a “social operating system” for interaction, control, and participatory co-creation between humans and machines, but also between humans themselves.
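
One interaction cycle of such a trihybrid system might look like the following sketch; the interfaces (suggest, check, review, learn) are hypothetical placeholders, not an existing implementation:

```python
def trihybrid_step(task, subsymbolic, rules, human):
    """One cooperation cycle: subsymbolic proposes, symbolic checks, human arbitrates."""
    proposal = subsymbolic.suggest(task)             # statistical pattern recognition
    verdict = rules.check(task, proposal)            # transparent meta-level rule check
    if verdict.ok and not verdict.needs_human:
        return proposal                              # routine case handled automatically
    decision = human.review(task, proposal, verdict.explanation)
    rules.learn(task, decision)                      # the rule set evolves from feedback
    return decision
```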

This raises the second key question:

Which methodological and architectural approach, which neuro-symbolic set of rules (algorithm), and which human-machine interface (user interfaces) enforce a transparent, fair, and ethically acceptable input/output process for knowledge sharing or recognition of the user's cognitive performance?

10. A neuro-symbolic set of rules for ultimate symbiosis

The ideal symbiosis between artificial and human intelligence can only be achieved through a human-centered set of rules that specifically combines the respective strengths and compensates for the weaknesses. Such a set of rules must therefore shape the interaction between the actors in such a way that misunderstandings, opportunistic distortions, bureaucratic hurdles, and, above all, unfair knowledge sharing are avoided.

The central problem is to translate the requirements regarding explainability and transparency, adaptive learning and feedback loops, ethical and social responsibility, and intuitive interfaces into a coherent, scalable, and practical set of rules.

Differences between symbolic, sub-symbolic, and neuro-symbolic AI

Symbolic AI is based on explicit, rule-based systems. Here, facts, rules, and logic are represented in a clear, often formal language such as predicate logic or ontologies. This form is very explainable, as the decision-making processes are usually transparent and comprehensible. However, symbolic AI is often difficult to adapt and requires a lot of manual maintenance.

In contrast, subsymbolic AI is based on neural networks and other statistical learning methods. This form learns implicitly from data, stores knowledge in weights, vectors, or matrices, and can thus recognize extremely complex patterns. However, its internal structure often makes it difficult to explain (the “black box” phenomenon).

Neuro-symbolic AI (hybrid AI) attempts to combine the advantages of both worlds. By combining logical representation and pattern recognition, AI should remain both capable of learning and explainable. The central problem here is the development of a uniform knowledge representation that makes use of both symbolic and subsymbolic aspects.

Unresolved problems in the development of neuro-symbolic AI

Several hurdles must be overcome in order to successfully combine symbolic and subsymbolic approaches. The following points are particularly challenging:

  • Knowledge representation: Logical rules must be available in a form that can be linked to neural weight representations. This requires novel models that make symbolic information understandable for neural networks.
  • Logical reasoning versus pattern recognition: Symbolic AI works deterministically with formal rules, while subsymbolic AI tends to be probabilistic and data-driven. Seamlessly combining these different paradigms is technically and conceptually challenging.
  • Explainability: While symbolic systems provide a high degree of transparency, neural networks are considered difficult to understand. Therefore, mechanisms must be found that make the reasoning behavior of a hybrid system comprehensible.
  • Efficient learning and scalability: Symbolic AI is computationally intensive when large sets of rules are involved. Neural networks often require large amounts of data. A hybrid solution must not impose impractical requirements – neither in terms of computing time nor data volume.
  • Modeling motivation, intention, and consciousness: These dimensions of human intelligence have not yet been convincingly modeled in any AI – neither symbolic nor subsymbolic.

The role of human intelligence: Trihybrid intelligence

In light of these challenges, the idea of trihybrid intelligence was formulated, which additionally integrates human intelligence. Humans possess cognitive abilities that AI systems have so far been unable to adequately mimic, including intuition, creativity, ethical reflection, and the ability to learn from a few examples. Such trihybrid intelligence would combine symbolic, subsymbolic, and human intelligence in a hybrid HCAI overall system.

The advantages are obvious:

  • Increased explainability: Humans can question the “black box” of AI and explain in clear terms where certain decisions come from.
  • Creativity and flexibility: Human intelligence can find new solutions that rules or neural networks could not foresee.
  • Ethical control: Humans can set the moral framework and respond to unforeseen consequences.

However, the question arises as to how such a human-AI interface can be designed efficiently and scalably without compromising the performance of the system.

The core problem of trihybrid intelligence

The core problem of trihybrid intelligence is combining three very different forms of knowledge and decision-making. Symbolic logic, neural pattern recognition, and human cognition each follow their own rules and forms of representation. A central architecture that combines all of this must:

  • Unify forms of knowledge without losing important information.
  • Provide flexible learning and inference procedures that can operate both deterministically and probabilistically.
  • Enable interactive human-AI interfaces that make continuous and efficient use of human feedback.

A central problem here is shared knowledge representation, as humans work with language and intuition, while machines are based on numerical calculations. One possible solution is graph-based models that merge structured and unstructured data.
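
A minimal sketch of such a graph-based model: each node carries both symbolic, rule-readable facts and a learned embedding, so that rule engines and neural components can operate on the same structure. All field names and values are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeNode:
    """Hybrid graph node: symbolic facts plus a subsymbolic embedding."""
    concept: str
    facts: dict = field(default_factory=dict)               # explicit, rule-readable attributes
    embedding: list[float] = field(default_factory=list)    # learned vector representation
    edges: list[tuple[str, str]] = field(default_factory=list)  # (relation, target concept)

node = KnowledgeNode(
    concept="invoice_approval",
    facts={"requires": "four_eyes_principle", "max_amount": 10_000},
    embedding=[0.12, -0.87, 0.45],                          # e.g., from a language model
    edges=[("part_of", "procure_to_pay"), ("owned_by", "finance_team")],
)
```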

User motivation and natural self-control mechanisms

To develop true trihybrid intelligence, the following is required:

  • A universal representation of knowledge (both symbolic and subsymbolic).
  • An adaptive control system that switches between rules, probabilities, and intuition depending on the type of problem.
  • Efficient and scalable algorithms that process large amounts of knowledge in a meaningful way.
  • Trustworthy and explainable decisions that humans can understand.
  • Human-AI cooperation that leverages human strengths without adopting human weaknesses.

The solution could lie in a self-learning, interactive system that dynamically combines logical structure, neural learning, and human experience. But before such technology can become a reality, fundamental challenges must be overcome.

This raises the third key question:

How can user motivation be ensured so that they actively participate in a hybrid HCAI system and contribute their cognitive expertise?

To understand this motivation, both technical and psychological factors must be taken into account. In my view, consideration of evolutionary behavioral patterns plays a central role in this.

Natural self-control mechanisms and neuro-symbolic rules

Scientific studies show that evolutionary behavioral mechanisms—such as reciprocity, fairness, and group formation—play a decisive role in motivation in social systems. These principles should be taken into account in a hybrid HCAI system.

Only when this human-centered foundation is in place can the hybrid HCAI fully realize its potential and act as a reliable partner to humans, rather than failing due to complexity, lack of transparency, or communication problems.

To find answers to this problem, I ask myself: What rules or self-control mechanisms do natural systems use, for example?

  • What biological self-control mechanisms or rules of neural self-organization are used for networking in the human brain?
  • What sociological self-control mechanisms or rules of social self-organization are used for networking in a social system?
  • What parallels exist from a biological and sociological perspective, and what factors influence self-organized networking in both systems?

The self-control mechanisms of biological and sociological systems show remarkable parallels, even though they operate at different levels—in the brain at the level of neurons and in social systems at the level of humans. Both systems are based on information, communication, and interaction processes that enable them to adapt dynamically to changes and form complex networks.

Communication in the brain takes place via synaptic connections, through which electrical and chemical signals are transmitted. Through synaptic plasticity, these connections continuously adapt to new experiences – neurons that are repeatedly active together strengthen their connection. Social systems function in a similar way: people communicate via language, symbols, digital media, and social interactions.

Communication patterns change over time as a result of social developments, technological innovations, and shared experiences. In both cases, communication is central to linking information, generating knowledge, and coordinating joint action.

Another common feature is pattern formation and emergence. Through the interaction of many neurons, neural networks develop emergent processing patterns that enable complex cognitive processes. Social systems also exhibit emergent phenomena when, for example, social trends, collective opinions, or social movements arise that cannot be attributed to the actions of individual persons. In both contexts, structures emerge from decentralized, local interactions without a central control authority completely controlling the process.

Feedback mechanisms are crucial for maintaining stability and adaptability. In the brain, positive feedback reinforces heavily used synaptic connections, while unused ones are broken down. Social systems also use positive and negative feedback: social recognition and rewards promote desired behavior, while violations of norms are regulated by sanctions. In both systems, these mechanisms ensure that new patterns form without destabilizing the overall system.
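
This feedback principle – strengthen connections that are used, let unused ones fade – can be written as a toy Hebbian update with decay. This is a simplified illustration, not a claim about real neural or social dynamics:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """Strengthen co-active connections, slowly weaken unused ones (toy model)."""
    weights = weights + lr * np.outer(post, pre)   # "fire together, wire together"
    weights = weights - decay * weights            # unused connections fade over time
    return np.clip(weights, 0.0, 1.0)              # keep weights in a stable range
```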

A central principle that operates in both neural and social networks is plasticity. The brain shows a remarkable ability to establish new connections and adapt existing ones to enable learning, memory formation, or rehabilitation after injury. Social systems demonstrate similar adaptability when adjusting to new technological, political, or cultural conditions—for example, through new organizational structures, changed forms of communication, or innovative social practices.

Another characteristic of both systems is decentralized control. The brain is not a hierarchically controlled system, but is organized through the interaction of many specialized regions and networks. Similarly, social systems such as markets, open-source communities, or democratic societies often develop without central control, with collective patterns emerging through the local interaction of many actors.

Another connecting mechanism is the dynamic stability that is necessary in both systems to balance consistency and change. The brain strives for homeostasis by maintaining internal equilibrium while responding flexibly to new stimuli. Social systems also establish stabilizing norms, rules, and institutions, but these can be adapted to changing social requirements.

Finally, both systems exhibit self-referentiality by drawing on their own experiences and internal structures to interpret new information. In the brain, this manifests itself in processes such as pattern recognition and memory retrieval. In social systems, self-referentiality manifests itself, for example, in collective identities, cultural narratives, or social discourses, which in turn influence social action.

In summary, both the evolutionary dynamics of social systems and neural self-organization are based on the same fundamental principles: information processing, self-referentiality, emergence, diversity, self-regulation, and communication processes. These mechanisms are self-organized, dynamic processes that do not require central control but instead rely on local interactions and global feedback loops. The flexibility and adaptability of social systems, like neural plasticity, depend on the diversity of the participants, the quality of communication, and the balance between stability and change. The evolutionary self-control mechanisms of social systems are thus an expression of collective intelligence that emerges from the individual actions and interactions of their members.

Conclusion on the ultimate symbiosis of human and artificial intelligence

In summary, it can be said that the combination of symbolic, subsymbolic, and human intelligence holds enormous potential—whether in the form of neuro-symbolic AI or even trihybrid intelligence (hybrid HCAI).

The greatest challenges lie in the joint representation of knowledge, in controlling human-AI interaction, and in providing suitable motivation and reward systems for users. The latter is crucial so that people do not see themselves as mere agents of a system, but participate directly in it. Here, insights from behavioral psychology, especially from an evolutionary biology perspective, can provide decisive impetus.

This raises the fourth key question:

Can insights into the self-regulating mechanisms of the evolutionary dynamics of social systems and neural self-organization be translated into a higher-level, ethically acceptable symbolic set of rules?

If so, then the vision of a hybrid HCAI that seamlessly combines symbolic logic, neural learning, and human intelligence could herald a new era of AI—an era in which AI systems are extremely powerful and eager to learn, but at the same time remain explainable, ethically reflective, and closely aligned with human needs, such as emotional, intrinsic, and economic benefits.

11. Hybrid HCAI in companies – A look into the future

Why hybrid HCAI (trihybrid AI) in companies?

The digital transformation in companies of all sizes and industries requires intelligent systems that go far beyond classic automation and standardization. What is needed is an architecture that continuously adapts to new tasks, roles, goals, and framework conditions—without complex change management processes, inefficient campaigns, or rigid role concepts. This is exactly where a trihybrid approach to human-centered AI (HCAI) comes in. This architecture combines symbolic knowledge (rules, ontologies), subsymbolic intelligence (neural networks, pattern recognition), and human intuition (experience, heuristics) in a scalable, learning overall system.

Architectural foundation: Three levels of intelligence and an orchestrating control system

At the heart of the approach is an adaptive, three-tiered intelligence architecture:

  • The symbolic level manages explicit knowledge: company guidelines, compliance rules, process models, value orientations – represented in ontologies, graph structures, and rule sets.
  • The subsymbolic level uses machine learning, language models, classifiers, and vector databases to generate patterns, correlations, and predictions.
  • The intuitive level directly involves humans – with heuristic feedback loops, value judgments, experiential knowledge, and dialogical feedback.

A symbolic set of rules decides which of these levels dominates for each task, interaction, or decision-making situation – and coordinates their interaction. This adaptive control system is the key to the system’s daily adaptability at the individual level.
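
A deliberately simplified sketch of such an adaptive control decision; the task attributes and the decision order are assumptions for illustration only:

```python
def route(task: dict) -> str:
    """Pick the dominant intelligence level for a task (simplified decision order)."""
    if task.get("ethically_sensitive") or task.get("safety_critical"):
        return "human"        # judgment, values, accountability
    if task.get("covered_by_rules"):
        return "symbolic"     # deterministic, auditable execution
    return "subsymbolic"      # open-ended pattern recognition and prediction

print(route({"covered_by_rules": True}))   # -> symbolic
```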

Personalized digital workspaces – updated daily

At the heart of personalization is a dynamic, hybrid knowledge model for each individual. This model stores and continuously updates information such as current tasks, responsibilities, communication behavior, learning activities, and preferences.

The platform analyzes this graph daily or in real time as needed to obtain an accurate picture of an individual’s current situation, goals, and challenges.

Based on this context, the adaptive controller decides which information, agent functions, and tools are most relevant today. The digital workplace is then automatically compiled accordingly. This dynamic architecture eliminates the need for predefined role models and standard workplaces – the workplace adapts to the current actions and thoughts of the respective user “like a digital mirror.”

Dialogic intelligence: Target/actual comparison as a learning engine

A central principle of the platform is the dialogic target/actual process, which continuously ensures that not only the workplace but the entire system learns. Whenever a rule, process, or decision does not match real requirements or expectations, an interactive negotiation process is automatically triggered. This involves the affected stakeholders – employees, managers, customers, partners – in an AI-supported dialogue.

Current deviations, target states, and possible alternatives are made transparent. Dialogue agents moderate these discussions, record corrections and feedback, and transfer them in a structured manner to the organization-wide knowledge system.

This acts as a simulated representation of all processes, rules, and responsibilities and makes organizational learning traceable and reversible.
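
A minimal sketch of this target/actual mechanism, with hypothetical interfaces for the dialogue process and the versioned knowledge system:

```python
def target_actual_check(rule, observed, stakeholders, graph):
    """Detect a deviation between target and actual state and open a dialogue."""
    if rule.expected == observed:
        return None                                   # target and actual state match
    session = graph.open_dialogue(rule, observed, stakeholders)  # AI-moderated negotiation
    outcome = session.negotiate()                     # corrections captured in structure
    graph.commit(rule, outcome)                       # new version; old one kept for rollback
    return outcome
```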

Motivation and fairness through transparent impact

For people to be willing to contribute their knowledge and take on responsibility, there needs to be tangible impact and systemic fairness. The platform promotes this through a combination of transparent feedback, reputation systems, and participatory control. Anyone who corrects a decision, clarifies a rule, or introduces a new ontology element can immediately see how suggestions, agent responses, or process decisions change as a result.

Reputation points, feedback loops, and peer validation ensure that contributions are visible, assessable, and effective—without authoritarian gatekeepers. At the same time, symbolic fairness and compliance rules ensure that power asymmetries and conflicts of interest do not come at the expense of employees or partners. This creates trust in the system, which is legitimized by transparency and continuous discourse.
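
A possible shape of such a peer-validated reputation update, reduced to a few lines; the simple majority rule and all interfaces are illustrative assumptions:

```python
def record_contribution(user, contribution, validators, graph):
    """Peer-validated reputation update with no central gatekeeper."""
    approvals = sum(1 for v in validators if v.endorses(contribution))
    if approvals > len(validators) // 2:              # simple majority of peers
        user.reputation += contribution.impact        # visible, measurable effect
        graph.apply(contribution)                     # the change takes effect immediately
```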

Change as a permanent state: no more traditional change management

As the platform, workplace, and decision-making logic evolve on a daily basis, traditional change management becomes largely redundant. New functions, processes, or governance guidelines are imported into the system, automatically simulated, and gradually rolled out via decentralized consensus mechanisms. For users, changes do not appear as a break, but as the logical next step in the workflow.

Additional training is not necessary because all innovations are explained in a context-sensitive manner: Those who use a function for the first time receive appropriate instructions; those who change to a new role receive situational onboarding. Learning takes place continuously through micro-learning, context-related explanations, and dialogical target/actual correction processes—directly in the flow of work. In this way, learning becomes a daily routine—embedded, relevant, and scalable.

Scalability and learning effect: The more users, the better the system

The architecture is built from the ground up for massive scalability and network learning. The more people interact with the system:

  • the better the rules, ontologies, recommendations, and processes become.
  • the more diverse and context-rich the knowledge graph becomes.
  • the faster the system recognizes patterns, conflicts, potentials, and synergies.

This is not achieved through centralized control, but through swarm sharding mechanisms: knowledge and functions are distributed as micro-shards and grow in a decentralized way where they are needed.
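
One simple way to place micro-shards without a central registry is hash-based assignment, sketched below; the text does not specify a mechanism, so this is purely illustrative:

```python
import hashlib

def shard_node(topic: str, nodes: list[str]) -> str:
    """Place a knowledge micro-shard by hashing its topic onto a node."""
    digest = int(hashlib.sha256(topic.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]   # deterministic placement, no central registry

print(shard_node("invoice_approval", ["node-a", "node-b", "node-c"]))
```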

Conclusion: The workplace becomes a learning reflection of the individual

A hybrid HCAI system based on the trihybrid architecture transforms the digital workplace from a uniform IT structure into a daily learning, participatory, and human-centered system. Each workplace is unique and responds in real time to tasks, responsibilities, and feedback. The platform learns together with users, adapts to new conditions without campaigns, systematically ensures fairness and decision traceability – and thus makes not only change management but also traditional training largely superfluous.

It establishes a living, collaborative operating system for any type of organization that no longer plans technical, organizational, and cultural transformation, but automatically carries it out—controlled by the intelligence of rules, data, and people alike.

12. Potential benefits of a hybrid HCAI platform

Transformation through intelligent self-adaptation

The introduction of a trihybrid AI or hybrid HCAI platform offers companies of all sizes and industries not only technological advancement, but also a strategic quantum leap in the way work, learning, leadership, and change are organized. The platform combines symbolic knowledge (rules, guidelines), subsymbolic AI (e.g., language models, neural networks), and human intuition (heuristics, values, experience) in an adaptive, scalable, and human-centered system.

The key potential benefits are presented below in five areas of impact: productivity, agility, innovation, structural stability, and cultural integrity.

Increased productivity through intelligent individualization

One of the most visible effects of a hybrid HCAI platform is the significant increase in individual and collective productivity. The digital workplace adapts daily to the current tasks, responsibilities, and context of each individual employee. Instead of working through irrelevant information, overloaded dashboards, or rigid tools, users get exactly the functions, data, and interaction options they need on that particular day.

In addition, the non-value-adding organizational and communication efforts of employees and managers are drastically reduced, as decisions are prepared and made in a data-driven, dialogical, and transparent manner. Routine tasks, information searches, manual documentation, and queries are eliminated or intelligently supported. The result: more time for real value creation, creativity, and human interaction.

Maximum agility through self-directed change

Traditional change management with campaigns, training, and centralized rollouts is increasingly impractical in dynamic markets. A hybrid HCAI platform overcomes these limitations by establishing change as a continuous, internal process.

Processes, rules, and organizational models are managed as versioned digital artifacts (e.g., policies, workflows, decision-making logic). Changes are introduced as pull requests, simulated, reviewed, and rolled out step by step—without the need for time-consuming communication measures. The adaptive control system ensures that roles, dashboards, and task structures are automatically adjusted as needed.
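The pull-request analogy can be made tangible with a small sketch. The data structure and the stepwise rollout below are illustrative assumptions; a real platform would gate each widening step behind review and simulation results:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyVersion:
    """An organizational rule kept as a versioned, reviewable artifact."""
    version: int
    rule: str          # e.g. "expenses > 500 EUR need second approval"
    rollout_pct: int   # share of users who currently see this version

def propose(current: PolicyVersion, new_rule: str) -> PolicyVersion:
    """A change enters like a pull request: new version, zero exposure."""
    return PolicyVersion(current.version + 1, new_rule, rollout_pct=0)

def widen_rollout(p: PolicyVersion, step: int = 25) -> PolicyVersion:
    """Step-by-step rollout instead of a big-bang change campaign."""
    return PolicyVersion(p.version, p.rule, min(100, p.rollout_pct + step))

v1 = PolicyVersion(1, "expenses > 1000 EUR need second approval", 100)
v2 = propose(v1, "expenses > 500 EUR need second approval")
while v2.rollout_pct < 100:   # in practice gated by review and simulation
    v2 = widen_rollout(v2)
print(v2)
```

Versioning makes every organizational change diffable and reversible, which is what replaces the classic communication campaign.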

The big advantage: the organization is enabled to adapt in real time to market changes, new technologies, regulations, or internal developments – without jeopardizing operational stability.

Sustainable innovation through integrated collective intelligence

In many companies, innovation is either confined to dedicated innovation departments or relies on sporadic brainstorming. The hybrid HCAI platform breaks down these bottlenecks by harnessing the collective intelligence of the entire organization.

Suggestions, ideas, corrections, and experiential knowledge from the workforce are automatically integrated into the hybrid knowledge system, where they are evaluated, visualized, and simulated for their impact. Decisions about new product ideas, process innovations, or governance changes can be made in a fact-based and dialogical manner. In addition, innovation learning is accelerated through feedback with real usage data.
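As one illustration of such impact evaluation, here is a toy ranking function. The inputs (simulated gain, peer votes, risk) and the weights are invented assumptions, intended only to show how suggestions could be prioritized before deeper simulation:

```python
def score_suggestion(simulated_gain: float, peer_votes: int, risk: float) -> float:
    """Toy ranking: simulated impact, weighted up by peer support and
    down by risk. The weights are illustrative, not from the essay."""
    return simulated_gain * (1 + 0.1 * peer_votes) * (1 - risk)

ideas = {
    "auto-fill travel forms": score_suggestion(8.0, peer_votes=12, risk=0.1),
    "merge weekly reports":   score_suggestion(5.0, peer_votes=3,  risk=0.4),
}
for name, s in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(f"{s:6.2f}  {name}")
```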

By integrating external stakeholders—such as customers, suppliers, or partners—into these dialogue spaces, innovation becomes not only more diverse, but also more market-oriented, inclusive, and sustainable.

Structural stability through systemic transparency and self-regulation

Complex organizations require structures that are both resilient and flexible. The hybrid HCAI platform provides precisely this foundation. It digitally maps all relevant structures (processes, roles, rules, responsibilities, governance metrics) in a versioned, simulable, and traceable form.

This creates a continuously updated real-time twin of the organization that makes changes testable, identifies weaknesses at an early stage, and forms the basis for automatic governance checks. Early warning systems, deviation analyses, and systemic goal conflict detection enable preventive management instead of reactive crisis intervention.

This structured self-regulation makes the organization resilient to external shocks and internal tensions – while maintaining the ability to make decisions and take action at all levels.
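A minimal sketch of such an early-warning check against the real-time twin could look like this; the metric names, targets, and tolerance threshold are illustrative assumptions:

```python
def deviation_alerts(targets: dict, live: dict, tolerance: float = 0.15) -> list:
    """Flag every governance metric whose live value drifts more than
    `tolerance` (relative) from its target: preventive, not reactive."""
    alerts = []
    for metric, target in targets.items():
        actual = live.get(metric, 0.0)
        if target and abs(actual - target) / abs(target) > tolerance:
            alerts.append((metric, target, actual))
    return alerts

targets = {"cycle_time_days": 5.0, "first_pass_yield": 0.95}
live    = {"cycle_time_days": 6.4, "first_pass_yield": 0.94}
print(deviation_alerts(targets, live))  # cycle time drifted 28% -> alert
```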

Cultural stability through fairness, transparency, and participation

In addition to technological and organizational effects, a hybrid HCAI platform also strengthens a company’s cultural integrity. Decision-making processes become explainable, justifiable, and comprehensible—not only for managers, but for everyone involved. AI recommendations are aligned with corporate values, and systematic fairness mechanisms prevent discrimination or lack of transparency.

Knowledge gathering and evaluation are participatory: employees can directly experience how their contributions change and improve the system or open up new perspectives. Reputation, visibility, and impact are clearly measurable and tangible. This creates a fair, appreciative, and cooperative working environment that promotes diversity, strengthens commitment, and enables fair remuneration for cognitive performance.

This is a decisive factor for cohesion, trust, and identification, especially in distributed, hybrid, and intercultural environments.

Fundamental questions for companies

In my view, companies face the following five fundamental questions:

  1. Which information, communication, and interaction processes should a hybrid HCAI support in order to increase productivity, agility, and innovation in a company and ensure structural and cultural stability?
  2. Which methodological and architectural approach, which neuro-symbolic set of rules (algorithm), and which human-machine interface (user interfaces) ensure a transparent, fair, and ethically acceptable input/output process for knowledge sharing and for recognizing the user’s cognitive performance?
  3. Can insights into the self-control mechanisms of the evolutionary dynamics of social systems and neural self-organization be transferred to a higher-level, ethically acceptable neuro-symbolic set of rules?
  4. How can a decentralized, self-organizing system be implemented in a centrally controlled, hierarchically structured company in a way that is transparent, fair, and ethically acceptable to all stakeholders?
  5. How can we measure the extent to which AI models/platforms truly meet the requirements of a hybrid HCAI, and which key performance indicators can be used to determine productive value creation and structural and cultural stability within the company?

13. Impact of a hybrid HCAI on the economy and society

The market launch of a hybrid HCAI system by, for example, an innovative start-up would have far-reaching implications for numerous economic, political, and social areas. Such a technological development could not only transform existing business models, but also open up new markets and bring about far-reaching changes in the way humans and machines work together.

Impact on stock markets

The introduction of groundbreaking hybrid HCAI technology by a start-up could immediately attract significant attention from investors and venture capitalists. Companies that embrace this innovation early on could be seen as pioneers of the new technological paradigm, which could significantly increase their valuation on the financial markets. The technology sector in particular, and AI-related stocks specifically, are likely to benefit from such progress, as investors would increasingly invest capital in companies at the forefront of AI development.

However, this market dynamic could also lead to increased volatility. While companies that rely on hybrid HCAI may benefit from rising stock prices, companies that continue to rely on conventional AI models or traditional automation solutions could lose market share and investment. This could lead to sharp price fluctuations in the short term, especially for tech giants that have so far focused on collecting and marketing user data/knowledge using generative AI or classic automation, monitoring, and control technologies.

These established market leaders may need to adapt quickly to remain competitive. To do so, they could either invest more heavily in hybrid HCAI research themselves or make strategic acquisitions to keep up with technological developments.

Impact on the economy

The introduction of hybrid HCAI technologies would have far-reaching economic consequences, particularly in terms of productivity and innovation in companies. The combination of machine intelligence and human judgment could significantly increase efficiency in numerous industries. Companies that use this technology could optimize processes quickly and efficiently, accelerate decision-making, and achieve higher overall value creation. This could lead to a surge in innovation as new business models emerge based on the seamless integration of human and artificial intelligence.

A significant aspect of this development would be the transformation of jobs. While traditional, highly repetitive tasks could continue to be replaced by automation, new occupational fields would emerge at the same time. These new areas of work could focus on developing creative solutions to problems, making strategic decisions, and critically questioning AI-supported findings. People would increasingly act as mediators and controllers between technology and real-world applications.

It is important to note in this context that the transformation of jobs is not organized externally by consulting firms or AI technologies; rather, employees and managers continuously shape the transformation process themselves.

In addition, companies could develop new hybrid business models that combine human and machine intelligence. For example, services could emerge in which AI performs highly complex data analyses, while humans interpret the results and develop individual solutions.

Impact on society

The introduction of hybrid HCAI technologies could also bring about profound changes in society. One of the key challenges would be adapting the education system to the new requirements of the labor market. As the ability to collaborate independently with AI becomes increasingly important, schools and universities would have to adapt their curricula accordingly. In addition to traditional subject knowledge, skills in data analysis, critical thinking, and interdisciplinary collaboration would have to be promoted more strongly.

At the same time, the new technology would also raise ethical questions. It would be necessary to ensure that hybrid AI systems are designed to be fair, transparent, and inclusive. Otherwise, existing social inequalities could be further exacerbated, especially if access to these technologies were unevenly distributed or if algorithmic decision-making processes systematically disadvantaged certain population groups.

Another crucial point would be people’s trust in AI-supported systems. Since hybrid AI combines human expertise with machine decision-making processes, this could help reduce skepticism toward artificial intelligence. The involvement of human control authorities could ensure that critical decisions are not made exclusively by algorithms, but always receive human evaluation. In almost all areas of business, politics, and society, this could lead to increased acceptance and trust in AI solutions in the long term.

Conclusion: A human-centered future with AI

In summary, this interaction between humans and machines offers a groundbreaking perspective for the future world of work and learning. The biggest challenge is to ensure that technical infrastructures are not only powerful, secure, and scalable, but also designed to focus on the human aspect: transparency, user-friendliness, data protection, and appropriate integration into existing organizational processes are fundamental success factors. If these aspects can be balanced, “trihybrid intelligence” can raise collective intelligence in business, politics, and society to a whole new level and enable innovative, human-centered progress.

Such a symbiosis could enrich the world of business, science, and society in several ways. On the one hand, the analytical capabilities of AI would dramatically accelerate the work of researchers, companies, and decision-makers. Large data sets, which would be almost impossible to manage manually in traditional processes, could be viewed and evaluated within a very short time. Based on this, people could develop fundamental strategies, formulate hypotheses, and pursue creative solutions.

On the other hand, global networking creates a collective memory that can be accessed at any time and grows continuously. The individual expertise of each person flows into a common pool, is rewarded according to cognitive performance, and in turn made accessible to everyone. This model would not only strengthen collaboration across geographical boundaries, but also dramatically accelerate the transfer of knowledge. New insights would spread almost in real time, and promising ideas could be developed globally without being thwarted by opportunistic, bureaucratic, or linguistic barriers.

14. Relationship to the concept of artificial general intelligence (AGI)

Artificial general intelligence (AGI) refers to a currently hypothetical form of AI that is capable of understanding, learning, and flexibly applying knowledge across a wide range of tasks and problem areas, to an extent that matches or exceeds human cognitive performance. Its characteristics include general problem-solving ability, the ability to learn across different domains, and independent reasoning without task-specific pre-programming. Source (adapted): Goertzel, B., & Pennachin, C. (2007). Artificial General Intelligence. Springer. [15]

In my view, the concept of hybrid HCAI could play a key role in the future development of artificial general intelligence (AGI). While conventional AGI approaches primarily aim to autonomously reproduce human-like cognitive abilities, hybrid HCAI relies on close collaboration between humans and machines. This symbiosis combines human creativity, intuition, and contextual understanding with the computing power, scalability, and precision of artificial intelligence.

In particular, neuro-symbolic AI, which combines data-driven methods such as deep learning with symbolic approaches such as logic systems and knowledge graphs, can help make decision-making processes not only more efficient, but also more transparent and explainable. At the same time, human-centered AI (HCAI) emphasizes that humans remain a critical part of decision-making and thus actively contribute ethical, cultural, and social values to technology development.
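To illustrate the division of labor that neuro-symbolic HCAI implies, here is a deliberately tiny sketch: a stand-in for neural confidence scores is checked against an explicit symbolic rule before a decision is made. The rule, the scores, and the loan scenario are invented for illustration:

```python
# Neural side (stand-in): confidence scores a model might emit for an action.
neural_scores = {"approve_loan": 0.91, "reject_loan": 0.09}

def symbolic_veto(action: str, applicant: dict) -> str | None:
    """Symbolic side: return a human-readable reason if an explicit
    rule from the knowledge base forbids the proposed action."""
    if action == "approve_loan" and applicant["income"] < 2 * applicant["rate"]:
        return "Rule R12: income must cover at least twice the monthly rate."
    return None

def decide(applicant: dict) -> tuple[str, str]:
    """Neural proposal, symbolically checked -> explainable outcome."""
    proposal = max(neural_scores, key=neural_scores.get)
    reason = symbolic_veto(proposal, applicant)
    if reason:
        return "escalate_to_human", reason   # the human stays in the loop
    return proposal, f"neural confidence {neural_scores[proposal]:.2f}"

print(decide({"income": 1800, "rate": 1000}))  # vetoed -> human review
print(decide({"income": 4000, "rate": 1000}))  # approved with confidence
```

The decisive property is the returned reason: a veto is explainable in human terms, and an uncertain case is escalated to a human instead of being decided silently.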

The development of AGI based on hybrid HCAI is guided by a co-evolutionary approach. Instead of creating a fully autonomous superintelligence that could potentially surpass or replace humans, the goal is to achieve “integrated intelligence” in which humans and machines continuously learn and evolve together.

Even today, existing AI systems can be improved by combining neuro-symbolic methods and human expertise. At the same time, continuous human-AI interaction creates a cycle of mutual learning: humans correct the AI, leading to more accurate models, while the AI supports humans in solving complex problems. In the long term, this close collaboration could give rise to AGI that is not developed in isolation in a laboratory, but emerges from an evolutionary, practice-oriented symbiosis.

Overall, hybrid HCAI or human-centered hybrid AI offers a promising path to developing AGI that combines the potential of artificial intelligence with the unique abilities of humans. It can not only accelerate technological innovation, but also contribute to the responsible, ethical, and socially acceptable use of AI.

The key to AGI therefore lies not in the creation of autonomous superintelligence, but in the co-evolution of humans and machines—a symbiosis in which both partners learn, shape, and grow together. This vision of “integrated trihybrid intelligence” could be the decisive step toward an AGI that not only honors and respects human values, empathy, and creative potential, but also internalizes them as an integral part of its own intelligence.

15. Open-HCAI – A hypothetical thought experiment

In October 2008, a white paper on the cryptocurrency Bitcoin was published under the pseudonym Satoshi Nakamoto, and in January 2009 the first version of its reference implementation (later known as Bitcoin Core) was released. The Bitcoin architecture rests on basic rules that are firmly anchored in the code. The combination of technical verification, economic incentive structure, and decentralized control makes these rules virtually immutable. As of 2025, an estimated 106-150 million people own bitcoins, there are approximately 200 million Bitcoin wallets, the market capitalization is approximately USD 2.08 trillion, and there are approximately 424,769 transactions per day. [16]
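What “rules firmly anchored in the code” means can be seen in Bitcoin’s supply schedule. The sketch below is a simplified version of the real consensus rule (the actual client computes in integer satoshis with a bit shift), but the logic is the same, and because every node enforces it, no single party can change it:

```python
def block_subsidy_btc(height: int) -> float:
    """Bitcoin's consensus rule, simplified: 50 BTC at genesis, halved
    every 210,000 blocks. Every node enforces this independently."""
    halvings = height // 210_000
    if halvings >= 64:          # the subsidy has shifted down to zero
        return 0.0
    return 50.0 / (2 ** halvings)

print(block_subsidy_btc(0))        # 50.0  (2009)
print(block_subsidy_btc(840_000))  # 3.125 (fourth halving, 2024)
```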

Now imagine that a person or a group of AI researchers, technologists, ethicists, and economic thought leaders publishes a white paper on open trihybrid AI, or Open-HCAI, and releases a reference implementation whose architecture is likewise based on ethical principles firmly anchored in the code. Here, too, the combination of technical verification, emotional, intrinsic, and economic incentive structures, and decentralized control would make these rules effectively immutable.

I believe this would mark the beginning of a new era for the world: an era of human-centered, freely accessible artificial intelligence, in which intelligence is no longer exclusive, expensive, or difficult to understand, but open, intuitive, and usable by everyone. It is essentially the vision of Artificial General Intelligence (AGI): a decentralized, autonomous, cross-domain, and human-centered AI platform that redefines our relationship to knowledge, work, and responsibility. However, the true innovation lies in ethical design: building technology in such a way that it remains human. If we succeed, Open-HCAI will not just be a platform. It will be a promise of a more equitable future for all.

A future scenario:

Open-HCAI becomes a digital infrastructure like water or electricity. Education, innovation, and participation flourish worldwide. Equal opportunities and social justice become the basis for productive value creation, for socio-ecological and economic growth, and for increasing prosperity in all areas of society. People are guided by the feeling that they are treated fairly in communication and interaction with their social environment: what they contribute emotionally, intrinsically, and economically and what they receive in return is at least balanced.

Due to the social self-control mechanisms firmly anchored in the code, I believe that manipulation through technical abuse or through deliberate circumvention of the rules by powerful interest groups via narrative control, platform dominance, or political influence is virtually impossible. A good example of this is the Bitcoin architecture. Who would have believed in 2009 that in 2025 the CEO of the world’s largest asset manager, Larry Fink, would voice fears that the US dollar could lose its global leadership role to the cryptocurrency Bitcoin (“BlackRock boss sees dollar threatened by Bitcoin”)? [17]

This raises a further hypothetical key question for me:

What would be the impact of harmonizing or merging an Open-HCAI architecture and Bitcoin architecture?

16. Impact of an Open-HCAI on our future

Financial markets: Loss of control over AI returns

The fundamental structure of traditional financial markets would be shaken. AI is now a key driver of company valuations, especially for tech giants. However, if high-performance, universal AI were freely available, proprietary AI models would lose their competitive advantage. Business models based on exclusive access to intelligence would become obsolete. Margins would collapse, and investors would rethink their strategies.

At the same time, new opportunities would arise. Tokenized governance systems, as known from the blockchain sector, could create new forms of value retention – for example, through participation in the further development, auditing, or maintenance of the platform. New financial markets could emerge in which the ethical quality, transparency, or social impact of technology represents a new form of “value.”

But the macroeconomic implications would also be serious. Just as Bitcoin challenges the global role of the US dollar, Open-HCAI could undermine government control over a strategically central infrastructure—intelligence.

Economy: Innovation without barriers to entry

The economy would undergo radical structural change. Open-HCAI would drastically lower the barriers to entry for companies, startups, and individuals who want to use or further develop AI. Where cloud costs, licensing models, and access restrictions dominate today, open availability would then prevail.

This access would have a democratizing effect on innovation. Everyone, regardless of location or capital, would be able to access one of the most powerful technologies in the world. Companies could no longer stand out through exclusivity; they would have to differentiate themselves through ethical added value, creativity, fairness, and user orientation.

At the same time, completely new industries would emerge: auditing, ethical monitoring, human feedback systems, participatory training models, decentralized decision-making platforms. Value creation would shift from ownership of AI to shaping the interaction between humans and machines.

Politics: Power shift and redefinition of governance

A freely accessible, decentralized AI platform would challenge the basic assumptions of political control. States that currently organize their digital infrastructure centrally would find themselves confronted with an uncontrollable, public superintelligence. The classic tools of regulation—licensing requirements, control of server centers, platform rules—would be rendered ineffective.

This would require new international regulations, similar to those that exist for internet law, open-source standards, or climate policy. Global coordination on ethical guidelines, participatory governance systems, and technologically anchored control mechanisms (as with Bitcoin) would become prerequisites for trust and stability.

At the same time, authoritarian systems would lose a crucial lever: control over knowledge, communication, and digital intelligence. In this respect, Open-HCAI could also be a politically emancipatory instrument—a digital commons against centralism.

However, I can well imagine that some oligarchs, autocrats, and dictators will try to prevent or block such a system because they cannot monitor and control it themselves.

Society: Collective intelligence and new social grammar

The social impact would perhaps be the most profound. If every person had access to powerful, ethically designed AI that did not replace them but empowered them, a new form of collective intelligence would emerge. Education would no longer be centrally “taught” but individually acquired—self-directed, situation-dependent, embedded in real problems.

This means that everyone would have the same opportunities to further their education, find creative solutions, or participate in social development. This equality of opportunity would not only promote social justice, but also economic productivity – because talents that are wasted today due to poverty, isolation, or lack of infrastructure could flourish.

Social participation would take place not only on an economic level, but also on an emotional level: people would be able to communicate on an equal footing, their contributions would be recognized, and their needs would be taken into account. A new “social grammar” of cooperation would emerge—not based on hierarchy, but on shared responsibility and technological fairness.

Conclusion: A new stage of civilization?

The introduction of a freely available, decentralized, ethically coded, and human-centered AI platform would not be an ordinary technological leap. It would be a civilizational event. A collective infrastructure for intelligence, supported by many, controlled by no one, open to all.

Just as Bitcoin revolutionized the concept of money and Linux decentralized the software world, Open-HCAI could redefine our relationship to knowledge, work, and responsibility. It would not be a product, but a promise: that technology does not replace us, but empowers us. That ethics are not tacked on, but built in. And that human dignity, creativity, and participation are not business models—but the foundation of a just future.

The true innovation would not be technical in nature. It would lie in the design of a humanity that remains technologically incorruptible.

17. My preliminary conclusions

Daron Acemoglu is considered one of the sharpest and most discerning critics of current developments in digital technologies and artificial intelligence. He sees enormous potential in the technology, but warns urgently against taking the wrong path. In his view, there are several key unresolved issues and challenges that need to be addressed in order for AI and digitalization to bring real economic and social benefits. These can be divided into the following main areas:

  1. How can AI be used to complement rather than replace human labor?
  • Current AI applications focus heavily on automation, that is, on replacing human activities. Acemoglu, on the other hand, advocates “complementary technologies” that expand human capabilities rather than replace them. [18]
  2. Where does AI actually create new, meaningful areas of work for humans?
  • For AI to create prosperity, it must enable new activities, similar to previous technological revolutions. It remains unclear which meaningful new jobs will actually be created by generative AI. [19]
  3. Why do we prioritize AGI development even though no real productivity gains can be demonstrated?
  • The enormous investment momentum surrounding “artificial general intelligence” (AGI) is diverting resources away from practical, short-term applications, even though there have been no clear successes or economic breakthroughs. [20]
  4. How can AI development be designed to enable broader participation in value creation gains?
  • Currently, large tech companies are the main beneficiaries of AI. There is a lack of models in which employees, creative authors, or society also share in the profits. [21]
  5. How can we change business models that rely on data extraction and surveillance?
  • Many AI providers monetize user behavior through advertising, profiling, or surveillance. These business models do not promote meaningful productivity, but rather create social and political risks. [22]
  6. When will the massive investments in AI actually become economically productive?
  • Billions are being spent on AI startups, chips, and cloud infrastructure, but concrete measurements of benefits and real productivity gains are still largely lacking. [23]
  7. How can democratic and ethical principles be integrated into technological architecture?
  • Many AI systems are opaque (“black boxes”) and can lead to unfair decisions. Transparent algorithms, explainable AI models, and open-source architectures are needed. [24]
  8. What new metrics do we need to measure the social benefits of AI?
  • Acemoglu calls for benchmarks such as the share of labor in value creation or new productivity metrics that reflect not only efficiency but also distributive justice (see the sketch after this list). [25]
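As a minimal illustration of such a distribution-sensitive benchmark, the labor share of value creation can be computed directly; the figures below are invented purely for illustration:

```python
def labor_share(compensation: float, value_added: float) -> float:
    """Share of value creation that flows to labor: one of the
    distribution-sensitive benchmarks Acemoglu argues for."""
    return compensation / value_added

# Illustrative numbers only: efficiency rises, but distribution worsens.
print(labor_share(compensation=62.0, value_added=100.0))  # 0.62 before
print(labor_share(compensation=55.0, value_added=110.0))  # 0.50 after automation
```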

Gary Marcus writes in his article “A knockout blow for LLMs?” dated June 8, 2025:

“Whenever people ask me why I actually like AI (contrary to the widespread myth) and think that AI (though not GenAI) could ultimately be of great benefit to humanity, I always point to the advances in science and technology that we could make if we could combine the causal reasoning abilities of our best scientists with the sheer computing power of modern digital computers.” [9]

I would like to expressly endorse this statement by Gary Marcus and refer to the unresolved core issues and challenges of AGI or hybrid AI, which Gary Marcus has described very aptly in various articles and publications.

I have summarized these key questions as follows:

  1. How can symbolic and sub-symbolic components be seamlessly integrated?
  • There is a lack of architectural models that flexibly combine symbolic reasoning with neural representation. [26] [27]
  2. How can a system abstract concepts and transfer them to new situations?
  • Generalization beyond the training data set is weak; symbolic components could improve this, but integration is difficult. [28]
  3. How does a system gain true causal understanding?
  • LLMs mostly operate on purely statistical correlation. AGI must be able to understand and explain cause-and-effect relationships. [29]
  4. How does a system learn continuously and cumulatively (“lifelong learning”)?
  • Current systems forget previous information (“catastrophic forgetting”) or do not learn incrementally. AGI must be able to learn sustainably. [30]
  5. How is abstract, systematic thinking achieved?
  • Logic, algebra, grammar: human intelligence can apply systematic rules flexibly. AI must be able to do this as well. [31]
  6. How do you model motivation, intention, and consciousness?
  • These dimensions of human intelligence have not yet been convincingly modeled in any AI, neither symbolically nor sub-symbolically. [32]
  7. How can trust, verification, and transparency be created?
  • An AGI system must be verifiable, correctable, and explainable. Symbolic components could help here, but concrete implementations are lacking. [33]

To find answers to these questions, Gary Marcus writes in his article “Deep learning is hitting a wall”: “With all the challenges in the fields of ethics and computer science, and the knowledge required from areas such as linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI.” [34]

However, I would like to add to Gary Marcus’s statement: not only is an interdisciplinary team of scientists required, but also an interdisciplinary idea that enables a shared vision and understanding of a holistic hybrid HCAI solution. In other words: an integrative approach to method and technology in which each discipline contributes its own perspective while all perspectives are brought together and made comprehensible to everyone.

In addition, all parties involved would have to agree on the business model variant on which they want to base the development of a hybrid HCAI, open HCAI, or AGI, so that everyone can also derive personal benefit from it.

Three business model variants for implementation

Assuming you had conclusive, comprehensible answers or practical solutions to these questions and challenges, which of the following three business model variants would you choose to implement them?

Option A: Found a startup to develop and launch a marketable hybrid HCAI application, and then sell the startup to one of the big tech companies when they make an offer?

Option B: Publish the answers or solutions proposed for an Open-HCAI in a book or white paper and, together with an anonymous group of developers, make a corresponding AGI platform available online free of charge?

Option C: Establish a public-welfare-oriented company to develop and market a hybrid HCAI/Open-HCAI that operates according to the same principles both internally and externally, combining the advantages of Option A with the advantages of Option B and compensating for their disadvantages?

Conclusion

Investors in particular should carefully consider which AI models, AI infrastructures, and AI companies they invest in today and in the future. Once hybrid HCAI or even open HCAI become a reality, I believe they will be unstoppable, and previous investments could very quickly evaporate.

I also believe that companies and other organizations should focus on methods, architectural approaches, and AI models that use individualized digital workplaces to truly reduce the non-value-adding organizational and communication efforts of their employees and managers. Companies need a paradigm shift in their AI strategy, away from the paradigm of externally organized automation, monitoring, and control, toward intelligent, self-organized networking of human capabilities.

I am convinced that it is not the large tech companies and platform providers, but the people, companies, and other organizations that create real value that should be the main beneficiaries of artificial intelligence!

18. My statement

The vision of hybrid HCAI (human-centered artificial intelligence) outlined in this essay is based on the conviction that the future of artificial intelligence lies not in replacing human capabilities, but in enhancing and complementing them. Current developments in the field of AI clearly show that purely technological approaches reach their limits if they do not integrate the human dimension.

The combination of symbolic AI, subsymbolic AI, and human intelligence in a trihybrid system offers a promising way to overcome the technical limitations of current AI systems and address ethical and societal challenges. Such a system would not only increase productivity, but also promote fairness, transparency, and participation.

The vision of an open HCAI platform that operates according to the principles of decentralization, transparency, and user-centricity could bring about a fundamental change in the way we interact with technology. Similar to how open-source software and decentralized technologies such as Bitcoin have shown that alternative models are possible, an open, human-centered AI platform could pave the way for a more equitable and sustainable digital future.

The challenge now lies in turning this vision into reality—through a comprehensive idea, concrete technical solutions, innovative business models, and, above all, a shift in thinking toward truly human-centered technology development.

This essay should be understood in the context of my two editorials, “Where is the flaw in the digital transformation system and what requirements does this place on the use of artificial intelligence?” dated September 27, 2023 [35] and “When will AI become the killer application for productivity growth and bureaucracy reduction in companies?” dated September 26, 2024 [36].

If you have any questions or comments regarding this essay, please feel free to send them to me using the response form or by email.

Friedrich Reinhard Schieck / BCM Consult – July 3, 2025

Email: fs@bcmconsult.com; friedrich@schieck.org

Website: www.bcmconsult.com

Methodology and acknowledgments:

ChatGPT (OpenAI, version GPT-4) was used to assist in the formulation of individual sections of text. The generated content was critically reviewed and revised by the author, who is responsible for the final version.

In this context, I found that ChatGPT either crashed or provided contradictory and illogical answers to complex questions. Only after an extensive dialogue with ChatGPT did I obtain conclusive results. In other words, ChatGPT learned from me in dialogue to understand and reproduce causal and logical connections.

Since I assume that other ChatGPT users have had similar experiences, I would like to thank all named and unknown authors who have consciously or unconsciously shared their knowledge with ChatGPT!

Sources:

[1] Marcus, G. (2025) | OpenAI Cries Foul

[2] Garibay et al. (2023) | Six Human-Centered Artificial Intelligence Grand Challenges

[3] Acemoglu, D. (2024) | Nobel laureate in economics and Institute Professor of Economics, MIT

[4] Acemoglu, D. (2024) | The World Needs a Pro-Human AI Agenda

[5] Dizikes, P. (2024) | Daron Acemoglu: What do we know about the economics of AI?

[6] Acemoglu, D. (2025) | Will We Squander the AI Opportunity?

[7] Marcus, G. (2024) | Taming Silicon Valley: How We Can Ensure that AI Works for Us

[8] Marcus, G. (2025) | Deep learning is hitting a wall

[9] Marcus, G. (2025) | A knockout blow for LLMs?

[10] Wikipedia (2025) | Artificial General Intelligence

[11] Wikipedia (2025) | Human-in-the-Loop

[12] Dellermann, D., et al. (2019) | Hybrid Intelligence

[13] Wikipedia (2025) | Neuro-symbolische KI (Neuro-symbolic AI, German-language article)

[14] arxiv (2025) | Human-Centered AI (HCAI)

[15] Goertzel, B., & Pennachin, C. (2007) | Artificial General Intelligence. Springer

[16] Bitcoin statistics (2025) | Various sources compiled

[17] Financial Times (2025) | Blackrock-Chef sieht den Dollar durch Bitcoin bedroht (BlackRock boss sees dollar threatened by Bitcoin)

[18] news.mit.edu (2024) | What do we know about the economics of AI?

[19] news.mit.edu (2024) | What do we know about the economics of AI?

[20] Acemoglu, D. (2024) | The World Needs a Pro-Human AI Agenda

[21] Acemoglu, D. (2024) | The World Needs a Pro-Human AI Agenda

[22] Acemoglu, D. (2024) | The World Needs a Pro-Human AI Agenda

[23] Business Insider (2024) | Goldman Sachs Says Return on Investment for AI

[24] Acemoglu, D. (2024) | Power and Progress

[25] Acemoglu, D. (2024) | AI’s Future Doesn’t Have to Be Dystopian

[26] Marcus, G. (2020) | Four Steps Towards Robust Artificial Intelligence

[27] Marcus, G. (2019) | Toward a Hybrid of Deep Learning and Symbolic AI

[28] Marcus, G. (2023) | AGI will not happen in your lifetime. Or will it?

[29] Marcus, G. (2024) | Keynote at AGI-24, Machine Learning Street Talk (MLST)

[30] Marcus, G. (2020) | Four Steps Towards Robust Artificial Intelligence

[31] Wikipedia (2025) | Neuro-symbolic AI

[32] Marcus, G. (2023) | AGI will not happen in your lifetime. Or will it?

[33] Marcus, G. (2023) | Gary Marcus Says AI Must Be Regulated. He Has a Plan.

[34] Marcus, G. (2025) | Deep learning is hitting a wall

[35] Schieck, F. (2023) | Wo liegt der Fehler im System… (Where is the flaw in the digital transformation system…)

[36] Schieck, F. (2024) | Wann wird KI zur Killerapplikation… (When will AI become the killer application…)
