When will AI become the killer application for productivity growth and reducing bureaucracy in companies?

The perspective of a key account manager in the consulting and ICT sector.

Abstract

This article explores the potential of artificial intelligence (AI) to drive productivity growth and reduce bureaucracy in organizations. Although generative artificial intelligence (GenAI) is considered a promising technology, it falls short of expectations in many companies, as the hoped-for productivity gains often fail to materialize. Studies show that GenAI tools are difficult for many users to integrate into their work, which leads to an increased workload and new bureaucratic hurdles.

Against this backdrop, the article deals with the question of how artificial intelligence, as a cross-sectional technology, can help to drastically reduce the non-value-adding organizational and communication effort of employees and managers in order to improve productivity and agility in the company. It examines the approach of Human-Centered Artificial Intelligence (HCAI), which calls for AI systems to be more user-centric and adaptable. The approaches of generative AI models and human-centered AI models are examined in more detail and compared.

While the GenAI approach focuses more on individual tasks and processes of specific user groups, the HCAI approach focuses on the individualizability, adaptability and scalability of the entire working environment of a company’s users. It is argued that a fundamental paradigm shift is required in the methodological and architectural approach, together with an algorithm for implementing human-centered design principles, in which AI models adapt to the way users work rather than users adapting the way they work to the AI models.

The conclusion of the article emphasizes that the full potential of AI technologies can only be exploited through further development towards human-centered AI. Only a new methodological and architectural approach to the intelligent networking of human intelligence through artificial intelligence will be the key to making AI a “killer application” for productivity growth and the reduction of bureaucracy. Such an approach would not only be technologically groundbreaking, but would also redefine the relationship between humans and machines by putting people at the center of AI usage and increasing productivity without displacing human input.

(Friedrich Schieck / 09/2024)

Table of contents

  • Productivity development yesterday, today and tomorrow
  • Status quo of current AI models and applications
  • Perspectives from the consulting and ICT sectors
  • Two perspectives from academia
  • Requirements for future methods and AI models
  • The concept of Human-Centered Artificial Intelligence (HCAI)
  • The main distinguishing features of HCAI and GenAI
  • GenAI and HCAI characteristics in terms of knowledge input & knowledge output
  • Requirements for a holistic HCAI method & architecture approach
  • My preliminary conclusion
  • My statement
  • Sources

Productivity development yesterday, today and tomorrow

A look into the future requires a review of the past and an analysis of what has changed to date and what developments can be expected in the future. Looking back, analysts at the end of the 1990s predicted continued productivity growth for the following two decades. In particular, digitalization, the use of information and communication technologies (ICT) and the advent of the internet were seen as the driving forces behind this development. It was expected that these technologies would increase efficiency and lead to sustainable productivity growth.

Today, however, we know that productivity per employee in Germany declined between 1991 and 2023 despite increasing investment in consulting, digitalization and information and communication technologies [1]. In addition, bureaucracy and the dissatisfaction of many employees have increased. Yet productivity growth is crucial for the profitability and competitiveness of entire economies and for the prosperity of the population.

Optimistic forecasts for the future:  

Studies from the last five years once again paint an optimistic picture for the next two decades. According to the McKinsey Global Institute report The economic potential of generative AI: The next productivity frontier of June 14, 2023, “the impact of generative AI on global economic productivity could add trillions of dollars in value. Our latest study estimates that generative AI could add the equivalent of $2.6 to $4.4 trillion annually across the 63 use cases we analyzed [2].“

For Germany, McKinsey assumes in its publication Skills shortage: GenAI can alleviate acute demand for highly qualified jobs of November 24, 2023, that the early introduction and use of GenAI could increase gross domestic product (GDP) by up to 585 billion euros (13%) and raise productivity by 18% by 2040 [3].

Doubts about the feasibility of these forecasts:

However, recent publications put these optimistic forecasts into perspective. Despite global private investment in the field of artificial intelligence of over 395 billion US dollars [4], a study by Upwork Research [5] from July 2024 comes to the following conclusion: “Almost half (47%) of employees who use AI say they have no idea how to achieve the productivity gains their employers expect. More than three quarters (77%) report that AI tools have reduced their productivity and increased their workload in at least one area [5].“

In this context, the investment bank Goldman Sachs raised the question in its June 2024 report GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT? as to whether the high level of investment in artificial intelligence makes economic sense at all. Experts such as Daron Acemoglu from MIT, Brian Janous from Microsoft and Jim Covello, Kash Rangan and Eric Sheridan from Goldman Sachs discuss the economic viability of these investments [6].

First signs of AI disillusionment:

Gary Marcus, professor emeritus at New York University, goes even further in his July 2024 article AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic, writing: “I firmly believe that the generative AI bubble will begin to burst in the next twelve months, for many reasons: The current approach has reached a plateau, there is no killer app, hallucinations remain (i.e., AI continues to generate false or fabricated information), serious errors persist, no one has a moat (i.e., no sustainable competitive advantage), and people are starting to realize all of this [7].“

On August 5, 2024, Markus Diem Meier writes in his Handelszeitung.ch article Why the mood has radically changed about the shift in sentiment: shares in Alphabet, Amazon and Microsoft slumped by double-digit percentages over the past month, as did Nvidia [8].

Conclusion: hype or the future of productivity?

In view of such expert opinions, study results and current stock market developments, the question arises: What are the causes of these contradictory perspectives and where will the market for artificial intelligence develop in the coming years? Is artificial intelligence just hype, or is it actually a future technology for productivity growth and bureaucracy reduction?

Status quo of current AI models and applications

Stanford University’s AI Index Report 2024 [9] provides an impressive overview of developments in the field of artificial intelligence. The researchers counted 149 new AI models last year [10] and a further 50 new AI models this year alone [11]. These figures make it clear that the hype surrounding AI technologies continues unabated, while large tech companies such as Microsoft, OpenAI, Alphabet, Meta, Apple and Amazon continue to invest huge sums in the development of new AI technologies.

The success of the various AI models in companies depends crucially on the business applications that are intended to increase productivity in the day-to-day operations of employees and managers. One of the best-known areas of application for artificial intelligence is content generation. Companies hope that the right AI assistants will optimize their processes, increase productivity and create innovative content. These tools should generate content such as text, images or speech on the basis of large AI models and communicate interactively with users.

One of the most frequently used AI applications in companies is Microsoft 365 Copilot, an innovative tool designed to take human-machine interaction to a new level. This “Copilot” is integrated into various Microsoft 365 apps such as Word, Excel, PowerPoint and Teams and also offers Microsoft Business Chat. This chat uses company data and Microsoft 365 apps to process instructions in natural language and use them to create reports or emails, for example. There are also a number of other intelligent AI assistants that are used in various areas. The 2024 annual report of the Work Trend Index by Microsoft and LinkedIn provides a comprehensive overview of the current development and use of generative artificial intelligence (GenAI) [12].

In a Harvard Business School podcast entitled Microsoft’s AI perspective: From chatbots to reengineering the organization, Jared Spataro, Corporate Vice President of Modern Work and Business Applications at Microsoft, explains:

“Based on our telemetry data, nearly 60 percent of the average information worker’s time is spent communicating and coordinating, just to get the rest of their work done. This happens, for example, in meetings, chats or emails. This percentage continues to rise every month, and there is no sign of it slowing down. What people actually tell us in the qualitative studies is: ‘I barely have time to do the job I was hired to do’ [13].“

It is clear that AI assistants theoretically have the potential to increase productivity. Nevertheless, there are considerable doubts as to whether they can significantly reduce the non-value-adding organizational and communication effort of employees and managers in day-to-day operations and thus substantially increase productivity growth. Many studies, such as those mentioned above, indicate that the successful integration of these technologies into everyday working life represents a major challenge for many users.

Perspectives from the consulting and ICT sectors

Against the backdrop of increasing doubts about the economic viability of generative AI technologies (GenAI), the consulting and ICT sectors are making different recommendations on how companies can meet these challenges.

Tom Davenport from Deloitte Analytics and John J. Sviokla from PwC describe in their article The 6 Disciplines Companies Need to Get the Most Out of Gen AI from July 8, 2024, that many companies are beginning to question whether AI can create enough economic value to justify its high costs.

They conclude that AI can do so, but only if companies develop certain disciplined capabilities. These include behavior change, controlled experimentation, measuring business value, data management, human capital development and systems thinking [15]. These skills should help companies to select the right AI projects and implement them successfully.

In its article AI accelerates upheavals in the labor market, McKinsey highlights the importance of the rapid deployment of new technologies. McKinsey argues that rapid deployment of AI technologies could increase productivity growth by up to three percent per year. However, this requires extensive training and retraining of employees. Without such measures, AI would not be able to develop its full potential [16].

The ICT Workforce Consortium, founded under the leadership of Cisco, has launched an initiative to focus on upskilling and reskilling workers who are most likely to be affected by AI. Leading companies such as Accenture, Google, IBM and Microsoft are participating and investing heavily in this endeavor [17].

These recommendations focus strongly on adapting the working methods of employees and managers to AI technologies. Extensive further education, training and advice should help to integrate the available AI solutions into practice. However, this raises the question of whether it would not make more sense to adapt the AI systems to the working methods and needs of the users rather than the working methods to the AI technology. Can the predicted productivity growth actually be achieved, or will the potential remain untapped because the requirements of employees are not sufficiently taken into account?

Two perspectives from academia

The first perspective comes from Daron Acemoglu, professor at the Massachusetts Institute of Technology (MIT), who examines the profound effects of artificial intelligence in his article The Simple Macroeconomics of AI from May 12, 2024. Acemoglu uses a task-based model that incorporates both the automation and complementarity of tasks to better understand the effects of AI. He writes: “My assessment is that generative AI, which is a promising technology, does indeed promise much greater gains. But these gains will be difficult to achieve unless there is a fundamental reorientation of the industry. This reorientation could involve a significant change in the architecture of current generative AI models, such as Large Language Models (LLMs). The focus should be on reliable information that increases the marginal productivity of different types of workers, rather than the current emphasis on human-like conversational tools. The general-purpose nature of the current generative AI approach may prove unsuitable for providing such reliable information [18].“

Acemoglu thus criticizes the current focus on generative AI models that are heavily geared towards conversational capabilities and human-like interactions. Instead, he argues for a greater focus on models that enable real productivity gains through access to reliable information, which is essential for increasing work performance in various areas.

The second perspective comes from Ethan Mollick, Professor of Innovation and Artificial Intelligence at the University of Pennsylvania. In his article Latent Expertise: Everyone is in R&D – Ideas come from the edges, not the center from June 20, 2024, Mollick highlights the disadvantages of centralized, efficiency-focused systems. He writes: “Starting with centralized, efficiency-focused systems not only risks stifling growth, but has other disadvantages as well. Right now, no one, neither consultants nor typical software vendors, has a one-size-fits-all answer to how AI can be used effectively to unlock new opportunities in specific industries. Companies that rely on centralized solutions that are treated like traditional IT projects are therefore unlikely to find breakthrough ideas, at least for now [19].“

Mollick’s criticism is directed at companies that primarily rely on standardized and centralized solutions in order to achieve efficiency gains. He argues that ground-breaking innovations and ideas often come from the margins and not from the center of a company. This means that companies should be more flexible and experimental with new technologies, rather than seeing them merely as efficiency-enhancing tools.

Summary of perspectives:

Acemoglu and Mollick both argue that the current use of AI is not realizing the full potential of the technology. While Acemoglu calls for a reorientation of the technology towards more reliable and productivity-enhancing applications, Mollick emphasizes that companies may be missing out on innovative opportunities due to their fixation on centralized, efficiency-oriented systems. Both point out that current structures and approaches need to be rethought in order to unleash the true potential of AI to increase productivity and innovation.

Requirements for future methods and AI models

As already highlighted in my editorial Where is the flaw in the digital transformation system…. [20], there is an urgent need for a fundamental change in the methodological and architectural approaches of today’s AI models in order to achieve higher productivity growth and less bureaucracy in companies. This need is based on the following theses, among others:

  1. The failure of transformation and change management initiatives is not so much due to the behavior or mindset of employees and managers, but rather to the methods and digital technologies (AI models) used.
    • The focus should not be on people alone, but on the tools used for change.
  2. A paradigm shift is required: away from the classic, Tayloristic approach of externally organized automation, monitoring and control (today’s GenAI approach) towards a collaborative approach of self-organized automation, monitoring and control (the future HCAI approach).
    • This new approach should not only promote productivity, agility and innovation, but also ensure structural and cultural stability.
  3. The goal should not be the externally organized replacement of human intelligence with artificial intelligence, but rather the intelligent, self-organized networking of human intelligence with artificial intelligence.
    • This is the only way to create the necessary conditions for an ongoing, efficient transformation process.
  4. Every methodological and AI-architectural approach must be measured by the extent to which it involves all employees and managers in the transformation process while reducing non-value-adding organizational and communication costs instead of increasing them.
    • The aim is to focus on actual value creation and remove unnecessary bureaucratic hurdles.
  5. An ongoing transformation process requires that the time needed to adapt organizational, information and process structures to new framework conditions and customer needs is shorter than the time available.
    • Otherwise, a so-called adaptation gap arises, which makes the transformation process counterproductive.
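The adaptation-gap condition from the last thesis above can be written as a simple inequality; the notation is my own illustrative shorthand, not taken from the cited sources:

```latex
% T_adapt : time needed to adapt organizational, information and
%           process structures to new framework conditions and customer needs
% T_avail : time available before those conditions change again
T_{\text{adapt}} < T_{\text{avail}}
  \quad \Longrightarrow \quad \text{sustainable, ongoing transformation}

T_{\text{adapt}} > T_{\text{avail}}
  \quad \Longrightarrow \quad \text{adaptation gap } G = T_{\text{adapt}} - T_{\text{avail}} > 0
```

As long as the gap $G$ stays negative, an organization can keep pace with change; once it turns positive, each adjustment arrives too late and the transformation effort itself becomes counterproductive.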

While some analysts and scientists share these theses, there are still many unanswered questions regarding the concrete design of such a fundamental change of approach in methods and AI models. One promising path could lie in the increased research and development of human-centered AI (HCAI) models. These models focus on people and aim to design technologies in such a way that they complement and improve human work rather than replace it. This approach could provide an answer to how future AI systems need to be designed in order to overcome the challenges described and achieve a higher level of productivity and efficiency in the long term while reducing bureaucracy.

The concept of Human-Centered Artificial Intelligence (HCAI)

Prof. Ben Shneiderman from the University of Maryland aptly summarizes the requirements for human-centered AI in his article Human-Centered Artificial Intelligence: Three Fresh Ideas from September 2020. He proposes an alternative vision of AI based on the creation of reliable, safe and trustworthy systems. These should enable people to benefit from the power of AI while remaining in control [21].

Ben Shneiderman advocates an AI approach that designs and develops systems in a way that supports human self-efficacy, encourages creativity, clearly defines responsibility and facilitates social participation. These basic principles should help designers to achieve technical goals such as privacy, security, fairness, reliability and trustworthiness.

With his statement that “HCAI is a second Copernican revolution”, he underlines the central importance of this approach [22].

In the article An HCAI Methodological Framework: Putting It Into Action to Enable Human-Centered AI, the scientists Wei Xu, Zaifeng Gao and Marvin Dainoff present a comprehensive methodological framework for the development of human-centered AI. It consists of seven central components: design goals, design principles, implementation approaches, design paradigms, interdisciplinary teams, methods and processes. The aim is to design, develop and implement intelligent HCAI systems in a practical way to support the use of human-centered AI in the real world [23].

Wei Xu and Zaifeng Gao extend this idea in another paper, An intelligent sociotechnical systems (iSTS) concept: Toward a sociotechnically-based hierarchical human-centered AI approach. Here, a concept for intelligent sociotechnical systems (iSTS) is developed that is tailored to the requirements of the AI age. The iSTS concept emphasizes joint optimization at the individual, organizational, ecosystem and social levels in order to solve sociotechnical challenges holistically [24].

Another example of HCAI can be found in Dr. Janika Kutz’s dissertation Human-centered industrial artificial intelligence: approaches to designing accepted and trustworthy AI-based services in production (2024). Her aim is to design AI-based services in production environments in such a way that they are accepted and used by employees. She develops two models to support developers in the co-creative design of these services: the “Generic Role Model” and the “Process Model for the Use of Design Principles”. These models promote greater involvement of employees, especially end users, in the development process [25].

The Human-Centered Artificial Intelligence research group led by Prof. Ernesto William De Luca at Otto von Guericke University Magdeburg is also investigating various research areas in connection with HCAI and Human-Centered Design (HCD). The focus is on Responsible AI, Ethical AI, Machine Learning, Natural Language Processing, Human-Computer Interaction, User-Adaptive Systems and Usability.

They emphasize the importance of user modelling, adaptation and personalization to ensure that AI systems focus on human needs, values and experiences. The iterative process of human-centered design (HCD) ensures that users are involved at every stage of the design process, which continuously improves usability [26].

Leading companies such as Microsoft, OpenAI, Alphabet, Amazon, IBM, Meta, SAP and Aleph Alpha are also increasingly investing in the development of human-centered AI models, especially in the field of generative AI (GenAI). These companies have set themselves the goal of developing technologies that support and improve human work without creating unnecessary complexity – a key feature of human-centered AI.

Conclusion: The vision of human-centered artificial intelligence (HCAI) focuses on the development of systems that are not only technically efficient, but also take human needs, values and experiences into account. This represents a significant difference to many current approaches to generative AI, which are often technology-driven and do not take sufficient account of the user experience. HCAI offers a long-term perspective on the development of AI systems that are not only more powerful, but also ethical and socially responsible.

The main distinguishing features of HCAI and GenAI  

The differences between the ideal vision of Human-Centered Artificial Intelligence (HCAI), the current HCAI research results and today’s generative AI models (GenAI) can be summarized as follows in the context of productivity growth and bureaucracy reduction:

1. Individualizability – task orientation and user centricity

Ideal Vision (HCAI):

The ideal HCAI strives to develop AI systems that offer specific, task-centered support and are tailored to the individual needs of employees. This leads to a significant increase in productivity, as employees receive targeted help that is precisely tailored to their activities. Such systems can reduce bureaucracy by automating administrative processes.

Current Research (HCAI):

The research by Xu, Gao and others emphasizes the need to create AI systems that improve organizational processes and can be adapted to individual requirements. This would mean replacing bureaucratic processes with more efficient, AI-supported solutions that still remain transparent and traceable, and would make a significant contribution to increasing productivity.

Current GenAI models:

Today’s GenAI models already offer tools for automating and optimizing tasks that can increase productivity. However, user-centricity is often neglected, as these systems are designed to provide generic solutions for a broad user group. This can lead to new bureaucratic hurdles if the solutions are not well integrated into existing processes or if tasks are constantly changing.

2. Adaptability – adaptivity and learning ability

Ideal Vision (HCAI):

AI that continuously adapts to new requirements and learns from interactions with employees could help to reduce bureaucracy. It would recognize outdated tasks and processes and suggest how these can be simplified or automated, leading to significant productivity growth.

Current Research (HCAI):

The research underlines the importance of adaptive systems for minimizing bureaucracy. Through automatic learning and adaptation, companies could reduce bureaucratic hurdles caused by constantly changing tasks, processes and regulations. The challenge is to design such systems so that they function effectively in dynamic and complex working environments.

Current GenAI models:

While GenAI models can adapt to new data and learn from user interactions, their ability to recognize and optimize complex bureaucratic structures is limited. They often focus on data analysis and optimization within existing structures without fundamentally questioning them.

3. Scalability – expansion and integration

Ideal Vision (HCAI):

A scalable and well-integrated HCAI architecture could significantly boost productivity growth by enabling seamless collaboration between different departments and levels of an organization. A flexible system that can be extended to a large number of users without compromising the individual user experience could reduce inefficient processes and help increase productivity.

Current Research (HCAI):

Research recognizes the importance of scalability and integration, but emphasizes the difficulty of achieving these in a way that remains user-centric and efficient at the same time. Scholarly approaches such as the iSTS framework emphasize the need to scale HCAI at different levels of an organization to increase efficiency and productivity. This research acknowledges the complexity of integration and scaling, but points out that such an approach has the potential to reduce bureaucracy.

Current GenAI models:

GenAI systems are often scalable and can be deployed in large organizations to achieve productivity benefits. However, the integration of such systems can require additional bureaucracy, especially if existing processes and infrastructures need to be adapted to support the new technologies.

4. User-friendliness and intuitive design

Ideal Vision (HCAI):

A user-friendly and intuitive HCAI design is crucial to ensure that employees can work productively without major hurdles. This would minimize training efforts and support the reduction of bureaucratic processes by making the systems easily accessible for all users.

Current Research (HCAI):

Research supports the need for user-friendly systems, but recognizes that making complex systems accessible to all user groups is a challenge. A user-friendly design is crucial to reducing bureaucracy, as it facilitates interaction with the systems.

Current GenAI models:

Modern GenAI systems have made significant advances in usability, but complexity often remains an issue. If these systems are too difficult to use, they can create additional layers of bureaucracy, as users are forced to seek extensive training or support.

5. Usability and user experience (UX)

Ideal Vision (HCAI):

A high level of usability and a positive user experience are crucial to ensure that employees enjoy using the systems and can work more productively. This can directly contribute to reducing bureaucratic barriers, as a user-friendly system is less prone to errors and brings greater efficiency.

Current Research (HCAI):

Research emphasizes that a positive user experience is crucial to maximizing the effectiveness of AI systems. However, it is pointed out that usability is often difficult to achieve in complex systems and may require compromises in functionality.

Current GenAI models:

Large companies invest heavily in UX, which can boost productivity. However, focusing on general usability without considering specific tasks and work processes can lead to a suboptimal user experience, which in turn can increase bureaucracy and complexity.

6. Explainability and transparency

Ideal Vision (HCAI):

In an HCAI model, AI systems would be designed in such a way that their functioning and decision-making processes are comprehensible to all employees. This would not only strengthen trust in the systems, but also promote the reduction of unnecessary bureaucracy by making processes more transparent and easier to understand.

Current Research (HCAI):

Research emphasizes the importance of explainability and transparency for increasing acceptance of and trust in AI systems. This could lead to a reduced need for bureaucratic control, as transparent systems are easier to monitor and understand.

Current GenAI models:

While many GenAI systems today offer a certain degree of transparency, explainability is often limited to technical aspects. However, actual process optimization and the reduction of bureaucracy require deeper integration and explainability, which is often still lacking at present.

7. Ethical and social responsibility

Ideal Vision (HCAI):

Ethical AI systems that prioritize social responsibility could promote long-term productivity growth and reduce bureaucracy by supporting fair and inclusive processes. Systems that adhere to ethical principles minimize the risk of conflicts and compliance issues that often lead to bureaucratic oversight.

Current Research (HCAI):

Research recognizes the need to integrate ethical principles into the development process to avoid negative impacts on the working environment. This includes ensuring that the reduction in bureaucracy does not come at the expense of fairness or employee rights.

Current GenAI models:

Although companies are trying to integrate ethical principles into their AI systems, there are concerns that commercial interests are being placed above ethical considerations. This could lead to bureaucracy not being fully reduced, as additional control mechanisms are needed to address ethical concerns.

8. Data and data protection

 

Ideal Vision (HCAI):

An HCAI model would prioritize data protection while enabling efficient work processes. Bureaucracy could be reduced by establishing secure and transparent data usage practices that give employees confidence and eliminate the need for administrative checks.

 

Current Research (HCAI):

Research shows that data protection is a key challenge that is often in direct conflict with the efficiency of AI systems. The reduction of bureaucracy must go hand in hand with the responsible handling of data in order to overcome legal and ethical concerns.

 

Current GenAI models:

Data protection remains a key concern in the implementation of GenAI systems. While great progress has been made, the question often remains as to whether data protection is fully guaranteed. This leads many companies to introduce additional bureaucratic measures to ensure compliance.

Conclusion: The ideal vision of an HCAI model supports both productivity growth and bureaucracy reduction in organizations by creating user-centric, adaptive, transparent and ethically responsible systems. Current research is striving to put this vision into practice, but still sees significant challenges in scaling and integrating such systems in complex organizational environments.

While the GenAI models of today’s large technology companies have the potential to enhance productivity, they could also create new bureaucratic hurdles if they are not developed in a user-oriented way and, above all, adapted to current requirements in a timely and efficient manner. These models need to be aligned more closely with the principles of HCAI in order to sustainably increase productivity and reduce bureaucracy.

GenAI and HCAI characteristics in terms of knowledge input & knowledge output

In the context of knowledge input and knowledge output, human-centered artificial intelligence (HCAI) and generative AI (GenAI) offer different approaches to interaction with users. HCAI models emphasize the bidirectional relationship between users and AI as well as between users themselves, with a continuous exchange of knowledge, while GenAI models traditionally take a more static approach. The main differences between HCAI and GenAI in terms of knowledge input, knowledge output, dynamic knowledge exchange and integration into working and learning environments are described below:

1. Permanent knowledge input

HCAI-Models:

HCAI models are designed to integrate continuous knowledge input. They use various mechanisms to gather information from users, including direct feedback, analyzing usage data and observing interactions. This continuous feedback loop allows HCAI to continually adapt and evolve, improving both productivity and user experience. HCAI systems can learn from every user interaction and dynamically tailor the input to the specific needs of the user.

GenAI-Models:

GenAI models process large amounts of data during the training phase, but their knowledge input often remains static during the application phase. This means that they rarely respond to user input in real time and can only adapt to a limited extent. GenAI models are designed to use pre-trained data and are therefore not always able to continuously absorb and apply new knowledge from users.
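The contrast between continuous and static knowledge input can be made concrete with a small sketch. This is purely illustrative; the class names and the trivial dictionary-based "knowledge" are my own assumptions, not an actual GenAI or HCAI implementation:

```python
class StaticModel:
    """GenAI-style: knowledge is frozen after the training phase."""
    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)

    def answer(self, topic):
        return self.knowledge.get(topic, "no answer")


class AdaptiveModel(StaticModel):
    """HCAI-style: every interaction and correction feeds back in."""
    def __init__(self, knowledge):
        super().__init__(knowledge)
        self.interest = {}  # per-topic relevance learned from usage

    def answer(self, topic):
        # Each query is itself knowledge input: topic interest grows.
        self.interest[topic] = self.interest.get(topic, 0) + 1
        return self.knowledge.get(topic, "no answer")

    def feedback(self, topic, correction):
        # Direct user feedback updates the knowledge base immediately.
        self.knowledge[topic] = correction


static = StaticModel({"vacation policy": "30 days"})
adaptive = AdaptiveModel({"vacation policy": "30 days"})

adaptive.feedback("vacation policy", "28 days (updated)")
print(static.answer("vacation policy"))    # 30 days
print(adaptive.answer("vacation policy"))  # 28 days (updated)
```

The static model keeps returning its trained answer, while the adaptive model has already incorporated the user's correction; in a real HCAI system the same loop would also feed usage data and observed interactions back into the model.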

2. Knowledge output: Provision of information    

HCAI-Models:

HCAI strives to deliver outputs that not only respond to the immediate request, but also provide learning-based, contextualized and personalized information based on previous interactions and the user’s learned understanding of tasks and responsibilities. The knowledge output in HCAI systems is designed to proactively provide the user with information that matches their needs and current situation, supporting personal development and efficiency.

GenAI-Models:

GenAI systems provide answers or solutions based on the data or queries entered. The output is often static in the sense that it does not draw on previous interactions or adapt dynamically to the context. The information is usually standardized and not specifically adapted to the user’s individual learning progress or needs.

3. Dynamic knowledge sharing and adaptability

HCAI-Models:

HCAI enables a dynamic, continuous exchange of knowledge between users and the system as well as between users themselves. This bidirectional exchange allows the system to constantly evolve and learn from user input. HCAI models are adaptive and personalized and can make contextual suggestions to users before they explicitly request them. This ability to predict and adapt helps to remove barriers to productivity and make workflows more efficient.

GenAI-Models:

GenAI models offer some degree of adaptability, but they are often less dynamic in their continuous adaptation and knowledge sharing. While some models include adaptive learning components, most GenAI systems cannot continuously optimize knowledge sharing over long periods of time or provide predictive support.

4. Integration into the working and learning environments

 

HCAI-Models:

HCAI models are characterized by their seamless integration into users’ everyday working and learning environments. They are able to understand contextual cues and adapt their functions accordingly. This enables deeper user support by providing relevant and context-sensitive information tailored to the specific working environment.

GenAI-Models:

GenAI models are often less closely linked to users’ specific working and learning environments. They offer generic solutions that in many cases function in isolation from the user’s actual environment. This can result in them not being optimally adapted to the specific needs or work context of the users, which can reduce efficiency and create new barriers.

Conclusion: The main differences between HCAI and GenAI in terms of knowledge input and output can be summarized as follows:

 

  • Knowledge input: HCAI models continuously integrate user data and adapt dynamically. GenAI models, on the other hand, mainly process predefined data and rarely learn in real time.

 

  • Knowledge output: HCAI provides personalized and contextualized information that adapts to the user, while GenAI systems provide standardized, static information that is less adapted to individual needs.

 

  • Dynamic knowledge exchange: HCAI enables a continuous exchange of knowledge that constantly improves the system, while GenAI is often limited in its adaptability.

 

  • Integration into working and learning environments: HCAI adapts seamlessly to specific work and learning contexts, while GenAI models often act in a more isolated way and are not optimally integrated into individual work environments.

 

Overall, HCAI models offer a more dynamic and personalized user experience that is better tailored to users’ unique needs. This leads to a smarter, more adaptable and more productive interaction, while GenAI models focus more on generic solutions and less on individual user requirements.

Requirements for a holistic HCAI method and architecture approach

To date, there are still considerable differences between the theoretical approach of Human-Centered Artificial Intelligence (HCAI) and its practical implementation. There is still a lack of a comprehensive methodological and architectural approach as well as a well thought-out procedure, process and role model that integrates the principles of human-centered design (HCD) and ensures the successful implementation of HCAI in practice.

The focus should be on creating a framework that significantly reduces the non-value-adding organizational and communication effort for employees and managers. At the same time, a high level of motivation for cooperative knowledge sharing must be ensured in order to both increase productivity and efficiently reduce bureaucracy.

In my view, the following key aspects are essential to enable a holistic implementation of HCAI:

 

  1. Individualizability – task orientation and user centricity

An HCAI system must tailor the entire working environment to the needs of the user by developing a comprehensive understanding of their tasks and responsibilities. Rather than focusing on individual tasks, the system must understand the user’s entire working context and create a customized working environment. This includes providing the relevant data, information, knowledge sources and applications, in a structured and individualized form, that the user needs to perform their current tasks and responsibilities. Only AI-supported individualization of the digital workplace creates the necessary conditions for productive value creation.

    • Dynamic individualization: Every user needs an individualized working environment that is tailored to their specific tasks and responsibilities. This creates the basis for productive work and greater efficiency.

 

  2. Adaptability – adaptivity and learning ability

An HCAI system should learn from every interaction, communication and piece of information in order to adapt dynamically to users’ changing tasks and responsibilities. The system must adequately involve each user in the organizational and process design so that their working environment can be adapted to their needs promptly and efficiently. This calls for advanced machine learning techniques and continuous feedback loops. The system must be flexible enough to quickly recognize and respond to changes in work requirements, proactively suggest adjustments and efficiently adapt the user’s digital workplace to their needs.

    • Adaptive learning: HCAI systems must be able to dynamically adapt each user’s individualized work environment to new tasks and responsibilities by continuously learning from user communication and interaction.

 

  3. Scalability – expansion and integration

The HCAI system must be able to extend the individualizability and adaptability of the work environment to any number of users without compromising the quality of the user experience. This means that the system must be able to create a personalized working environment for every user in a large company without increasing the organizational and communication effort. Optimal scalability is achieved when the system becomes more efficient as the number of users increases and makes individual adjustments dynamically and precisely.

    • Expansion and integration: The system must be scalable to any number of users, with the result that the more users use the system, the better the individualization and updating of the working environment. This is crucial for success in large organizations.

 

  4. Fairness – ethical and social responsibility

From the perspective of a company, Human-Centered Artificial Intelligence (HCAI) means developing AI systems that focus on people and support fair, transparent and responsible decisions. This includes full disclosure of the sources of the data and information provided as well as the involvement of all affected employees and managers in the TARGET/ACTUAL dialog and decision-making process for cooperative knowledge sharing.

A trustworthy AI ensures a fair balance between what each individual contributes and what they get back. The motivation to share knowledge is strongly based on the perception of fairness; employees are more likely to share their knowledge if they feel it is fair and their contribution is recognized. Through this fairness in dealing with knowledge, HCAI promotes greater trust in the system, increases the willingness to work together and strengthens the corporate image.

    • Ethical and social responsibility: The system must ensure that knowledge exchange and decision-making processes are transparent and fair so that users build trust in the system and are willing to share knowledge.

 

  5. Usability / intuitive design

An HCAI system should be usable from the start without extensive training. The technology must fit seamlessly into the natural way users work and at the same time learn from their interactions in order to dynamically adapt to changing requirements. Usability is a continuous optimization process that promotes the efficiency and acceptance of the system. An intuitive design ensures that the digital workplace remains uncomplicated and flexible without burdening users with additional complexities.

    • Intuitive use and user-friendliness: From the outset, the system should be user-friendly and operable without training. User-friendliness must be continuously improved in order to adapt to the changing needs of users.

 

  6. Explainability / transparency

Explainability is a key factor for trust in an HCAI system. Users must understand why the system makes certain suggestions or recommends actions. A transparent system that explains its decision logic not only increases acceptance but also enables users to make better use of the technology. This transparency should extend to individualizability, adaptability and scalability, so that users know how and why the system responds to their specific needs.

    • Explainability and transparency: Users must be able to understand at any time why the system makes certain changes in order to build trust and optimize the use of the system.

 

  7. Compliance / data protection

Given the large amount of data collected about the tasks, responsibilities and working practices of employees and managers, the HCAI system must comply with all operational and legal compliance and data protection guidelines. Protecting this sensitive data, especially in different legal contexts, is of key importance. An HCAI system must be designed to make data processing transparent and secure in order to increase user trust and meet legal requirements.

    • Data protection and legal compliance: The system must comply with all necessary data protection guidelines and legal regulations to ensure the protection of sensitive data.
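To make the first three requirements more tangible, here is a deliberately simplified sketch of individualizability (one workspace per user), adaptability (each observed task refines that workspace) and scalability (per-user state with no shared bottleneck). All names and the data model are my own illustrative assumptions, not an existing HCAI implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Individualizability: one workspace per user, built from their tasks."""
    user: str
    tools: set = field(default_factory=set)
    sources: set = field(default_factory=set)

class WorkspaceManager:
    """Scalability: adding a user is O(1); each workspace is independent."""
    def __init__(self):
        self.workspaces = {}

    def get(self, user):
        return self.workspaces.setdefault(user, Workspace(user))

    def observe_task(self, user, task, needed_tools, needed_sources):
        # Adaptability: each observed task refines the user's workspace.
        ws = self.get(user)
        ws.tools.update(needed_tools)
        ws.sources.update(needed_sources)

mgr = WorkspaceManager()
mgr.observe_task("alice", "quarterly report", {"spreadsheet"}, {"sales DB"})
mgr.observe_task("alice", "forecast", {"spreadsheet", "BI tool"}, {"CRM"})
print(sorted(mgr.get("alice").tools))  # ['BI tool', 'spreadsheet']
```

In this toy form, the "organizational effort" of configuring each workstation disappears: the workspace assembles itself from observed work, and adding the ten-thousandth user is no more expensive than adding the first.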

 

A holistic HCAI method and architecture approach must take several key aspects into account in order to be successfully implemented in practice. The first step is to clarify the individualizability, adaptability and scalability of the system. On this basis, the second step can address questions of fairness, user-friendliness, explainability and adherence to compliance and data protection requirements. These aspects are crucial not only for increasing productivity but also for promoting user trust and motivation.

The goal of such an approach is to create a dynamic system that flexibly adapts to the needs of users while the users retain control over the system. At the same time, the organizational and communication effort should be minimized. This helps to ensure that the HCAI system is not only effective, but also trustworthy and user-friendly.

My preliminary conclusions

Many analysts and investors predict that generative AI could add between 2.6 and 4.4 trillion dollars a year to the global economy over the next 20 years [28]. Leading economists, in contrast, question such forecasts [29][30].

My thesis: Artificial intelligence can still far exceed the McKinsey forecasts for economic growth, but only if the methodological and architectural approach of generative AI models continues to develop in the direction of human-centered AI. In other words: Artificial intelligence must adapt to the needs of the users and not the needs of the users to the AI technology!

This thesis is based on the idea that HCAI models help to minimize non-value-adding tasks by providing efficient, intuitive and supportive technologies that increase both individual and organizational performance. In an HCAI-enabled digital workplace, AI acts as a seamless extension of the human user without the need for training, change management or cultural change.

The system learns and reacts to the individual needs of users and supports them in their tasks and responsibilities. By intelligently networking users, it enables efficient information, communication and interaction processes and thus significantly reduces non-value-adding organizational and communication costs. This leads to higher productivity, agility and innovative capacity as well as structural and cultural stability in companies.

The challenge in the practical implementation of the Human-Centered Artificial Intelligence (HCAI) approach lies in finding the right methodological and architectural approach as well as suitable algorithms that actually implement the principles of Human-Centered Design. An HCAI-based digital workplace must meet the three central requirements – individualizability, adaptability and scalability – regardless of the size of the company or the number of workstations. Whether it is 100 or 10,000 workstations, the chosen AI technology, be it Generative AI (GenAI) or HCAI, must be able to meet these criteria while significantly reducing the non-value-adding organizational and communication effort for employees and managers.

The solution could lie in a simple but effective methodological and architectural approach that is supported by an easy-to-understand algorithm. This algorithm should implement the principles of human-centered design by connecting human intelligence instead of replacing it. In this way, the HCAI approach creates a system that empowers people while laying the foundation for sustainable productivity growth and a consistent reduction in bureaucracy.
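One way to picture an algorithm that "connects human intelligence instead of replacing it" is expertise routing: instead of generating an answer itself, the system points the asker to colleagues whose recorded expertise best overlaps the question. The set-overlap scoring and all names below are illustrative assumptions, not a prescribed method:

```python
def expertise_match(question_terms, experts):
    """Rank colleagues by overlap between question terms and their expertise."""
    scores = {
        name: len(question_terms & terms)
        for name, terms in experts.items()
    }
    # Highest overlap first; colleagues with no overlap are dropped.
    return [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

experts = {
    "Anna":  {"gdpr", "compliance", "contracts"},
    "Ben":   {"networking", "ict", "security"},
    "Clara": {"gdpr", "security", "audits"},
}

# Instead of answering directly, the system connects the asker to people.
ranking = expertise_match({"gdpr", "audits"}, experts)
print(ranking)  # ['Clara', 'Anna']
```

A production system would of course learn the expertise profiles from communication and interaction data rather than from hand-written sets, but the design choice is the same: the AI mediates between human intelligences rather than substituting for them.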

Such an approach would not only be technologically groundbreaking, but would also redefine the relationship between humans and machines by putting people at the center of AI use and increasing productivity without displacing human input. To get closer to this perspective, I am convinced that companies should first ask themselves the question:

What impact would it have on productivity, agility and innovation capability, as well as on structural and cultural stability, if an HCAI-supported workplace were to sustainably reduce the non-value-adding effort of employees and managers by, say, half? I believe this would help companies to adopt a different perspective on the use of artificial intelligence.

In this context, companies should ask themselves the following key questions:

  1. Which processes contribute to the non-value-adding organizational and communication efforts of employees and managers?
    • Companies must identify and analyze these processes regardless of role and function.

 

  2. How can the non-value-adding effort be measured?
    • A concrete metric is required to quantify the current effort and create a basis for improvement.
  3. How and in what way do these processes impact productivity, agility and innovation, as well as structural and cultural stability within the organization?
    • Understanding these impacts is critical to prioritizing HCAI initiatives.

 

  4. How can these non-value-adding processes be automated or eliminated through HCAI?
    • HCAI should serve as a lever to improve inefficient processes through AI-powered automation.

 

  5. What must a methodological and architectural approach look like in order to fulfill HCAI requirements?
    • A holistic approach that ensures individualizability, adaptability and scalability must be developed.

 

  6. Which algorithms guarantee the implementation of HCD principles?
    • Algorithms must make it possible for any number of digital workstations to adapt efficiently to individual user needs.

 

  7. How can companies develop a clear AI strategy and implementable roadmap based on HCAI?
    • It is necessary to develop a practice-oriented strategy and roadmap in order to implement it successfully in the company.

 

  8. How can the ROI of HCAI projects be measured?
    • Companies must be able to capture the real added value of HCAI and present the return on investment (ROI) transparently.
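As a purely illustrative aid to the questions on measuring non-value-adding effort and on ROI, a toy calculation might look like this. The activity split, hourly rate and all figures are invented assumptions, not empirical data:

```python
def non_value_adding_share(hours_by_activity, non_value_adding):
    """Share of total working time spent on non-value-adding activities."""
    total = sum(hours_by_activity.values())
    nva = sum(h for a, h in hours_by_activity.items() if a in non_value_adding)
    return nva / total

def hcai_roi(hours_saved_per_week, employees, hourly_cost, annual_system_cost):
    """Simple annual ROI: (savings - cost) / cost, assuming ~46 working weeks."""
    savings = hours_saved_per_week * 46 * employees * hourly_cost
    return (savings - annual_system_cost) / annual_system_cost

# Hypothetical 40-hour week for one knowledge worker:
week = {"core work": 22, "status meetings": 6, "searching info": 5,
        "internal reporting": 4, "email coordination": 3}
share = non_value_adding_share(week, {"status meetings", "searching info",
                                      "internal reporting", "email coordination"})
print(f"{share:.0%}")  # 45%

# If HCAI halved that effort (9 h/week) for 500 employees at 60 EUR/h,
# against an assumed system cost of 1 million EUR per year:
print(round(hcai_roi(9, 500, 60, 1_000_000), 2))  # 11.42
```

The point of the exercise is not the numbers, which are invented, but that both questions become answerable once activities are categorized and timed; without such a baseline metric, neither the "half the effort" scenario above nor any ROI claim can be verified.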

 

However, it is crucial that we really want sustainable productivity growth and a fair reduction in bureaucracy through HCAI. It seems to me that interests often diverge widely here. Large AI companies and their investors should also consider whether investing billions in the development of Artificial General Intelligence (AGI) makes economic and ethical sense at all, given that such a form of intelligence already exists in eight billion people. In my opinion, these funds would be better invested in the development of an HCAI model that intelligently networks existing human intelligence and thus leads to a genuine, human-centered AGI.

A hypothesis for the future: I am convinced that the HCAI model (Human-Centered Artificial Intelligence) has the potential to promote sustainable productivity growth, reduce bureaucracy, increase prosperity and raise awareness of socio-ecological challenges. Only people who experience gains in prosperity develop a deeper understanding of these challenges.

The question remains as to who will be the first to adopt the HCAI model: the tech giants in Silicon Valley, a start-up in the EU or the Chinese government. China could use HCAI as a supplement to the social credit system to promote self-determined work and create a balance between individual autonomy in working life and state control in private life. China’s autocratic structure allows for rapid and efficient development and implementation of HCAI, which could potentially give China a strategic advantage over Western democracies.

If China follows this path and subsidizes HCAI systems to increase productivity, Western companies will come under pressure. A Chinese technocratic lead could challenge the ability of Western companies and democracies to adapt and innovate and lead to tangible disadvantages.

Conclusion: HCAI could become a model that combines surveillance and control in social systems with self-determined work in companies, thereby promoting both increased productivity and prosperity. For autocratic systems such as China, this could bring considerable strategic advantages, while Western companies and democracies would face the challenge of responding appropriately.

My statement

This article should be understood in the context of my editorial “Where is the flaw in the digital transformation system and what are the resulting requirements for the use of artificial intelligence?” from September 27, 2023.

With this article, I hope to initiate a further discussion that will give the topic of artificial intelligence a new perspective. If you have any questions or comments on this article, please feel free to send them to me using the reply form or by email.

Friedrich Reinhard Schieck / BCM Consult – 26.09.2024

Email: fs@bcmconsult.com; friedrich@schieck.org

Website: www.bcmconsult.com  

(The article has been machine translated)

Sources:

(1) Statista (2023) | Change in productivity per hour worked until 2022

(2) McKinsey (2023) | The economic potential of generative AI: The next productivity frontier

(3) McKinsey (2023) | Skills shortage: GenAI can alleviate acute demand for highly skilled jobs

(4) Statista (2024) | Worldwide, private investments in the field of artificial intelligence by 2023

(5) Upwork Research (July 2024) | From Burnout to Balance: AI-Enhanced Work Models

(6) Investment bank Goldman Sachs (June 2024) | GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT?

(7) Gary Marcus (2024) | AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic

(8) Markus Diem Meier – Handelszeitung (2024) | Why the mood has radically changed

(9) Stanford University (2024) | AI Index Report 2024 / Measuring trends in AI

(10) Stanford University (2023) | 149 new AI models – ecosystem graphs

(11) Stanford University (2024) | 50 new AI models – ecosystem graphs

(12) Microsoft and LinkedIn (2024) | 2024 annual report of the Work Trend Index

(13) Harvard Business School (2024) | Microsoft’s AI perspective: From chatbots to reengineering the organization

(14) Michael Bradley University of Oxford (2024) | The AI Efficiency Paradox

(15) Davenport (Deloitte), Sviokla (PwC) (2024) | The 6 Disciplines Companies Need to Get the Most Out of Gen AI

(16) McKinsey (2024) | AI accelerates upheavals in the labor market: productivity boost of 3% possible

(17) Cisco (2024) | AI-Enabled Information and Communication Technology (ICT)

(18) Daron Acemoglu / MIT (2024) | The Simple Macroeconomics of AI

(19) Ethan Mollick (2024) | Latent Expertise: Everyone is in R&D – Ideas come from the edges, not the center

(20) Friedrich Schieck (2023) | Where is the Fault in the Digital Transformation System….

(21) Ben Shneiderman / University of Maryland (2020)| Human-Centered Artificial Intelligence: Three Fresh Ideas

(22) Ben Shneiderman / University of Maryland (2020)| Human-Centered Artificial Intelligence: Three Fresh Ideas

(23) Xu, Gao, Dainoff (2023) | An HCAI Methodological Framework

(24) Wei Xu, Zaifeng Gao (2024) | An intelligent sociotechnical systems (iSTS) concept

(25) Janika Kutz / University of Kaiserslautern-Landau (2024) | Human-centered industrial artificial intelligence

(26) Ernesto W. De Luca / University of Magdeburg (2024) | Research group “Centered Artificial Intelligence Research Group”

(27) denkfabrik-bmas.de (2022) | Working with artificial intelligence

(28) McKinsey (2023) | The economic potential of generative AI: The next productivity frontier

(29) Investment bank Goldman Sachs (2024) | GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT?

(30) Luisa Bomke – Handelsblatt (2024) | Is the AI hype a bubble?
