#### *“AI is more of an occasion for organizational redesign than it is a solution to that redesign. However, it’s a great amplifier—it will amplify your problems, and it will amplify good organizational design.”*
– Kai Riemer
##### About Kai Riemer
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus, at the University of Sydney Business School. He works with boards and executives to bring foresight expertise and a deep understanding of emerging technologies into strategy and leadership. Kai co-leads the Motus Lab for research on digital human technology and is a co-author of The Global 2025 Skills Horizon initiative.
LinkedIn: Kai Riemer
Blog: byresearch.wordpress.com
Google Scholar page: Kai Riemer
ResearchGate: Kai Riemer
University Profile: Kai Riemer
## What you will learn
- Understanding AI’s role in organizational decision-making
- How AI can enhance personal productivity for leaders
- Using generative AI as a team facilitator and coach
- The importance of upskilling for AI fluency
- Addressing the risks of anthropomorphizing AI
- AI as an amplifier for good and bad organizational design
- Redesigning work structures to fully harness AI’s potential
## Episode Resources
IBM
## Transcript
Ross Dawson: Hi. It is wonderful to have you on the show.
Kai Riemer: Thank you. Thanks for having me.
Ross: So for many years, you’ve been digging into the impact of AI and other technologies on organizations, on leadership, and on how we can do things more effectively. As a starting point, when it comes to organizational decisions, particularly more complex ones—where are we today with AI’s ability to augment and assist humans in making better decisions in organizations?
Kai: Oh boy, that’s a big question. It obviously depends on what kind of AI we are talking about and at what level. I think we are in a place of great uncertainty when it comes to the future role of AI and generative AI. We still need to put in a lot of effort to educate people, particularly decision-makers, about what this technology can do, where it should be applied, and how it should be part of making decisions.
We often distinguish between AI as a systems technology that we make part of organizational systems, and AI for personal productivity. On the systems side, we might have a bespoke chatbot that we train, fine-tune, and put into service with limited autonomy, providing information. Personal productivity, on the other hand, is about how AI becomes part of people’s daily work and decision-making as they use the technology. The value depends on how skillful the human is in working with AI. The lazy approach is to ask questions and accept whatever answer the AI provides, which typically results in average decision-making. Better approaches involve including AI in reflection tasks, asking it to question your thinking, and taking into account new aspects that the AI surfaces.
Education is needed on two levels—getting decision-makers to understand AI beyond generative AI, because there’s still predictive AI, image recognition, and others that improve processes—and upskilling to use AI as a powerful assistant in daily work. Misunderstandings persist about how this technology works and how to use it productively. There’s no one-size-fits-all.
Ross: As you said, AI can assist in personal productivity for individuals at all levels. Are there any configurations for group decision-making, such as boards or executive teams, where both traditional AI and generative AI can assist?
Kai: I think generative AI has a lot to offer. Given that it encodes patterns from the corpus of human text, many management frameworks and tools are embedded in these networks, which we can make use of. In our team, we held a workshop session and used AI to help fill out the Business Model Canvas. The AI, in this case ChatGPT, asked us questions about each section, and we discussed them as a team. AI served as a coach or moderator, structuring the conversation. We weren’t drawing on AI for answers, but for guidance.
There are organizations doing similar interesting things, though some operate behind NDAs. For example, IBM’s Chief Human Resources Officer, Nickle LaMoreaux, has talked about their generative AI assistant, which helps employees ask questions about entitlements and HR policies. It increased inclusiveness, particularly in cultures where people hesitate to ask their superiors questions.
Ross: You mentioned the Skills Horizon Report. With the shifting skills landscape, where do you see the most pointed need for skills or capability development?
Kai: The Skills Horizon is based on data analysis of emerging conversations and on interviews with global leaders. Four main areas emerge: first, people need to speak the language of technology—AI, digital ethics, and even quantum computing. The second area involves geopolitics and Net Zero, with AI playing a role. The third area is values and trust—leaders need to build common ground in an increasingly diverse workforce. Lastly, leaders must think in new ways, with humanities skills such as storytelling and futures thinking becoming critical even in STEM fields. Leaders need to recognize that humans bring something unique that AI cannot replicate.
Ross: That’s incredibly relevant. With leadership evolving, are there ways AI can assist leaders in becoming more effective?
Kai: Yes, especially in becoming more organized and working with information effectively. There are tools like Google’s NotebookLM, which allow leaders to organize ideas and reflections and have conversations with their own notes. AI models can critique your thinking and help leaders ask better questions. It’s less about getting answers and more about improving self-reflection.
Ross: That’s a valuable application. Some leaders may prefer feedback from AI, which feels less judgmental than human feedback.
Kai: Exactly, you don’t need to tell anyone you’re using it, and you can always blame the AI for being wrong! It’s non-judgmental, making it a great conversation partner.
Ross: On that note, how should we think about our relationship with AI? You’ve mentioned you dislike the term “human-AI collaboration.”
Kai: Yes, I do. The term “collaboration” puts AI on equal footing with humans, which it is not. We’ve optimized AI to mimic human conversation, but we are still dealing with a text prediction machine. When we anthropomorphize AI and use terms like collaboration, we lose sight of the differences between human and machine cognition. AI doesn’t have genuine agency, intent, or understanding.
Instead of human-AI collaboration, we should use terms like human-AI interaction. Precision in language is important because AI is a tool, not a collaborator. It depends entirely on how it’s prompted. If we fail to understand this difference, we risk treating humans in more mechanistic ways, which is dangerous.
Ross: That’s an insightful point. As we build complementary systems with AI, what distinctions should we focus on in augmenting human cognition?
Kai: First, remember that GPT models are pre-trained and fixed in time. They don’t learn in use; they were trained once on a vast corpus of human text. This makes AI powerful, but it lacks basic human understanding. AI’s value lies in its ability to do what humans cannot. We need to educate people about what’s happening under the hood, so they can use AI for tasks that enhance their human capabilities, rather than simply mimicking human interaction.
Ross: You’ve mentioned AI’s role in augmenting human cognition and creativity. Are there specific techniques or structures for AI to assist in ideation or creativity?
Kai: We’ve described generative AI as “style engines.” These systems don’t store text or images but encode likenesses, which can be combined in creative ways. This applies to text, where AI can transfer styles and generate creative outputs. But we’re currently optimizing AI for accuracy and human-like interaction, rather than for creative exploration. There’s untapped potential in using AI to expand creativity by combining patterns in novel ways.
Ross: That’s a great framing—hallucinations as a feature, not a bug! Moving on, what do you think is the future of organizations, especially with AI in the mix?
Kai: We’re in a post-pandemic world, still figuring out flexible and hybrid work, and now we’ve added generative AI. AI in asynchronous workspaces adds complexity, especially in bureaucratic organizations. Generative AI can automate tasks, but it’s an opportunity to rethink whether those tasks should exist in the first place. AI should amplify well-designed organizations, but poorly designed organizations will see their problems amplified as well. AI is more an occasion for organizational redesign than it is a solution, but it’s a great amplifier for improvement.
Ross: That’s an excellent way of looking at it. Thank you for sharing your insights, Kai.
Kai: My pleasure. Thank you, Ross.