
Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71)

2024/11/27

Amplifying Cognition



#### *“It’s not just about the AI itself; it’s about the way we deploy it. We need to focus on human-centric practices to ensure AI enhances human potential rather than harming it.”*

– Alexandra Diening

##### About Alexandra Diening

Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including Global Head of Research & Insights at EPAM Systems. Over her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology and is the author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis.

**Website:**

Human-AI Symbiosis

LinkedIn Profiles

Alexandra Diening

Human-AI Symbiosis Alliance

Book

A Strategy for Human-AI Symbiosis

## What you will learn

  • Exploring the concept of human-AI symbiosis

  • Recognizing the risks of parasitic AI

  • Bridging neuroscience and artificial intelligence

  • Designing ethical frameworks for AI deployment

  • Balancing excitement and caution in AI adoption

  • Understanding AI’s impact on individuals and organizations

  • Leveraging practical strategies for mutualistic AI development

## Episode Resources

### Organizations and Alliances

  • Human-AI Symbiosis Alliance

  • Fortune 500 companies

### Books

  • Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI

  • A Strategy for Human-AI Symbiosis

## Transcript

Ross Dawson: Alexandra, it’s a delight to have you on the show.

Alexandra Diening: Thank you for having me, Ross. Very happy to be here.

Ross: So you’ve recently established the Human-AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey?

Alexandra: It’s a long journey, but I’ll try to make it short and quite interesting. I entered the world of AI almost two decades ago, and it was through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works.

Of course, if you want to process all the neuroscience data, you can’t do it alone. Inevitably, you need to incorporate AI. This was my gateway to AI through neuroscience. At the time, there weren’t many people working on this type of AI, so the industry naturally pulled me in.

I transitioned to working on business applications of AI, progressively moving from neuroscience to AI deployment within business contexts. I worked with Fortune 500 companies across life sciences, retail, finance, and more. That was the first chapter of my entry into the world of AI.

When deploying AI in real business scenarios, patterns start to emerge. Sometimes you succeed; sometimes you fail. What I noticed was that when we succeeded and delivered long-term tangible business value, it was often due to a strong emphasis on human-centricity. This focus came naturally to me, given my background in cognitive sciences.

This emphasis became even more critical with the emergence of generative AI. Suddenly, AI was no longer just a background technology crunching data and influencing decisions behind the scenes. It became something we could interact with using natural language. AI started capturing emotions, building relationships, and augmenting our capabilities, emerging as a kind of social, technological actor.

This led to our hypothesis that generative AI is the first technology with a natural propensity to build symbiotic relationships with humans. Unlike traditional technologies, there is mutual interaction. While “symbiosis” may sound romantic, it can manifest across a spectrum of outcomes, from positive (mutualistic) to negative (parasitic).

In business, I started to see the emergence of parasitic AI—AI that benefits at the detriment of humans or organizations. This realization began to trouble me deeply. While I was working for multi-billion-dollar tech companies, I advocated for Responsible AI and human-centric practices. However, I realized the impact I could have was limited if this remained a secondary concern in corporate agendas.

This led to the establishment of the Human-AI Symbiosis Alliance. Its mission is to educate people about the risks of parasitic AI and to guide organizations in steering AI development toward mutualistic outcomes.

Ross: That’s… well, there’s a lot to dig into there. I look forward to delving into it. You referred to being human-centric, and you seem to be a very human-centric person. One point that stood out was the idea of generative AI’s propensity for symbiosis. Hopefully, we can return to that. But first, you did your PhD in cyberpsychology, I believe. What is cyberpsychology, and what did you learn?

Alexandra: Cyberpsychology, when I started, was quite unconventional, and it still is to some degree. It combines psychology, medical neuroscience, human-computer interaction, marketing science, and technology. The focus is on how human interaction and behavior change within digital environments.

In my case, it was AI-powered digital environments, like social media and AI avatars. Part of my research examined how long-term exposure to these environments impacts behavior, emotions, and even biology. For example, interacting with AI-powered technologies over time can alter brain connectivity and structure.

The goal was to identify patterns and, most importantly, help tech companies design technologies that uplift human potential rather than harm it.

Ross: Today, we are deeply immersed in digital environments and interacting with human-like systems. You mentioned the importance of fostering positive symbiosis, which involves designing both the systems and the human behaviors around them. What are the leverage points to achieve a constructive symbiosis between humans and AI?

Alexandra: The most important realization is that AI itself isn’t a living entity. It lacks consciousness, intent, and agency. The focus should be on our actions—how we design and deploy AI. While it’s vital to address biases in AI data and ensure proper guardrails, the real danger lies in how AI is deployed.

Deployment literacy is key. Many tech companies treat AI like traditional software, but AI requires a completely different lifecycle, expertise, and processes. Awareness and education about this distinction are essential.

Beyond education, we need frameworks to guide deployment. Companies must not only enhance employee efficiency but also ensure that skills aren’t eroded over time, turning employees into efficient yet unskilled workers.

Measurement is another critical aspect. Traditional success metrics like productivity and efficiency are insufficient for AI. Companies must consider innovation indices, employee well-being, and brand relationships. AI’s impact needs to be evaluated with a long-term perspective.

Finally, there are unprecedented risks with AI. For example, recent events, like a teenager tragically taking their life after interacting with an AI chatbot, highlight the dangers. Companies must be aware of these risks and prioritize expertise, architecture, and metrics that steer AI deployment away from parasitism.

Ross: One of the things I understand you’re launching is the Human-AI Symbiosis Bible. What is it, what does it look like, and how can people use it to put these ideas into practice?

Alexandra: The “Human-AI Symbiosis Bible” is officially titled A Strategy for Human-AI Symbiosis. It’s already available on Amazon, and we’re actively promoting it. The book acts as a guide for stakeholders in the AI space, transitioning them from traditional software development practices to AI-specific strategies.

The content is practical and hands-on, tailored to leaders, designers, engineers, and regulators. It starts with foundational concepts about human-AI symbiosis and its importance. Then it provides frameworks and processes for avoiding common pitfalls.

What sets it apart is its practicality. It’s not a theoretical book that simply outlines risks and concepts. We include over 70 case studies from Fortune 500 companies, showcasing real-world examples of AI failures and successes. These case studies highlight lessons learned so readers can avoid repeating the same mistakes.

We also had 150 contributors, including 120 industry practitioners directly involved in building and deploying AI. The book synthesizes their insights and experiences, offering actionable guidance rather than prescribing a single “correct” way to develop and deploy AI. It’s a resource to help leaders ask the right questions, make informed decisions, and prepare for what we call the AI game.

Ross: Of course, everything you’re describing is around a corporate or organizational context—how AI is applied in organizations. You suggest that every aspect of AI adoption should align with the human-AI symbiosis framework.

Alexandra: Absolutely. The message is clear: organizations must go beyond viewing AI as merely a technological or data exercise. They need to understand its profound effects on the human factor—both employees and customers.

As we’ve discussed, generative AI inherently influences human behavior. Organizations must decide how they want this symbiosis to manifest. Do they want AI to augment human potential and drive mutual benefits, or allow parasitic patterns to emerge, harming individuals and the organization in the long term?

Ross: You and I might immediately grasp the concept of human-AI symbiosis, but when you present this in a corporate boardroom, some people might be puzzled or even resistant. How do you communicate these ideas effectively to business leaders?

Alexandra: It’s essential to avoid letting the conversation become too fluffy or esoteric. When introducing human-AI symbiosis, we frame the discussion around a tangible enemy: parasitic AI.

No company wants to invest time, money, and resources into deploying AI only to have it harm their organization. We start by defining parasitic AI and sharing quantified use cases, including financial costs and operational impacts. This approach grounds the conversation in real-world stakes.

From there, we guide leaders through identifying parasitic patterns in their organization and preventing them. By addressing the risks, we create space for mutualistic AI to thrive. This framing—focusing on preventing harm—proves very effective in getting leaders engaged and invested.

Ross: What you’re describing seems to extend beyond individual human-AI interactions to an organizational level—symbiosis between AI and the entire organization. Is it one or the other, or both?

Alexandra: It’s both. On the individual level, if you enhance an employee’s productivity but they become disengaged or leave the organization, it ultimately harms the company. Similarly, if employees become more efficient but lose critical skills over time, the company’s ability to innovate is compromised.

The connection between individual outcomes and organizational success is inseparable. Organizations must consider how AI impacts employees on a personal level and translate those effects into broader business objectives like resilience, innovation, and long-term sustainability.

Ross: It’s been almost two years since the “ChatGPT moment” that changed how many view AI. As AI capabilities continue to evolve rapidly, what are the most critical leverage points to drive the shift toward human-AI symbiosis?

Alexandra: It starts with literacy and awareness. Leaders, innovators, and engineers must understand that AI is fundamentally different from traditional software. The old ways of working don’t apply anymore, and clinging to them will lead to mistakes.

Education is the first pillar, but it must be followed by practical tools and frameworks. People need guidance on what to do and how to do it. Case studies are crucial here—they provide real-world examples of both successes and failures, demonstrating what works and what doesn’t.

Lastly, we need regulatory guardrails. I often use the analogy of a driving license. You wouldn’t let someone drive a car without proper training and certification, yet we have people deploying AI systems without sufficient expertise. Regulation must define minimum requirements for AI deployment to prevent harm.

Ross: That ties into people’s attitudes toward AI. Surveys often show mixed feelings—excitement and nervousness. In an organizational context, how do you navigate this spectrum of emotions to foster transformation?

Alexandra: The key is to meet people where they are, whether they’re excited or scared. Listen to their concerns and validate their perspectives. Neuroscience tells us that most decisions are driven by emotion, so understanding emotional responses is critical.

The goal is to balance excitement and caution. Pure excitement can lead to reckless adoption of AI for its own sake, while excessive fear can result in resistance or harmful practices, like shadow AI usage by employees. Encouraging a middle ground—both excited and cautious—creates a productive mindset for decision-making.

Ross: That’s a great way to frame it—balancing excitement with due caution. So, as a final thought, what advice would you give to leaders implementing AI?

Alexandra: First, educate your teams. Don’t pursue AI just because it’s trendy or looks good. Many AI proofs of concept never reach production, and some shouldn’t even get that far. Understand what you’re getting into and why.

Second, ensure you have the right expertise. There are many self-proclaimed AI experts, but true expertise comes from long-term experience. Verify credentials and include at least one seasoned expert in your team.

Third, go beyond technology and data. Focus on human factors, ethics, and responsible AI. Consider how AI will impact employees, customers, and society at large.

Fourth, establish meaningful metrics. Productivity and efficiency are important, but so are innovation, employee well-being, and long-term brand value. Measure what truly matters for your organization.

Finally, get a third-party review. Independent assessments can spot parasitic patterns early and help course-correct. It’s a small investment for significant protection.

Ross: That’s excellent advice. Identifying parasitic AI requires awareness and understanding, and your framing is incredibly valuable. How can people learn more about your work?

Alexandra: Visit our website at h-aisa.com. We publish resources, case studies, expert interviews, and event details. You can also find our book, A Strategy for Human-AI Symbiosis, on Amazon or through our site.

We’re actively engaging with universities, conferences, NGOs, and media to spread awareness. We’ll also host an event in Q1 2025. For updates, follow us on LinkedIn and join the Human-AI Symbiosis Alliance group.

Ross: Fantastic. We’ll include links to your resources in the show notes. Thank you for sharing your insights and for your work in advancing human-AI symbiosis. It’s an essential and positive framework for organizations to adopt.

Alexandra: Thank you, Ross. It was a pleasure.