#### *“Those who are so afraid that they don’t want to touch anything related to AI are the ones who, A, are probably going to lose their jobs, or, B, worse, could create some unintended damage.”*
– Heidi Lorenzen
##### About Heidi Lorenzen
Heidi Lorenzen is Executive Producer and Director of The Humanity Code. She comes from an extensive background as a go-to-market leader for sector leaders such as Accela, Singularity University, CloudWords, and GlobalEnglish, working in a number of countries across three continents, and as an executive advisor to fast-growth startups. Heidi has been named one of the Top 50 Most Powerful Women in Technology.
**Website:** www.thehumanitycode.ai
**LinkedIn:** Heidi Lorenzen
**Twitter:** @hlorenzen
## What you will learn
Generative AI: introduction and impact (02:58)
Introduction of “The Humanity Code” – a documentary on AI and its human impact (04:46)
The essence and goals of “The Humanity Code” (07:05)
The need for a collective vision for humanity (10:05)
Integrating vision, governance, and long-term thinking in AI development (11:59)
Exploring key pillars for AI development and corporate strategy (18:14)
Reflecting on diverse perspectives and governance challenges in AI development (22:39)
Exploring AI’s role in enhancing humanity and work (27:00)
Role and responsibility in the age of AI (32:43)
## Episode Resources
BusinessWeek Magazine
Singularity University
Generative AI
Athena Alliance
Open AI
Dall-E
Midjourney
The Humanity Code

**People**
Ethan Shaotran
Joe Dispenza
Sam Altman
Ilya Sutskever
## Transcript
Ross Dawson: Heidi, it’s a delight to have you on the show.
Heidi Lorenzen: Thank you so much for having me, Ross. I know we have a vision match on a lot of topics here, so looking forward to digging in.
Ross: You’ve had an illustrious corporate and related career making things happen in organizations and recently felt the need to go beyond that. Tell us why.
Heidi: Yes, that’s exactly right. That’s how we first met. I’ve had a 20-year career in tech as a go-to-market executive, CMO (Chief Marketing Officer), and the like. Before that, I was in media, at BusinessWeek Magazine for several years. One of the stops on my career tour, about eight or so years ago, was at Singularity University, which is renowned for educating entrepreneurs and executives on the crazy pace of technological change that we are experiencing right now. Since then, I have been doing a lot of reflecting on what technology means for humanity’s future. As I’ve reached what’s called a later stage in my career, with lots of experience, I’ve been thinking about the impact that I can create.
Ever since the introduction of generative AI just a year ago, I’ve been thinking a lot about the fact that AI is in everybody’s hands right now, at this point in time. What we had been teaching people about at Singularity is basically here. There’s so much unknown, so much risk, and so much potential. I just want to ensure that we’re focused on the potential and make the best happen.
Ross: One of your initiatives is a documentary. Tell us about that.
Heidi: Yes. I didn’t just wake up and say, I want to make a documentary. It was more around this concept, this issue, that people must understand that, A, AI matters and they need to be thinking about it, and B, we have a window of time. There’s variance among researchers on how much time, whether it’s two years or ten. Some others say it’s too late, but not many believe that. There’s a limited amount of time for us to be intentional about how we’re shaping AI. I wanted to make sure that people understood that. I also wanted to raise awareness that AI isn’t really artificial; it’s actually very human, because it has learned and is learning from us as humans. We haven’t necessarily done the best job of creating the optimal outcomes for all of us. How can we be more intentional about taking the best of humanity and encoding it within AI?
I’m currently calling this project The Humanity Code. Within it, I’m producing a documentary. I felt that was one of the ways to get the broadest and most visceral reach. I’m also curating conversations around it, because literally no one has the answers, and the best thing to do is get all the brains on deck to work this through.
Ross: Let’s dig into that name. Words are really important sometimes. It took me months and months to land on Amplifying Cognition as the name for the podcast and my theme. The Humanity Code, I think a lot has gone into that as a frame. I’d love to hear what The Humanity Code means to you.
Heidi: It’s a bit of a double entendre, obviously, with the word Code referring to the fact that AI is coded. But The Humanity Code also speaks to the essence of humanity: what is our code that we want to encode into AI? What is the best of us? A lot of thought did go into it. I considered Humanity Encoded, but that sounds scary; it sounds like you’re coding us. The intent is that human beings, humanity, this species, is pretty special and pretty beautiful. If we can extract the best of that and be intentional about putting it into AI, then we’ll all be better off. Otherwise, to oversimplify, AI will learn from the various incentives we put in place, the various systems we put in place, and our negative instincts.
All of that is very human. We sometimes get in our own way, though, of creating the best lives for ourselves and others. Wouldn’t it be amazing if we had a partner who had all the best of us and not the worst of us? I’m not saying that we can actually get to that beautiful, amazing point. But we can and we have to get closer to that with AI so that it, again, produces more of the better outcomes. It’s very, very capable, and it’s already doing a lot.
Ross: Absolutely. I was at this very interesting AI event in San Francisco, where Ethan Shaotran, a researcher and leader in the space, basically said that it is our role to be examples to AI, to teach these models what the best of human behavior could be, as opposed to the obvious examples where it’s not.
Heidi: That’s exactly right. I heard somebody also say, We are the prompt. We have to be the prompt. It’s a twist on Gandhi’s Be the Change: be the prompt.
Ross: That’s a great frame. To get almost tactical, how can we put the best of what humanity is into these models and their behaviors? It’s a nice concept. Are there things that we can specifically be doing to be good examples or exemplars or guides to the best of what humanity can be?
Heidi: There are so many layers to that. That’s the creative challenge with this documentary: how to bring that out, because it touches everything from how we show up as humans and the inner guidance that we have, to the systems and constructs, let’s call them, that we put in place to run societies, businesses, and governments. Then, of course, there’s the technological component, the physical coding, and the intentionality of what’s being done there. All of those are huge. It’s not as if we can just take out the playbook and follow it; we literally have no playbook right now. But there are a couple of things that I’m thinking about, and having conversations with folks about.
Thinking about ourselves, we don’t even have a collective vision for what humanity wants. In the typical corporate world, companies have their mission, vision, and values. People will even do that work personally: these are my values, this is what I want to accomplish this year, this is my purpose in life. But, to get really grandiose, we don’t have an extrapolation of that for our species. Moving closer to that, to whatever degree we can, is key. I think it’s a combination, again, of having conversations and even doing collective visioning exercises.
I hosted a dinner not too long ago with a room full of more than 20 women envisioning what an ideal future could look like. Collecting a lot of that input and just having people ponder the question, as basic as it seems, really makes a difference, because, as Joe Dispenza says, where you put your attention is where you put your energy. That is true at a collective level as well. That’s the softer but perhaps really important piece, because a lot of people don’t recognize the connections between what we feel inside and the actions that we take externally.
Another very practical area is governance, or regulation. There’s a lot of talk around that; it’s more in the vernacular today, and it means regulation at the corporate level and at the government level. It is a key piece, but it’s important to understand that regulation is what holds things in check; you still need the other side, the possibility and potential of what AI can bring, so that as you’re regulating it, you’re building toward what you want, not just clamping down for the sake of clamping down.
On that note, there was this recent debacle, a roller coaster ride here in the Bay Area, with Sam Altman being ousted as CEO. That, essentially, was a major corporate governance debacle and shortfall. Companies are really struggling with what to do with AI because they’re motivated by commercial incentives. They want to leverage AI to increase productivity and efficiency and create new products and all of that. Yet they should also recognize that the decisions they’re making will have long-term implications for society. This isn’t just your corporate governance 101 type of stuff; it’s something that governing bodies need to be thinking seriously about.
I’m currently working, within an organization called the Athena Alliance, on an AI governance playbook. There’s a group of about six of us women working on that. The idea is to focus on the things that, again, aren’t governance 101. One of our first pillars is long-term thinking: companies thinking about the implications of their decisions for the planet, for climate change, and for long-term value creation in all senses of the word, meaning the literal value creation for shareholders, customers, and so on, but also value creation for the betterment of society. That has to be taken very seriously now, because AI will only amplify and accelerate what we’re doing today.
Ross: One of my frames for the last dozen years is governance for transformation. The idea is that governance is not only about managing risks or downsides, but also about being able to amplify positives. Any governance system that stops change is broken, and it’s going to destroy organizations or society as a whole. That applies both at the corporate level and at the supranational level. There’s a lot of rhetoric now in supranational initiatives around AI governance that talks about the positive potential, and about getting the balance between containing any risks or downsides whilst opening up the possibilities of positive change. It’s a very delicate and challenging balance.
To get pragmatic for the organization: if you have an organization and a leader who is looking at setting AI governance, what are the steps? You touched on that partly when you said the first step is to have some long-term vision, some clarity or long-term thinking around what we are creating as an organization and what is happening as a result. But what are some of the other steps or processes that a leader should go through in establishing an AI governance framework?
Heidi: Another key pillar, which again is very broad but high impact, is a strong degree of curiosity about learning everything. Those who are either waiting until this all washes out, or who are so afraid that they don’t want to touch anything related to AI, are the ones who, A, are probably going to lose their jobs, or, B, worse, could create some unintended damage. Just learning, being super curious about keeping up on what AI is and what the governance issues are, including the social implications, is really key.
Another is to think through the talent strategy for the organization. As an example, several organizations are now creating a chief AI officer role to ensure that someone is keeping their eye on that ball from all angles, whether it is product development, internal efficiency creation, or positive impact on the community. That’s one piece of it. Failing that, there’s a recognition that this really needs to be part of every employee’s ability, and especially leaders’ and the executive team’s ability, to understand and to be adaptable. With this exponential rate of change, tomorrow can look very different from today, and people have to pivot and adjust. Make that part of the talent strategy: look for those key attributes that may not have gotten much attention in the past but are now mission critical for good corporate governance.
Then, of course, there is risk mitigation overall. That runs the gamut from the basics, the bread and butter of general security: security of company data and security against cyber attacks. There is also the externally facing view of risk mitigation: what unintended consequences could happen as a result of the decisions we’re making as a company? It should go without saying, but another piece is simply being compliant, both with the regulations that already exist and with those still to come, making sure that, again, somebody is keeping their eye on that ball to ensure the company stays compliant.
Ross: We are speaking a couple of days after the famous Sam Altman weekend and, without trying to draw any perspectives that will endure as opposed to being irrelevant a few hours from now, I think it does speak to some of the governance structures we need to have in place. We’ve pretty clearly established that the governance structures there were not that effective. Are there any high-level lessons about AI governance structures that we can learn, early on, from this extraordinary weekend?
Heidi: Yes. First and foremost, it underscores, again, that no one has the answers; no one has a crystal ball for where AI is going to take us. More importantly, there isn’t a single point of view on how to achieve safe and ethical AI. Even within a company like Open AI, you’ve got various camps; those are the camps that ultimately fought, let’s call it. But if you were to peel back the onion, both sides are coming from the same place of ‘we want the best for humanity’. You’ve got the Sam Altman side thinking the best for humanity is that we continue to develop and evolve AI so that it can better serve us: it can solve our problems, improve education and health care, and create more economic equity, just to name a few. On the other hand, you’ve got Ilya Sutskever and other members of the board, who are a little more on the Doomer side. They too want the best for humanity; in their minds, they don’t want humanity to somehow become extinct because of AI. They feel, okay, we’ve got to slow down, and maybe even pause. That is the current focus for the Open AI management team right now: they intend to pause and slow down.
They’re both coming from the same desired outcome of creating the best for humanity, and yet there’s a different point of view. That just underscores how incredibly hard this is. From a corporate governance standpoint, what it means is being very, very transparent, staying grounded in the core values, talking through the pros and cons of the different options, and making decisions, as you were saying before, that create positive outcomes while dampening the worst risks. It can’t be one extreme or the other. I don’t have a Pollyannaish view of what AI can do, but I am very impressed when I see some of the things it can do. I, for one, would like more of that, and so would others, whether they are tech leaders or just the general population. But there has to be balance. We can’t go in guns blazing, because then real damage may be done.
Ross: Taking a bit of a sidestep. The theme of Amplifying Cognition is, in a way, amplifying humanity. That was probably number two on the list of names for the podcast – Amplifying Humanity – which was probably a bit of a misleading title. But that’s really what we want to do. How do we amplify our best thinking as individuals, or collectively as humanity? Getting down to specifics as much as possible, what are ways we can use AI to amplify who we are, in terms of our work, our intent, or our values? What is the path to amplifying humans through AI?
Heidi: Again, there are many, many answers, some deep and below the surface, and some more external. But what comes immediately to mind is that I actually saw Ilya Sutskever, the Chief Scientist at Open AI, present. Just to underscore that even though he has a different point of view, he is still coming from the right place: when he was a little boy and would play with technology, he got this intense sense of who he was compared to the technology. That really stuck with him. Now he thinks a lot about how AI can help us understand more about ourselves. Just as we’ve been discussing, it’s causing us to reflect on what it is that really makes us human. What is different from what we’re teaching AI? What is the sustaining component of the essence of who we are? I think that in and of itself can help us reflect and learn more about who we are, both individually and collectively.
Think about the wonderful things that humans do. Take creativity of any sort and think about how AI can amplify it. A lot of people are afraid it’s going to take it away, but consider the amplification. Writing, as an example: there are amazing writers, and AI can also do very good writing, so it’s an amplification of our ability. Those who aren’t such great writers can get better and can learn. For those who are great writers, it can increase their output, so more of their work gets out into the world. As for art, we’ve seen the beautiful pieces that Dall-E and Midjourney can be used to create.
A neighbor of mine has been creating the most amazing pieces. As a little girl, she always wanted to be an artist, and then she too went into the corporate world instead. Now this is allowing her to go back to her passion without having gone to art school or having all sorts of training. She’s totally delighted by the experience, but she’s also feeling really guilty: is this really art? If she has asked the AI to borrow, to merge, say, Dalí and Matisse, is that her art or not? It’s really conflicting, but it’s amplification nonetheless; it’s human expression.
Then you talk about work. The definition of work is going to evolve and change. I think the jobs we hold now will, in a decade or so, seem very odd and quaint. We’ll look back and think, why did we have people doing those things? What a waste of time. AI, whether it’s helping with coding, writing, or data analysis, can take on the grunt work that nobody ever really wanted to do and free up the potential to focus on other areas. At another event I was at, I won’t name names, but there was a fairly senior product manager from a pretty renowned tech company, and there was a question about how she was thinking about AI and its ultimate impact. Her response was, of course, it’s all about efficiency. That was the complete answer. If that’s really all we’re fighting for, that would be a pretty sad state. There’s so much more that you can do with AI.
Yes, some efficiencies are created in the work environment, but again, toward what end ultimately? Toward what end for the company? Toward what end for the customers it serves? Toward what end for the employees and other stakeholders? These are just a few things that popped into my mind.
Ross: To round out, what are your suggestions for our listeners on what they could or should be doing now to play a role in creating a better future through the extraordinary times we live in?
Heidi: I would just say that you do play a role, whether you’re aware of it or not, through any interaction with AI. Of course, there’s a lot of AI already running beneath the surface that we’ve been using for decades now. But the focus here is generative AI, because that’s where we are. Unless you’re working for a company and actually coding it, your interactions are with generative AI: the writing and the art creation, the things that create as an extension and amplification of humans. In those interactions, just be thoughtful about what AI is going to be learning from you. Then, secondarily, I’d stay really smart on what’s happening in governance and regulation, and help ensure that it does exist.
Currently, at this moment in time, the EU is about to determine what it will vote on for its AI constitution; the EU is ultimately a regulatory body. Stay abreast of that, and if people aren’t happy with where things end up, make sure to be vocal about it. Then I would say two more things. Within your company, help raise awareness of the things we’ve been talking about, particularly if you’re a leader, including some of what we covered earlier. Lastly, do some of that reflection on what it is that makes you human. What is special about us? What is it that we want AI to co-partner with us in creating? What is the vision that we have? Share that with as many people as you can.
Ross: Absolutely. We need to have that: to know what it is we want in order to be able to create it. Thank you so much for your time and your insights, Heidi. It’s been a true delight.
Heidi: Thank you, Ross. Real pleasure.