#### *“True intelligence is a social function. It’s about social cohesion. Intelligence happens in groups, it does not happen in individuals.”*
– Tim Stock
##### About Tim Stock
Tim Stock is an expert in analyzing how cultural trends and artificial intelligence intersect. He is co-founder of scenarioDNA and the co-inventor of a patented Culture Mapping methodology that analyzes patterns in culture using computational linguistics. He teaches at the Parsons School of Design in New York.
**Website:** www.scenariodna.com
**LinkedIn:** Tim Stock
**Faculty Page:** Tim Stock
## What you will learn
- Exploring the concept of culture mapping
- Understanding the subtle signals in cultural trends
- Discussing the impact of generative AI on creativity and work
- Differentiating between human and machine intelligence
- Examining the role of subcultures in societal change
- Analyzing the future of work and the merging of physical and virtual spaces
- Emphasizing the importance of structured analysis and collective intelligence
## Transcript
**Ross Dawson: **Tim, it’s awesome to have you on the show.
**Tim Stock: **Great to be here.
Ross: I think people need a bit of context for our conversation to understand the work you do, a lot of which centers on culture mapping. So, Tim, tell us: what is culture mapping?
**Tim: **Culture mapping really has its roots in understanding what is going on underneath the surface that people aren’t paying attention to. When I need to explain culture mapping, essentially it helps companies understand how and why culture is changing, and how to use that information to make better design and business decisions. A lot of those real changes in culture are not obvious. They’re not things we can ask people about; they’re the weaker signals. Culture mapping allows us to map the relationship between the broader culture and subcultures, and to understand how those groups develop narratives within society and drive cultural change.
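To make that concrete, here is a minimal, hypothetical sketch of the computational-linguistics idea behind culture mapping: cluster short texts by their vocabulary so that distinct “subculture” language patterns surface as groups. The sample texts are invented, and this is only an illustration of the general idea, not scenarioDNA’s patented methodology.

```python
# Toy illustration: surface distinct language patterns by clustering texts.
# The sample texts below are invented; real culture mapping would use far
# larger corpora and a much richer linguistic model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = [
    "commute office badge meeting quarterly review",   # corporate work talk
    "remote async deep work four day week",            # post-pandemic work talk
    "grind hustle side project burnout",               # hustle-culture talk
    "mutual aid co-op community garden repair cafe",   # solidarity-economy talk
]

X = TfidfVectorizer().fit_transform(texts)          # tf-idf bag-of-words features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(texts, labels):
    print(label, text)  # texts sharing a cluster share a vocabulary "code"
```

In this framing, the interesting signal is the divergence between clusters: the weaker, emerging vocabulary rather than the dominant one.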
Ross: So where are we today? What are some of the signals you’re seeing in your culture mapping work?
**Tim: **Well, we’re in a particular moment where we’re shifting from one kind of age to another, especially in terms of how people do work and how we understand our relationship to identity. There’s a growing nihilism, I would argue. People say a lot of the negativity is coming out of the pandemic, but from a culture mapping standpoint, those signals were there already; the externalities of the pandemic just exacerbated them. Issues around how we see work, and how we understand our relationship to work, have a lot to do with how technology is changing: the kinds of skills that are needed and all the affordances that go along with that. Culture is always trying to catch up to that change, and at this particular moment, I’d say we’re stuck. We haven’t found our voice yet. That’s why we see political dysfunction; we’re at a moment where there’s a lot of unrest and a lot of language around that. Essentially, I see us, as a society, trying to find that voice.
Ross: There are a couple of directions to take this with generative AI. One is the cultural response to it. Another, at a deeper level, is our understanding of what our relationship with generative AI is.
**Tim: **Yeah, it comes down to: what do we do? That nihilism is emerging from “Well, what am I supposed to do?” We’ve co-opted a lot of these words, like intelligence. So what is left for humans to do? The state of AI, I would say, is a lot of replacing and mimicking human actions. We get things that look like they’re created; the word creativity, for example, has been co-opted. But we’re at a point where we need to be asking what is creative. Creativity is a human action, and human intelligence emerges differently from machine intelligence. Machines don’t see ghosts; machines can’t believe in conspiracies in the way that humans can. Humans see in between. It’s how we access information, how we learn as children, how we acquire language, and how intelligence is tied into narrative. Right now, what we see is a lot of replacing of things we would normally do. So if you’re a young person today, you ask: what should I learn? What are the kinds of skills? There’s nobody to tell you, because it hasn’t been framed. In a way, there’s a big shift towards the individual. In every area, from medicine to education to work, we see the individual having to take on much more responsibility, and I think that causes a lot of anxiety. That’s the moment we’re in, until we figure out how we are going to use these tools more collectively. I don’t think we’ve figured out the collective part.
Ross: Yes. Last year I created a mapping-intelligence framework, and my first response as we started to look at ChatGPT was that we don’t know what we mean when we say intelligence; essentially, humans are the reference point. We happened to come up with the phrase artificial intelligence, whereas if history had played out differently we might have called it cybernetics or some other phrase, which would give us a completely different frame. Because it is artificial intelligence, its objective has always been to copy, to replicate, to try to be the same as human intelligence, which puts us in a challenging situation, when these are tools that can augment us. Doug Engelbart talked about intelligence augmentation instead of artificial intelligence. So this in a way requires cultural and linguistic reframing, so that we can move to amplifying our cognition.
**Tim: **Well, it’s a culture mapping exercise, because so much of what is currently broken in generative AI is that it has been programmed. It is one framework; it has walls; its language is defined by the engineers who built it. What it considers intelligence is what an engineer would consider intelligent: being able to replicate. Speak to an artist or a poet and you get a very different answer. We’re seeing a very ideological battle going on over reclaiming ownership of how these models are trained. But more simply than that: what do we consider to be intelligent? I like to think of ChatGPT as a perfect way of measuring intelligence, because when we put something into it, we should be saying, “That’s stupid. Why isn’t that better?” That’s our ability. The problem is that we’re too accepting of these answers. The difference between machines and humans, and this is key to culture mapping as well, is that true intelligence is a social function. It’s about social cohesion. Intelligence happens in groups; it does not happen in individuals. We have this idea that individuals create the great things in our society, but it happens in groups, in the subcultures that create that kind of knowledge. Machines can’t do that. But the problem is we can also be swayed towards accepting information; it’s how conspiracies happen. That’s the moment we’re at, where we have to be more aware. When I think of augmenting our intelligence, it’s interesting to bring up AI in certain cases, like dating: the idea of an AI helping you date better versions of yourself, stopping you from making the same mistakes, learning throughout the process, almost helping you to be the person that you want to be.
But most things in society allow us to double down on these bad habits, and so much of AI is doing that in a way the internet didn’t. Generative AI is too quick; the arc of adoption is so fast. The internet allowed us to stay in the subculture space much longer, so things like cyberpunk, surveillance culture, and the cypherpunks could emerge and have a life. With AI now, it’s almost an immediate meme. It goes from zero to cottagecore in two seconds, and then everything looks the same. And we go, “Well, isn’t that okay?” Because if I say no, will I be wrong? People don’t want to be wrong. So we’re all affirming a lot of the negative aspects of it right now.
Ross: Yes. I’m saying a lot these days that the biggest risk with AI is over-reliance, where we say, “Oh, that’s good,” and just leave it at that, and don’t exert our cognitive capabilities, or don’t stop it when it starts being not as good as it could be. But that goes back to your point that intelligence is a social function. One aspect of that is that many people find it very useful to be in dialogue with AI; you can refine your ideas and have a useful conversation, sometimes from an emotional perspective, sometimes from an idea-generation perspective. But it is still not a true counterpart, and I don’t think you can have the same dialogue you can have with a group of humans.
**Tim: **You can’t. And the other part of this that’s key, Ross, is that the dialogue you want to have with it is, “I want to get better, I want to improve something.” Normally, within social behavior, when you do that and begin to disagree, you create other subgroups of people who believe your particular idea, and you develop a whole group and ideology around that, and that becomes its own area of development; you get tools, you get other kinds of technologies that way. The problem is we’re creating something very linear. There’s no divergence.
The key to intelligence is divergence. It’s not to affirm what’s there; it’s to push away from it. Diversity is key to our biological health, and it’s the same with intelligence: human beings, by nature, have it ingrained in them to move away from what they’re told to do. If we make it easier to be like everybody else, we’re diluting and diluting our intelligence in that process. In a way, the internet was a form of AI, because it was all of these subgroups working as a collective intelligence. Understanding how to come back to that with these hugely transformative augmenting tools like generative AI is what we need to be moving towards.
Ross: Yes, I love what you’re saying: intelligence is diversity, or is grounded in diversity. And that can be one of the useful functions. Asking the AI for diverse perspectives on particular situations is one of my favorite tools, and I go, “Oh, I hadn’t thought of it that way.” That is something additional, but it is complementary to my cognition. It’s not the machine doing the intelligence; it might come up with something random and useful, but I’m the one who’s assessing it. It is useful to get those perspectives.
But this comes back to how we can do this as well as possible. To your point, there’s conceivably a homogenization of thought potentially emerging from this. So how can we use these tools to augment, to amplify, or perhaps even better, to extend our thinking, as opposed to having it channeled into narrower and narrower conduits?
**Tim: **One fundamental step is recognizing what tasks to give AI. AI can be incredibly valuable because human beings have biases, and in decision making they will begin to believe things that run against the decision that needs to be made. There is a rational area for AI; there are rational things we need to do. But when we start mixing the rational with the creative, we muddle the two together. We should be focused on the interpreting, the deciphering part, upping our game in terms of our analysis ability, as opposed to taking what AI is giving us as analysis. Because it isn’t analysis; it’s just an output of whatever might be flawed in our existing analysis and the input we put into it. We need to be a bit better at that.
Ross: So how do we get there? How do we get better at that analysis?
**Tim: **It’s a topic that is very hot within the intelligence community, which has this thing called structured analytic techniques, but nobody uses them. The idea is that you need structure. The intelligence community has a methodology for structuring analysis, for being able to say what we are doing ourselves and what we are handing off to machines and computational analysis. But then we go back and say, “Well, I trust my gut on this, and I’m just going to go with my gut.” We have to recognize how much faster things can go wrong if we don’t slow down and structure it. So structure, I would say, Ross, is critical, which goes back to culture mapping, which is essentially a way of structuring language. If you were to talk about any one concept, people would say, “Oh, I know what you mean.” Culture mapping says: wait a minute, no, there are many other implied meanings beyond what you think I’m talking about. Understanding that structure is important, because then we start recognizing why certain things go wrong in society. I can give you one example right now: we have the existential threat of climate change, and over the last 10 years we have developed programs like ESG and all these different kinds of language around it. And in doing that, we’ve created a counterpoint.
Right now, there is as much of an ideological movement against everything we’ve created for dealing with the sustainability issues of climate change, and you can’t simply battle it. There are people who fight against 15-minute cities; there are people who fight against vaccinations; I was just reading today that anti-fluoridation is back in the United States. The irrationality of human beings comes out against any science. And it’s not that people are simply wrong; it’s that something that should be very structured has been led in an emotional way. You’ve packaged it and expected that everybody would believe you and come around, but actually, change happens socially. To deal with the future, you have to understand all the different ways things are going to change as externalities and contexts change. People are going to have different kinds of ideological responses, and you need a structure to say: I have some scenarios for how that might play out. We saw a lot of these signals in the recession, and they were really clear, but nobody was paying attention to them. Then the pandemic hits, and suddenly they become obvious. It’s the structure you need in order to put solutions in place and understand what you’re dealing with, because you’re dealing with different groups of people. Everybody isn’t the same. Everybody’s not going to believe what you believe, and you have to deal with that variability within society.
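To make the structuring point concrete, here is a hypothetical sketch of one structured-analysis habit: before acting on a single loaded term, enumerate the divergent meanings different groups attach to it. The term and framings below are invented examples, not a real coding scheme from the intelligence community or from scenarioDNA.

```python
# Hypothetical example: force the analyst to read out every implied meaning
# of a loaded term before treating any one framing as "the" meaning.
TERM = "sustainability"

framings = {
    "policy":   "regulatory compliance, ESG reporting",
    "activist": "systemic change, degrowth",
    "skeptic":  "elite overreach, costs imposed on ordinary people",
    "industry": "efficiency, brand positioning",
}

def structured_readout(term: str, frames: dict[str, str]) -> None:
    """Print each group's implied meaning so divergent framings stay visible."""
    print(f"Implied meanings of '{term}':")
    for group, meaning in frames.items():
        print(f"  {group:>8}: {meaning}")

structured_readout(TERM, framings)
```

The design point is the discipline, not the code: the readout makes the counterpoints explicit before a decision is packaged emotionally for everyone at once.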
Ross: So this pulls us towards collective intelligence, or group intelligence. Part of the subtext here is that intelligence, as you say, is a social function far more than an individual function. And with these kinds of existential challenges, these more and more complex challenges we have, we do need to build collective intelligence that is superior to individual intelligence; that has to be the path to our collective future. So in that case, how can we best build the best of human, and particularly human group, intelligence, augmented or supported by some of these new tools?
**Tim: **For us, it’s being able to see the subcultures that make up society. We think of society as a monolith, or we think of it as demographically based, divided by age or divided by ethnicity. Really, it’s divided by cultures, by particular relationships to subcultures, whether you’re directly related to them or not. For anything that’s changing within society, the point of culture mapping is that there is always this process of affirming the codes of society, and there is always a counterpoint happening. As language becomes static and becomes the rule and the law, there’s always that counterpoint. So the task is to understand what that response is, not to capitalize on it or commercialize it, but to understand it. I’ll give you an example from work we did a little over 10 years ago, looking at bicycles and cities. The bicycle as a machine was understood as a leisure product that is sold everywhere. But the subcultures of cycling that had been living under the surface were telling you how cities needed to be planned. They were telling you how cities needed to function, how adaptable they needed to be, not just in mobility but also in things like food; you start getting other codes of behavior that go with that. And different researchers during the pandemic studied skateboarders and made connections between understanding skateboarders and helping people age in place.
One of the biggest issues globally is that we’re living longer, and it’s very difficult for people living into their 80s and 90s to stay in the home they’re in, because the city or town they’re in isn’t planned that way. Do you understand adaptability? Do you understand how things need to function? You need to look at those parts of the culture that are telling you how things need to change. It was the same thing with our mobile devices. We didn’t have VPNs, but some subcultures were telling us that privacy was an issue with technology, while technology companies were telling us, “What are you so worried about? Why can’t we put a camera on everything? Why are you so freaked out about that?” The subcultures were telling us, even with things like the right to repair.
I mean, why can’t I fix my phone? All of those things are there all the time, but we don’t pay attention to them. A collective intelligence embraces the full range of what society is, and doesn’t force it to conform to the kind of model that we currently have. With demographics, everybody fits within a certain box. We’ve had that model since the 20th century; it shapes polling and so many of the decisions that society makes. But we can see it’s giving us worse and worse results; it gets it wrong more and more of the time. Why? Because people don’t fit nicely in those boxes anymore, the speed of change is so fast, and the range by which people are influenced is so much broader because of technology. We have to understand the full breadth of society to be able to do that. That’s what a living foresight model is, and that’s what collective intelligence is to me.
Ross: Well, to your point, what I take from that is that the traditional framing of collective intelligence is you put a bunch of individuals together and you architect ways in which, together, they can be more intelligent. But perhaps the units that you are working with are subcultures. You might have a group of people who think a particular way, another group who think quite differently, and another group who think in completely different dimensions. Linking together those subcultures, each representing a frame on the world, a way of perceiving things, a way of sensemaking, is how true collective intelligence can emerge, rather than looking at it as an aggregation of individuals.
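One way to picture this reframing is to treat subcultures, rather than individuals, as the nodes of a collective-intelligence graph, linked by the vocabulary they share. The toy sketch below uses invented subculture names and word sets; it illustrates the idea under those assumptions and is not a method either speaker describes.

```python
# Toy sketch: subcultures as nodes, linked by vocabulary overlap (Jaccard).
import networkx as nx

vocab = {
    "cyclists": {"bike lane", "cargo bike", "traffic calming", "repair"},
    "skaters":  {"curb", "plaza", "DIY spot", "repair"},
    "preppers": {"off-grid", "resilience", "storage", "repair"},
}

G = nx.Graph()
for a in vocab:
    for b in vocab:
        if a < b:  # visit each unordered pair once
            jaccard = len(vocab[a] & vocab[b]) / len(vocab[a] | vocab[b])
            if jaccard > 0:
                G.add_edge(a, b, weight=round(jaccard, 2))

# Stronger edges suggest subcultures whose framings are easiest to bridge.
for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: {data['weight']}")
```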
**Tim: **Yes, I think we think in these boxes too much. For example, everybody’s talking about the future of work right now, and there are really practical issues related to that, because it comes down to: what is an office for? We have all this real estate that the pandemic suddenly whacked, and you go, “Well, what am I going to use that for?” We’re going through this nihilistic phase where we’re going to force everybody to come back and we’re going to surveil them. Okay, good luck with that. Underneath the surface, I’m telling you, there are other ways in which people are sharing intelligence and solving problems. There are many different companies that have tapped into the intelligence within video games, for example, into how people solve problems there, and have brought that into how tasks and problem-solving are done within a company. And then you start dealing with even more abstract issues, like the relationship between physical space and virtual space, and realizing there isn’t a separate physical or virtual anymore. We’re dealing with the emergence of something called fourth space, where we’re digital and physical at the same time. Who understands that first? Who can give me a framework for that?
Well, I need to be able to tap into those particular groups, because that’s going to tell me what an office should be. It’s not going to look anything like what we currently have. It may be a park, or it might be a mall, because what do people do when they go to work? They communicate. More and more of work is becoming a kind of grazing, more than the idea of a meeting: shorter creative conversations, and then we go and do our tasks. One of the things the pandemic taught everybody is: what the hell is nine to five? Why do I have to work five days a week if I can get all my work done in two days? The idea of time has changed. All of these concepts are constantly changing in society, and we’re not keeping up with the cultures that define what they mean. The subcultures are the groups that are ahead, and we need to understand them, because the rest of society then pulls that in, like they did with privacy. The average person didn’t understand privacy; it’s very abstract. But they go, “Who’s ahead on this?” and they start pulling in those behaviors, and those behaviors start becoming normalized, and they become habit, and habit becomes culture. That’s the issue. So studying that relationship between the general culture and subcultures is really critical.
Ross: Is there anything you would like to finish with, any advice or suggestions for listeners, based on your work and what you’re seeing?
**Tim: **I would say that there’s a lot of opportunity with generative AI. I’ve been teaching a course in trend analysis for going on 20 years now, into which I have integrated generative AI. It’s about recognizing how we can integrate these tools so that we are not simply replacing what we do. What I’m hopeful for is that there’s a great opportunity for a kind of renaissance in education, in defining what kinds of skills we need, and I think these tools can be incredibly valuable in doing that. So I would say: recognize what the potential is, and don’t forget that we should be raising the bar as human beings in terms of what we consider to be intelligence and what we consider to be creativity at this particular moment.
Ross: I 100% agree. So where can people find out more about your work, Tim?
**Tim: **You can go to scenariodna.com. I also have a blog related to a class I teach, analyzingtrends.com, which I have not posted to as much lately, but there is that as well.
Ross: Fantastic. Thanks so much for your insights and all of the work you do.
**Tim: **It’s great talking to you, Ross.
The post Tim Stock on culture mapping, the culture of generative AI, intelligence as a social function, and learning from subcultures (AC Ep44) appeared first on amplifyingcognition.