
EP.32 [EN] - Alex Svanevik: A Deep Dive into Nansen's Services and some DAO Discussion / 深入探讨Nansen的数据服务及一些DAO的讨论

2021/7/12

51% with Mable Jiang, Presented by Multicoin Capital

Chapters

Alex Svanevik discusses his background in artificial intelligence and management consulting, and how he transitioned into the crypto space after discovering Ethereum in 2017.

Transcript


Hello and welcome everyone. I'm Mabel Jiang, the host of 51%, a podcast series presented by Multicoin Capital. This show is an exploration of blockchain's rapid development across Asia, with a particular focus on the perspectives, communities, and operators based in China. My goal is to bring Eastern perspectives to the West and Western perspectives to the East, so you can better understand crypto's unique market structure and how these distinct communities think and operate.

This podcast will feature a mix of English and Chinese discussions. The language you're hearing now will be the language I use for the rest of the podcast. Stay up to date with my latest episode by subscribing to this podcast. Thank you for listening.


Hello everyone, welcome back to 51%. This is Mabel Jiang, your host. Today we are super honored to have Alex Svanevik from Nansen join us. Hi Alex. Hey Mabel, great to be here. Thank you. Before we jump into the discussion, I'd like you to introduce your background and how you got into crypto.

Sure. Yeah, so I'm Alex Svanevik. I'm the CEO and one of the co-founders of Nansen, a blockchain analytics platform. My background is in artificial intelligence. I've worked a bit more than 10 years in that area since graduating from Edinburgh University back in 2010.

So basically, after I graduated, I started a small company, an AI consultancy, and this was in 2010. So it was a bit perhaps too early before the whole AI hype had started. But we did some interesting machine learning based projects and things like that.

And then I quickly learned that I wanted to understand business and strategy better. So I went into management consulting for a few years, worked in a pretty diverse set of industries such as seafood, luxury retail, but also more traditional ones like banking and insurance.

And then after a couple of years in management consulting, I wanted to go back to sort of the data and AI roots. And so I took a position as a data scientist in a media group in Europe. And so I was there for four years, ended up leading a team across London, Barcelona and Oslo. And then in 2017, I discovered Ethereum.

I had known about Bitcoin for many years, but to be quite honest, it didn't really resonate with me. Ethereum was the first blockchain-related thing that I got really excited about. It only took a few months from when I aped into my first Ether until I actually just left my job and started working full time in crypto. So I moved from Barcelona to Hong Kong

to lead a data team at a startup there in Hong Kong. And then that startup didn't quite work out for various different reasons. And in 2019 is when the roots of Nansen came about, when myself, my co-founders Evgeny and Lars started working on Nansen. And it's been quite a ride until now.

Nice. Interesting. So let's jump into talking about Nansen. You said you guys started in 2019. I wonder how the product evolved, because I figure right now you have a pretty strong focus on the smart tags. Is that what you call it? But I figured back in 2019, there wasn't that much on-chain data or that many transactions on-chain. So could you walk me through the evolution of the product?

Yeah, so I would say actually the roots of Nansen go back to 2017 because...

The sort of genesis story of Nansen is that I was trying to basically do analytics on on-chain data myself back in 2017. And I wanted a way to query the data in a seamless way. And I couldn't find a good solution. So I actually started working with someone else to try to build a system that would ingest on-chain data and make it easily queryable through a normal relational database.

And as I was working on this, I came across Evgeny's open source project, Ethereum ETL. And I think I literally discovered his project a week after he had published it on GitHub. So I was probably one of the first users of it. And so I reached out to him and I said, hey, we're trying to do the same thing, but it looks like you're way ahead of us. And I'd love to collaborate somehow with you.

And somehow I managed to get him to join the company that I had just joined as well in Hong Kong. And he ended up joining my team there as a data engineer.

And so the agreement with that company was that we could continue working on that as an open source project, but we would still leverage it for the company there as well. And so effectively, we were thinking later on, when we had left that company, that we wanted to find some kind of commercial project that could sustain that open source project over time. And one of the things that people were always very interested in, when we spoke to them about blockchain data, was information about addresses.

And so we figured maybe we can keep the base layer of ingesting on-chain data and so on open source, but we can build products on top of that base layer by having enriched data and a really good analytics platform on top of it. And so that's the part that started in 2019 when we had sort of thought about this for a while and experimented a bit. We figured that we should keep that base layer open source,

But we want to make really good and rich data on top of that. And, you know, the best on-chain analytics that you can find in the market and make that available to crypto investors.

So that was kind of the background. I would say before Nansen, most of the labeled or enriched on-chain data that existed came from the AML-focused companies that were giving this data to regulators, law enforcement, tax authorities, etc.

But the focus of Nansen has always been to basically give really good data analytics to the market participants in crypto. So that's been our main focus. And yeah, so since 2019, we started building sort of the Nansen product itself. We launched it in April 2020, right after COVID had really set in. And then it's been growing very, very rapidly since then.

Oh, that's perfect. That's exactly when the pre-DeFi summer kind of took off, like April 2020.

Yeah. So I don't know if that's, you know, there's definitely some degree of luck there. But we certainly had, yeah, the timing was definitely very good. I was using the product myself a lot. And I think that was one of the keys to the success of Nansen that we were actually power users of our own product.

Right. Yeah, I was wondering, because back in 2019, I mean, I'm sure there were a lot of people in the core Ethereum community using DeFi, but compared to today it was still a relatively small group. So I was wondering who was using it. But if you said April 2020, that makes a lot of sense,

which is very interesting. You mentioned the commercial aspect of it, because that's what I actually wanted to ask you about. For anything that is open source, people often think it's hard to monetize. Is that why you guys decided to offer this subscription-based business model?

Yeah, that's pretty much the thinking. So I don't think there are a lot of great examples of how to monetize open-source-based businesses. One company you could look at is the company behind WordPress, right? Automattic. There as well, WordPress is open source, but they, of course, have a more convenient hosting service on top of it.

So that's an interesting example. But in crypto, typically when people do open source, they tokenize. That's the way you try to, instead of monetizing in a traditional way, add some sort of token. And the idea is that the token will somehow accrue value, and simply copying the open source code is not enough to disrupt that business. And I think that hypothesis has been tested a few times with SushiSwap and Uniswap, for example. But in our case, you know, with information, it's a little bit harder to have a fully open source model, because you're basically monetizing information, right? And if it's totally open source, it's not clear how you can keep monetizing it without someone else simply copying it.

So we figured, you know, let's have the base layer be open source when it comes to the on-chain data and the ingestion of that. That means we can also have other people, you know, collaborate and contribute to that project.

But then any sort of added value on top of it, and we think there's a lot of value that's added on top of it with Nansen, people will hopefully be able to pay. They will not just be able to pay, but they will be happy to pay because the value of that information is much higher than the cost of it.

This is interesting. It kind of reminds me of back in the day, I think 2018, when there were a bunch of centralized Ethereum on-chain query services. And then in December 2018, you had The Graph emerge.

A lot of people were questioning at that time, like, why is there a moat for The Graph if you can do it easily through centralized services? Do you see some sort of similarity in this situation here? And do you think there's a possibility that someone launches a network doing almost exactly the same type of services, but using decentralized networks?

Yeah, it's not clear to me how you can retain that. If you think about what Nansen does, right, that's information directly. I think with The Graph, it's almost more like convenience. Like if you're a developer, you would use The Graph because you can just tap into their infrastructure and you can build a front end that relies on their subgraphs and so on. And so I think that makes a lot of sense. There's a pretty clear...

cost component to it, where you just save a lot of money using The Graph instead of having to build everything yourself. I think with something like Nansen, it's a little bit more unclear how you'd be able to decentralize it fully and have the token retain value without the information leaking elsewhere. And so my thinking has been that

ideally, if you're going to tokenize, you want a way to also decentralize things. But if you decentralize, it has to kind of be open source, right? And then if it's open source, how can you protect that information if the idea is to have it be accessed through a token or something like that?

That said, I'm not 100% sure that's the only way to think about it, because there are clearly other projects out there who have tokenized and they're not open source. FTX has FTT, Binance has BNB, right? And these are not open source businesses, but they've still tokenized. So, you know, I've come to realize that tokenization and decentralization are not necessarily linked; one does not imply the other.

But, you know, I would openly say that we haven't found a really good tokenization model for Nansen that I'm happy to jump on. But we also haven't ruled it out completely. So, yeah.

If we do find a model, or if someone else finds one, then there is a chance that we might want to decentralize Nansen fully and tokenize it as well. But as of now, I think the benefit of not having a token, and not having to worry too much about decentralization at this stage, is that we can focus 100% on the product. Because if you have a token and you have a product,

Basically, you're innovating in two dimensions at once. And both of those dimensions are really hard to innovate in. So...

For a product like Nansen, which is continuously evolving, it just makes more sense from a resource utilization perspective to focus all our resources on the product and then have just a standard SaaS business model as the foundation. And then who knows, if we come up with a tokenization model later on, maybe we can implement that. That makes perfect sense. So what are the current services that you have right now?

Yeah, so the main product we have, which is kind of what you get if you either do a $9 trial for seven days or if you just buy a standard subscription, is basically a user interface, a platform where you can go in and you can effectively do three things at a high level. You can discover new opportunities. So that means you can look at dashboards like our gas tracker, hot contracts, which shows flow of capital into recently deployed smart contracts,

And all of this, you know, with a very high degree of labeling that goes on. So I think maybe we kind of glossed over that, but the key thing with Nansen is that we have on-chain data, but we also have more than 90 million addresses that we've labeled up with metadata. That can be behavioral data, but it can also be entity data. So you know that this address belongs to this specific VC fund, this mining pool, or this exchange,

or this DeFi protocol, et cetera. And so in our data, you can see and discover new opportunities because you see capital flowing into smart contracts. You see which smart contracts are actually consuming a lot of gas because people are using them.

And then once you have discovered these opportunities, you can do due diligence on them as well. So you can plug in a token, you can look at a specific wallet or a smart contract, and you can see who else is putting capital into this and you can get a breakdown of that.

So that's been extremely useful in the DeFi craze and DeFi summer. If there's a new smart contract that people are yield farming, you want to figure out very fast who else is yield farming this opportunity because that can be a pretty good proxy for whether or not it's worth spending any time on. So it's actually a really quick filter to understand, hey, maybe this is worth looking more closely at.

And so in addition to our labeling of addresses, we also classify addresses as what we call smart money.

And smart money, the concept there is that instead of looking at all 150 million plus addresses that exist on Ethereum, you can boil it down to the couple of thousand addresses that you should be looking at, because they are really good at what they do: they're doing arbs, they're doing early-stage token investments,

They're doing yield farming at high APY type yield farms, etc. And so we distill these addresses and we say these are addresses that you might want to follow. You don't necessarily want to do exactly what they're doing always because they can have very high risk tolerance.

But when you do due diligence with our platform, you can see these smart money addresses sometimes being active in a specific opportunity. And that can be a sign that, hey, you might want to look at this more closely. On the contrary, if you don't see any smart money in a project, that can be an indication that perhaps this is not worth your time.
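As a toy illustration of that due-diligence filter, the check boils down to intersecting a contract's depositors with a curated smart-money set. Everything here is hypothetical: the addresses, the labels, and the function name are invented, not Nansen's actual data or API.

```python
# Hypothetical sketch of the smart-money due-diligence filter:
# intersect a contract's depositors with a curated smart-money list.
# All addresses and labels below are made up for illustration.

SMART_MONEY = {
    "0xaaa": "early-stage token investor",
    "0xbbb": "high-APY yield farmer",
    "0xccc": "arb bot",
}

def smart_money_signal(depositors):
    """Return the smart-money addresses found among a contract's depositors."""
    return {addr: SMART_MONEY[addr] for addr in depositors if addr in SMART_MONEY}

depositors = ["0xaaa", "0x111", "0xccc", "0x222"]
hits = smart_money_signal(depositors)
# Two smart-money depositors: maybe worth a closer look.
# Zero hits would suggest the opportunity is not worth your time.
```

The signal is deliberately coarse: it is a quick filter for where to spend attention, not a buy signal.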

And then the last thing, so there's discovery, due diligence, and then what we call defense. So these are the three Ds of Nansen, if you will. And defense has to do with your portfolio, you know, being protected.

And so you can set up smart alerts where you get notifications if certain events take place on the blockchain. So, for example, there's a large holder of ABC token and suddenly they withdrew the ABC tokens from a staking contract. Well, you'd probably want a real-time notification for that, because they might be about to dump these tokens on the open market.
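A smart alert of this kind can be thought of as a predicate over a stream of decoded transfer events. This is only a sketch: the event fields, addresses, and threshold are invented for illustration, not Nansen's actual implementation.

```python
# Hypothetical sketch of a smart alert: scan a stream of decoded
# withdrawal events and flag any watched whale pulling more than a
# threshold out of a staking contract. Event shape is made up.

WATCHED = {"0xwhale1", "0xwhale2"}   # large ABC holders we follow
STAKING_CONTRACT = "0xstaking"
THRESHOLD = 1_000_000                # ABC tokens

def alerts(events):
    """Yield an alert message for each large withdrawal by a watched address."""
    for ev in events:
        if (ev["from"] == STAKING_CONTRACT
                and ev["to"] in WATCHED
                and ev["amount"] >= THRESHOLD):
            yield f"ALERT: {ev['to']} withdrew {ev['amount']} ABC from staking"

events = [
    {"from": "0xstaking", "to": "0xwhale1", "amount": 2_500_000},
    {"from": "0xstaking", "to": "0xsmall", "amount": 5_000_000},
    {"from": "0xstaking", "to": "0xwhale2", "amount": 10},
]
fired = list(alerts(events))
# Only the first event trips the alert: the second is from an
# unwatched address, the third is below the threshold.
```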

So basically that's what you do on Nansen.

And all of this fits into the sort of standard offering we have. We also have the Alpha subscription, which gives you a few more dashboards in addition to things like more smart alerts. You get CSV download capabilities and so on. But perhaps the most valuable part of it is that you get access to a social network of other Alpha customers.

And so these are, at the time of this recording, about 100 very active crypto investors, both smaller crypto funds and individual crypto investors, what you might call whales, who are active in the crypto markets. And this is very nice because it complements the on-chain data.

The way I see it when it comes to crypto investing, in order to be successful, you need to at least have two things. You need to have really good data or information sources, to speak more generally, and you need to have a really good social network. And so the Alpha tier was created basically to complement the on-chain data with a kind of 100x leverage on your own social network.

And so people who join that tier, typically they are very active in early stage token investments, yield farming and so on.

And yeah, so those are the two main offerings. We also have a VIP offering that sits in between. And at the moment, we're also growing out an institutional tier, which will have things like API access and more bespoke solutions that we will build for institutions that want to be more active in DeFi and crypto.

Got it. So let's say an individual goes into Nansen and subscribes to the most basic plan. Is there a dashboard or some sort of ranking board for them, because they might not know anything about what people are looking at or what's really trending? Is there a way for them to see what other people are looking at right now, what's hot right now,

and then trace down those tags or categories afterwards? How does that work? Yeah. So my recommendation to anyone who starts using Nansen is to go directly to the dashboards that we have categorized under Discovery, because with those dashboards, you don't need to have any context on what tokens

you should be looking at or which DeFi protocols you should be looking at. They just give you straight-up rankings of highest gas consumers, capital inflow of the hot contracts, et cetera. So that's where you start. You go straight to Discovery. You click through all those dashboards. And the nice thing about Nansen, too, is

you know, we've tried to create a really holistic experience where the dashboards kind of link between each other. So if you see an address and you say, hey, this address is very active, you know, I want to look more closely at it. You just click the name of it and you get drilled down on that specific address and you can see, you know, what else this address has been doing. So I would definitely start with any of the discovery dashboards. That's the first thing you do.

And I will also say that right now we're working on making the actual landing page, the homepage of Nansen, more personalized and more engaging, because frankly it's really simple and not very useful at the moment. When you log into pro.nansen.ai, the first page you see is just a list of dashboards. But we're going to make that homepage itself more of a hub, so that when you start using Nansen, you will know exactly where you should go. And so that's the second thing, but you have to wait a couple more weeks until we roll that out.

Understood. So let me dive deeper into the tagging part, because I think that's actually very interesting. How do you generate the tags for different addresses? Who takes the lead on defining the criteria?

Yeah, so we have our own attribution team. So we literally have a part of the company that is 100% focused on labeling itself. And I think that's the only way you can do this in a successful way. And what they do is they try to combine man and machine, right? So you try to get the human intelligence of analysts who understand this domain,

with automated and algorithmic approaches. And in fact, more than 99% of all the labels we have in Nansen are actually automatically generated.

Of course, they're generated because our analysts have created these algorithms after studying transactional patterns, etc. But it's really a wide variety of methods we use. And in that sense, I think when it comes to labeling, we are definitely an aggregator of sorts. So we don't rely just on one technique.

You know, we do everything from literally reading press releases of investment rounds and matching that up with transactional patterns that we see going into a seed round or any other form of investment round. We look at patterns where you have overlap in investments. We can sometimes get information from governance proposals that are out in the public,

and how different addresses vote on these governance proposals. And so a lot of these things are quite manual; you need humans to do it. But then on the other side, sometimes you discover patterns. For example, if you understand how an exchange works, you can effectively create heuristics that tag up millions of addresses because of how they transact with certain key hub wallets.

So if you know how Binance works and how they manage their wallets, you can create algorithms that tag up user deposit wallets for Binance. And that generates millions of tagged addresses for Binance, and similarly for other exchanges. And then there are more deterministic heuristics you can make. So it's very easy for us (at least conceptually it's easy; practically it might be difficult for people who don't have the same infrastructure)

to tag up, say, every Uniswap pool, right? Because we can just look at smart contract event emissions.

You look at the Uniswap factory, and every time someone creates a Uniswap pool (this is talking about V2; V3 is a little bit different), you can just look at the smart contracts that are being generated through the factory, for V2 Uniswap or SushiSwap. Then you can use information about which tokens they were created for and basically label like that, right? So it's really a mix of different methods.

We also scrape some data from APIs that are out there, and of course we have to reshape the data that we find. There are certain projects that might have governance endpoints where you can pull information about addresses. And then there's basic web scraping, which we don't actually do very much, and I think we could do more on that front.
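The factory heuristic for V2-style pools can be sketched as a pure function over decoded `PairCreated` events. The event fields below are simplified stand-ins for the real ABI-decoded data, and all addresses are invented.

```python
# Hypothetical sketch of the factory heuristic: every PairCreated
# event emitted by a V2-style factory names a new pool contract and
# its two tokens, so each event deterministically yields a label.

def label_pairs(pair_created_events, token_symbols):
    """Map each new pool address to a human-readable label."""
    labels = {}
    for ev in pair_created_events:
        # Fall back to a truncated address when the token symbol is unknown.
        t0 = token_symbols.get(ev["token0"], ev["token0"][:8])
        t1 = token_symbols.get(ev["token1"], ev["token1"][:8])
        labels[ev["pair"]] = f"Uniswap V2: {t0}-{t1} pool"
    return labels

events = [{"token0": "0xweth", "token1": "0xusdc", "pair": "0xpool1"}]
symbols = {"0xweth": "WETH", "0xusdc": "USDC"}
labels = label_pairs(events, symbols)
# -> {"0xpool1": "Uniswap V2: WETH-USDC pool"}
```

Because the factory emits one event per pool, replaying its event log labels every pool ever created, with no manual work per address.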

But yeah, that's kind of a high level description of how we do the labeling. Got it. I was wondering, because earlier we were talking about tokens, and I thought that was actually somehow relevant to this.

So how do you think about having permissionless contribution to labeling versus your attribution team does that? Because I guess like there's pros and cons, and I'm sure you have thought about the kind of comparison. Yeah, that's a really great question. It's something we think about a lot.

My feeling is to kind of follow the Compound model, where you start off pretty centralized and you have a good degree of control, and then progressively you try to decentralize this effort through incentives. And so maybe a lot of people don't know this, but actually our labels aren't only created by Nansen team members. We also have what we call a Scouts program,

where, for example, people who, for whatever reason, find it too expensive to acquire a Nansen subscription themselves, but who are really interested in blockchain and crypto and DeFi (typically they could be students or something like that), will join our Scouts program and actually help us do labeling,

because we've built out really good internal tools where they can make submissions, and then we have the internal team review the submissions. So there's a degree of quality assessment, because that's probably the main challenge, right? If you try to crowdsource something,

you can't just be fully permissionless and ingest any form of data that anyone submits. You need to either have some kind of incentive structure or some form of quality assessment. And so these scouts,

you know, not only do they get access to the top-grade versions of our product, which is very valuable in itself, but they also get monetary incentives for the contributions they make. So if they make a lot of labels and submit a lot of labels, they will literally be paid out in cash or in stablecoins for their contributions on a monthly basis.

So that's one part. Actually, in the beginning, it was only myself doing labeling. Literally all the manual submissions were me.

And then it became the broader team, right, when we started hiring people. And now we've made it to the Scouts program, where you also have people who aren't technically Nansen employees, but they're in the outer circle of the Nansen community. And I think we'll continue to expand outwards. But the key thing is to make sure that the quality is high enough. And so for that, you need to have internal quality assessment, dashboards, metrics, and so on,

and you need to have really good tools that these people can use, so that they can be certain their submissions are likely accurate. So that's how we're approaching it: we're trying to gradually decentralize when it comes to submitting the data.

I think if anyone considering doing the same thing as us jumps too fast ahead on this, they're going to shoot themselves in the foot, because you will just end up with tons of garbage labels. As soon as you get some bad labels in there, the negative effects are going to compound, because there's a really interesting network effect in labeling.

If you see three addresses interacting, right, and you have labeled A and B, but you don't know what C is, you can use the information about A and B to sometimes infer what C is. But if the labels of A and B are wrong, you can infer the wrong label, right? So as soon as you get really bad data, it starts polluting your whole data set, and effectively you will get compounding negative network effects.

On the contrary, we want to be on the positive side of that. Obviously, we want to have really high quality labels and get a compounding positive effect. And so that's why we're quite cautious on not decentralizing this process too fast.
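The A-B-C inference described above can be sketched as majority voting over a neighbor graph, and the same sketch shows why one wrong input label propagates. Function names and labels here are hypothetical, not Nansen's actual method.

```python
# Hypothetical sketch of the labeling network effect: infer a label
# for an unlabeled address from the majority label of its labeled
# counterparties. A wrong input label propagates to its neighbors,
# which is why quality control matters.

from collections import Counter

def infer_label(address, edges, labels):
    """Guess an address's label from the majority label of its neighbors."""
    neighbors = [b for a, b in edges if a == address] + \
                [a for a, b in edges if b == address]
    votes = Counter(labels[n] for n in neighbors if n in labels)
    return votes.most_common(1)[0][0] if votes else None

edges = [("A", "C"), ("B", "C")]            # A and B both transact with C
labels = {"A": "binance deposit", "B": "binance deposit"}
guess = infer_label("C", edges, labels)     # -> "binance deposit"
# If A's label had been wrong, C would inherit the error, and anything
# later inferred from C would compound it.
```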

Understood. This is interesting. Yes, quality control, admittedly, is definitely the hardest part. I've been thinking about this as well. I've thought about introducing a slashing mechanism, but at the same time, on the one hand, you obviously want to incentivize people to contribute to the on-chain

credentialing, like basically anyone who participates in the credentialing definition process. But on the other hand, you also need a very strong slashing mechanism to make sure that everything is very high quality. So it seems to me that at the early stage, when you do not have a very well-established mechanism, you probably have to do it in a more centralized way.

And that's interesting. I think also you have to consider kind of what information do you need access to in order to do labeling successfully? So there's kind of two scenarios here at the two extremes of the spectrum. One is that you have direct access to all of our existing labels.

which gives you the most context possible, and that prepares you to do labeling really effectively. The other extreme is that you have zero contextual data and you just submit data from whatever you find on Etherscan or something like that. And if you wanted to decentralize that process, in the former case,

when you have access to all the information programmatically, how do you ensure that someone doesn't just download that whole database and run off with it, right? So that doesn't quite work in a totally decentralized way, unless they have very strong incentives not to do that, or there are some mechanisms that prevent it.

But if you don't need to give people the sort of added information context, they can just, you know, add labels based on the information that they have at hand, then it might be easier to decentralize it. That makes sense. So let's switch gears a little bit, talk about your recent new funding round. Congratulations to that. Could you talk a little more about the round? Like, how are you going to use the proceeds?

Yeah, so this is our second funding round. We did a seed round that was led by Mechanism Capital and Skyfall in October last year. And then this recent Series A round was led by A16Z. And we raised $12 million in this latest round.

The thinking is that we want to be able to grow the team. That's the primary use of the funds here. We're not a DeFi protocol. So a lot of the stuff we do actually needs humans involved, and especially on the attribution or labeling side. So we want to grow the headcount to 60 people this year. We're currently at 30. So that's one big cost driver for us.

The other one is that we actually have pretty high computing costs. We scan tons of data every day, and we're also scaling out horizontally to different chains, both separate blockchains but also side chains and Layer 2 solutions. And all of that adds to the cost, with regard to scanning the data to the extent that we do.

And then the third thing is we want to make sure that the Nansen community grows. And so there are traditional marketing and branding expenses related to that. But we also want to make sure the community has a really strong organic component, and so we want to fund various different activities to grow the Nansen community.

I think actually the Nansen community is interesting, because it's not necessarily very visible to people who aren't in it, due to the closed nature of alpha leaks and information and things like that. So, for example, our community of Nansen Alpha customers is really strong, but it's quite small. And of course, all the information is kind of behind closed doors in our program.

Similarly, the Scouts program is pretty small, but it's a really tight-knit group of people who are helping us make better data. But these things we want to scale up, right? And so you need the capital to invest in those programs. So those are, at a high level, the three main things: growing the headcount, investing more in our infrastructure, and growing the Nansen community.

That makes sense. We'd love to chat about the extension for the other chains. Do you have any specific ideas like what exactly you're going to prioritize? Yeah, so this is a very interesting strategic question, right? So

We've discussed this at length internally. And, you know, again, you can think of extremes. You can say, hey, we're just going to stick to Ethereum L1. You know, that's where we have started. That's been the main focus for us so far. Or we can say, you know, let's basically aim to have like every major blockchain supported. I think the approach we're taking is...

I wouldn't even say it's in the middle; it's more towards the first option. And so the key focus for us right now is the broader Ethereum ecosystem. That means typically we will prioritize EVM-based chains, side chains, and L2s. So one of the first ones that we've added support for is Polygon. And there are kind of two reasons for that. The first is...

the cost perspective. So it's just easier because the data looks very similar to Ethereum L1. Like we can reuse a lot of our internal components. But more importantly, that's where we've seen a lot of organic activity.

in terms of using that chain. So that's probably what we weigh the highest. But I think interestingly, because it is in what I would consider the broader Ethereum ecosystem, a lot of our users are also on that chain. And it's pretty easy for someone to bridge over from Ethereum L1 to Polygon.

And so that actually also has some interesting implications for our attribution work. So we can actually continue to use a lot of our labels for addresses because people very often use the same address on Polygon as they use on Ethereum. And so, you know, the tagging work actually has a nice kind of scaling effect on these EVM chains.

we can continue to reap the benefits of all the labeling work when we add more EVM chains that are connected with the same address system and so on as Ethereum. So Polygon has been the first one. I think that's been a good choice for us. And then Binance Smart Chain is a chain that our customers have been screaming for for a long time, and we're very close to having integration for that ready.
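The label-reuse point above can be illustrated with a tiny sketch. All addresses and labels here are hypothetical, not real Nansen data; the point is just that EVM chains share one address format, so a label earned on Ethereum L1 carries over for free:

```python
# Minimal sketch: because EVM chains share the same address format,
# a label assigned on Ethereum L1 can be reused when indexing
# Polygon or BSC activity. Addresses and labels are made up.
labels = {
    "0xabc0000000000000000000000000000000000001": "Smart Money",
    "0xabc0000000000000000000000000000000000002": "Heavy Dex Trader",
}

def label_transfer(chain: str, sender: str) -> str:
    """Attach the cross-chain label to a transfer, whatever the chain."""
    tag = labels.get(sender.lower(), "unlabeled")
    return f"[{chain}] {sender[:10]}... -> {tag}"

# The same address carries its label on every EVM chain:
print(label_transfer("ethereum", "0xABC0000000000000000000000000000000000001"))
print(label_transfer("polygon",  "0xABC0000000000000000000000000000000000001"))
```

The lookup is identical regardless of chain, which is why adding another EVM chain compounds the value of the existing tagging work.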

And then the approach we're taking for L2s is: first of all, we don't know if there's going to be kind of a winner-takes-all on L2s. It could be that you also have a very fragmented picture on L2s. So the thinking is, we expect that we will have to do some integrations that probably won't be used very much,

because, let's say, all the activity goes to Arbitrum, or all the activity goes to Optimism. We don't know that right now. So, similarly to how Bill Gates wanted to fund the manufacturing lines of eight different COVID vaccines before anyone even knew if the vaccines were effective, we have to do the same thing with supporting these different L2 solutions. We kind of have to add support for all of them and then double down on where we see the organic activity taking place.

So that's how we're thinking about expanding. We're primarily looking at the broader Ethereum ecosystem. I think one chain that is interesting, but kind of a challenge is Solana. And we have users asking for this, but it's a completely different beast and it generates an insane amount of data. I think in the order of 100 gigabytes of data per day.

And so we do have the resources to do that, but it's also a massive investment. And it's also, you know, it's completely different from Ethereum. So this one is a bit up in the air at the moment. We're investigating it, but I'm not 100% sure it's a chain that we're going to support in the very near future. But we're certainly keeping an eye on it. And of course, you know, the sort of organic usage of that chain is the main thing that drives our decision making around it.

Of course. Any other notable upcoming plans in the next one or two quarters? Like you also mentioned something around the enterprise APIs. Anything else? Yeah. Yeah. So definitely APIs, institutional APIs is a big one that we're focusing on. We decided to spin that out as a separate product team.

So we're going to be focusing a lot on that. We do have some pilot customers, some of the biggest crypto funds in the space who are using it in the early stages.

And we're also doubling down on the alpha program. So we want to make that more useful and more holistic so that it's not only about analytics, but we want to make sure that perhaps the alpha program could be a way for people who don't necessarily have that initial social network to get access to early deal flows, for example, when it comes to early seed rounds and things like that. Because

Nansen is really well connected in the crypto space. And so why shouldn't our best customers reap the benefits of that? So that's another thing we're looking at. It's kind of at the ideation stage now. We're exploring different options, but that's one thing we want to investigate. And then on the product side, as we just talked about, we're adding more chains. We're improving the homepage so that it's easier to find out where you should actually go when you start using Nansen for the first time.

And we're going to add much more personalization into the product. So it'll be easier for you to navigate stuff that you care about, like the tokens that you are generally interested in, wallets that you want to save and monitor, perhaps even segments of wallets that you have identified.

You should be able to make your own custom labels in addition to the Nansen labels, so that you can have private information that you don't necessarily want to share with us, but have it visually available to you in the platform. So those are a lot of the improvements we're going to be doing on the user experience in the product itself. Do you guys record what users usually search for and then take that as part of your database as well?

We definitely do not surface it to other customers because that information is obviously very sensitive. The data does get logged.

So that's kind of a part of how all websites work with regards to analytics and capturing behavioral data. But it is compartmentalized. So that data is very sensitive, obviously. And our privacy policy dictates that we can't use that data. We definitely can't use that data on an individual basis or try to even do labels based on what people search for and that kind of stuff.

That's something we take very seriously. And it is something that we have to both think about very carefully and be very cautious about, but also communicate to our customers, because obviously we label wallets, right? And that's almost the bread and butter of Nansen at this point in time.

And we want to make it very clear that we would never use any customer data that way. So if you're paying us with crypto, we would never use the information about the wallet that's sending funds to us to label it. Obviously. I mean, to me it's obvious, but maybe it's not obvious to everyone. And of course, we also don't use any of your user activity data on the platform for labeling either. That's part of the privacy policy of the company, and it's something that we take very seriously.

I see. Yeah, definitely not on a personal level, but it's good to know that the user's data are also being protected. Sorry, just to clarify one thing as well. So we do have a case where, for example, I think you asked earlier about which tokens are other people looking at.

And so this is something that we will likely introduce on the homepage at an aggregate level. So the key word here is aggregate. For example, if you go to the search bar on CoinGecko, before you start typing anything, it says "trending searches", right? So that kind of information is something we have specified in our privacy policy that

We can use it for aggregate statistics, but obviously these statistics would never reveal any individual user's information and it would only be the macro picture of what people are searching for. Got it.
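The aggregate-only idea can be illustrated with a small sketch. The data and function names here are made up for illustration, not Nansen's actual pipeline; the point is that the user column is dropped before anything is counted:

```python
from collections import Counter

# Minimal sketch: search queries are only ever surfaced as aggregate
# counts, never tied back to the user who made them. Data is invented.
search_log = [
    ("user_1", "UNI"), ("user_2", "AXS"), ("user_3", "UNI"),
    ("user_4", "LDO"), ("user_5", "UNI"), ("user_6", "AXS"),
]

def trending(log, top_n=3):
    # Discard the user column before counting, so the output
    # contains only the macro picture of what people search for.
    counts = Counter(query for _user, query in log)
    return counts.most_common(top_n)

print(trending(search_log))  # [('UNI', 3), ('AXS', 2), ('LDO', 1)]
```

Only the ranked counts ever leave the function, which is the same shape as a "trending searches" widget.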

Yeah, I have two more open questions. So one is about the tagging. Now you have smart money, but obviously we are seeing more and more Web3 applications beyond just DeFi. So have you thought about launching new category tags? Like, one idea: I noticed that you're a big fan of Axie Infinity.

And there are a lot of on-chain activities there, obviously, daily active users and whatnot. Will there be something dedicated towards any Web3 applications on-chain behavior?

So we do have some other categories. For example, I think the very first category we created, which has kind of been memed quite a bit, is the Dex Trader categories. So things like Medium Dex Trader, Heavy Dex Trader, Elite Dex Trader. And we also have NFT collector labels. So things like Uncommon NFT Collector, Rare NFT Collector and Legendary NFT Collector.

So we are constantly making these new types of categories. And in fact, smart money itself is kind of composed of different sub components. So you might be a smart liquidity provider. If you have generated high yields on capital, you might be...

what we call a Flash Boy, which means that you're able to do flash loans and do arbitrage with them. That can qualify you as a smart money address.

So yeah, we are constantly working on adding new categories. And I think one part that's interesting about Nansen data is that it's a combination of entity data, where you can say this address belongs to a specific entity, and this behavioral data, as you called it, where you can create behavioral categories and tag up addresses based on that.

And it would be quite nice if you could filter based on these. So imagine you can say, hey, I want to get alerts every time an Elite Dex Trader acquires this new token.

And so that's where it gets really interesting. And I think we will certainly invest a lot more time into both improving the smart money tags that we have, because there's a lot of cool things we can do to make it even better. But we're also going to ship new category tags. It's definitely not going to be restricted only to smart money tags.
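The kind of alert filter described above could look something like this sketch. The tags, addresses, and token name are all invented for illustration, not an actual Nansen feature or API:

```python
# Hypothetical sketch: only fire an alert when an address tagged
# "Elite Dex Trader" acquires the token you are watching.
WATCHED_TOKEN = "NEWTOKEN"

tags = {
    "0x01": {"Elite Dex Trader"},
    "0x02": {"Medium Dex Trader"},
}

def should_alert(buyer: str, token: str) -> bool:
    """True only for watched-token buys by Elite Dex Traders."""
    return token == WATCHED_TOKEN and "Elite Dex Trader" in tags.get(buyer, set())

print(should_alert("0x01", "NEWTOKEN"))  # True: an elite trader bought it
print(should_alert("0x02", "NEWTOKEN"))  # False: not an elite trader
```

Combining an entity or behavioral tag with a token filter is the whole trick: the tag set does the work of defining "smart" once, and every alert rule reuses it.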

And then one last kind of interesting question that I've been thinking about, in terms of the alpha product. When people are in those groups, I kind of imagine that it's like a dark forest, as in everyone in that group is looking for some sort of alpha opportunity.

And then they would not want to share a lot of the new things they've found, because obviously if someone else front-runs them, they might lose the alpha opportunities. How do you think about that, though? I thought this might be something interesting to discuss. Yeah, it's a great point. So I think here we can contrast the 2017-18 ICO boom with the DeFi summer boom

and the current DeFi season, if we can call it that. Maybe we're out of season right now. In 2017, 18, people would basically fill and shill. So they would buy a token and then they'd go on social media and tell the world that everyone should buy this token because they're pumping their own bags.

With DeFi, especially yield farming, it's actually the opposite. So if you put money into a yield farm, you want to make sure that you get as much yields as possible. As long as that pool is kind of zero sum, there's like a fixed amount of tokens being distributed to the farmers.

So this is a challenge because it's true. That's where the alpha leak meme comes from. You don't want to share the alpha with others because it will dilute your yields. But I think it's been pretty interesting to see with our alpha group that as long as people feel that if you're on a call with, say, 19 other very active DeFi traders or investors,

If you see that, hey, if I give up this one tip, but I get 10 really useful tips back,

that's actually a pretty good trade-off. So the key thing is to make sure that no one's just free-riding off of these different sessions that we have, both the calls and the Discord group that we run. You have to make sure that that's the culture, and it's more of a cultural thing than something you enforce.

But honestly, this is one of the things we were uncertain about before we started the program. We didn't know if people would even want to share. But it's been quite cool to see how people realize that, you know, if I share, that sets the tone for other people to share as well. And if you get 10 really hot tips back from sharing one useful tip with the others, you're probably gaining in net terms.

So it is a challenge, but I do actually feel like this is surprisingly, this has not been a big problem so far in the alpha program. But it is also one reason why we don't want to scale up the alpha program infinitely. We do want to keep it manageable. And in fact, we had a waiting list for it for a long time because we didn't want to get too many people in too fast. We wanted to kind of scale it up in a managed way.

But it is a really good point. And currently, I feel like we found a good balance for it. But it's primarily driven by just, you know, fostering a culture inside of that community where people see that, you know, sharing one piece of information might give you 10 useful pieces of information back.

That's certainly very, very interesting. Good to know. Yeah. Thanks for sharing about the Nansen. I'd like to switch gear a little bit towards your own involvement in some of the DAOs. So you're involved in PleaserDAO and LidoDAO. What are they? Yeah. So LidoDAO is effectively the biggest, what you could call decentralized staking pool for Ethereum 2.0.

It's taken that position pretty fast. And I've been basically a member of that DAO from the beginning. And it's structured as a DAO in the sense that all of the key decision making is made through the Lido token.

And for example, if we wanted to onboard a new node provider to the network, that's something that is subject to a vote. Or if you wanted to increase the number of validators that they can run for the network, that's also up for a vote.

And so, actually, I'm just checking the data now. If you look at the top address in terms of ETH deposited into the deposit contract, Lido is almost level with Kraken, which admittedly has several addresses. Kraken has

597,000 Ether deposited and Lido has 596,000, so they're actually almost identical in terms of how much Ether has gone into the pool. But it's a huge project, and it's quite formally governed through a DAO structure, which builds on the Aragon DAO structure but has been modified slightly to fit the needs of Lido.

And, you know, at this point, it's been growing organically. They've been very smart by integrating and creating incentive structures with liquidity providers like Curve, which means it's by far

the most liquid way to do staking on ETH2. Because, you know, the way it works at a high level is that you basically deposit Ether into Lido and you get an stETH, or staked Ether, token back.

And this token accrues yield from ETH2. But it's liquid. So if you want to exit, which you cannot do if you run your own validator, you can simply sell the stETH token back into the pool and get Ether back again. And so the key thing is to have very deep liquidity. That's how you can build a moat: you want to have the deepest liquidity so that the slippage is very low for entering and exiting

this staking pool. And so they've certainly taken the lead on that. And as I said, the way it works in terms of a DAO is really on key governance decisions. You have to run those through the DAO and there's a vote that gets decided through the token holders effectively.
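The deposit, yield, and exit flow described above can be sketched as a toy model. The class, numbers, and rates here are all illustrative assumptions, not Lido's actual contracts or economics:

```python
# Toy model of the liquid-staking flow: deposit ETH, receive stETH
# that accrues yield, and exit by selling stETH back into a liquidity
# pool instead of waiting for withdrawals. All figures are made up.
class LiquidStakingPool:
    def __init__(self):
        self.steth_balances = {}

    def deposit(self, user, eth_amount):
        # 1 ETH in -> 1 stETH out (simplified)
        self.steth_balances[user] = self.steth_balances.get(user, 0.0) + eth_amount

    def accrue_yield(self, rate):
        # stETH balances grow with staking rewards
        for user in self.steth_balances:
            self.steth_balances[user] *= 1 + rate

    def exit(self, user, slippage=0.001):
        # Sell stETH back for ETH; deep liquidity keeps slippage low
        steth = self.steth_balances.pop(user)
        return steth * (1 - slippage)

pool = LiquidStakingPool()
pool.deposit("alice", 10.0)
pool.accrue_yield(0.04)          # 4% rewards accrued
eth_back = pool.exit("alice")    # ~10.39 ETH after 0.1% slippage
print(round(eth_back, 4))        # 10.3896
```

The `slippage` parameter is where the moat argument lives: the deeper the pool's liquidity, the closer that number sits to zero, and the cheaper it is to move in and out.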

So that's LidoDAO. PleaserDAO is quite different. So PleaserDAO was very grassroots created to acquire one specific NFT

which was pplpleasr's NFT for Uniswap V3. So when Uniswap V3 launched, pplpleasr made this amazing NFT to sort of celebrate that launch, and some of us really wanted to get that NFT, but we realized that it would probably be pretty expensive.

And so we decided to pool funds together. And this happened very fast. I was involved. I think I was maybe the fifth or sixth member to join, committing some amount of Ether to say, hey, I'm happy to pool this. And the idea was simply to buy that NFT. And to make a long story short, we ended up winning that auction. We bought that NFT, and a lot of great people

entered this DAO and became members and figured, hey, we should keep doing this and make this kind of like an art collection DAO, where we can acquire really iconic NFTs and also support what we think are good causes. Because, you know, the proceeds of that first auction all went to charity.

So you're raising money for charity, but you're also doing something that's really cool and new when it comes to owning an NFT in the decentralized way that PleaserDAO does. So effectively, it's about acquiring NFTs and supporting great artists, and also, in many cases, donating funds to charity through these different auctions. That's the high level.

And there have been a lot of cool pieces. There's an Edward Snowden piece that was created, which we acquired. So yeah, that's the high level. The way the governance works in PleaserDAO is very different from LidoDAO. In LidoDAO, it's more formal because there's a lot more at stake, right? You're managing a lot of Ether that has been deposited into ETH2. With PleaserDAO, it's a bit more informal. We use a Snapshot page to do basically off-chain voting, but still through our token holdings.

And it tends to be more fast-moving when it comes to key decisions that have to be made. But the key decisions are still all subject to governance votes. It's just a bit more informal in the way it's implemented. Interesting. PleaserDAO sounds to me like JennyDAO, if you have heard of it. Sorry, which one?

JennyDAO, the one. Oh yeah, yeah. Sorry. That could be. I'm not intimately familiar with Jenny, though, but I've heard the name. And yeah, there are a few other cool DAOs out there that are collecting NFTs as well. That's true. Got it. So how would you categorize the existing DAO space? I figured that when people talk about a DAO, they were using it as a very loose term.

That's right. It could be some sort of SaaS governance platform. It could be some organization that you're actively involved in and

Obviously there are also some, I guess, tooling-type services, and also consulting services, and people kind of categorize all of that into the DAO category. So what are some of the building blocks and primitives that you see we need in order for an application or protocol to easily operate in a DAO form? So I think Aragon

was a great pioneer in this area, because they had incredible foresight to build sort of the hub for any DAO. And so if you just look at the modules that you have in Aragon, that's a pretty good starting point. So things like treasury management, or like a finance module: you need some way to manage the DAO's finances in a formal way.

You need to have a voting component so that you can make decisions. You need to have some kind of token issuance component, so you can manage the monetary policy of the DAO itself and sort of share issuance. And then there's maybe one thing that is not part of Aragon natively, and was probably never the intention either: the whole communication and coordination layer,

which, you know, obviously happens off-chain for the most part. Back when Aragon started, I think Discord was not as big as it is now, but Discord has been, I think, a very welcome piece of infrastructure for the crypto space with regards to actually doing governance in practice and having discussions. There's also a lot of use of forum boards.

So there's the communication part as well. The downside, of course, of something like Discord is that it's centralized, and it might not be that easy to keep a good track record of how certain decisions were made. That's where forums are a bit better. But I think those are the key components. So basically, to reiterate: some ways to manage finances, some ways to make decisions,

some ways to control the share issuance or the monetary policy, if you will, of the DAO and then a communication and coordination layer on top of that. You could maybe also add

code and sort of, you know, the ability to deploy code and also have revision history. So there have been some interesting projects to try to kind of decentralize GitHub. But of course, many DAOs produce code. So that's another thing that you could perhaps consider one of the core components or building blocks that you need to have DAOs that work well. Right. So you got to an interesting point. I wanted to see what your opinion is on that.

So you mentioned there's the forum, there's Discord, and then obviously sometimes people also use Telegram. There's always a certain level of fragmentation of information that I've found kind of hard to track. For example, Sushi, or maybe Yearn is also a good example. They all have their own forum built up, and then they have their Snapshot, and then Discord.

So then whenever I wanted to find something, I have to go to all these different places and then try to collect information about one specific project, like specific proposal that they have, and then try to get myself into a full picture. Do you see this as a continuous trend? Or I should say, do you think it's a better way to kind of just keep it separated? Or do you think there should be one...

maybe you can call it a super app, that can solve all of these problems? I definitely think that someone should explore making the super app, the communication layer for DAOs. That's a project that I would probably try to fund myself. It is an unsolved problem, and every single DAO has this problem.

And in fact, it could be that you could extend it even further than DAOs and think about startups and so on. How do you reconcile relatively informal communication and get a track record of it with, say, key decisions that were made and so on and so forth? So I totally agree that this is very scattered at the moment.

And there's probably room to create some kind of product or platform that makes it better. At the same time, it could also be that it's a pipe dream to get the perfect solution here, because it might just be too difficult, or you might have to break it up into different components. There are some interesting attempts in this area, like Tally, and then DeepDAO as well, which is more of an analytics platform.

But I think something like what Tally is doing, where you try to get the proposals in one place, is quite interesting. They should probably have some kind of informal chat layer as well, although at the moment that effectively is Discord, right?

And then, just to touch on another thing briefly: I think there's kind of an interesting friction between transparency and privacy. Because I think in an ideal world, if you want a well-functioning DAO, it should be 100% transparent.

And that also means that the discussions that take place around the DAO, the ones that can lead to certain decisions being made, we should have a record of those. Right? Kind of like the same way you have a record of your politicians putting forward certain points in parliament. You should have a similar thing. But many people might not be comfortable with having every word they write in some chat app or Discord be recorded forever.

And so it seems to me that maybe the only way out here is an increasing trend of people taking on anonymous but persistent identities online, so that they cannot be tracked down to their actual person in real life, but you have a persistent identity over time that is active in these different governance communities. But, you know,

Obviously, I'm a bit biased because it's a good thing for Nansen if people have persistent identities online and they're active in governance discussions and so on. But I strongly believe that the most successful DAOs will very likely be the ones that are very transparent and where it is easy to get information. And that to me, it's

The consequence is like you need 100% transparent track record, like end to end of all the activity that goes on in that DAO.

You raise a very interesting point. I definitely see that there's consistently a contradictory force: on one hand, people want to stay anonymous. But on the other hand, obviously, if someone wants to have on-chain, you may call it, credentialing history or credit history, they would want to use a consistent identity,

because your contribution to one protocol can be leveraged, or seen as almost like collateral, for you to get kind of premier access to another protocol. You actually, technically, can do that. So I think you point out a very interesting point.

I think the reality is people will want the latter more because that serves the larger population's purpose. But then people who would want to stay anonymous, then there will be more effort towards that. So I think there will definitely be something interesting going on there.

Yeah, I think there's kind of a Hegelian process going on here, where you start out with some people thinking, hey, I'll just use one address for everything I do. And then you have the antithesis of that, which is, oh, people can track me on the blockchain, so now I'm just going to create a new address for everything I do. And so you get the scattered identity, which is kind of meaningless.

And I think now we will sort of arrive at the synthesis, which is: hey, I'm going to use one address, but I'm going to be much more cautious about how I link it to a specific identity. And most likely, many people will take on anonymous identities, but they will be persistent. And that's very well aligned with a public blockchain. And I think actually one of the reasons they are going to do this is that more and more

people are going to reap benefits of their own on-chain history.

And there have been plenty of examples of this already in the last year, with people airdropping to what you might consider productive addresses, addresses that are a net positive for the ecosystem. You know, we actually get a lot of teams approaching us and asking, hey, can you guys help us find the best investors, the ones that are kind of long-term

thinking so that we can send them free tokens for our new project. And that's an incredible asset. You've just turned your address into an asset that you can actually reap benefits of.

for a really long time. And why would you want to abandon that by just creating new sort of addresses all the time? It seems much more logical and rational to maintain that identity, but instead perhaps make it pseudonymous or make sure that it doesn't link back to your real identity. Yeah, it's interesting because recently Perpetual Protocol and I think some other protocols were doing this diamond hand investor badge thing.

which exactly ties back to what you were saying. Yeah, 100%. Right, right. I think this is a perfect place to end our podcast. It was a great chat with you, Alex. Thank you for joining us again. Yeah, thanks for having me, Mabel. Nice talking to you. Yeah, likewise.