
CodeSandbox with Ives van Hoorne

2024/12/4

Software Engineering Daily

People
Ives van Hoorne
Topics
Ives van Hoorne: CodeSandbox is an online development environment that lets developers work on complete web development projects in the browser. It follows the same collaboration idea as Google Docs and Microsoft Word, aiming to simplify code sharing and collaboration. Its core functionality is a code editor and a preview window, turning input in the code editor (on the left) into output in the preview window (on the right). CodeSandbox grew out of the founder's difficulty sharing React code within a team, together with inspiration from online collaboration tools like Figma and Google Docs. The first version used JavaScript, Create React App, an Elixir backend, a Postgres database, and a Redis cache. To support larger projects, CodeSandbox introduced the DevBoxes feature, which runs projects in virtual machines and retrieves files over a WebSockets connection. To improve the performance and scalability of DevBoxes, CodeSandbox adopted Firecracker, a virtual machine technology that allows VMs to be paused and resumed and new VMs to be created from snapshots. To address security concerns, CodeSandbox secures DevBoxes through several measures, including separate Unix users, a jailer, and detection heuristics that counter abuse such as cryptocurrency mining and phishing. To fight phishing, CodeSandbox added a warning to preview pages alerting users to potential risks. On the frontend, CodeSandbox renders the preview inside an iframe to isolate the user's application and improve security. It executes code by parsing it, transforming it, and running it with the eval function. To improve the user experience, CodeSandbox offers suggestions for installing missing dependencies and handles a number of common errors. Dependency installation is handled by an AWS Lambda service, with the download process heavily optimized. The code editor initially used CodeMirror, later migrated to Monaco (VS Code's code editor), and supports extensions by emulating a Node.js environment. To display type information, CodeSandbox runs VS Code extensions in the browser on top of an emulated Node.js environment and file system. CodeSandbox now downloads tar files directly from the NPM registry to fetch type files, reducing bandwidth costs. In the future, CodeSandbox may move to ES modules and Service Workers to improve how code is bundled and executed. The Firecracker VM technology has broad application potential and could be extended to CI/CD systems and other areas.

Josh Goldberg: Hosted the interview, asking questions and guiding Ives van Hoorne through the technical details of CodeSandbox.

Deep Dive

Key Insights

What is CodeSandbox and how does it compare to traditional development tools?

CodeSandbox is an online development environment that allows users to start web development projects directly in the browser. It is similar to Google Docs, where users can share a link to a live, editable environment, eliminating the need for local setups.

Why did Ives van Hoorne start coding?

Ives started coding at around 10 years old to create a program that translated a secret language he and his friend used to write letters in class. This was his first interaction with programming, using Visual Basic to create a simple translator.

How did Ives van Hoorne's early coding experience influence his later work?

Ives' first coding project involved creating a program with input on the left and output on the right, a concept that mirrors the core functionality of CodeSandbox, where users write code on the left and see the result on the right.

What challenges did Ives face when transitioning from graphic design to coding?

Ives initially disliked graphic design because it was too subjective, with clients often requesting designs he didn't agree with. He found coding more appealing as it felt like solving puzzles with clear solutions, though he later realized coding also has subjective elements.

How did Ives van Hoorne's experience at his first job influence the creation of CodeSandbox?

At his first job, Ives struggled with sharing React code snippets over Slack, as it was difficult to debug errors without a live environment. This frustration led him to envision an online code environment where users could share running code easily, similar to Google Docs.

What was the initial tech stack of CodeSandbox when it launched in 2017?

The initial tech stack included Create React App for the frontend, Elixir with the Phoenix framework for the backend, Postgres for the database, and Redis for caching. The entire platform was hosted on a $20 VPS with 2GB of RAM.

How did CodeSandbox scale its database to handle half a billion files?

CodeSandbox stored all files in a Postgres database, with each file represented as a row. Despite handling over 500 million files, the database queries remained efficient, with sandbox loading times under 100 milliseconds due to well-indexed columns in Postgres.

Why did CodeSandbox introduce DevBoxes, and how do they differ from sandboxes?

DevBoxes were introduced to support larger projects that exceeded the 500-file limit of sandboxes. Unlike sandboxes, which run entirely in the browser, DevBoxes run on a server-based VM, allowing for full-scale development with features like lazy file retrieval and VM snapshots.

How does CodeSandbox handle security for its DevBoxes?

DevBoxes use Firecracker, a VM technology developed by AWS, which allows for secure execution of user code. Each VM runs as a separate Unix user, and a jailer ensures code remains within its environment, preventing unauthorized access to the host system.

What challenges does CodeSandbox face with crypto miners and phishing attempts?

Crypto miners often abuse CodeSandbox by running miners on VMs, while phishing attempts involve creating fake login pages. CodeSandbox uses detection heuristics and AI to identify and block malicious activities, though it remains a constant cat-and-mouse game.

How does CodeSandbox execute user code in the browser?

CodeSandbox executes user code by parsing it to create a dependency graph, transpiling the code, and then using JavaScript's eval function to run it. The require function is overridden to handle dynamic imports, creating a loop that allows for efficient code execution.

How does CodeSandbox handle dependency installations for NPM packages?

CodeSandbox uses an AWS Lambda service to install NPM dependencies, creating a dependency graph to determine which files are needed. These files are then bundled and cached in an S3 bucket, allowing for fast retrieval and installation of dependencies.

How does CodeSandbox integrate VS Code into its editor?

CodeSandbox uses the browser version of VS Code, emulating Node.js in the browser to run VS Code extensions. This allows users to have a familiar VS Code experience with features like type information and extensions, all within the browser environment.

What is Ives van Hoorne's vision for the future of CodeSandbox's infrastructure?

Ives envisions generalizing the Firecracker-based infrastructure to support not just development environments, but also CI/CD systems and deployments. This technology could significantly reduce setup times and improve parallelization in CI/CD pipelines.

Why does Ives van Hoorne enjoy playing volleyball?

Ives enjoys volleyball because it allows him to disconnect from work and focus entirely on the game. It is also a highly competitive and physically demanding sport, requiring short bursts of energy and strategic thinking, which he finds both challenging and rewarding.

Chapters
CodeSandbox, founded in 2017, is an online development environment allowing users to work on web development projects directly in their browsers. It's compared to collaborative document editors like Google Docs, enabling easy code sharing and collaboration. The core functionality involves a code editor on the left and a preview on the right, mirroring the input-output process from the founder's early programming experiences.
  • Founded in 2017
  • Cloud-based development environment
  • Allows users to work directly in their browsers
  • Core functionality: code editor on the left, preview on the right

Shownotes Transcript


CodeSandbox was founded in 2017 and provides cloud-based development environments along with other features. It's quickly become one of the most prominent cloud development platforms. Ives van Hoorne is a co-founder at CodeSandbox. He joins the show to talk about the platform. This episode is hosted by Josh Goldberg, an independent full-time open source developer.

Josh works on projects in the TypeScript ecosystem, most notably typescript-eslint, the tooling that enables ESLint and Prettier to run on TypeScript code. Josh is also the author of the O'Reilly Learning TypeScript book, a Microsoft MVP for developer technologies, and a live code streamer on Twitch. Find Josh on Bluesky, Mastodon, Twitter, Twitch, YouTube, and joshuakgoldberg.com.


- Ives, welcome to Software Engineering Daily. How's it going? - It's going great. Thanks for having me. - Well, thanks for coming on. We're really excited to have you. I've personally used your product quite a lot, from job interviews to sandboxes to demos. Could you give us a brief introduction to what CodeSandbox is? - Yeah, so CodeSandbox, simply said, is an online development environment. You can start a new web development project on CodeSandbox and you can work completely in your browser.

I tend to compare it with Google Docs and Microsoft Word, where if you're writing a document in Microsoft Word and you write something on your computer, but if you want to share it, that becomes harder. And so people have been using Google Docs more where they can just share a link to a website where they write together in an environment. And we wanted to build the same thing with CodeSandbox.

Fantastic. Before we dive into that and the future of code sharing, I want to dial back a little bit and talk about you as a developer, as a person. How did you first get into programming? So that's a long, long time ago. I initially started with coding, not because I wanted to learn coding, but because I had a need. I think I was about 10 years old or 11. I'm not sure how old I was, but

A friend of mine and I, we had a secret language and we tended to just write secret letters to each other in class. So we would write secret characters and then the other would have to translate it. And that was a lot of fun, but I wanted to go faster. And that is when I started looking into whether it would be possible to create a program.

And that was my first kind of program, a Visual Basic program. It was really just two text boxes: if you put something in the left text box, it would translate it and put the solution, or the translation, in the right text box. And it would also work the other way around. And we had a kind of Dutch Facebook at the time, and we would send public messages to each other with this secret language.

That was my first interaction with coding. It was quite challenging. And to be honest, after that, I didn't code for a long time. Only when I started to look into gaming and mods, I started to do coding again. But that was my first interaction. There's a humorous point here: your first form of coding was some kind of input on the left and some kind of output on the right. And you're still doing that decades later. Yeah.

Yeah, that is ultimately, it's the same thing kind of. Yeah, you have something on the left and it transforms it and puts it on the right. I guess you could say the same thing about Code Sandbox, which is very funny. And we have a lot of fancy things in Code Sandbox, but the core functionality and the most important functionality remains that you have a code editor on the left and a preview on the right that shows what the code is doing. That's the core functionality.

How did that core functionality come to be? Or how did you come to create the product in the first place? So I stopped coding for a while. But later on, when I was, I think, 17, I had done a lot of graphic design for a company. And I started to realize that I liked graphic design, but I didn't like...

doing graphic design for other people because they wanted me to design things that I didn't agree with. Things like, oh, can you make this fire yellow on this purple background? And I was 17 and a bit naive. And I thought, well, graphic design is not for me because it's so subjective.

So I started to move to coding thinking, oh, it's kind of like solving puzzles and there's only one solution to a puzzle. So there's no discussion, there is no subjectiveness in coding. I was wrong, in hindsight, but I still enjoy it.

But I started to learn web development. And initially, I created a portfolio website. And later on, I read in a newspaper, a local newspaper of my little village in the Netherlands, that there was a very cool new startup that was growing extremely fast. And I was thinking, I want to join this startup because there was not much happening in this village.

So after high school, I asked if I could join them for vacation work so that I would work over the holidays for them.

And they essentially said, well, initially they ignored me. They didn't want to. Well, the recruiter was a bit confused. An 18-year-old person would just ask to work there as a developer. But later on, they called me back after I called them a couple of times and they said, yeah, you can work here, but you have to work at least for a year. So would it be possible for you to take a gap year? And so I started working there and everything was in Ruby on Rails. And around that time,

a new technology called React became more and more popular. And I was intrigued by React. I started building more little pages in React and I was thinking, wow, our Ruby on Rails frontend feels a bit antiquated compared to how fast a single page application of React is. And I guess, again, I was a bit naive and I started to convert more and more Ruby on Rails pages into React. It's funny because

When I did that conversion, at some point, we wanted to test it and we put it live. And then the marketing department came to us because they were distraught. Suddenly 50% of their analytics were gone because I didn't think of it.

all those other business things, implementing analytics. Anyway, I'm going on a bit of a tangent, but at some point I realized that when I was working with my coworkers on React code, it was very hard to share work with each other. It was like whenever I was on vacation at some point and I got questions from my coworkers about a piece of code in React router, and they just sent me snippets on Slack and I had to decipher from my phone what was going on and what was going wrong.

and what the error was. And I didn't have like a JavaScript interpreter in my own head. So that's when I started thinking that it would be very cool if they could just send me running code.

And at the time, Figma became more popular. Google Docs became the default for a lot of people. And I started to think, would it be possible to have a code environment in the browser? And at the time, I was just writing down all my ideas for if I ever wanted to start a startup in the future. So I wrote this idea down, didn't do much with it, started going to university after my gap year. And then at the university, initially, I had a lot of fun drinking beer.

But that got boring after a while. We started to get lectures about object-oriented programming in Java. And I was already fully in the React world by then. I was like, why do I have to go through all this again? So that's when I started to work on a side project. And I just looked at my list of ideas, picked the top one, the latest one, which was the online web editor, and started to make the first design and sketch.

And then my friend Bas joined and we started working on it more and more and more. And then on April 2nd, not April 1st, we released the first version of Code Sandbox. That was initially why it got started. This was April 2nd, 2017?

Yeah, that's right. So that's been over seven years. What was the tech platform that you used in React land at first back in 2017? Oof, very different. Everything was in JavaScript, not even in TypeScript. We did use Flow types to type things with comments and everything. And that was very, very interesting too. The base application was in Create React App.

The backend was written in Elixir using the Phoenix framework because Elixir was this cool new thing that looks a lot like Ruby and Ruby on Rails, but was fully functional. So that was an interesting learning project. And it turns out that it scales really, really well. And then for the database, we use Postgres. And for some caches, we use Redis. So ultimately the stack was a Create React App frontend,

an Elixir backend, a Postgres database, and Redis as a second, in-memory database. And I deployed all of this to a VPS on Vultr, like a $20 per month VPS with two gigabytes of RAM. And it was quite surprising how well that scaled. It scaled for a year, I would say. At some point, we got to 500,000 monthly users, and it was still running on this $20 VPS on Vultr.

After that, we did move to Kubernetes and we moved our deployment to GCP, to Google Cloud Platform. The idea was that we could also do easier migrations that way and easier scaling. But it was intriguing to think that a solution as simple as this

works so well. I think the simplicity ultimately helped with scaling it. I remember when building the first version of CodeSandbox that I was very worried about saving files. Like if someone creates a sandbox, and they press fork, for example, then they get their own version, and the other person keeps their own version of the sandbox.

How would we scale all of that with all those files? Should we use something like S3? Should we use something like Dropbox? There were a lot of questions about how to store those files. And ultimately, after like a month of very, very advanced thinking of should we use Git and all those kinds of things, I decided in that moment to store everything in Postgres.

So every file would just be a row in a Postgres table. And when you press fork, we will just do a bunch of selects and a bunch of inserts in Postgres to copy all the files, no deduplication or anything like that.
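
As a rough illustration (not CodeSandbox's actual code), forking by copying rows could look something like this sketch with node-postgres; the table names come from the conversation, but the columns and exact SQL are illustrative assumptions.

```typescript
// Hypothetical sketch of "fork = copy rows" with node-postgres. The table
// names (sandboxes, directories, modules) are mentioned in the conversation;
// the column names and exact SQL are illustrative assumptions.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function forkSandbox(sourceId: string, newId: string, ownerId: string) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Copy the sandbox row itself under a new id and owner.
    await client.query(
      `INSERT INTO sandboxes (id, title, owner_id)
       SELECT $2, title, $3 FROM sandboxes WHERE id = $1`,
      [sourceId, newId, ownerId]
    );

    // Copy every directory and file row, re-pointed at the new sandbox.
    await client.query(
      `INSERT INTO directories (sandbox_id, path)
       SELECT $2, path FROM directories WHERE sandbox_id = $1`,
      [sourceId, newId]
    );
    await client.query(
      `INSERT INTO modules (sandbox_id, path, code)
       SELECT $2, path, code FROM modules WHERE sandbox_id = $1`,
      [sourceId, newId]
    );

    await client.query("COMMIT");
  } catch (error) {
    await client.query("ROLLBACK");
    throw error;
  } finally {
    client.release();
  }
}
```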

And the funny thing is we still use that system today. It never hit limits. And we now have, I think, over 80 million sandboxes and over 500 million files stored in a Postgres database. And this query is still within 20 milliseconds, and a sandbox can be loaded within 100 milliseconds, just doing a bunch of selects. The only things that we don't store in the database

are binary files. That was a bit too far. So those are stored in a Google Cloud bucket, and then we store a link inside the database to that file. But it was a reminder to me that often the simple solution works the best, either because it's just so simple that there are fewer race conditions or fewer things that can go wrong, but also because it's much easier to understand how it works. So yeah, that was the initial setup.

Before we dive into more on how that works, I want to take a moment to emphasize that point that you have half of a billion file entries in your Postgres database, and you're still able to load the core part of your site that involves potentially many file queries in a tenth of a second. That's an incredible scaling performance feat, no? Yeah, but I would attribute it all to Postgres. Postgres has...

exceeded my expectations time and time again. If you have a good index on multiple columns, then the performance is incredible. And the scaling is also incredible. I would choose Postgres for any database right now. The only exception would be for things like storing

a tremendous amount of data that is inherently tied to timing. So time series data. Then I would look at something like ClickHouse, but Postgres is for everything else an incredible solution. We'll see if we can get that quote on their homepage.

So let's continue that journey. On the very bottom or back of a code sandbox, you have files stored in a Postgres database and also links to binary large objects stored in Google Cloud. What is on top of that? How are those retrieved or what's the system around them? Yeah, so...

I think it's the easiest way to describe how everything is retrieved if we look at like the oldest version of CodeSandbox, like the initial version of CodeSandbox. Because otherwise we need to think about like all the permissions, billing and stuff that comes now in between. But the simplest, the first version of CodeSandbox, whenever you would retrieve a sandbox,

You would make an API call to our Elixir server. The Elixir server would go through a couple of checks. It would check if you have access to that certain sandbox. It would get your user from the cookie. And then it would run a query on the database. And that query is huge. And it has like 20 joins. That's one of the only queries that is handwritten, that is not generated by an ORM.

And that query, it looks at our modules table. That is where all the files are stored. And it looks at our directories table, which is how the files are linked to the different directories.

and it looks at our sandbox table. So it gets from the sandbox table, it gets the sandbox, then it gets from the directories table, it gets all the directories that are related to the sandbox, and then from the modules table, it gets all the files that have the same sandbox ID. And then based on all that information, it generates like a JSON blob that contains all the files of the sandbox itself and returns that.
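
A simplified sketch of that retrieval step follows. The real query is described as one large handwritten join; this version uses three separate selects against the same tables purely for readability, and the column names are assumptions.

```typescript
// Illustrative sketch of loading a sandbox and shaping the JSON payload.
// The real query is one large handwritten join; this uses three indexed
// selects for readability, and the column names are assumptions.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function loadSandbox(sandboxId: string) {
  const [sandbox, directories, modules] = await Promise.all([
    pool.query("SELECT * FROM sandboxes WHERE id = $1", [sandboxId]),
    pool.query("SELECT * FROM directories WHERE sandbox_id = $1", [sandboxId]),
    pool.query("SELECT * FROM modules WHERE sandbox_id = $1", [sandboxId]),
  ]);

  // One JSON payload with the sandbox plus all of its directories and files,
  // which is fine while sandboxes stay under the 500-file limit.
  return {
    ...sandbox.rows[0],
    directories: directories.rows,
    modules: modules.rows,
  };
}
```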

And you could argue that that's unscalable in the sense of it wouldn't work for huge projects. And that is completely right. CodeSandbox was specifically built for creating prototypes, creating small projects. That was the initial use case.

And so there was a limit of 500 files for a single sandbox. And if there's a limit of 500 files, then it's fine to return the whole contents of the sandbox. Now, at this point with CodeSandbox, we also have a second type of project, which we call dev boxes. So we have sandboxes for prototyping and dev boxes for development. And for dev boxes, we have a more, I would say, sophisticated way of retrieving files. You can lazily retrieve files so that you don't have to download the whole sandbox just to see what's going on.

That is in a nutshell how it works. Then we have Redis in between for simple, small things like caches, but also tracking page views. So that if you access a sandbox twice in the same hour, that it's seen as one page view instead of two. But yeah, that's how we retrieve sandboxes.

So that DevBoxes concept, that wasn't there in the first versions of CodeSandbox. When did that get added in? DevBoxes are now, I would say, approximately two and a half years old, maybe three years old even. When CodeSandbox launched, it was initially not very popular. I put it on Twitter. I had like 60 followers on Twitter. Most of my followers were high school friends. So I got three likes.

We started to become much more proactive with talking about CodeSandbox. I started to write blog posts about how it worked, started to directly talk to people if they created an account to just get feedback, all with the idea that, as Paul Graham said at some point, it's better to have 100 fans than 100,000 people that like you. And with that idea, we've tried to get to 100 fans by doing a lot of unscalable things.

Code Sandbox started growing and growing and growing, and people started to use it for things that we didn't imagine in the first place. It was initially built for the specific case that I had at work where you wanted to ask a question and you wanted to have a live example for that question.

But people started to use it for other things, things like job interviews, bug reports, documentation, also workshops where people learn how to code. And people started to use it to build new projects. Like they started to work on a new website, a portfolio website, for example, or they started to work on, for example, a new blog, or even some people started a new startup. Like, for example, there is one that I always found funny to...

Well, funny, it's more like proud. There is this whiteboarding tool called Excalidraw. I use it a lot. And the interesting thing is the initial version of Excalidraw, it was built on CodeSandbox, back when it was still called Excalibur. And it was sandboxes shared over Twitter. So people wanted to build real things with CodeSandbox. They wanted to build their portfolio website, things that they ultimately want to deploy online.

And while CodeSandbox worked really well for the things that were small, like job interviews or examples, it didn't really work for the big projects because of our 500-file limit. And that's when we started to look into...

If we could create like the same experience that we have with Code Sandbox for the smaller projects, but then for big projects. So still you should have the capability to share a link with someone and they can see the running code. They can see everything, how it works, and they can press fork to get their own version. And that is what Dev Boxes has become. It was kind of like a sort of rewrite of Code Sandbox because the core system, the file system changed underneath it.

And normally, sandboxes all run in the browser because they run small projects, but dev boxes run on the server. So we built a version of CodeSandbox that was really meant for full development. So how is it different, just thinking in the database context, how do dev boxes retrieve stuff versus the original layout? So in the case of dev boxes, we run a VM to run that project,

a virtual machine. So essentially a small server that runs the project itself. And inside that server, we run a process that can read from the file system. So now when you open a dev box, you don't connect to our API server to get the files. Instead, you connect via a WebSockets connection to a little server that runs inside that VM.

And then the editor can ask things. It can ask like, can you give me the contents of the file under the file path slash project slash hello.txt? And then it will return it. And so the whole API server was still there, but it was only there for validation, for authentication. But ultimately, the connection for getting the files and understanding what's going on within the project would be done by directly connecting to the server, to the VM itself.
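
A hypothetical sketch of that editor-to-VM request over WebSockets is below. The message shape (id, type, path) is an assumption, not the actual CodeSandbox protocol.

```typescript
// Hypothetical editor-side helper: ask the agent inside the DevBox VM for a
// file over WebSockets. The message shape (id, type, path) is an assumption.
type FileResponse = { id: number; type: "readFile"; content: string };

function readFileFromDevBox(socket: WebSocket, path: string): Promise<string> {
  const id = Math.floor(Math.random() * 1e9); // correlate request and response
  return new Promise((resolve) => {
    const onMessage = (event: MessageEvent) => {
      const message = JSON.parse(event.data) as FileResponse;
      if (message.id === id && message.type === "readFile") {
        socket.removeEventListener("message", onMessage);
        resolve(message.content);
      }
    };
    socket.addEventListener("message", onMessage);
    socket.send(JSON.stringify({ id, type: "readFile", path }));
  });
}

// Usage: lazily fetch one file instead of downloading the whole project.
// const socket = new WebSocket("wss://<devbox-host>/agent"); // hypothetical URL
// const contents = await readFileFromDevBox(socket, "/project/hello.txt");
```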

So that's not as scalable as the original sandboxes, where everything runs on the client and we're just a file and API lookup. It's not as scalable. Starting a VM for every user, for every project, it's a real challenge. And the last, at least the last four years, I've been

really, really deep in learning how to build efficient infrastructure. We went through multiple iterations of finding ways to run VMs efficiently. Initially, we tried Kubernetes, we tried Docker containers, but we felt like that was too slow. And in 2021,

I found a project called Firecracker. It was created by the Amazon team, by AWS team, because they use it to run AWS Lambda and AWS Fargate. And the really interesting thing about Firecracker is that it's a VM that can run code, but at any point in time, you can say, pause this VM, and it will literally just halt. It will not do any execution anymore. And then you can say, write your memory now to disk.

And then later on, a day later, for example, you can say, create a new VM exactly from this memory that you wrote to disk and it will continue exactly where it left off. It could be in the middle of like an operation where it's calculating Fibonacci sequence, for example, and it will just continue. It doesn't matter. That was so interesting. And it's very, I would say it's very similar to like, if you would close your laptop.

and you would open it a day later, it will also just continue. Even if you have a Next.js server running and it's in the middle of a compilation, you can close your laptop and a day later you can open it and it will continue exactly where it left off. But the interesting thing about this is one of the things that people felt with our initial version of CodeSandbox

with the server, was that it was slow. Because when you would open a project which hasn't opened in a long time, then you would have to wait for the create React app server to start. Maybe you need to run npm install. It could take a long time before you can actually see a preview. And also, when you press fork, then we would have to create a copy of that file system and we would have to do that same process again. And

And with this approach, we solved both of those problems at the same time. Because whenever someone would go away from a VM, we would just pause it and we would save the memory to disk. And then when someone later on, like two days later, would open that VM, we would be able to resume the VM exactly from where it stopped, from when it was paused, and it would resume in like one second. So that was problem one solved.
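
For a sense of what pause-and-snapshot looks like against Firecracker's API socket, here is a rough sketch. The endpoint paths and field names follow the public Firecracker API docs as best I recall them and may differ between versions, so treat the exact shapes as assumptions.

```typescript
// Rough sketch of hibernating a Firecracker microVM over its API socket.
// Endpoint paths and field names are recalled from the public Firecracker API
// docs and may vary between versions; treat them as assumptions.
import http from "node:http";

function firecrackerRequest(
  socketPath: string,
  method: string,
  path: string,
  body?: object
): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = http.request(
      { socketPath, method, path, headers: { "Content-Type": "application/json" } },
      (res) => {
        res.resume(); // drain the response body
        res.statusCode && res.statusCode < 300
          ? resolve()
          : reject(new Error(`Firecracker returned ${res.statusCode}`));
      }
    );
    req.on("error", reject);
    if (body) req.write(JSON.stringify(body));
    req.end();
  });
}

async function hibernateVm(socketPath: string, snapshotDir: string) {
  // 1. Pause the vCPUs so guest memory stops changing.
  await firecrackerRequest(socketPath, "PATCH", "/vm", { state: "Paused" });

  // 2. Write guest memory and device state to disk as a full snapshot.
  await firecrackerRequest(socketPath, "PUT", "/snapshot/create", {
    snapshot_type: "Full",
    snapshot_path: `${snapshotDir}/vmstate`,
    mem_file_path: `${snapshotDir}/memory`,
  });

  // Resuming (or forking) later means booting a fresh Firecracker process
  // and loading this snapshot, so the workload continues where it left off.
}
```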

And that also helped a ton with scaling because we now have a rule, like if someone hasn't looked at a VM for five minutes, then we already hibernate it, because people won't notice if we hibernated it, because the resuming is so seamless. And the second thing that's done now is when someone presses fork, we also create a snapshot of the original VM and we use that snapshot to resume the new VM that was created. So when you press fork,

you can recreate kind of like an exact copy, because it will continue exactly where the last VM left off.

And later on, we did optimizations where VMs even share memory. So if you have like a VM that started from a snapshot and someone presses fork, then that new VM will share the memory of the old VM. So if two VMs use two gigabytes of memory, it could be that the total usage of memory is two gigabytes because they refer to the same shared memory. And those little tricks made it possible to...

scale VMs. It is the most challenging thing I've worked on, because it's much more challenging than sandboxes. With sandboxes, we would run everything in the browser, so we would not have to run servers to run the code of the user. The only thing that we had to provide were files, and all the execution was on the user's side. But in this case, we had to create a fast service that...

can run code, but also is secure because people, they shouldn't be able to break out of that environment. And we're literally giving remote code execution as a service with CodeSandbox. That's such a hard problem. When I was at Codecademy, we had issues with people doing incredible amounts of compute and we had to have all these

hacks and cool gotchas around, say, crypto Bitcoin mining. But you have not only that, you also have intentionally the ability for people to call out to the network on the server. So how on earth do you make your boxes secure?

Yeah, that is challenging. The boxes themselves, they are pretty secure, in the sense that they are very secure. They ultimately use the same techniques that AWS Lambda and AWS Fargate use. So every VM has its own Unix user. We use a jailer to make sure that everything is in its own environment. But people can still abuse it.

For example, someone could run a crypto miner on a server. Someone could create an account on CodeSandbox, create 20 VMs, and that will go fast. We can spin up those 20 VMs very quickly, which makes it very easy for them to do. And they can start mining crypto. And crypto miners are the most frustrating people. They are very creative. They have a lot of time on their hands. The thing that we do right now is we have a detection heuristic

that runs every minute inside a VM.

to detect if a miner is running. And I have to say, right now it's a lot of if statements and else statements based on existing crypto miners and their behavior that we've seen before. We're now experimenting with training a little neural network that automatically can detect crypto behavior based on network calls and the process behavior. But it's a cat and mouse game. The other problem that we've had a lot of was phishing.
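
A minimal sketch of what such a per-minute heuristic could look like follows. The process names, ports, and commands are illustrative assumptions, not CodeSandbox's actual detection rules.

```typescript
// Illustrative per-minute check running inside the VM. Process names, ports,
// and the ps/ss invocations are assumptions, not CodeSandbox's actual rules.
import { execSync } from "node:child_process";

const SUSPICIOUS_PROCESS_NAMES = ["xmrig", "minerd", "cpuminer"];
const SUSPICIOUS_POOL_PORTS = [3333, 4444, 5555]; // common mining-pool ports

function looksLikeMiner(): boolean {
  // Compare running process names against a known-miner blocklist.
  const processes = execSync("ps -eo comm=", { encoding: "utf8" }).toLowerCase();
  if (SUSPICIOUS_PROCESS_NAMES.some((name) => processes.includes(name))) {
    return true;
  }

  // Look for outbound connections on typical mining-pool ports.
  const sockets = execSync("ss -tn", { encoding: "utf8" });
  return SUSPICIOUS_POOL_PORTS.some((port) => sockets.includes(`:${port} `));
}

// Run every minute; flag the VM for review or throttling rather than
// killing it outright.
setInterval(() => {
  if (looksLikeMiner()) {
    console.warn("Possible crypto miner detected; flagging VM for review");
  }
}, 60_000);
```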

People were using Code Sandbox a lot for phishing. They were using it to create fake bank sign-in pages. They were using it to create fake Microsoft login pages. And we were fighting that for a long time. And we also saw disruption in Code Sandbox as a service because of it. Because, for example...

Google could, for example, block the whole csb.app, which is our preview domain. They could just block the whole domain because of an automated check that said, oh, there were two phishing sites on this domain. So that whole domain should be blocked. And then we had a downtime of like three hours because of that. And then we had to fall back to other domains. Or there is still an ongoing issue where...

Turkey, some ISPs in Turkey, they have blocked code sandbox, like the preview domain, just because they saw a couple of phishing pages. Initially, we also applied AI to this, actually. We created screenshots of all these public sandboxes and tried to determine whether it was a phishing sandbox or not. And then we would show warnings to the user or we would proactively block these sandboxes.

Nowadays, we sort of have given up the fight in the sense of whenever someone visits a CodeSandbox preview for the first time, and it's a standalone preview, so they open it not from within the editor, but they open it from, for example, an email,

then we will show initially a big interstitial saying, watch out, this is a CodeSandbox preview. It's not a bank login page; it's used for development purposes. And then they have to press a button saying, I understand. And then they get to the preview.

It does affect the experience of CodeSandbox. Like when you share a preview with someone to a real thing, then they still have to go through this interstitial before they can actually access the page. But after deploying this, the amount of phishing pages on CodeSandbox has gone down tremendously. Like probably the phishers, they're looking for new pastures. They're looking for places that don't have something like this.

We do still sometimes get emails from these services that detect phishing pages, but we even automatically handle those now. Like we scan all our emails, and if it's from a domain or an email address that we know is a phishing detector, we would automatically ban the

sandbox when they put a link in there. Yeah, that's tough. There's no right answer here, right? Like what if I'm a, say a bootcamp and I'm teaching my students how to write a full page in whatever front end language and the example app is a login page, one that happens to look like say Google's login. How do you know that that's legit versus, you know, the equivalent scammer? Yeah, those were the hardest cases. That's why we prefer to show a warning instead of proactively banning sandboxes.

At this point, it's much easier because now we just show this interstitial for everyone. And if you trust the person who has sent you that page, then you can open it. And if you get this page from, I don't know, a random text message, and then you get this interstitial saying, don't trust any sign-in form on this page, then that covers a lot of cases.

This is yet another example of you've tried the complicated approach, say the AI scanning to detect scams, and then the simple, scalable cheap one actually is pretty effective. Yeah, I didn't reflect on that, but you're right. This simple approach, it solved all of it. And it is also the simplest approach. No fancy AI or detection heuristics.

Introducing Height, the only autonomous project management tool. Backlog grooming, bug triage, keeping documentation up to date. Those aren't why you got into product building, right? Well, Height handles all that grunt work for you. Using a first-of-its-kind AI approach, Height proactively takes care of time-consuming workflows without you lifting a finger. Height recognizes when you've agreed to trim scope and handles mapping the necessary edits back to your product brief.

When new tickets are added to your backlog, Height combs through them, adding feature tags, time estimates, and more. And it's not just you. Everyone on your team manages projects, tracking updates, scoping work, balancing priorities. But whether or not your product succeeds shouldn't depend on project management. With Height, autonomous workflows handle that mundane upkeep so your team can focus on building great products. If you're ready to stop managing projects, it's time for Height.

Join the new era of product building where projects manage themselves. Visit height.app/SEDaily to get started.

Before we move on to the client-code-runs-the-entire-CodeSandbox portion of the dev stack, I have one last question on the back end. At Codecademy, I always wanted to build something where if we did detect someone was crypto mining, we would let them and then steal their crypto coins and give them something fake in response just to really stick it to them. Ever tried something like that? Are you willing to talk about it?

I have fantasized about this as well. Yeah, I did think about it. Never did it. There was a case where people did not just use it for crypto, but they were also, and still are sometimes, using it to watch advertisements and get money from it. It's very interesting. They start a browser inside of a VM and then they have like a VNC connection to it so you can see the browser.

And then they start on like 20 VMs all at once. They start going to different pages where they watch advertisements and then they get money back for watching those advertisements. It's very interesting. And I once...

I found this, I found one of those VMs and the VM was just public so I could open it. And I saw the VNC window open in the preview. So I just started doing a lot of things inside that VNC. Like I opened Notepad and I sent a message and I could see them at some point like looking at that sandbox and getting very confused and closing all the windows because they felt like they were caught. Yeah.

That was funny. Yeah, the backend team for us, I hear they used to do things to mess with people to make it not worth their while to abuse the platform. It's a great way to stop people. Similarly, when we detected crypto miners, for a while we just throttled their sandboxes, their dev boxes. So then they would feel like they were mining crypto and everything would still work. Everything would still run, but they would only be running at 5% of the speed that the full VM could run.

So that's another way to confuse them, I guess. I love it. But okay, let's move closer to the front end. So let's stick with the original code sandboxes for now. Your Postgres database is continuing to scale wonderfully. You served up the file contents to the user. What happens in the user's browser now? Yeah.

Yeah. When we started CodeSandbox, we had a very low budget. So using a server was out of the question to run the code. Like we were students, we had a budget of like a hundred dollars a month. That was the maximum that we could have. And that was all from student loans in any case. So I started to look into whether I could run Webpack

in the browser. Webpack was by far the most popular bundler at the time. And initially, I got Webpack to run, but the bundle size of Webpack itself was huge. It was like eight megabytes, which was a lot, even more so in 2017. That was huge. So then I started to look into more how these different pieces of code are executed. And I started to try to execute the code myself, like try to build a very simple version

of Webpack that would run in the browser to execute the code. And in essence, it all revolves around eval, like the JavaScript function eval, where you can give it a string and it will evaluate that code.

And everyone says that you should never use eval, but that's the core functionality of CodeSandbox. That's what we built kind of the startup around. And what happens is we receive all the code. And the first thing we do is we parse all the code. We try to understand which files are used and which files import which other files.

This results in a dependency graph, as they call it. And with this graph, we transpile all the code. So you could have, for example, your source code could be TypeScript or it could be JavaScript, but not suitable to run in the browser yet. So we would transpile that code to code that would be possible to run in the browser. And then later on, once everything is transpiled, we would execute that code by using eval.

And these pieces of code, they could import other files. And they would do that using a require function call, the common JS way of running code.

And what we would do is whenever we would eval the code, we would wrap everything in a function where we provide the require as a function. So we would overwrite the require function with our own function. And whenever it would require, call require, we would resolve that code and we would run eval on that code, creating this kind of infinite loop if you have infinite imports.

And that is ultimately how it works. The logic itself is not extremely complicated. I also have given a talk and I think I've done a blog post about how this works. And I also have a sandbox that has like a mini bundler implemented. That is in a nutshell everything.

how it works. It became more advanced because there are also big challenges. Like, for example, how are you going to support node modules? With installing dependencies alone, everyone makes the joke that node modules are bigger than the universe. So how do you make that efficient? That is a challenge in and of itself, but the core functionality is essentially this loop of creating the dependency graph, transpiling all those files in that dependency graph, and then calling eval with a require override on the code.
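
A toy version of that loop, under the assumption that transpiled sources are already in a map keyed by path (real transpilation with Babel or similar is left out):

```typescript
// Minimal sketch of the eval-plus-require-override loop described above.
// Transpilation is stubbed out; a real bundler would run Babel/TypeScript.
type ModuleFn = (
  exports: any,
  require: (path: string) => any,
  module: { exports: any }
) => void;

const transpiledFiles = new Map<string, string>(); // path -> transpiled source
const moduleCache = new Map<string, { exports: any }>();

function evaluate(path: string): any {
  const cached = moduleCache.get(path);
  if (cached) return cached.exports;

  const code = transpiledFiles.get(path);
  if (code == null) throw new Error(`Module not found: ${path}`);

  const module = { exports: {} as any };
  moduleCache.set(path, module);

  // Wrap the transpiled code in a CommonJS-style function and eval it.
  const wrapped: ModuleFn = eval(
    `(function (exports, require, module) {\n${code}\n})`
  );

  // The require override: resolving an import re-enters this same loop.
  const require = (importPath: string) => evaluate(importPath);
  wrapped(module.exports, require, module);

  return module.exports;
}

// Usage: after filling `transpiledFiles` from the dependency graph,
// kick everything off from the entry point.
// transpiledFiles.set("/msg.js", `module.exports = "hello from the sandbox";`);
// transpiledFiles.set("/index.js", `console.log(require("/msg.js"));`);
// evaluate("/index.js");
```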

So code sandboxes are known for having beautiful visuals, right? How does that interact with the DOM then or the HTML page? Okay. So code sandbox, if you are looking at the editor page, you're actually looking at two applications. You're looking at the editor itself. That's a create React app application. But the preview on the right, that's a completely different application that is rendered inside an iframe. And that iframe refers to a different entry point, so to say.

That entry point is the bundler. So the editor downloads all the code from our Postgres-backed API server.

Then the editor sends that code to the preview iframe. It would call postMessage to send all the files to it. The reason that we have to run everything in an iframe is twofold. One is it isolates the user application completely from our editor, so they cannot mess with our editor. But the other reason is security, because

we ultimately are running user code, so they shouldn't be able to access our cookies or our local storage, those kinds of things. So the editor would call postMessage on the iframe and send the user code to that preview. Inside the preview, we would have the bundler sitting idle and listening for any message that comes in. And when it would receive a message, it would then do this loop of executing the code. And that would then ultimately fill up the preview.

And whenever the user would change code, we would just send the whole bundle, like everything from the editor back to the preview iframe again. And the preview iframe would then, the bundler in that would make a diff. It would look at the previous, the current version of the code and the new version of the code, and it would compare everything. And then for every file that changed, it would reevaluate those files. It would just run eval on those files and its parents.
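
A condensed sketch of that editor/preview handshake; the message shape is an assumption and the diffing is simplified (the real bundler also re-evaluates the importers of a changed file):

```typescript
// Sketch of the editor-to-preview flow described above.
declare function reevaluate(path: string): void; // provided by the bundler loop

// Editor side: send the full set of files to the preview iframe on every change.
function sendFilesToPreview(
  iframe: HTMLIFrameElement,
  files: Record<string, string>
) {
  iframe.contentWindow?.postMessage({ type: "compile", files }, "*");
}

// Preview (bundler) side: diff against the previous version and re-evaluate
// only the files that changed.
let previousFiles: Record<string, string> = {};

window.addEventListener("message", (event: MessageEvent) => {
  if (event.data?.type !== "compile") return;
  const files: Record<string, string> = event.data.files;

  const changed = Object.keys(files).filter(
    (path) => files[path] !== previousFiles[path]
  );
  previousFiles = files;

  for (const path of changed) {
    reevaluate(path); // re-transpile and eval this module (and its importers)
  }
});
```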

So you're using, let me get this straight, eval, iframes, and window.postMessage. Yeah, that's the core functionality. Yeah, it sounds counterintuitive, right? Yes, but then on the inside, you're also building up a full dependency graph and doing dynamic reloads of the graph based on impacted changes from the editor. So you have this incredible mixture of pre-2017-era tech with very lovely computer science concepts for efficient redeploys.

Yeah, that's the interesting part because you're building essentially two things. You're building a, I would not say hardcore, but like pure developer tool like Webpack, but you're also building a UI around it, like a developer experience around it. And that opens up some unique opportunities because you have the whole stack, you control the whole stack. So a simple thing that we did, for example, is whenever we would get an error inside the bundler that...

it could not resolve a dependency. Let's say you import Lodash, but you have not installed Lodash yet. We would create like a specialized error message in the bundler. There would be a button saying, oh, we could not resolve Lodash. And then there would be a suggestion that you could click, which says install Lodash.

And when you click that button, the bundler would postMessage back to the editor saying, we need to install Lodash. And then the editor would show the UI of installing Lodash. It would install Lodash and call re-evaluate on the bundler. That is just one of those examples where, because you control the whole experience, you can create a pretty nice experience. Another one was someone in our Discord who was

frustrated because they were using CodeSandbox for a job interview and they hadn't capitalized their components, the React components, and the React component was not working for some reason. I think it was back in the day when React components had to be capitalized to work.

And because of that, they failed their interview. So as a response, I created a small detection heuristic on that too, where if we detect that there's an error that's about a custom component that's not a default HTML element, and React would throw that error, then we would show a suggestion like, have you capitalized

your component? And then we would have a button, capitalize component, and when you would click it, we would update the code to capitalize the component, to catch those kinds of things. And that was the most exciting to me, because it's a mix of the core developer tooling, building a bundler, but there is also a developer experience and user experience that you can refine based on what comes back from that bundler. Did the person ever respond that they got a job

afterwards, or did that help them? No correspondence after. I didn't implement any analytics to know how often that button was pressed. Maybe no one has seen it afterwards. But who knows? Just one clarification point for technical areas. When you say install, what is installing in the client? Yeah, installing dependencies.

It has changed over time, but the biggest challenge that was there was installing NPM dependencies, because NPM dependencies tend to be pretty big. Sometimes a library, an NPM dependency, could be two megabytes, and that's not a big problem in and of itself, but that dependency could also say, I have 20 other dependencies, and those dependencies could also be two megabytes. Suddenly you have 80 megabytes of dependencies you have to download. So,

We added support for NPM dependencies by creating a separate service, an AWS Lambda service. And that Lambda service, we would say, I want Lodash, for example. And Lambda would install Lodash. And then it would look at what files have a high probability of being required to run the dependency. So it would look at the entry points.

So for example, Lodash would say, oh, my entry point is in source slash index. And we would then look at that file and we would look at all those imports in that file. And we would, again, go through the same process of creating a dependency graph. And we would make sure to only include the files that were required to run the main dependency. And we would send that also back as a JSON to the bundler itself.
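
A rough sketch of that Lambda-side idea: walk the imports from the entry point and keep only the files needed to run the package. A naive regex stands in for real AST parsing, and all names here are illustrative.

```typescript
// Sketch of building the minimal file set for an installed package by
// following requires from its entry point. Regex-based; a real packager
// would parse the AST and handle node resolution rules properly.
import { promises as fs } from "node:fs";
import path from "node:path";

async function collectRequiredFiles(
  pkgDir: string,
  entry: string
): Promise<Record<string, string>> {
  const included: Record<string, string> = {};
  const queue = [entry];

  while (queue.length > 0) {
    const file = queue.pop()!;
    if (included[file] !== undefined) continue;

    const source = await fs.readFile(path.join(pkgDir, file), "utf8");
    included[file] = source;

    // Follow relative requires/imports to build the minimal file set.
    for (const match of source.matchAll(/require\(["'](\.[^"']+)["']\)/g)) {
      const resolved = path.join(path.dirname(file), match[1]);
      queue.push(resolved.endsWith(".js") ? resolved : `${resolved}.js`);
    }
  }

  // This object is what would be sent back to the bundler as JSON.
  return included;
}
```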

If the user then would require a file that was not included in this bundle, then we would manually download that single file from a service called unpkg, which hosts the files for NPM dependencies. So when I say installing a dependency,

The only thing the editor does is add, for example, Lodash to the list of dependencies. But ultimately, the bundler is the one that calls a service to download the files required for that dependency. And this is all cached under Redis? This is cached in S3. Yeah, it's something that came later. And it's very interesting how it works.

Because working with dependencies itself is a very interesting challenge. Because what if, for example, you install dependency A and it requires Lodash version 2, and then you install also dependency B and it requires Lodash version 3, how can you make that work? Or if you install dependency A, it requires Lodash version 2, any version that's in 2, and then you install dependency B and it requires Lodash, specifically version 2.5. How are you going to make sure that both dependencies get Lodash 2.5, to be most efficient in the download? So there are some very interesting challenges with building up this dynamic

npm install service. And so we don't just cache singular dependencies. We also cache combinations of dependencies, like for example, React and React DOM. We cache that combined, that merged bundle. But we also cache React, React DOM, Lodash, and React icons, for example, as combined bundle so that we don't have to recompute these combinations every time. So yeah, that bucket is huge. It's a couple terabytes, I think. Yeah.
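
One way such combination caching could be keyed, as a sketch (the bucket layout and hashing are assumptions): the cache key is derived from the sorted set of name@version pairs, so every sandbox using the same combination hits the same object.

```typescript
// Sketch of a cache key for combined dependency bundles. Key layout and
// hashing scheme are assumptions, not CodeSandbox's actual S3 layout.
import { createHash } from "node:crypto";

function bundleCacheKey(dependencies: Record<string, string>): string {
  const sorted = Object.entries(dependencies)
    .map(([name, version]) => `${name}@${version}`)
    .sort()
    .join("+");
  // Hash to keep keys short and filesystem-safe.
  const digest = createHash("sha256").update(sorted).digest("hex");
  return `v1/combinations/${digest}.json`;
}

// bundleCacheKey({ react: "18.2.0", "react-dom": "18.2.0" })
// -> the same key for every sandbox with this exact combination, so the
//    pre-bundled, pre-transpiled result is a single S3/CDN fetch.
```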

So there's a benefit then to users using common combinations of packages. Like if I'm using React and React DOM the same way everyone else is, that's already pre-cached before I make that installation request. Yeah, for sure. And very often people will use, I think React and React DOM is the most common combination. And all of this, this whole S3 bucket is cached, Cloudflare is in front of it.

So because of that, we also have a lot of requests to S3. So dependency installations, it's very interesting. If you try to install, if you want to run a sandbox with a dependency combination that has been used by another user before, then...

installation will be within a second. It would just be a matter of downloading the file from the S3 bucket. It would already be pre-bundled. Sometimes we even do pre-transpilation on the source files to make it also faster to run, so that the bundler doesn't have to do transpilation on it anymore. We would even store information for every file, like

the dependency graph. We would also embed the dependency graph into the dependency bundle, to know which file imports which file. So the bundler would also save work. The bundler wouldn't have to do the parsing of all those files to run the code. And that is the interesting thing. You could see it as a super fast NPM, because the dependency combinations...

You are most likely using a dependency combination that has been used by someone else before. And in that case, you would just have to download that file from an S3 bucket. Okay, so we've covered most of the stack. We're nearing the end. We've got the database and correlated S3 and Redis caches around it. When I make an installation request, you've got a whole bunch of heuristics and strategies there that all get sent to the clients in some kind of transpiled bundled form.

And then you have the two apps on the client, the editor and the preview or viewer. Whenever the editor causes a file to be changed, we post message over to the iframe to eval the code as necessary. I think the last bit I want to touch on is the editor itself, because that on its own is an entirely separately complex and interesting application. The code editor that works in the browser, how does that work?

I would argue that's the most complicated part of that specific stack. The first version of the editor was very simple. It was all React components. It was like the file explorer were React components. And the editor itself, the code editor was CodeMirror, which is a library created by a Dutch person. Later on, we would move our code editor to Monaco. Monaco is a little piece of VS Code. VS Code...

has an incredible code base, one of the best code bases I have ever seen. It has really good composition. It has really good APIs. And you can already see that it's so good

because they were able to extract the core editor from VS Code and make it a library and keep it in sync with VS Code relatively easily. So Monaco is that. It's essentially the VS Code editor, but then exposed as a library.

It doesn't have all the fancy things from VS Code like extension support or a file explorer or all those. It just has the editing part, but it works and it works really well. So we moved to Monaco because it had better, well, it had a more familiar experience for everyone using VS Code. And when CodeSandbox started,

Atom was the most popular editor, but later on VS Code became the most popular editor. But there was always a case where people were wondering if they could do the same thing as VS Code. And a code editor is an incredibly complicated piece of technology. You have things like the command palette, you have key bindings, you have themes, because developers, they want themes. You have settings for all the little things like line height, font size, font itself, and

how the editor should behave when you press

Alt and move your cursor around. How should multi-cursor work? All of these, there were a ton of settings. And then you have extensions, where extensions alter the editor behavior or add new editor behavior. So a code editor is one of the most ambitious UIs to ever work on. And it's an incredible thing to work on. You learn a ton. Oh, and performance. Just making it performant, like when you click on a file, that it can open that file within 10 milliseconds. Incredibly hard.

Initially, we were doing everything with Monaco. We had our own UI built around it. And Monaco was the core editor, but we had our own UI. And because of that, we were able to create a lot of custom experiences. But later on, I started to become more and more interested in VS Code. And in 2018, during my Facebook internship, I was looking at a way to make VS Code run in the browser.

VS Code didn't run in the browser back then. Initially, VS Code was built to run in the browser, but it was not performant enough. So they made VS Code an Electron application. So it used web technologies to render the UI of VS Code, but it did not run in the browser because it did use things like Node file system, a lot of Node APIs.

So I was looking at whether we could use VS Code for CodeSandbox, because that would save a lot of work. It would create familiarity for developers, like their own custom key bindings, those kinds of things. We would start to allow extensions. It would be faster, because ultimately VS Code is much faster than React, mostly because VS Code is imperative instead of declarative. It would have a ton of benefits.

So making VS Code run in the browser ultimately came down to emulating Node in the browser, because Node ultimately is JavaScript with native Node modules. So for example, file system, net, HTTP, OS, all of these Node modules. If you create a browser equivalent version of that that would run in the browser, then essentially you're re-implementing Node and you're tricking VS Code in thinking that it's running on your local computer, but it's running in the browser.

And so that's initially what I did to make VS Code run in the browser. Later on, the VS Code team actually ported VS Code back to the browser. And now with Code Sandbox, we're running the browser version of VS Code.

We're still emulating some node components to run extensions sometimes in the browser. But if there is, for example, a VM, then we use VS Code Server so that you have the native VS Code experience. But we do have a little add-on that we put on VS Code that allows us to render React components inside the VS Code UI. And that allows us to create custom experiences still using React.

but within the context of VS Code. So how does that work with type information? Let's say I want to hover my mouse over Lodash, imported from Lodash, and see what methods are available under it.

Yeah, that's a very interesting question. So if we are looking at the DevBoxes version, it's quite simple because VS Code Server gets the files from the file system. And well, in fact, the extension, the TypeScript extension, runs on the server itself and it would read the files. That's kind of like a solved problem. But for sandboxes that completely run in the browser, we have a bit of an interesting solution.

We run VS Code extensions in the browser. Those VS Code extensions expect to run in a Node environment. When they do require('fs'), they expect to have the fs module available. They want to be able to run readFile, readdir, writeFile, all those kinds of things.

So the first thing that we did was implement the file system, but then to run in the browser. So we can run VS Code extensions in the browser. They will think that they are in a Node environment, but they're running in the browser. And the second thing is, whenever you install a dependency like Lodash, we do the same thing that we essentially did for the bundler. We have a service that

calls npm install for Lodash. And then it looks at all the type files that are installed. And it will create, again, a JSON bundle that only includes all the type files. So only the .d.ts files or the .ts files. And then in the browser, we would download this bundle and we would write all of those files into our in-memory file system, as we call it. So this fake file system that we've created in the browser.
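
A minimal sketch of that trick: an in-memory stand-in for Node's fs module that extensions read from, populated with the fetched type files. Real fs has far more methods and callback variants; this only shows the idea, and the paths are illustrative.

```typescript
// In-memory "fs" shim that browser-hosted extensions can be handed in place
// of Node's fs. Populated with the type files fetched for each dependency.
const memoryFiles = new Map<string, string>();

const fakeFs = {
  readFileSync(filePath: string, _encoding?: string): string {
    const contents = memoryFiles.get(filePath);
    if (contents === undefined) throw new Error(`ENOENT: ${filePath}`);
    return contents;
  },
  existsSync(filePath: string): boolean {
    return memoryFiles.has(filePath);
  },
  writeFileSync(filePath: string, contents: string): void {
    memoryFiles.set(filePath, contents);
  },
};

// Populate the fake node_modules with type files fetched from the packager
// (or, nowadays, unpacked straight from the npm tarball):
function installTypings(files: Record<string, string>) {
  for (const [filePath, contents] of Object.entries(files)) {
    fakeFs.writeFileSync(`/sandbox/node_modules/${filePath}`, contents);
  }
}

// An extension calling require("fs") gets `fakeFs` back and happily reads
// /sandbox/node_modules/@types/lodash/index.d.ts without a real filesystem.
```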

And then when the TypeScript extension runs, it will again think that it's in a Node environment and it will read files from node_modules. And those files are all coming from this in-browser file system that was populated from this dependency packager. Is that a different service, just to clarify, from the runtime JS file service? Yeah, yeah. But it shares a lot of code.

I essentially copied the code from the dependency packager and then changed some things to make it more fitting for type fetching. I have to say, that's how it worked for the first six years. And now, recently, I've made a change so that we actually download the tar file directly from npm. Browsers have become better and better, and at this point,

unpacking a gzip tar file is really fast in the browser. So for a lot of dependencies, we don't go to our own servers anymore. We literally download the tar file directly from the npm registry, unzip it in memory, and then save it. It's a bit less efficient, but we had to do it because half of our bandwidth usage was just from this type fetcher.

And Cloudflare told us that we had to go on a more expensive plan. And I was thinking, well, what if we then just download directly from the npm registry? Then we don't pay for the bandwidth of those bundles. And the funny thing is,

Because of that, we significantly reduced our Cloudflare bill. But Cloudflare backs the npm registry, so Cloudflare itself doesn't notice a difference in bandwidth. In fact, the bandwidth has increased, because tar files aren't these perfect bundles of only typings. But at least we don't get billed for that bandwidth. Microsoft owns npm. If they want to reduce that, they can always add a feature to npm to help you out.

Yeah, that's true. It was an interesting thing. Unpacking a gzip tar file definitely would have been super inefficient back in 2017 or 2018. But nowadays there is, I think it's a browser feature called DecompressionStream or something, which makes it super fast because ultimately it uses the native decompression of the computer. It is really beautiful how the browsers have made things like this possible. And more and more over time, more of your stack is kind of becoming built in. True.
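For reference, a sketch of that newer approach might look like the following: download the package tarball straight from the npm registry, gunzip it with the browser's built-in DecompressionStream, and unpack the tar entries in memory. The URL pattern shown is the common one for unscoped packages, and the tar reader is deliberately minimal (it ignores long-name and pax edge cases); treat all of it as an assumption rather than CodeSandbox's shipping code.

```typescript
async function fetchPackageFiles(
  pkg: string,
  version: string
): Promise<Map<string, Uint8Array>> {
  const url = `https://registry.npmjs.org/${pkg}/-/${pkg}-${version}.tgz`;
  const res = await fetch(url);
  if (!res.ok || !res.body) throw new Error(`Failed to download ${pkg}@${version}`);

  // Native gunzip: fast because it uses the platform's decompression code.
  const gunzipped = res.body.pipeThrough(new DecompressionStream("gzip"));
  const tarBytes = new Uint8Array(await new Response(gunzipped).arrayBuffer());

  // Minimal tar reader: 512-byte headers, file data padded to 512-byte blocks.
  const files = new Map<string, Uint8Array>();
  const decoder = new TextDecoder();
  let offset = 0;
  while (offset + 512 <= tarBytes.length) {
    const header = tarBytes.subarray(offset, offset + 512);
    const name = decoder.decode(header.subarray(0, 100)).replace(/\0.*$/s, "");
    if (!name) break; // an empty block signals the end of the archive

    // The size field is a NUL/space-terminated octal string at offset 124.
    const size = parseInt(decoder.decode(header.subarray(124, 136)), 8) || 0;
    const dataStart = offset + 512;
    // npm entries are typically prefixed with "package/", e.g. "package/package.json".
    files.set(name, tarBytes.subarray(dataStart, dataStart + size));

    // Skip the data, rounded up to the next 512-byte boundary.
    offset = dataStart + Math.ceil(size / 512) * 512;
  }
  return files;
}
```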

I would even say that if I would rebuild CodeSandbox today, I would probably not build the bundler as it is today, because right now everything is based on CommonJS with this require override. I think if I would build it today, I would use the ES module capabilities of the browser

and then perhaps use a service worker that acts similarly to a Vite dev server. Whenever you try to download a JavaScript file, our service worker would maybe transpile that file and give it back. But then the execution and bundling are done by the browser; they're not done by us anymore. We would not even be using eval. It would essentially be fully ESM. It would be like

Vite. It would literally be an in-browser version of Vite. If I would rewrite everything, I would do it that way, because it's much more native to the browser and the world is moving to ESM in any case. Eventually. Right. Eventually. It'll take a while. Do you think you'll ever have time to try out, say in a few years, the fully in-browser bundler and transpiler for CodeSandbox? Yeah. If no one else has done it.
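A rough sketch of that service-worker idea, under stated assumptions, is below: the worker intercepts requests for project source files, transpiles them on the fly, and lets the browser's native ES module loader handle execution. The transpile() call is a stand-in (something like a wasm build of esbuild or Sucrase could fill it); none of this is CodeSandbox's shipping code.

```typescript
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event: FetchEvent) => {
  const url = new URL(event.request.url);

  // Only handle project sources, e.g. anything under /src/ ending in .ts or .tsx.
  if (url.pathname.startsWith("/src/") && /\.tsx?$/.test(url.pathname)) {
    event.respondWith(
      (async () => {
        const original = await fetch(event.request);
        const source = await original.text();
        const js = transpile(source); // assumed: TS/JSX in, plain ESM JavaScript out
        return new Response(js, {
          headers: { "Content-Type": "text/javascript" },
        });
      })()
    );
  }
});

// Placeholder so the sketch type-checks; a real version would call a compiler here.
function transpile(source: string): string {
  return source;
}
```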

That's also the big question, because I think that's a big opportunity and I can understand if people would explore it. Right now, the bundler as it is today works really well. I'm a bit worried that if I would rewrite it completely using this, we would introduce bugs. It would still be an interesting experiment.

Yeah, it is something that I would want to explore. I had it in my mind for a while, but I'm not sure when I will explore it. What's another long-term exploration that you would love to get to, if and only if you had time? Hmm.

What I find very interesting about this whole Firecracker stack that we now have for development environments is, I think, the fact that we can clone a VM within a second, that we can resume a VM within a second. That's incredibly powerful technology

that is not just applicable to development environments, but also CI/CD systems, even to deployments. Yeah, I think there's a whole plethora of different use cases for that technology. I would love to generalize this infrastructure so that people can use it for other purposes.

that go beyond cloud development environments. Yeah, I think it's incredibly powerful. It could be used for so much more. So that's something that I would find very interesting: making it so that we can open source the infrastructure that we have today, that we can make it generic enough that people can use it for their own use cases. A lot of companies, a lot of projects, have

CI/CD times where you have, whatever, let's say half a dozen tasks. Each of those tasks spends 20 seconds in an npm or pnpm or Yarn install, and then three seconds in the task itself. It's very frustrating. It's very wasteful. Yeah. And with this technology, I was thinking, for example, one thing that you could do is

you could do all the preparation that needs to be done for CI/CD and then create a snapshot. And whenever you need to do a CI/CD run, it will continue exactly from that snapshot. It will just pull the latest code and then run the tests. But also for parallelization: if you, for example, want to run a test suite across 12 different workers, what you could do is create a snapshot. You could do all the preparation work on one VM.

And then after that, you clone this VM 12 times, and all these 12 VMs run a different part of the test suite in parallel. And one of the things that we even support, or we have never deployed it, but it's an experiment that I built, is that you can do cloning of VMs over the network even. So you have a VM running on one machine,

and then another machine starts a clone of that VM, and they use the network to sync the memory between the VMs. That's also a capability. Then you could really think, if you have a CI/CD cluster that is, I don't know, 20 servers big, you could do the preparation

on one machine and then clone that machine to all the other 20 machines, and they each run 1/20 of the test suite using the result of the preparation that the single server has done. And it's actually worth it, both in time and compute, to do it that way? Yeah, I'd say so. It's just an idea. You probably will find things that don't work in practice, but that always happens when you try things. I also wonder, for example, there's now

a ton of research being done into how to efficiently train AI models using hundreds of servers at the same time, because ultimately that's the biggest challenge: how can you scale that much compute and data transfer in a data center? The challenges that they have are not the same, but they are in a similar vein. It's an interesting space, at least.
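Going back to the CI/CD idea: a purely hypothetical orchestration sketch of that snapshot-and-clone workflow might look like this. The VmClient API here is invented for illustration and is not a published CodeSandbox API; the Jest shard flag is just one way to split a suite.

```typescript
interface VmClient {
  createVm(image: string): Promise<string>;             // returns a VM id
  exec(vmId: string, command: string): Promise<number>;  // returns an exit code
  snapshot(vmId: string): Promise<string>;               // returns a snapshot id
  cloneFromSnapshot(snapshotId: string): Promise<string>;
}

async function runShardedTests(vm: VmClient, shards: number): Promise<boolean> {
  // 1. Prepare once: clone the repo and install dependencies on a single VM.
  const base = await vm.createVm("ubuntu-22.04");
  await vm.exec(base, "git clone https://example.com/repo.git repo && cd repo && npm ci");

  // 2. Freeze that state so every worker starts with warm caches.
  const snap = await vm.snapshot(base);

  // 3. Clone the snapshot N times; each clone runs one shard of the suite.
  const exitCodes = await Promise.all(
    Array.from({ length: shards }, async (_, i) => {
      const worker = await vm.cloneFromSnapshot(snap);
      return vm.exec(worker, `cd repo && npx jest --shard=${i + 1}/${shards}`);
    })
  );

  return exitCodes.every((code) => code === 0);
}
```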

How soon will I be running my CI jobs on CodeSandbox then, given all your caching? I think not soon. Well, it would be good to explore different use cases and work on the different capabilities of the stack itself. In fact, you could run your CI/CD today already. It's just that the platform has not been built for it, but we could expose an API, or you can even create a DevBox that runs CI/CD. Let's see.

Let's say just next year. Then next year you can ask me, can I run CI/CD? And then I will say, yes, we have this API, and then you can give it a Docker image and it will run your code. We'll follow up in a year.

Same with the AI models. This has already run longer than our typical SED interviews, because you've got so much fascinating tech to talk about. I want to end on a less technical, perhaps still fascinating, personal note. Can you tell us a little bit about volleyball and why it's such a fantastic game to play?

Yeah, I mentioned this before. In my free time, I play a lot of volleyball. Volleyball is something that I started doing when I was, I don't know, nine years old or something. And I did it a lot. During high school, I did it an extreme amount, if that's a phrase. I think I trained like six times a week. I got a bit burnt out on it during high school. I was, I guess, a bit too competitive.

So I stopped for a while and didn't do much volleyball. But I picked it up again recently, I think a year ago. Yeah, about a year ago. And it's so incredibly good to play volleyball, because during a training or during a match, I completely forget everything, everything around me. I could be worrying about something, but when I play volleyball, I don't worry about anything. I just worry about the ball. So it's very good from a psychological

vantage point, but also I'm much fitter now, and the day after, I feel much more energized because of the sport. So yeah, I'm a big fan of volleyball. It's kind of a reminder for me that it's important to not just do coding or meetings the whole day. Ultimately, it's also important to stay healthy and exercise.

I think a lot of people maybe did high school volleyball, just hanging around in gym, or have done some casual beach volleyball with friends. What is different about your brand of volleyball, where we're not just standing around awkwardly waiting for the ball to come to us? Oh, so you mean what makes it fast? What makes it a good workout? You know, what are you doing?

When I play volleyball, I'm mostly found on the ground, I think. There's a lot of diving going on. Like if you want to get to a ball and it's just out of reach, you just jump. And recently I started to also...

track my heart rate during volleyball to see how intense it is. I've now done it twice, and my heart rate goes up to 195 when I play volleyball. Especially at the net, you also jump a ton, because, for example, when we play volleyball, if I play middle, the thing you do is whenever the ball goes to the setter, you have to jump and

fake an attack. So you always jump when the setter has the ball because that way you confuse the opponent about where the ball will be attacked so that you confuse the blockers. So you can jump like...

In just one rally, you can jump two or three times, and then you have like 40 rallies, so that's, what, 160 times. Yeah, you jump like 350 times or something in a single match. It's very intense in that sense. It's very explosive. There's no long running, but...

It requires a lot of short bursts of energy. That sounds very intense. It's also very psychological. You're not turning your brain off. You're still thinking. Yeah, it's true. There's a lot of prediction going on. Like you can confuse your opponents. They can confuse you. But what I like the most is the competitive aspect of it. Like,

winning matches or losing matches too. That makes it a lot of fun compared to, for example, running. I also do a lot of running nowadays, but for me, when it comes to sports, it's mostly the competition that keeps me in it, I guess. Great. Well, this has been an absolutely phenomenal interview. Ives, thank you so much for talking us through the start of CodeSandbox, the backend, the database, the caching layers,

the front end. This is a lot of stuff. If folks want to find out more about you and/or CodeSandbox on the internet, where would you direct them? My Twitter is where I'm most active, I would say. That's CompuIves, C-O-M-P-U-I-V-E-S.

That's my Twitter. And CodeSandbox can be found on CodeSandbox.io or CodeSandbox.com or CodeSandbox.org. That's a small tidbit. We went with CodeSandbox.io because the domain was only $30 and CodeSandbox.com was $3,000. But then later on, a lawyer who learned how to code using CodeSandbox gifted the CodeSandbox.com domain to us. That's how we ultimately got the CodeSandbox.com domain. Well...

All that being said, this has been Josh Goldberg and Ives van Hoorne with Software Engineering Daily. Thanks, y'all. Cheers. Thanks.