
Node.js and the Javascript Ecosystem with Gil Tayar

2024/11/28

Software Engineering Daily

People
Gil Tayar
Topics
Gil Tayar: This interview centers on the Node.js ecosystem, modularization, and monorepos. He shares his 35 years of experience in software development, including his work at companies such as Microsoft and Wix. He stresses the importance of modularization in software development and details his hands-on experience in Node.js projects, including how to build independent packages, how to manage large monorepo projects, and how to handle compatibility between the ESM and CommonJS module systems. He also shares his views on non-standard import statements in front-end development and his understanding and use of Node.js loaders. Josh Goldberg: As the host, Josh Goldberg explores with Gil Tayar the Node.js ecosystem, modularization, monorepos, the ESM and CommonJS module systems, and non-standard import statements in front-end development. He prompts Gil Tayar to share his experience and opinions in these areas and asks about and discusses various technical details and issues.

Deep Dive

Key Insights

How did Gil Tayar get into software development?

Gil started programming at the age of 13 with a ZX81, moving on to Turbo Pascal and eventually joining the army as a software engineer, where he spent six years as an instructor. He then transitioned into industry roles at companies like Wix and Microsoft.

Why does Gil Tayar enjoy mentoring and teaching others?

Gil enjoys mentoring because he finds it fulfilling to help people understand complex concepts in a structured and logical way. He believes in starting with context and motivation before diving into the details, which helps learners grasp the material more effectively.

What is Gil Tayar's approach to explaining complex programming concepts like promises?

Gil's approach involves breaking down the problem step by step, starting with the basics and gradually building up to more complex ideas. He emphasizes the importance of understanding the context and motivation behind each concept before diving into the technical details.

Why does Gil Tayar prefer not to transpile code in Node.js?

Gil dislikes transpilation because it adds complexity to the development process, turning it into a pipeline. He prefers to avoid transpilation and instead uses JSDoc typings for type checking, which works well in most cases.

What is Gil Tayar's opinion on monorepos versus polyrepos?

Gil prefers polyrepos because each package can have its own independent configuration, making it easier to reason about and modify. In contrast, shared configurations in monorepos can become overly complex and difficult to manage, especially at scale.

How does Gil Tayar handle dependency updates in a polyrepo setup?

Gil suggests updating dependencies only when developers modify a package, using a script to automate the process. This approach ensures that dependencies are kept up-to-date without the need for a shared configuration across all packages.

What is Gil Tayar's view on the Node.js ecosystem and NPM?

Gil considers the Node.js ecosystem unparalleled in terms of productivity and the scale of NPM, which has enabled developers to iterate and share code at an unprecedented level. While it has its problems, NPM remains an amazing success story.

Why does Gil Tayar advocate for ESM (ECMAScript Modules) in Node.js?

Gil believes ESM is crucial for the future of the Node.js ecosystem because it standardizes module systems, reducing the mess created by the coexistence of ESM and CommonJS. He sees ESM as a necessary evolution to simplify the ecosystem and improve developer experience.

What are the challenges of requiring ESM modules in CommonJS?

Requiring ESM modules in CommonJS can lead to issues with top-level await, as CommonJS cannot handle async imports. Additionally, allowing CommonJS to require ESM could delay the adoption of ESM across the ecosystem, which Gil sees as a negative.

What are some use cases for Node.js loaders?

Node.js loaders can be used for transpilation (e.g., TS-node), mocking modules in tests, and handling different protocols like HTTP or loading from zip files. They provide a standardized way to intercept and modify module loading behavior.
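As a rough sketch of the mocking use case, a loader is a module exporting `resolve` and `load` hooks (the hook names and return shapes follow Node's module customization hooks API; the mocked specifier below is a hypothetical example). Such a hooks file would typically be activated via `module.register()` from an `--import`ed file, or the older `--loader` flag.

```javascript
// Sketch of module customization hooks that serve a mocked module in
// tests. 'app:config' is a hypothetical specifier for illustration.
const MOCKED = new Map([
  ['app:config', 'export default { env: "test" };'],
]);

export async function resolve(specifier, context, nextResolve) {
  if (MOCKED.has(specifier)) {
    // Short-circuit: claim this specifier instead of delegating.
    return { url: `mock:${specifier}`, shortCircuit: true };
  }
  return nextResolve(specifier, context);
}

export async function load(url, context, nextLoad) {
  if (url.startsWith('mock:')) {
    // Serve the mocked source instead of reading from disk.
    const source = MOCKED.get(url.slice('mock:'.length));
    return { format: 'module', source, shortCircuit: true };
  }
  return nextLoad(url, context);
}
```

The same two-hook structure covers the other use cases Gil mentions: a transpiling loader rewrites `source` in `load`, and a protocol loader handles `http:` or zip-archive URLs in `resolve`/`load`.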

Chapters
Gil Tayar's journey into software development started at age 13 with a ZX81, progressing through Turbo Pascal on an Apple II and a software engineering course in the army. He recounts his experiences working at various companies like Wix and CloudShare, and his current role as a software engineer at Microsoft. He highlights his passion for staying updated with industry trends and his enjoyment of mentoring.
  • Started coding at age 13 with a ZX81
  • Army software engineering course and six years as an instructor
  • Worked at Wix, CloudShare, and currently at Microsoft
  • Enjoys mentoring junior developers

Shownotes Transcript


Gil Tayar is a Principal Software Engineer at Microsoft, Developer Advocate, and Conference Speaker. Gil's contributions to the Node.js ecosystem include adding support for ECMAScript modules in Node.js, Mocha, and testdouble. He joins the show to talk about his history in software engineering, monorepos versus polyrepos, the state of JavaScript, and more. This episode is hosted by Josh Goldberg, an independent full-time open source developer.

Josh works on projects in the TypeScript ecosystem, most notably typescript-eslint, the tooling that enables ESLint and Prettier to run on TypeScript code. Josh is also the author of the O'Reilly Learning TypeScript book, a Microsoft MVP for developer technologies, and a live code streamer on Twitch. Find Josh on Bluesky, Mastodon, Twitter, Twitch, YouTube, and joshuakgoldberg.com as JoshuaKGoldberg.

Gil, welcome to Software Engineering Daily. How's it going? Good. Thank you, Josh. Thank you for having me. I'm very excited about this. You've been around in the industry for a little while. You've had your hands in quite a few areas like Microsoft and Wix and Node.js. But could you tell us just to start, how did you get into the wonderful world of software development? Well, at the age of 13, I'm Jewish, so bar mitzvah.

And I asked my parents for a PC. At that time, the ultimate in PCs was a ZX81 with 1K of RAM,

which I upgraded to 16K, and I learned BASIC that way. From there on to Turbo Pascal on an Apple II, where I worked on the wonderful IDE by Anders Hejlsberg, who is still one of the luminaries of the software industry. And now we're both working in the same company, Microsoft.

And then the army. I joined the army as a software engineer. I did a software engineering course there. It's like a nine-month boot camp kind of thing. And spent six years as an instructor in the school of software engineering. So I continued being an instructor. From there on, easily got into the industry. Companies like Magic, which nobody remembers today.

But definitely Wix. I even started my own company in 2000, the height of the dot-com era, about a year before everything crashed down. But we survived and the company was sold for a pittance. And from then on, Wix, CloudShare, and today my role is software engineer at Microsoft. So 35 years. Yeah, more or less. Wow.

I believe I remember at a conference, someone mentioned that you were their instructor in the army. How does it feel that people you've taught to code are now also speaking at conferences and movers and shakers in the industry? Yeah, I taught her in the army, Tali Barak, wonderful speaker also. I taught a lot of people and mentored a lot of people, not only in the army, that's really, really old news, but also I did a bootcamp once, front end.

So, I taught like 10 people and we still get together sometimes and reminisce. I mentor a lot of juniors trying to figure out how they can enter because at least in Israel, juniors have a really, really hard time getting into the industry.

So I helped them along. My nephew and my niece came to the software industry with a little bit of help from me. So yeah, a lot of people and it's fun. I like mentoring and helping people along, whether they're junior or senior. In a way, it's what I do. It's what I like. How do you stay in the same mindset as them? How are you able to relate to these people who are coming in with such a different set of technologies and people and places around them?

I mean, I don't program in COBOL anymore. Well, I never did, but let's say. I keep abreast of all the trends and everything because this is what I love to do. I don't do that because I need to. I do that because for me it's fascinating. I went through a lot of revolutions. The ZX81, so the PC revolution. I still remember...

In the army, I wanted to go work on PCs and not on mainframes and minis, which were all the rage back then. And they were like, come on, this is just for games. What's interesting about that? I told them, you just wait. And I like, it's a prophecy and it worked.

So yeah, I've been through a lot of revolutions and it's fascinating. The computer industry and the software industry: the OOP revolution, the functional programming revolution, the web revolution, the mobile revolution, now the AI revolution. It's really, really amazing and really, really interesting. So I keep up with everything, even front end. Like I started...

Around 2000, writing front-end code where you had to have one big file full of JavaScript and you could do very little. And then Knockout came out, and then AngularJS, and then React. And it's so fascinating to see all the different solutions coming up for the same problems. I just love it. So when people come to me, I know what I'm talking about. And we talk on the same wavelength.

They are younger, though. Yeah, I want to rephrase the question a little bit. When you're a deep expert on something, it can be hard to maintain empathy with someone who's very new to it. They're facing very different challenges from you. For example, when I was working at Codecademy, we would have some deep experts on C++ who deeply understood memory management, trying to explain how to write a for loop to someone who had never programmed before.

So do you find it difficult or do you have any strategies around taking someone who's, say, a niece or nephew and teaching them despite you having a deep understanding of everything in the computer? Yeah.

I don't find it difficult. I like doing it. And I always, like, I know a lot of people, when explaining, they're diving deep into the problem and you don't understand anything because you're deep in the rabbit hole, and everybody around you is saying, well, where are we? Where are we? I don't understand anything. But they're not saying it, because they're polite, or confused, or ashamed that they don't understand anything.

I always thought, okay, let's start with the background, like the context of things, and slowly explain each and every single step by single step in a very... I really, really tried to make it orderly and logical and structured. And a lot of people come to me and say, wow, this is the way it should be explained. And again, it's because I take the step back, I give the motivation, give the understanding of the context, and then slowly and logically understand everything. And once you do that...

It's fine. I mean, it works. And no, when I'm working on JavaScript, I don't explain memory management, obviously. So you need to understand the scope of what you're explaining.

That's a great way of putting it. You must have a lot of patience. I understand a lot of folks don't have quite the emotional endurance to be able to sit through and re-explain the whole area to people over and over again. One way to say it is patience. The other way is like, oh, just give me an intern or a junior sitting next to me where I can explain to them something. And I'm like, okay.

It's like somebody comes with a question. I'm like, okay, let's start from the beginning. Let's explain. Promises. Okay, what you're doing here is blah, blah, blah, blah, blah, et cetera, et cetera. I love it. So it's not patience. It's love. It's not patience. It's love. Oh, there it goes. Just as a human. Yeah. Great advice to live by.

Just to humor me before we start diving into some of the more deep aspects of build systems and web tooling, suppose I know JavaScript, but I've been transported in time from the era when callbacks and callback hell were quite popular. How would you just briefly, you know, 30 seconds up to a minute, explain, say, promises or promises A-plus to me as someone who wants to understand?

I have a talk about that. 2014, that was my first talk. I called it "From Callback Hell to Async Heaven and Through the Promised Purgatory." And it's a wonderful talk where I slowly explain from callbacks, I go to promises,

Okay. And how did I do that? I don't remember anymore. And then from promises, slowly I went to the yield thing where you yield the promises and then async heaven and boom. I'm guessing 20% of the audience really, really understood that back then. Probably also now.

But still, I think it was a very orderly way of explaining it. You just go explain the problem with callbacks and then moving from callbacks to promises. The idea is taking the callback and treating it as a value: turning the callback into a promise. So it's a value. And then once you have promises, then you extend to .then, .then, .then. And then you do the yield and blah, blah, blah. And everything is great.
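The buildup Gil describes can be sketched in code. This is an illustration, not material from his talk; `readConfig` is a hypothetical callback-style function standing in for any error-first async API.

```javascript
// A callback-style API, error-first convention, simulated with a timer.
function readConfig(name, callback) {
  setTimeout(() => callback(null, { name, debug: true }), 0);
}

// Step 1: wrap it in a promise. The eventual result becomes a *value*
// you can pass around, store, and compose.
function readConfigPromised(name) {
  return new Promise((resolve, reject) => {
    readConfig(name, (err, config) => (err ? reject(err) : resolve(config)));
  });
}

// Step 2: chain with .then instead of nesting callbacks.
readConfigPromised('app')
  .then((config) => config.name)
  .then((name) => console.log(name)); // prints "app"

// Step 3: async/await — the same promises underneath, nicer syntax.
async function main() {
  const config = await readConfigPromised('app');
  return config.debug;
}
main().then((debug) => console.log(debug)); // prints true
```

Each step is the previous one with the asynchrony lifted one level higher, which is exactly the "orderly" progression from callback hell to async heaven.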

It's tough, but it's doable, or at least somewhat. Yeah, there's a kind of beautiful parallel, I think, between the way you're explaining it and the way it's technically built up. Because for those who haven't gone deep into promises, it really is built that you have the concept of generators, which then powers async-await, but in order to do so, you need to work with basic raw promises underneath. Exactly. And what I'm seeing today is the exact opposite. People are using async-await

And they have no idea what a promise is. So they're not using .then and .catch. And once you get into the promise world where you're not awaiting something, but you're like taking the promise and putting it aside and then running things, they get confused. What is this I'm doing? Like I asked somebody, you have an async function and then you console log a call to that function. What is it console logging? And a lot of people say, let's say it's just returning a five. They're saying five and not a promise to a five.
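The question Gil poses fits in four lines. A sketch, assuming the usual Node `console.log` inspection of a promise:

```javascript
// An async function ALWAYS wraps its return value in a Promise.
async function five() {
  return 5;
}

console.log(five()); // logs something like: Promise { 5 } — not 5
five().then((value) => console.log(value)); // logs 5
```

The common wrong answer ("it logs five") is precisely the missing link between await syntax and the promise machinery underneath it.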

A lot of people today, because there are so many levels of abstraction and because it's so ingrained today, don't really understand the connection between awaits and promises, not to mention callbacks. Do you have any particular strategies or ways that you would educate folks if you were, say, working on a team and you wanted to help folks understand how that all is built up? Slowly. Always slowly.

You said patience. In the end, if you go too fast, people don't understand. So slowly and logically and methodically, there's no other way.

It's kind of inevitable too in any area of tech that we are constantly building things on top of the older, that we're learning how to better do stuff, creating higher and higher abstractions. So by nature, although we're more powerful as developers, dear God, I can't go back to pre-async, to callback hell. We also have more we have to learn to really truly understand the full stack. Yeah, absolutely. And I programmed in assembler, so I go even deeper. Yeah.

There is a certain satisfaction to knowing where all the bits and the bytes are headed. Yes, a lot. Well, let's talk a little bit about the stack because you've done a ton of work on build systems and web tooling and Node.js over your career. What is your relationship to the build system or CICD for Node projects these days?

It combines two of my big things that I love, which is build systems and Node.js. Even in 2000, when source control was just an idea that like 20% of developers used and build systems were basically non-existent, there was make, that's about it, the company I built decided to build a build system. And it was this huge amount of Perl code, which took XML files, or...

I don't think it was XML; it was something like it. We needed to parse everything and build a build out of it. It was extraordinarily beautiful. Obviously,

unmanageable after a while, but we really, really tried because it was really important for me. It's not the build system. It's the idea of modularization that is important to me, not to look at all the code as one huge, big monolith, but rather as a set of modules or packages, as we call them in JavaScript today. So that's one like love. The other love is Node.js.

I've done tons of stuff: assembler, Pascal, BASIC, C, C++, Java,

Python, and now Node.js and JavaScript. And the most productive I've ever been is by far with Node.js. I have this developer today. I gave him a plus-one ticket to NodeTLV. There's a conference in June in Israel, NodeTLV. So I have a ticket because I'm speaking, and I gave him my plus-one ticket. And he comes to me, he says, I'm really excited about NodeTLV. I said, why? He said, because I'm writing in C# and

I'm really, really missing the productivity and the ecosystem of Node.js. And he's right. It's unparalleled. I mean, it has its problems, but nobody has built anything remotely to the scale of NPM in the world. It's an amazing success. Tons of problems, but an amazing success story.

Yeah, it's truly not comparable directly to any other system in existence, where just the sheer ability of people to iterate, to publish, to learn from each other, and as we were saying, build abstractions in NPM is never before seen in human history, which is kind of a weird scale to be able to apply to something that's come out over the last decade or two. Absolutely. I mean, I remember there was this holy grail in the early 2000s of building a component model.

Everybody wanted to build a component model because the idea was, erroneously, if we have lots of components that are well-built, we can just glue them together and build an app. A bit naive, but...

NPM is the closest we got to this goal because I can truly, like, I need a mutex today, an async mutex that locks parts of the code. And it was like, okay, NPM, search for it. There's Snyk Advisor, which tells you if it's a healthy package or not, based on lots of different metrics. And boom, I chose a package, npm installed it, well, yarn, and used it. No problems whatsoever. This would never have happened in any other language.

Well, modern languages are better. Go, Rust. In some ways Python, but there's still a horror show there. But NPM tops them all. I look forward to people angrily writing in about your description using the phrase horror show. That's not my problem. Once you install some kind of package manager in Python, well, maybe it works. But, you know, there are so many and they're so complex. Sure.

I feel not just an intellectual need, but for the sake of humor, a need to start complaining about the node and the NPM ecosystem with you. Because as you mentioned, there are some flaws, some drawbacks, which is, of course, natural when you're evolving, when you're moving quickly. You've said a great quote that I want to build on. You said previously, modularization is the number one problem in software development. What would you say is the number one issue or difficulty that you personally feel or experience with writing Node.js?

Writing Node.js? Well, it's been a while. I'm now a front-end guy. I've become used to TypeScript.

And I do not want to transpile. So I have to use these shenanigans like JSDoc typings to make it work. And it works like 95% of the time, but that's a difficulty. Configuration: it's not out of the box. If you take Rust or Go, everything is out of the box. You get your linter, you get your formatter, et cetera, et cetera. If I have a new project for Node.js or React or front-end,

Configuring it is, you know, you need a wizard to do that. Now I've done it so many times that I can do it like with my eyes closed, but every time I do it, like ESLint now has the flat config. So, okay, let's try that. And this doesn't work with that, et cetera, et cetera. Combining all these into one thing is difficult and you need to understand each and every one of the tools.

Once you get into composing them, for example, as you well know, TypeScript and ESLint or TypeScript and React and bundling and TypeScript and bundling and this and that, then it really becomes really, really, really difficult. And you either accept the defaults today or you become a wizard at configuration. Those are the two options today.

We actually just released a beta for v8 of typescript-eslint that changes how you configure typed linting. So I look forward to you and everyone else wizarding through that. We will look into it. Oh, it's very exciting. But yeah, so let's say that you were leading a team and you needed to, let's say, help your developers set up all these tools, you know, whether you're going with the basics or becoming a wizard or luminary yourself. What would you kind of take as your algorithm for helping folks understand what to do in that space?

Well, there's understanding what to do and there's helping them set it up. So what I did at Roundforest was I was the wizard. I built a configuration for each package, and I built this tool where you could take a package and then it's a template and you just create another package from it. And that works really well. That's one thing. The second thing is do as little as possible.

I mean, the more configuration you have, the more tooling you have, the worse off you are, because you need to configure them and make them work with one another. So I really, really tried to cut down on configuration and tooling. So yeah, ESLint and Prettier in the back end, and TypeScript for type checking with JSDoc typings. It's not for transpiling, because I hate transpiling.
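The JSDoc-typings approach Gil mentions looks roughly like this. A sketch, assuming type checking is enabled via something like `"checkJs": true` in a tsconfig/jsconfig or `tsc --checkJs --noEmit`; the file itself stays plain JavaScript and runs in Node with no build step:

```javascript
/**
 * Type information lives in JSDoc comments; TypeScript checks it,
 * but no transpilation is ever needed.
 * @param {string} name
 * @param {{retries?: number}} [options]
 * @returns {string}
 */
function greet(name, options = {}) {
  const retries = options.retries ?? 0;
  return `hello ${name} (retries: ${retries})`;
}

// tsc would flag greet(42) as a type error, yet the file runs as-is:
console.log(greet('node')); // prints "hello node (retries: 0)"
```

That is the tradeoff he describes: you keep most of TypeScript's checking while the development loop stays a single step instead of a pipeline.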

And transpiling, the reason I dislike it, it's not like I dislike it, I use it in the front end, but the reason I prefer not to have transpilation is because it makes everything more complex. It becomes a pipeline. So from my point of view, less is more. That's one thing. Another thing that I do not do, and this goes against what everybody is doing: if I have a monorepo and I have lots of packages, and every monorepo has lots of packages,

Everybody is trying to make one configuration for all the packages.

And it's not easy. Not only that, once you have that, changing that configuration becomes, like, there's this grand wizard that understands everything, and they're the only one that can change it, because there are so many packages depending on it, and there are so many configurations in there, nobody understands anything. What's the other way? And it's arguable, but I worked like that at Roundforest, and I really liked it. Every package has its own configuration. There is no sharing. I mean, you could share, but don't.

This brings two things. One, you're not afraid of changing the configuration because it's only that package becomes affected.

And two, it's a small package. The configuration is very simple, very easy to understand, very easy to change, very easy to reason about. And that, for me, is the most important thing. And that is why modularization is the number one most important thing. Once you have lots of packages and they're independent of one another, not like in modern monorepos, once they're independent of one another, they're easy to reason about, they're easy to understand, they're easy to build separately, et cetera, et cetera, et cetera. And the most important thing is, as I said, they're easy to understand.

And we had a monorepo of about 400, 500 packages, and it worked like a charm because everything was independent. One drawback, though, of every package having its own configuration file, you're nodding and smiling at this. One drawback is that if you need to update something, then you have to update it in those 400 or 500 packages. What do you do in that case?

There are two answers to that. One is I don't. Why do I need to upgrade everything in all those 400 or 500 packages? That's one. The other is if I do need to, for example, I want to add a rule for packages, what I say is this. Whenever a developer goes into a package to modify it, first of all, update all the dependencies. That's very important. Second, they run a script that knows how to update those packages, like a code mod for configuration.

And that code mod knows how to check if it's possible to change the configuration or not, et cetera, et cetera, et cetera. Does it work 100% of the time? No, it works 95% of the time. I don't have a, like, the people that say shared configuration aren't wrong. They're right. But,

But I'm also right. And the question is a balance. What is the better right, or what is the worse wrong? My point of view is I would prefer the burden of sometimes updating the configuration, because it's only sometimes. Remember, if you have 400, 500 packages, you can't do a shared configuration anyway. It's just not possible. Okay. So I prefer the burden of once in a while getting problems with updating the configuration

rather than the burden of shared configuration where you can't really change anything because you can't really make it work on 400 packages. And the fact that for me, a package has to be independent. It has to be built independently, tested independently, etc., etc. Otherwise, it's just a monolith in another form.

So yeah, there is a problem with non-shared configurations. I understand it. I felt it at Roundforest every day, but I still think it's better than a shared configuration monorepo. So what are the advantages then for you of having a monorepo versus a whole bunch of individual single repos? One only, very easy one. I can search across all the packages.

That's it for me. And by the way, at Roundforest, we didn't have one project. It was not a project monorepo. It was a company monorepo, just like at Google. So we had lots and lots of projects residing in it, and each one used different packages from the monorepo. And the way we dealt with it was we had Visual Studio Code workspaces, where each workspace had only the packages that belonged to the project.

So when I searched in VS Code, then that searched only the packages that belong to the project. And then I could do whatever, search, reference, et cetera, et cetera, search for references, et cetera. And it just worked. So that is the main thing. Managing 400 Git repos is basically impossible. Managing one monorepo where you can search everything, that is, for me, the only advantage of a monorepo like that.

because I prefer not using shared config monoliths. How do you deal with the build systems issue then, where you have 500 different packages, you don't want to rebuild the other 499 on one change?

Now people will get really, really angry. Okay. We build locally. Okay. I know. They go, oh, no, but you don't know what's on the computer. It was true in my time, like in the early 2000s. You never knew what was on the computer. But today, look, this is the JavaScript ecosystem. It's NPM. Everything is local in the package.json. All you need is Node.js of the correct version. And if you have an .nvmrc, that deals with that automatically. So...

Why not build locally? And we're talking packages. I don't build everything. So when I'm working on one package, I'm working on it, building it and running the test takes not more than a minute, maybe two minutes.

So, why push it to CI, have this whole process that takes like whatever, 10 minutes, because there's a lot of overhead, and then you get back, oh, there's a failure. So, we just built locally. We had a script to do that. So, the script basically did an NPM install, NPM build, NPM test, and NPM publish to publish the package. And now people say, well, what if I want to work on two packages at the same time? Well, no.

That's the problem. The need to work on two packages at the same time is not a good thing. It's a problem because...

Because now, once you work on two packages at the same time, they start becoming dependent on one another in terms of thinking, in terms of testing. You're not testing into two packages. You're testing only in one package because you're anyway building them all together. So you suddenly have hundreds of packages with no tests whatsoever, and you start really, really getting afraid of changing them because there are no tests and they could

like affect all the other hundreds of packages. So you start becoming afraid because the packages are not independent. You're basically back into a monolithic code base.

The code base we had at Roundforest, and before that at Applitools, each package was really independent. Really, really. So it had its own tests, and you could only build that package and publish it. So you had to work package by package. And a lot of people say, oh, that's a really bad developer experience. I say, yes, not perfect, but it creates packages that are really, really independent.

They have their own tests, and the tests are rock solid because they're built independently and tested independently. So it's weird. I'm taking away some freedom, which is you build packages by package, okay? But what I'm gaining psychologically is an amazing ecosystem where you can have 400, 500 packages, and they're all independent, but now there's just no problem. And if you think of it, the NPM ecosystem works exactly like that.

Each package is independent of one another. So what I built was like a mini NPM inside the company. Now people say, no, it doesn't work. It doesn't work. Yeah, I know. But go to Roundforest, work there and see how it works. I love it. It's solipsism at the package or monorepo level. Absolutely. I was surprised how well it worked, actually.

This episode of Software Engineering Daily is brought to you by Jellyfish, the software engineering intelligence platform powered by a patented allocations model and boasting the industry's largest customer data set. You know, one of the biggest shifts in the past year or so is the adoption of Gen AI coding tools like GitHub Copilot.

and engineering leaders are trying to figure out if their teams are actually using it, how much they're using it, and how it's affecting the business. Are they shipping more? Is more roadmap work being done? How do you know beyond anecdotes and surveys? That's why Jellyfish introduced the Copilot Dashboard in 2024 in partnership with GitHub.

and since then they've analyzed data from more than 4,200 developers at more than 200 companies. Want to know an interesting finding? Developers are shipping 20.9% more with Copilot. The folks at Jellyfish have a ton more insights, and you can get started seeing your own team's data so you can plan and manage better. Learn more at jellyfish.co slash copilot today.

I wouldn't think it would work unless someone were talking to me on a podcast who's had experience working at a company making it work. Me too. I'm still like, oh my God, this is a great idea. It was bits and pieces. Like suddenly it just happened. And then suddenly, oh, this is happening. This is actually happening. It's a methodology. So...

It works. Let me try to poke a hole in it, though. You mentioned you're working more in front-end these days. Suppose you're working on a design system, which is then consumed by a dozen different variants of a front-end app or some such, and you want to make an API change and/or change some colors. So you need to test TypeScript type compatibility, logical compatibility, accessibility in case the color change messes stuff up. How would you do that if you have the one package that then is consumed by others?

What you don't want to do is, if you have a design system, check all the other 10 apps, because there's no way you can do that. There's just no way. So you build it in a backward compatible way. If it's not backward compatible, you do a semver major. And now once you change that, the apps consume that package. And let's say they find a bug there. Okay. You introduced a bug because we're people, we're different.

And so the app chooses not to upgrade to the new version of the design system, stays where it is; you go to that package, fix the bug, add a test, obviously, publish, and now they can consume it. I know it's a roundabout way, but look, a project monorepo where you share config and everybody works on all the packages works for 40, 50 packages.

Once you get to the scale of 400, 500, you just cannot build everything. You just cannot change everything. If you have a package, you cannot say, okay, let's test it in all the other packages. It works because people have a project monolith, one app with lots of packages around it, where "lots" is 40, 50. That's it. But that's not scale.

That's a monolith with folders that we call packages, because we're cool. A monolith with folders. Yeah, that's fascinating. If I remember right, that's how the Microsoft design systems were working. When I was on an Office team, every time we wanted a new version of, say, Fluent, we would have to pull it in and check, okay, does this work? We would have our own pull request with our own validations checking, does this new version work?

Exactly. You have tests. So I don't care. At Round Forest, we pulled it in automatically. Whenever you started working on something in a package, we just upgraded all the dependencies, semver minor and sometimes semver major, and just ran the tests. Like, 99% of the time the tests pass and boom, we're good. The 1%, okay, we fix a little bit here, fix a little bit there, or we say, no, this package, I don't want to upgrade it, which is fine.

Before we change topics to talking about Node.js, were there any other hot takes or hard opinions you wanted to put forward on the concept of monorepos and package management? Yeah, I think I've put out enough. Okay. You've incurred enough internet hatred for yourself to deal with for the next year in this one conversation. Let's talk about Node.js. How did you first get involved with that lovely project? I was at Wix and...

And there was this project that needed transpilation. Babel didn't exist, so I used the Google transpiler. And I didn't know, like, I knew only front-end JavaScript. And I understood that I needed something in the back-end that transpiled the user's code. Remember, Wix is like a place where you can build your own website. And our project was not only build your own website, but allow people to code

in their website, like back-end and front-end, et cetera, et cetera. So we knew we had to transpile, and we had to do lots of build stuff, et cetera. And the obvious pick was Node, which was at version 0.12, 0.11, 0.10, I don't remember exactly. And I picked it up and wrote a POC in two days.

Those were the days of callbacks. Not a promise in sight. And I was like, oh my God, this is amazing. I want more of that. And obviously we continued working with Node.js, and it's still my pet. I love working with it. I love the ecosystem, and I love the project itself. That was the first love. And I started talking about stuff in Node.js, and one of the things I talked about was ESM, because ESM,

modularization and packages and module systems are obviously very much aligned. I got really interested in ESM and how Node.js worked with ESM, which, back in the day, it didn't. It used only CommonJS.

And I gave a talk in Japan, and Myles Borins was there, and we started talking, in the cab and everywhere. And he suggested I become a contributor and an observer in the Node.js ESM working group. And that was the start of my involvement with Node.js. Mostly ESM, mostly as an observer, mostly as a speaker about ESM or whatever is current in Node. I'm not so much a contributor.

Not all of our listeners might have a deep love and understanding. What is ESM, and how does it compare to CommonJS, or whatever Node was doing before ESM? Right. So we're talking back-end, Node.js, not front-end. Okay, so let's talk Node.js. Node.js was born back in the days when there was no module system in JavaScript. There just wasn't. Import from and export were just thoughts in the minds of Dave Herman and others.

So they adopted one. It was called CommonJS. They didn't invent it; they took somebody else's module system, if I remember correctly, and used it. It was the require thing, you know, const lodash = require('lodash'). And that was what we used, to great success. NPM is built on CommonJS.

And that was great. But then ES6 came along and ES6 said, "Okay, but we need a common module system." And they invented or standardized the module system, which is called ESM, ECMAScript modules.

With the syntax we know, import from and export whatever, export default and export this and export that. So now we have two module systems, CommonJS and ESM. ESM, it took a long while for it to enter Node.js and to enter browsers, by the way. 2015 was when it was standardized, but I think browsers got it only in 2018, something like that.

And Node.js really got it only in 2020, I think. I don't remember the dates, but somewhere around that area. So a long, long while to make it standard. And today we have two competing systems, in a way. Node works with both CommonJS and ESM. So that's ESM.

I can see what you were talking about 20 minutes ago in the interview, with how you layer your descriptions, where by the time you actually said the technicals of what ESM is, it made total sense in the context of how we came from an original Node with no module system and then built up to the language specification. Because what a lot of people miss is that, A, as you said, it's old, it's from almost a decade ago that it was standardized, and B, it's an addition to the language. It's not like CommonJS, which is just a few identifiers and function calls that happen to be magic.

Exactly. And this is why I'm a huge proponent of ESM. Today, NPM is a hodgepodge of ESM and CommonJS, and front-end developers who today use ESM only still need to import packages that are CommonJS. And it's a big mess. As we all know, bundlers are trying to hide that mess from us, and succeeding 99% of the time. It's that 1% that is hair-pulling.

So that is why I want ESM to succeed in Node.js. I think it's incredibly important for the ecosystem. What do you see as the steps being taken now that are particularly positively impactful towards making that success happen in our lifetime? I want to thank Sindre Sorhus.

Sindre Sorhus, I hope I'm saying the name correctly. Sorry if not. He's this wizard who has about a thousand packages to their name. Great packages; I just used the mutex one. Thank you, Sindre. And they decided that they're moving all their packages to ESM only, and all the next versions of the packages will be ESM only and not CommonJS. And that's difficult. And why is that? Because in Node.js,

ESM can import CommonJS, that's not a problem, but CommonJS cannot import ESM. So if you want to use the latest versions of Sindre's packages, you need to convert your app to ESM. So that's one stake in the ground, or however you say that. Sure. Whatever.

And more and more packages started doing that, either being ESM only or being dual mode. I think as of today, about 15% of the top packages are ESM, either ESM only or dual mode. So that is great. Hopefully it will continue. I'm very ambivalent, but Node just introduced the ability to require ESM. Still experimental. I'm like, no, please don't do that. Don't, don't, don't. Because I've been preaching not to do that.

A lot of people are trying to convince me that it is a good thing, but it has its problems. And in the end, code wins, PR wins. So it's going to be in the ecosystem and maybe it will help, maybe it won't. But I think that in the end, we're going to be ESM.

in full, like in a few years. So that's the goal. So those are the two things, I think: the move of packages, and maybe require of ESM. From a naive or first-timer perspective, being able to import a CommonJS package from ESM and an ESM package from CommonJS, going both ways, seems like it would be very lovely. Why might one of those directions not be ideal for the ecosystem, in your mind? Right. Two reasons. One is that ESM, in essence, is async.

How do I know that? There's a feature called top-level await, where a module, in its top-level code, can just await something. And that means that importing it is an async process. That is why you have to await a dynamic import(); you can't just load it synchronously. So it is an async module system.
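A rough sketch of why this matters for require(): the async IIFE below plays the role of a module whose top-level code awaits something, and the config value is made up for illustration:

```javascript
// A module with top-level await is, in effect, a promise for its exports.
const moduleNamespace = (async () => {
  // stands in for top-level `const config = await loadConfig();`
  const config = await Promise.resolve({ port: 3000 });
  return { config };
})();

// require() must return the exports *synchronously*; all it could see at
// this point is a pending promise, which is why it fails instead:
console.log(moduleNamespace instanceof Promise); // true

// import() callers, being async, simply await the namespace:
moduleNamespace.then(({ config }) => console.log(config.port)); // 3000
```

So a synchronous require() has nothing sensible to return for such a module, while an awaited import() does.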

And requiring an ESM module is weird. What if there's a top-level await there? And the answer today is if you require a package or a module that has top-level await, the require fails. It's that simple. And that generates a split. I can't use top-level await because maybe somebody will require me. That's one reason, not a big reason. The other reason is

Once more and more packages move to ESM, this will force the hand of other packages, and other packages, et cetera, et cetera, to move to ESM. Once you have the ability to require ESM, people can stay in CommonJS forever. And as we all know, inertia wins in organizations. So those are the two reasons I think it's problematic. I'm not sure I'm right. Absolutely not sure.

Sure. There is pain. I know Sindre got a lot of flack online for moving to ESM only. But as you just said, there is benefit, you know, pushing the young bird out of the nest once it's able to fly. It doesn't hurt, by the way, that Sindre, for those who haven't seen on GitHub, has an incredibly cute dog

in the profile photo, which is probably very helpful to the cause. Before we start wrapping up, another interesting thing about ESM, a lot of folks, especially in front-end, will import stuff from odd extensions, like images from .pngs or .webps or CSS from .css files. As someone who's worked deeply in the import and loader areas of Node, how do you feel about non-standard imports and their place in the ecosystem? I have a lot of problems with that.

It's only the front-end world. I mean, in the back-end, we don't do that. The maximum is importing JSON. We don't do that in the back-end because, well, I don't think we need a reason for that. But in the front-end, it's very, very common. It started with importing CSS and CSS modules, and now we're importing everything: SVGs, PNGs, whatever. It's problematic because, one, it's non-standard, so the definition of what exactly this means changes. And two,

the bundler needs to deal with that, and each bundler deals with it a little differently. So if I try to move from webpack to Parcel, and from Parcel to Vite, and from Vite to whatever comes next,

then I invariably have to deal with problems in those parts. And it's very, very difficult to move between bundlers for that reason. So that is why I dislike it. It's non-standard. It creates lots of problems. And people say, okay, just don't move bundlers. But a project doesn't live for two years. It lives for five years, seven years, 10 years. And you have to change technology. And if you're non-standard, that makes it very difficult.

But standards are informed by common usage. And the fact that so many people are doing things like import styles from .css means that there is a real need here. So how do we get to satisfying that need without this kind of bizarre, painful real-world experimentation? I don't know. Seriously. I mean, I've just started front-end. I know front-end, but I've just started really developing front-end. And the project I have, we're importing CSS and PNGs and SVGs and whatever.

So I haven't really wrapped my head around, okay, what if I don't do that? Is there a problem? How do I solve it? My guess is that if people say, okay, I don't want to do it via imports, but some other mechanism, they'll figure it out. And it's, I think, not that difficult. But I seriously don't know. Maybe the developer experience is so good that it's the only way out. I don't think so, but I don't know. I really, I seriously don't know.

It's a hard question. But there's also built-in support in Node now, right? Is intercepting the right word? Saying that when you import a particular file, there's some kind of importer or loader that's able to come into play? In Node.js, yes. There are loaders. I have a talk on loaders; you can search for it on YouTube. Yeah, you can intercept, just like we did in CommonJS.

But in CommonJS, we also intercepted. That's why you can do ts-node and run TypeScript. Because what ts-node does is intercept all the requires and transpile them on the fly.

But in CommonJS, it was problematic, because the interception was, like, we just reverse engineered everything. And now Node.js is tied to the way everybody does that, so they can't change anything around it. In ESM, they decided to do it the correct way and described a standard way to implement loaders. And it's really, really nice. It took a while, but now it's here, and it's really, really nice.

What would an example use case, other of course than CSS, be for a loader? ts-node. That's what they use, loaders. Babel transpilation, or any kind of transpilation, basically. The second is mocking modules.

You know, you want to say, if somebody imports this module, instead of this they get that. We use it a lot in tests. So loaders are the only way to do that. And that's great. I actually wrote the first loader for mocking modules, in testdouble. It was a really, really nice experience. What else? Oh, HTTP loading, or loading from a zip. You can do that too, if you want, instead of just from the file system like Node.js does.

So transpilation, mocking, and other forms of protocols. Those are the three off the top of my head that I can think of. Oh, APM, application performance monitoring. So tracking performance, et cetera, et cetera. Yeah, that's the fourth. I hadn't thought of that one, being able to put the application monitoring directly into the code with import calls. Yes. That's really nifty. It is.

The testing, though, is so interesting, because that's been such a pain in the tush for such a long time for Node folks, using whatever it is, Jest or Vitest or Mocha, where there's just this set of sometimes semi-standardized-with-each-other ways of saying, if I import something, I want it to be mocked out. And people have such a hard time properly setting up, you know, jest.mock, vi.mock, and so on. Do you think that's going to get friendlier over time now that Node loaders are a little more standard? Correct?

or actually standard? Good question. Jest has their own way of doing module loading; they're basically replacing CommonJS with something of their own. Vitest is a bit more standard in that it transpiles the code on the fly, but it's still using Node.js ESM. But I think...

I don't think they're using loaders. So they're just transpiling everything, including the imports, so that they work in an expected way. I think they could use loaders. Not sure. But Vitest itself is, oh no, Vitest just uses Vite. And Vite is a front-end bundler. It can't use Node.js loaders. So that's out.

A lot of back-end projects use Vitest. That's really funny, that it's built upon a front-end bundler. I'm not sure why. It just makes them slower, and it does weird things to your code. So I'm not a fan. I use it, and I love using it in front-end. But in back-end: Mocha, node:test itself, Ava, all the regular ones that just import the test files and run them, I think they're the best. They're the simplest and easiest to understand.

I would love to dive into that with you, but we're almost out of time here. And I wanted to save time at the very end for a fun personal question. You are a prolific reader and consumer of books and movies. What's your latest reading kick?

in the last few months? Well, mainstream, very mainstream, but Three-Body Problem. I really liked it. I read it like two years ago, I think. And then the Netflix series came out and I said, okay, let's read it again. Wonderful. It's weird. It's different. It really is Chinese. It's different than American or British sci-fi. So there's that.

And I'm doing a marathon of my favorite-ever author, Samuel R. Delany. He's a '60s writer. He's still alive, still prolific, a great writer, mostly doing mainstream, very weird literature. But he's amazing. And like every 10 or 20 years, I'm just like, okay, let's read it all again.

It's still amazing. Wow, he is prolific. He's won quite a few awards. Oh, Grandmaster, everything. Yeah, he deserves it. He really is one of the more complex and interesting sci-fi authors ever. What do you look for in a sci-fi author or their books?

Well, it depends on the mood. I mean, I love my space opera, so I'm fine with that. But in the end, I tend towards the more psychological, complex, very literate sci-fi. So yeah, again, it depends on the mood.

Personally, and this will result in a question. Personally, I really like books that make me question and kind of increase my ability to understand either myself and or the world around me. Like Asimov's Foundation series made me really think about, well, what is the behavior at mass of people and systems? Like, what are we moving toward?

Is there a particular book or series that you would recommend folks use to get into thinking more about themselves and the world around them? Ooh, that's a question. Well, Delany, sorry, I have to go back to him, because he's the first author that introduced me to the idea of postmodernism, or poststructuralism, where you understand that text and language can be read

in different ways and can be understood by different people in different ways. And culture is a very, it's not ingrained into us, but it changes and develops and everybody has their own culture. And he does it in such a beautiful way.

But maybe the author that has changed me most, or influenced me the most, is one of the big three. There's Asimov, you mentioned him. There's Arthur C. Clarke, who is wonderful. And Rendezvous with Rama, it's a huge series, has this amazing quote at the end. It's: what is the meaning of life? And it's loving and learning. And it's the best quote for me ever. But the most influential author is Robert Heinlein.

By far. I know he's very libertarian, not liberal, very, very much so. So that's not me. But his ideas around family, sex, friendship, love...

They've touched me very, very deeply and influenced how I think of the world and people.

And then later, much later in my life, I learned that it was strongly objectivist. One would even say objectivist propaganda. I didn't pick up on that at all when I was reading it. But I did take a lot of, you know, philosophies out of it, like standing up for yourself, standing up for what's right. At the time, I think it really linked to the Jewish concept of tikkun olam, which is, you know, healing the world around you. And in retrospect, I have no idea how I got there. But at the time, it really helped me understand those things.

Yeah, same for Heinlein. Looking back, you understand how his philosophy of zero government is deeply flawed, but all the other concepts are amazingly beautiful.

Well, that's a lovely way to end an interview. Gil, I want to really thank you for chatting with me this last hour. I think this has been a wonderful conversation on Node and the ecosystem and build tooling and love and teaching. So again, thank you. This has been great. Is there anything else you want to plug, or any particular ways folks can find you online? Oh, Gil Tayar on Twitter, which is the best way. Feel free to contact me, talk to me, ask questions. Don't be shy. Really, I'm there always. Thank you.

Well, we all appreciate it. And for Software Engineering Daily, this is Josh Goldberg. Thanks, everyone. Cheers.