
Next.js 15 with Jimmy Lai and Tim Neutkens

2024/12/5

Software Engineering Daily

Key Insights

What are the key upgrades introduced in Next.js 15?

Next.js 15 includes enhanced integration of Turbopack, support for React 19, stability improvements, and new features like the Next Form component and after hooks for request lifecycle management.

Why did Next.js 15 take longer to release compared to previous versions?

The team decided to take more time to ensure the release was polished, bundling significant changes and stability improvements to set a new baseline for the app router.

What is the significance of Turbopack in Next.js 15?

Turbopack is now stable for development, offering a 95% faster hot module replacement (HMR) compared to previous versions, significantly improving developer iteration speed.

How does Turbopack improve development performance?

Turbopack reduces the time it takes to see changes reflected in the browser by only recompiling the affected parts of the code, rather than the entire module graph, resulting in faster updates.

What is the motivation behind the async request APIs in Next.js 15?

The async request APIs were introduced to prepare for the upcoming dynamic IO feature, which simplifies the distinction between static and dynamic content by marking pages as dynamic if they use promises.

How does Next.js 15 handle static vs. dynamic content determination?

Next.js 15 uses dynamic IO to determine if a page is static or dynamic based on whether the code uses asynchronous operations, marking it as dynamic if it does.

What is the relationship between Next.js and React in terms of development?

Next.js and React have a close relationship, with Next.js often integrating React's latest features, such as server components and new directives, to enhance the framework's capabilities.

Why did Next.js 15 release with an RC version of React 19?

Next.js 15 released with React 19 RC to align with the React team's roadmap, but the stable release was delayed due to a significant change in React's suspense behavior, which Next.js decided not to block on.

What is the purpose of the Next Form component in Next.js 15?

The Next Form component is a drop-in replacement for the standard HTML form, adding features like prefetching and client-side navigation to improve form handling and user experience.

How does Next.js support self-hosting and non-Vercel deployments?

Next.js supports self-hosting through tools like Open Next, which allows deployment on various serverless platforms. The team is also working to standardize infrastructure outputs to make self-hosting easier.

Chapters
Next.js 15 brings enhanced TurboPack integration, React 19 support, and stability improvements. Key features include the Next Form component and after hooks for request lifecycle management. The release focused on improving developer experience and addressing common issues like hydration errors.
  • Enhanced TurboPack integration for faster development
  • Support for React 19
  • Improved stability and numerous bug fixes
  • Next Form component for easier form handling
  • After hooks for request lifecycle access

Shownotes Transcript

Next.js is an open-source JavaScript framework developed by Vercel. It's built on top of React and is designed to streamline web application development using server-side rendering and static site generation. The framework's handling of both front-end and back-end tasks, along with features like API routes and file-based routing, have made it an increasingly popular choice in the web dev community.

Next.js 15 was just released in October 2024 and introduces significant upgrades, including enhanced integration of TurboPack and support for React 19. Jimmy Lai is a software engineering manager for Next.js, and Tim Neutkens is the tech lead for Next.js and TurboPack. They joined the show to talk about Next.js and what's new in version 15.

Kevin Ball, or K-Ball, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action discussion group through Latent Space. Check out the show notes to follow K-Ball on Twitter or LinkedIn, or visit his website, kball.llc. Hey guys, welcome to the show. Hey.

Hey. Thanks for having us. Yeah, good to see you. So let's start out a little bit with some quick introductions. So let's actually, I'll throw to you first, Jimmy. Jimmy, do you want to introduce yourself and your background and how you got involved with Next? Yeah. So my name is Jimmy. I'm a French software engineer. Before Vercel, I used to work at Meta in London. I used to work on React Native and on some internal products. I used to work a lot on web performance and everything related to that, like to product infrastructure.

In general, I decided to join Vercel because I wanted to work with a company that focused on, well, actually on performance on the web.

Guillermo's mission really struck me there, in terms of bringing the amazing technologies we had at our companies to everyone. I didn't start working on Next when I joined. I used to work on the FutureFlex, but I quickly joined the team back in, what was it, end of 2022. And ever since, I've been working mostly on the app router. A year ago, I started managing

the team, and we've handled mostly the 15 release and a lot of the great work the team has completed in versions 14.1 and 14.2. Yeah, that's it on me. Awesome, how about you, Tim? Hey, I'm Tim. I've been working on Next.js for a while, since 2016 when it first came out, as a contributor, and I eventually joined Vercel in 2017.

And since then, I've been building quite a lot of different things, but mostly working on Next.js across all of them and building out the team. And now I'm tech lead for Next.js and TurboPack. I'm mostly focused on TurboPack nowadays and trying to get that over the line and into the hands of everyone. Yeah. So the impetus for this is y'all just had a big release. Do you want to tell us kind of what that was and what's in the box? Yeah. So...

This release has been a long time coming, actually. For those not familiar with the release schedule, we used to ship really regularly in the past years. After we shipped the App Router with Next 13, we were really following up on it every month or so. With 15, we decided to take a slightly different approach. We wanted to take a bit more time to make sure it's really polished. And so we released a release candidate back in May.

And it took us quite some time to shape it, because we're now basically in October. It's been six months. We really took the opportunity to bundle as many nice changes as possible so that we could set it as the new baseline for the app router. We added an insane amount of stability improvements. We sprinkled in some

features like the Next Form component, or the after hook, which allows you to tap into the request lifecycle. And we also, you know, improved so much on the TurboPack development story, which to me is actually maybe the biggest headline for Next 15. TurboPack is now stable for development. Maybe Tim can say more about it as well.
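A rough sketch of what the after hook mentioned here might look like in a Next.js 15 app. This assumes the experimental unstable_after export from next/server and the experimental.after config flag, which is roughly how the feature shipped around the 15 release; check the current docs for the stable API, and note that the logAnalytics helper is purely hypothetical.

```tsx
// app/page.tsx
// At the time of Next.js 15 this also required `experimental: { after: true }` in next.config.
import { unstable_after as after } from 'next/server';

// Hypothetical helper, not a real library call.
async function logAnalytics(event: string) {
  console.log('analytics:', event);
}

export default function Page() {
  // The callback runs after the response has finished streaming to the user,
  // so logging or other side work doesn't delay the page itself.
  after(() => logAnalytics('home-page-rendered'));

  return <h1>Hello, Next.js 15</h1>;
}
```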

Yeah, there have definitely been other changes as well. Besides features, there's been a lot of work on polishing things that people run into every day. So it's definitely been a strong shift in focus towards stability improvements. So it's like,

just make your day-to-day better, in short. So if you ever use Next.js, or any React framework that does server-side rendering, you've seen these hydration errors, because you add a date somewhere and the date changes once it gets to the browser, that kind of thing. Those errors, we just saw that everyone was struggling with them. We were struggling with them ourselves as well inside of Vercel. It was just not clear where the error was coming from and what was causing it, right? So what code, and what was even changing on the page that caused the error, right?

So what we did is we worked with the React team to make React better in that regard. So that React can actually show, this is a diff, basically, for where this thing is mismatching, and that is the component that was causing it. So what's really nice now is that you get all these small tweaks that may seem like very small stuff. But in the end, it's affecting a million developers every day, because it just makes it easier

to solve your hydration errors, or some other errors that didn't have correct source mapping, or things like that.
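A generic illustration (not from the episode) of the kind of hydration mismatch being described, plus one common way to avoid it:

```tsx
'use client';

import { useEffect, useState } from 'react';

// Classic mismatch: the server renders one timestamp, the browser renders a
// slightly different one a moment later, and the two trees no longer match.
export function BrokenClock() {
  return <p>Rendered at: {new Date().toLocaleTimeString()}</p>;
}

// One common fix: render the client-only value after mount, so the server HTML
// and the first client render agree, then update.
export function FixedClock() {
  const [time, setTime] = useState<string | null>(null);
  useEffect(() => {
    setTime(new Date().toLocaleTimeString());
  }, []);
  return <p>Rendered at: {time ?? '...'}</p>;
}
```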

So that's just on the Next.js 15 side of things. And then as part of Next.js 15, we're also shipping TurboPack for development. So TurboPack is the new underlying compiler and bundler for Next.js. We're planning to make it more of a generic bundler in the future, but right now we're focusing it on Next.js because that's the largest surface area. And once it works well for Next.js and it can build all the dependencies that we see people use every day,

then it will be a really good generic solution already.

So first we focused on development, because that's the thing that most people were running into and had complaints about: things were too slow, it took too long to open a page. Or when you make a change, it would sometimes take seconds before you can see it on the screen, be it CSS changes or code changes. So basically we set out to build a faster solution than what Next.js had up to that point. So we built this new architecture to really scale

to the large amounts of code that we see nowadays. So when I started working on Next.js like eight years ago, JavaScript apps were certainly not small, and the node_modules meme has always been true a little bit. But I would say the bottomless pit has gotten a lot more bottomless in recent years. What we basically see is that

there's more consolidation of libraries, and it's not bad at all, really. It is really nice. So you see more icon libraries, more design systems that just, out of the box, have everything that you need, right? So previously you would have to go and write every single component yourself. Eight years ago, for example, you had to write your own button component, write your own menus, the dropdowns, everything yourself. Now you just have out-of-the-box toolkits that have everything.

But with that comes them shipping a lot of components by default. And that means we have to bundle more. So that's not inherently bad. It just means that our tools now need to scale up with that demand of overall usage.

And basically what that meant for us is that in practice, what we would see is we would see like smaller apps get over 10,000 modules where previously that was not the case. Or in some like exotic cases where you accidentally import like five different icon libraries that all export like 10,000 modules, you would see like 30,000 plus modules for like something that seems to be like a simple case.

When I say modules, I don't mean like... It's the great NPM inflation, basically. Yeah. There's some libraries that ship icons that...

have multiple icon libraries. So you can pick and choose between different icon libraries and use different icons. From a design perspective, maybe not the best idea, but it's very convenient. And that's why we see it a lot. And it's not a bad thing. Like I said, it just means that there's more code to be compiled. And it doesn't mean that we ship more code to the browser per se, because you have stuff like tree shaking and all that. But at the compiler and bundler level, we first need to know about everything that exists before we can actually tree-shake it.

And that causes the compiler to take longer, even if you only use one icon from the icon library. So that's what we set out to build, a new compiler and bundler that can scale up with these high demands of larger apps.

And then besides that, also our own, like Vercel's internal app for like Vercel.com, for example, started growing quite a lot as well. We added hundreds of engineers at Vercel. So it was just like more people working on it day to day as well. So the code base itself is growing way more quickly than it used to. And in order to keep up with the scaling of that, we just had to create a better solution. So that turned into TurboPack eventually.

We basically investigated all different kinds of solutions, but found that it doesn't really fit with the way that Next.js works or the way that we wanted to do Node.js and browser compilation and a bunch of other things. And in the end, we ended up building a new bundler that should set us up for the next 10 years at least. And we can still optimize further as well. So where we're at today, this is just a start. So we're at a certain performance that's much better than the previous compiler, but

But the current performance of the new compiler is still only at a certain point where we're still not super happy with where we're at. It's much better than where it used to be, but it can still be so much better from here. So that's why we're working on disk caching and some extra caching layers to make things even faster across rebuilds.

So just to make sure I understand, this is replacing what you were using Webpack for and what other frameworks might use, like some combination of like Vite and Rollup or something like that? Exactly, yeah. I think what we found, yeah, like Tim said, building on Webpack is just that we were sort of like architecturally limited. I actually don't remember how old Webpack is, probably around 10 years old, Tim, I think.

It's over 10 years. Yeah. And so the whole structure, the whole amount of legacy it had to support, the whole host of weird options and quirks that you could configure via Webpack was limiting us. And so we sat down and we were thinking, what if we could start from the ground up? Obviously, that's going to be a question. We considered using Vite and the Rollup option as well. But

I think we took a really big bet here a few years ago, right? Like, we believed we had the solution to scale it properly. And what's exciting really now is that this is starting to pay off. We spent the last few years iterating on sort of just the basics of making a bundler work. But the great thing to me, which, you know, I was really impressed talking with Tobias about it at the last Conf, is that we built the bundler with the idea in mind that you can

separate each of the tasks that it does and cache them individually at the function level instead of at the module level. This allows us to avoid repeating any work that we don't need to do. First off, we can see that from the HMR performance boost, which is sort of mind-blowing. You hit Command-Save.

on a file and it just, you know, it feels like magic to me. Yeah, we found that it's 95% faster than what it was before. So one example is, on one page of Vercel's own app, it was taking like 900 milliseconds, and it went down to, I believe, 45 milliseconds for the exact same change, right? So like changing some CSS.

Exactly. Yeah, one of the problems that Webpack had, or still has in general, is that the moment you add more modules... So modules are like JavaScript files or TypeScript files or CSS or anything else that you add loaders for, for example.

The moment you have over 10,000 to 30,000 modules, there's just an inherent overhead on processing module replacement updates, so fast refresh updates. What that means is that anytime you make a change, it doesn't matter what change it is. So if it's a JavaScript file change or a CSS file change, you might expect one to be faster than the other, but actually it's not. The CSS file change will still take 900-plus milliseconds because of just

the overhead of having to crawl the entire list of modules. And with TurboPack, we actually made it so that TurboPack only has to redo the work that is affected by the change. So that means if you're writing CSS and you don't have any customization, so you don't add PostCSS or Tailwind or anything like that, we only have to recompile that single file instead of recompiling the entire module graph or the entire

chunk output, like the JavaScript file output, for example, or the CSS file output. We don't have to recalculate those. We only have to recalculate the part that's affected by that change, basically.

Well, and that amount of timing change is a real difference for your dev cycle, right? 900 milliseconds is still not massively long, but that's, I make a change, I save it, I go see it reflected. Whereas 45 is like, I'm tinkering with this and it's live updating with me and I can iterate: is this right? Is that right? It's like using dev tools essentially, except you're using your code base. Yeah. And yeah, exactly. What I was getting at is, since this is now

the baseline for us, it allowed us to basically really quickly, well, I say quickly, but this is years in the making, to really quickly add a persistent caching layer on top of that. So for HMR, it's all in memory. We do this instantly, but you still have to hit the cost of actually starting up and computing the tasks. And the amazing thing we showed last Thursday at Conf was,

what if we could just persist all of that work to the disk cache? What if instead of saving it, you know, in your session, we could save it forever, across all of your sessions? You stop the dev server, you go to sleep, you wake up the day after tomorrow, and you can pick up exactly where you left off in hundreds of milliseconds. That's a lot of time saved.

That is a lot. All right. So part of what's going into Next 15 is that TurboPack is stable and you're shipping it with this. Was there a reason to couple the two, or did it just happen that way? Yeah. So we had TurboPack in release candidate for a long time. So...

When Jimmy mentioned the release candidate for Next.js was shipped six months ago, I think we had TurboPack in the release candidate for even longer than that. And really the benchmark here for TurboPack was that we passed all development tests, because we only shipped it for development so far. It's coming for builds as well, which is important to note here. So in the end, you'll be able to run Next builds with TurboPack as well and have the same performance improvement,

but it's still a work in progress because we have to add like production optimizations and things like that.

But yeah, on the coupling of the releases, we shipped a release candidate for TurboPack, but then the release candidate, that was the first time people actually started to try it out in their own apps as well. Up until that point, we had been using it for Vercel, Vercel.com and our internal apps and things like that, since October last year. So we already had it in use, in development, for a while, and it was working great for us, right? But the big thing with TurboPack is

that since it's a bundler and it's going to touch all your code, so that's all your node modules that you're importing, like all your first-party code that you wrote yourself and all that, it needs to be able to process every single edge case thing that you're using as well, a feature of the platform or a feature of node or a specific resolving thing that TypeScript supports or things like that. So we basically spent the last eight months

Basically, the bundler was already done for one and a half years, I think, at this point. It was really stable. The big thing here was getting all the tests to pass for Next.js. So we finished that in April. And then after that, we just spent time on bug reports, testing out the top 300 packages that are used with Next.js, for example,

trying it out on more open source apps, and doing all the due diligence on making sure that we could actually confidently say this thing is going to work for your app if you don't customize your Webpack config. So the important thing to note here is that we allow you to customize your Webpack config. And that means that you can basically override any setting that is set in Webpack in Next.js internally. But we can't support that with TurboPack, because TurboPack is not Webpack. Might sound like a no-brainer, but actually it's not that simple.

So the easy explanation here is that we do support Webpack loaders in TurboPack, but we don't support Webpack plugins, for example. So if you add Webpack plugins, then you can't add those to TurboPack, because we don't have the same low-level hooks and things like that. But we do support loaders. So if you just have a Webpack loader like SVGR, I'm not sure what the right way to pronounce it is, but...

If you want to import SVGs as components, for example, with that loader, that works with TurboPack as well. You can just add a TurboPack config for Webpack loaders.
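A sketch of what that Webpack-loader configuration for TurboPack looked like around Next.js 15. The experimental.turbo.rules shape below matches the docs of that era, but treat the exact keys as an assumption and check the current documentation for your version.

```ts
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    turbo: {
      rules: {
        // Run the SVGR Webpack loader on .svg imports so they become React components.
        '*.svg': {
          loaders: ['@svgr/webpack'],
          as: '*.js',
        },
      },
    },
  },
};

export default nextConfig;
```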

So that's your question around the timing. In the end, we spent a lot of time working towards a stable release. And then I think a month or two months ago, we finished all that work. So TurboPack for development was basically ready. We fixed all the Linear issues that we had about it and all that. But then Next.js itself depends on TurboPack. So it actually has TurboPack as a dependency, in a way. It compiles it in as a Rust binary.

And in order to release it, we had to ship it as part of Next.js itself. And Next.js itself was in a release cycle where it was already in release candidate and was going out as Next.js 15 a month or a couple of months later, right? So in the end,

the timing is basically coincidental. It could have been an earlier version as well, or a later version, depending on when these things went out. And then the other thing to know here is that TurboPack in Next.js is actually not just TurboPack, the bundler. It's

the bundler itself. So that's what we call TurboPack. And then the other part is the Rust bindings that we integrate with Next.js. So we add all the Next.js specific ways that layouts and pages are resolved and

like custom transforms that we do for Next.js specifically, things like that. We call that like Next.rs internally because like we need to have some code name for it. But that's basically all the like bindings into the bundler and like how we add entry point, like routes basically to the bundler and things like that. Yeah.

So this gets to kind of an interesting topic around when you own your own build chain, which you now do as you're doing this. You can use it to make standard things faster because you happen to use them. Or you can even use it to start extending the language. Frameworks like Svelte extend the language because they own the compile chain. Or you get frameworks like Qwik, which also sets things up to be magical for you because it knows end-to-end what it's doing.

Next, as I understand it, and particularly with things like Open Next, it's still just JavaScript and React. But are you looking at extending it further now that you own your whole build chain?

So, interesting. I feel like one might say that Next.js is in the same category as the other frameworks you described. Like, if you think about it, from our perspective, actually, Next.js is mostly all compiler-based, especially with the new server components we introduced with the app router. It's now sort of like its own sub-language you have.

Well, it's its own language within React, right? We have use client and use server. So we introduced new paradigms. We introduced use cache on Thursday at Conf. It's React plus those things, which are, to me, really an extension of the language already. It's not in the same way as Svelte or Qwik, in that way. So it's not like it's adding a language extension where you have specific directives that are a special language,

besides the directives like use cache and use client and use server. What is interesting there is that those are not JavaScript directives in a way. They're not actually directives that are saying this is different syntax that allows you to do a certain thing.

They're more like boundaries between the server and the client, and they're like bundler markers. So they're more like, hey, bundler, now move to this different environment. You can switch between environments using those directives. So you can say use client, now this is a browser slash server-side-rendered component, and then use server, this is now something that runs on the server as well.

So all of that is deep integration into the bundler already. So we already had to do this with Webpack. We support it with Webpack as well. The main difference now is that with Webpack, we had to do manual bookkeeping between three different Webpack instances, where there's basically three compilers running at the same time. Whereas now it's one compiler that can reason about the entire module graph of all the different environments as well.
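A minimal sketch of those directives acting as environment boundaries, shown as two files in an app router project; the file names and data shapes are illustrative only.

```tsx
// app/like-button.tsx
'use client'; // this module (and whatever it imports) goes into the browser/client graph

import { useState } from 'react';

export function LikeButton({ postId }: { postId: string }) {
  const [liked, setLiked] = useState(false);
  return (
    <button onClick={() => setLiked(true)}>
      {liked ? 'Liked' : 'Like'} post {postId}
    </button>
  );
}
```

```tsx
// app/page.tsx — no directive needed: app router components are server components by default
import { LikeButton } from './like-button';

export default function Page() {
  // Runs only on the server; placeholder data stands in for a database read.
  const post = { id: '1', title: 'Hello world' };

  return (
    <main>
      <h1>{post.title}</h1>
      {/* Importing the 'use client' module above is where the bundler switches environments. */}
      <LikeButton postId={post.id} />
    </main>
  );
}
```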

To go back to the compiler work, I think maybe the difference in philosophy is that we try to still just be React and JavaScript. I don't think we're looking to go anywhere beyond that. But if React went for it, if they introduced their own .react file extension and they had their own language where you would need to

to declare use anywhere and it could have its own syntax, et cetera, we would follow it for sure. But we don't have any other ambitions besides that. And to be fair, they already sort of did that with JSX, but it didn't introduce new semantics. It was more kind of sugar and ease of use, but yeah. It could be interesting though. You know, if React did it, they could introduce their own flavor on it and make it so that you could use conditional hooks, all those kinds of things. That'd be great.

This episode of Software Engineering Daily is brought to you by Leanware. Struggling with development teams that say yes to everything but deliver on nothing? Leanware offers a refreshing approach. They're a Colombia-based team delivering top-tier software development with full transparency and world-class engineering standards. They've honed their craft over nearly five years, sticking to technologies where they have senior expertise. This means no compromises on quality ever.

Their C-level executives are always accessible, ensuring seamless communication and a genuine partnership. Plus, being in a similar time zone to the U.S. makes collaboration effortless. Don't settle for less. Partner with Leanware for software development done reliably. Visit leanware.co or see the show notes to get started. That's leanware.co. Leanware, redefining software development with exceptional quality and realistic expectations.

Let's maybe talk about some of the other functionality changes. I mean, you mentioned you'd been making all of these improvements, and when we talked initially, a lot of what you mentioned was stability improvements, build improvements, things like that. But this is a major release, so there's got to be some sort of breaking features in there. And looking at it, the one that stood out to me in the release notes was the async request APIs. Do you want to talk a little bit about that? What's the motivation? What are the implications of introducing that?

Yeah, that was pretty fun. It was sort of fairly risky on our end, and we were very, you know, wary of such a big change. For context, what we had before was, through the app router, we exposed information about the current request through methods like cookies, headers, or we would inject params or searchParams as props to the server component that you would render.

In 15, we decided to change those methods and functions to be accessible in the same way, but via promises instead. So calling headers would now return a promise, calling cookies would now return a promise, and you need to await it in order to read the content there. And so the 15 blog post goes a little bit into why we did that change, but it's kind of vague, basically. We didn't really say why we did it. So to uncover that a bit:

What we've been looking to do with this change is to actually prepare for this new era.

That's sort of this other big change coming up in Next soon, which we call dynamic IO internally. There's been a lot of talk around Next.js complexity in the past, around how the semantics around caching and the staticness or dynamicness of Next make it hard for people to reason about. For context, what we used to do,

and what we still do, is pre-render all pages by default. So you'd write a page, and if there was a fetch call in there,

or anything really, we tried to pre-render it at build time so that we could optimize it and serve it in a static form. However, this heuristic was a bit too strong sometimes, and you would end up with people deploying their website and asking themselves why their website content was not changing if they had made a fetch call to a third-party API to display some content.

So we had these semantics, and basically we ended up having to add a lot of configuration as well, because some people wanted control over whether it always needed to be dynamic, or always wanted to be static, or actually a mix of both. And it all made for a pretty hard learning experience, in my opinion. And I guess we're probably the only framework to do these kinds of optimizations. Yeah.

So anyway, we went back to the drawing board and we came up with this concept of dynamic IO, where in order to simplify the learning experience, we wanted to come up with like, you know, a single concept through which users could determine if their code was static or dynamic. And so dynamic IO is this. The gist of it is if your user code uses promises, if you actually await promises,

for some asynchronous work, then Next can generally reason about it and say that this page should probably be dynamic. So you don't have any problem anymore if you're doing file system reads, if you're accessing your database because

99% of the cases are probably dynamic here. Now you use Next.js as you would: if you write a simple blog post and you're just reading content, it's going to be static. If you have a dashboard and you're fetching from your database,

it's probably going to be dynamic. And so Next can now reason more intelligently about it, which leads us to the cookies and headers changes. If you think about it, reading from the cookies or the headers actually makes the request dynamic, because it needs to be. It's actually about the incoming request. So you want to read it so that you can personalize it according to the user info.

Is the user logged in or not? So implicitly, that's dynamic behavior. And so the real reason we made that change is so we could adapt it to this new dynamic IO behavior. Now it works the same. You await it.

And now you're telling your page it's dynamic. So I think it makes sense. So if I were to sort of rephrase back to you, you are doing, this actually gets back to the previous question around things you're doing with the build tools, right? So you are doing a build time step where you're optimizing things that can be generated statically to pre-generate them statically so they go up there. And you're trying to do that determination, quote unquote, automagically.

without having the developer have to tell you things. And the simplest way to do that is say, is there anything async?

going on here. And so in order to do that, then you had to take these things that maybe were using a synchronous API previously, but actually technically should be asynchronous because they depend on something dynamic, something in the user request, and change them to be async. And now your initial build-time static analysis works across the board. Is that a fair summary? Yep. That's perfect.

The only thing there is that it's not static analysis, or not related to that per se. It's more like we run the code and find out, basically, at build. This is where it gets complicated, at build time. So during next build, we run the code, and if the code then is doing anything async, then we mark it as, this thing is not static. It's dynamic analysis, maybe. Yeah.

We actually, internally, we call the static part, the build phase, sort of like just pre-processing for doing runtime optimizations. It's not static analysis in terms of analyzing the written code, but it's pre-processed, pre-run code. Is that right? We try to call it pre-render for the most part. So we try to pre-render during build. And if it turns out that it's

doing anything async, then we basically bail out from doing the full pre-render. And there are some implications on partial pre-rendering and all that as well that we maybe can talk about. Maybe not. We could talk about that for hours, probably. But yeah, it's that pre-render we try to generate. That's the same in Next 14, by the way. We do this pre-render, but the mechanism is different. So when you call cookies, it's a throwing mechanism instead of one based on promises. Makes sense. All right.
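A sketch of what the change looks like in user code. The cookies and headers imports are the real next/headers APIs; the page itself is illustrative.

```tsx
// app/profile/page.tsx
import { cookies, headers } from 'next/headers';

export default async function ProfilePage() {
  // Next.js 14: const cookieStore = cookies();  (synchronous)
  // Next.js 15: the same call returns a promise, so reading it means awaiting it,
  // which is exactly the "you awaited something, so you're dynamic" signal.
  const cookieStore = await cookies();
  const requestHeaders = await headers();

  const theme = cookieStore.get('theme')?.value ?? 'light';
  const userAgent = requestHeaders.get('user-agent') ?? 'unknown';

  return (
    <p>
      Theme: {theme} (rendered for {userAgent})
    </p>
  );
}
```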

Other changes that are in Next 15: you mentioned the Next Form component. Do you want to talk a little bit about that? So, yeah, Next Form, really simple. It's a drop-in for just a normal form tag, but it adds some additional features. So it adds prefetching, it adds client-side navigation. It allows you to do the things that you're

very often already doing anyway, but that are quite cumbersome to manually handle. Or if you do manually handle it... Think of the Link component, for example. Link does a bunch of features for you automatically that you could totally write yourself. You could write an is-this-thing-in-the-viewport check, then router.prefetch or something like that, but you really don't want to be spending time on that per se. And this is similar for Next Form,

where it will automatically do the prefetching for you if it's a GET route, for example. It just integrates better with server functions and server actions. Yeah, the idea behind the component is that we wanted to make it as similar to the vanilla form as possible, and we just wanted to add a really thin layer that connects it to the Next.js router

on its own, and we're not looking to do anything fancy there. We're not integrating with form validation or anything you might expect from some other library. It's just supposed to be a really raw primitive, so that you can get instant loading states if you're doing a GET form to another page, that kind of thing. It makes your search forms and things like that much easier to write, whereas today you might have to

manually manage suspense and add transitions and a bunch of things that are slightly newer React as well, so, you know, not everyone knows about them even. So this just makes that whole setup a bit easier.
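A minimal sketch of the Next Form component described here, using the next/form import from Next.js 15; the /search route and field name are illustrative.

```tsx
// app/search-form.tsx
import Form from 'next/form';

export function SearchForm() {
  return (
    // Behaves like a plain <form>, but for GET forms Next.js prefetches the target
    // route and performs a client-side navigation to /search?query=... on submit.
    <Form action="/search">
      <input name="query" placeholder="Search posts" />
      <button type="submit">Search</button>
    </Form>
  );
}
```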

This episode of Software Engineering Daily is brought to you by Jellyfish, the software engineering intelligence platform powered by a patented allocations model and boasting the industry's largest customer data set. You know, one of the biggest shifts in the past year or so is the adoption of Gen AI coding tools like GitHub Copilot.

and engineering leaders are trying to figure out if their teams are actually using it, how much they're using it, and how it's affecting the business. Are they shipping more? Is more roadmap work being done? How do you know beyond anecdotes and surveys? That's why Jellyfish introduced the Copilot dashboard in 2024 in partnership with GitHub.

And since then, they've analyzed data from more than 4,200 developers at more than 200 companies. Want to know an interesting finding? Developers are shipping 20.9% more with Copilot. The folks at Jellyfish have a ton more insights, and you can get started seeing your own team's data so you can plan and manage better. Learn more at jellyfish.co slash copilot today. So that gets into another thing that I wanted to talk about with you guys, which is the relationship with React. And

In particular, I saw that you're releasing against an RC of React, not even a stable released version. What's the sort of thinking behind that? Were there particular things you needed to get from that? How is that all working? Yeah, it's a pretty interesting question. For context, we've been working really closely with the React team at Meta, and we also have a few members of the core team inside of our team as well. So yeah,

So generally, the roadmap, the decisions around releasing React 19 or not are usually led by those members. And so the decision...

Here, originally what happened is that back in May, React also released their release candidate, React 19. And basically, we wanted to ship Next 15 as part of that as well. So the idea was that we would release an RC and then fast-follow on it. And so we made all the breaking changes that we needed. We bumped the peer dependency and forced users

on React 18 that were using the pages router, for example, to also upgrade to React 19. However, what happened is that a month later, there were some discussions around one change in particular, regarding the suspense siblings rendering behavior in React 19. That was a big change for a lot of community users. And so the React team decided to hold the React 19 release on this, which is why we're still on the RC.

So we ended up waiting on it for a while, but then we actually discussed with the React team internally, and we decided to opt for this strategy of releasing our stable without blocking on the RC and this behavior changing. I think with the caveat that we would add backward compatibility to React 18 for the pages router, so that, you know, there are sort of separate concerns there.

However, yeah, we got into a slightly more complex situation with the app router. Because one thing to know about the app router is that we're building it off of a vendored version of React, which I think they call Canary.

Tim? Yeah. So the app router always came with the actual latest version of React Canary, which is a version built for us frameworks, like meta-frameworks, so that we could build on top of it, so that we could integrate with the latest features before they actually hit React stable. So the reasoning here is that we were...

We've been on React 19 for basically a year or so already, if you're using the app router. So the suspense siblings change, to use a shorthand, has actually been present for over a year now for us. So we decided to not consider it a breaking change and we just started to move forward with it. Because per the React team itself, that's really the only

change that's going to be shipped whenever the React 19 release ships as a GA. Does Next depend on that particular part of React 19? Or is that just something separate, so that if they ship a change to that, it just doesn't bother you at all? Yeah, it doesn't actually affect us. Not to go into too much detail, but it affects client-side suspense usage if you are doing

fetching while rendering, off the top of my mind. So that basically means if you have two components that are in the same suspense boundary, what would happen previously is it would kick off the two components at the same time. So it would call render on both component A and component B if they're in the same suspense boundary. Now it actually will call component A. Once it suspends, it will not render component B. So that's a problem if you're using a library that is heavily relying on this.

And as it turns out, there are quite a few of those in the whole React community. In our case, the Next.js router in the app router is not using that pattern in any way. The only thing where it might affect you in some way is if you're using lazy loading or things like that, but that's not super common per se. And yeah, so in practice, we didn't

run into the same problem here, because the fetching mechanism is different. If you're using server components, for example, they don't run in the browser, so you don't hit the same limitation. Basically, at worst, it doesn't change anything for Next.js app router users, since they always had it. And so whenever that gets fixed, it's going to be sort of a minor performance optimization. Yeah, it will only get better, basically. That's the-- Makes sense.
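A generic sketch (not from the episode) of the sibling pattern being discussed; Post and Comments stand in for components that, with a suspense-enabled data library, would fetch while rendering.

```tsx
import { Suspense } from 'react';

// Stand-ins for components that, with a suspense-enabled data library, would
// fetch while rendering and suspend until their data arrives (hypothetical).
function Post() {
  return <article>Post body…</article>;
}

function Comments() {
  return <section>Comments…</section>;
}

export function PostPage() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      {/* Before the React 19 change: React would begin rendering both siblings
          before suspending, so their data fetches kicked off in parallel. */}
      <Post />
      {/* After the change: rendering stops at the first sibling that suspends,
          so Comments would not start fetching until Post has resolved, creating a
          waterfall unless the data library preloads its data elsewhere. */}
      <Comments />
    </Suspense>
  );
}
```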

This conversation brings me to another thing. I know there's been stuff out in the sort of web community questions around the deep interlocking relationship between the React team and the Next team now and thoughts about, oh, a server component is just for Next or how does that work and things like that. Kind of curious, how do you all think about the relationship of Next and React?

Philosophically, what do you think belongs in Next versus what needs to be on the React side? And then are there things even further out that shouldn't be in either of them and should be in a third-party library? How do you think about those lines? There's a surprising amount of things that people think are Next-specific that are actually React. A good example is use client and use server. Those are React RFCs, actually.

But the biggest misconception is that we invented use client and use server. That was actually not the case. That was based on feedback from other early adopters of server components, actually. So, for example, Hydrogen at Shopify was one of the first frameworks to implement React server components, even before we had

a full working implementation, and that was even before we built the app router, right? So they started migrating apps, and then they found that they would run into problems with the extensions, like, why do you have to add .client.tsx or something like that. It's a common piece of feedback.

But that was actually something that the Hydrogen team found was a very big problem for getting overall community package adoption, for example. Because it meant every single React library out there would have to change their code in some way. And there would be no way for you as a user to say, this is now a client component, or things like that. But yeah, so...

I'm sure that Jimmy has a take on the Next.js and React overlap. My personal take here is that we're trying to make, in the essence, like for me, I've been working on Next.js for so long. A lot of what we're doing now is actually bringing a lot of the learnings that we had from Next.js into the overall ecosystem.

A really good example of that is head management. In the very first release of Next.js, we had to work around this limitation of React, which is that you couldn't just inject tags into the head. So we had to create this next/head and, well, build our own React-ish thing that loops over JSX and tries to magically inject it into the head.

Over the last year, or the year before, Josh on the React team at Vercel, he spent so much time figuring out, can we bring something like this next/head thing into React itself and bring it to all frameworks and all users of React? So what this means is that now, with React 19, you can just write a meta tag in any component and it will just magically send it to the head for you automatically, or write a title tag and it does the same thing.

Makes our lives easier, because now we don't have to maintain this brittle logic of trying to inject stuff into the head that React doesn't know about. And it makes everyone else's life better, including Next.js users as well as everyone else, by being able to inject link tags, meta tags, title, that kind of thing. As well as integrating those deeper into React, which means link tags can now

integrate with suspense and we can show a loading spinner until the link tag is loaded and things like that. Stuff that I would never have been able to, we as a team would have never been able to add to Next.js even because we don't have full control over rendering, which React does, right? So that's like one of the examples
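A small sketch of the React 19 behavior being described: metadata rendered anywhere in the tree is hoisted into the document head by React itself, with no framework-specific Head component needed. The post shape is illustrative.

```tsx
// Works with React 19: <title> and <meta> rendered inside a component are hoisted
// into <head> automatically, replacing the old next/head-style workaround.
export function BlogPost({ post }: { post: { title: string; summary: string } }) {
  return (
    <article>
      <title>{post.title}</title>
      <meta name="description" content={post.summary} />
      <h1>{post.title}</h1>
      <p>{post.summary}</p>
    </article>
  );
}
```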

I feel like one of the other examples is just the overall React server components work, proving them out. And like I said, there were other teams, like Hydrogen, and some people building other frameworks on top of the React server component spec.

But it's really helped to bring our expertise in how we were building server-rendered apps, bringing it into React, and give people all the things that you would ever need. So an example here is if you want to pass some data from... An example here is a limitation that Next had. With getServerSideProps, you were never able to return a promise, or return a date, or an object that would have been recursive, for example, or things like that. And

React now has a serialization format that allows you to just return a JavaScript Map and pass that to the browser from the server. And it can serialize that and create a new Map in the browser. It avoids serialization errors. It also is much more reliable when you have it in React itself. So...

There's a lot of benefits from being able to work with the React team directly. Hydration errors are a good example of that as well. There is some integration in Next where you have to show the error overlay and things like that, but really everyone's getting better hydration errors, even if you're using other frameworks and other libraries as well. One thing I wanted to touch on in particular is we don't

think about it just as Next.js when we design server components, et cetera. It all has to go back to React itself. And, you know, in the minds of most people on the team, I'd say it's rather React that pushes the Next.js direction. Whenever we design some changes, for example, we could have easily

built our own Next.js dev tools that allow you to tap into server components and see what they're made of, and, you know, kept that for ourselves. Instead, we did the work to integrate into the React DevTools, so that any framework that wants to use server components will be able to

tap into it. I think the awkward part maybe is that it's an insane investment of time from us on the Next.js team to realize the vision of server components to its fullest. And so that's why things haven't fully caught up yet. We have

a little bit of a head start there, but I'm, you know, really confident. Frameworks like Redwood have started exploring it. Remix has also been looking into it, and I'm actually looking very much forward to seeing what, you know, what their spin on server components is. As we talk about sort of

relationships with other community projects, and you said you're never designing it just for Next, you're pushing for React, which improves others, it leads me to another question I had, which is around the relationship between Next and Vercel. And I know there's historically even been a sense of, oh, we need a new project, Open Next, in order to be able to build Next outside of Vercel. Jimmy, you mentioned before we got on the air that you're doing some work in that space. Do you want to share about it?

Yeah, yeah. So we're really excited about this. I think as we were building up Next in the past few years, we were really focusing on making like, you know, the sort of best framework end-to-end as much as possible, like in terms of like,

something that works really well in dev, but also works the best as you deploy it. You know, we want to push for the best ways to build websites. And that doesn't just, you know, stop at when you build it. It also matters how you deploy it, how you best serve static content, or how you organize your middleware. Open Next allows you to deploy Next.js easily on serverless platforms. But I do want to say that Next.js on its own has always

been pretty easy to self-host. Tim can talk more on that. The containerized mode where you can just run next start, that has been great. But that's just limited on its own, because

it will just allow you to have a simple Node server that will respond to SSR requests, and you run it on your own instance or on your own $5 VPS. That has always worked out of the box. What has not worked really well out of the box is really the Next.js story as infrastructure, basically.

Framework-defined infrastructure being, like, Next.js telling the provider, doesn't matter if it's Vercel or some other provider: this is the serverless function I want you to create, and these are the route rules I want you to create, this is where the static files are, but the static files also need to have some headers, for example.

All of that is baked into Next Start. So that's the Node.js production server, or custom server if you're using that. And so basically all those rules are there. So it has the right static caching headers and things like that. If you're building a serverless platform, then you would have to figure that out manually, basically, because all these platforms have different formats. So there's not just one standardized output for all of these.

And that's where things like Open Next, for example, and serverless-next.js, I think that's the name of one of the other packages, and things like that come in. They are trying to create, these are the serverless functions, these are the route rules, and then generate the route rules for a specific service. So that could be AWS or Azure or GCP or anything like that.

And then if you're using Next Start, for example, you do have to, once you're starting to scale, so it's beyond one instance of the Next Server. Most people run thousands, if not 100,000 plus of those, depending on the amount of containers that you're generating, basically. The thing is, we see very large websites self-hosting on Next Start or Node.js Server as well.

So it's not that you have to use serverless per se. Next.js runs totally fine on a server, like Jimmy said. It requires some extra setup, and that setup was there. It was just never explicitly documented, like, this is exactly how you do it type of thing. And that's what's changing. So I guess Jimmy can talk a bit more about that.
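One concrete, documented piece of that self-hosting story is the standalone output mode, sketched below; it is not the only way to self-host, just the option most relevant to the containerized next start setup described here.

```ts
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  // `next build` then emits a minimal server in .next/standalone that you can copy
  // into a container and run with `node server.js`, with no Vercel dependency.
  output: 'standalone',
};

export default nextConfig;
```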

Yeah. So on one hand, we're making sure the documentation gets better around that side. We're going to update the docs soon with, you know, those examples we talked about. We're going to show you, you know, the really simple steps of how you can deploy on,

I think, literally all of the providers you could think of. But what we're doing as well is working with the Open Next maintainers, who are great, by the way, in order to change Next.js' architecture itself so that we can avoid, in theory, having something like Open Next needing to exist, by taking their learnings and adapting them into our code base, so that

other providers, you know, Netlify, Cloudflare, AWS, can consume its outputs and ship that framework-defined infrastructure as easily as we can. I think the tension was around, if you want to do that

right now, the Open Next maintainers, they had to reverse engineer our code base. Quite a bit is sort of in the open. It's also, you know, the contracts are a bit unclear, the outputs are subject to change. And yeah, we can do a better job at documenting and creating and enforcing a standard behavior there. So yeah, I'm really excited about this line of work. I think we want Next.js to be as good as possible on every platform, and we're investing a lot of time into this,

creating, you know, a set of like community maintainers there. And we want to, we want to make sure we support everyone in the community in that regard. And we just want to make sure that when you're self-hosting that that's not a bad thing, right? So,

Obviously we'd love for you to use Vercel, and there are many reasons to use Vercel, but for it to be the only place to host Next.js is definitely not one of our goals. It's more about making sure that everyone can succeed with Next.js day-to-day. If you're using Vercel, great. If you're not using Vercel, great as well. And there are many other reasons to use Vercel, in my opinion, like preview deployments, things like that.

So it'll be exciting to see how this whole effort turns out. Because we just launched the new GitHub org that has all these starter templates as well, for various different providers, serverless providers as well.

And some of them, they only support static, for example. So say you don't even have a server, you don't want to use Next Start, you want to use Next Export, for example, or the output export, then we have a starter kit for that as well. Awesome. Well, I think we have run through our time here. Thank you, gentlemen. This has been great. Any last thing you want to leave our listeners with?

If you upgrade to Next.js 15, you're not done yet. Try to run it with TurboPack as well. It's still opt-in, and the reason for that is that we don't have builds yet. But from what we've seen in our own apps, and from people reaching out to us, it's definitely going to give you a big performance boost for development. So like you said, just faster iteration velocity, basically, for everyone.