
#481: Python Opinions and Zeitgeist with Hynek

2024/10/17

Talk Python To Me

People
Hynek Schlawack
Michael Kennedy
Topics
Michael Kennedy: This episode explores the current state of the Python community, popular tooling, and where things are heading, with a particular focus on Hynek Schlawack's views. Hynek shares his takes on Docker, virtual environments, UV, multithreading, LLMs, and more, along with his perspective on Python packaging and type hints. He emphasizes the challenges small teams face running software in production and how they address them, and weighs in on the new features in Python 3.13. He also talks about his experience with and motivation for making YouTube videos, and introduces projects such as attrs and Stamina. Hynek Schlawack: On the show, Hynek shares his views on several parts of the Python ecosystem, including the use of Docker and virtual environments in production and the emerging package-management tool UV. He stresses the importance of keeping systems stable on a small team and explains why he prefers using virtual environments inside Docker containers. He also discusses UV's advantages and the surrounding controversy, including its Rust implementation and its commercial backing. On the free-threading support in Python 3.13, he is positive but points out potential challenges. He also covers the impact of large language models (LLMs) on programming and introduces projects such as attrs, Stamina, and Argon2. He sees LLMs as a useful tool that must be used carefully, noting that they can lead to a decline in code quality.

Deep Dive

Key Insights

Why does Hynek still use Python virtual environments in Docker?

Virtual environments in Docker provide predictable behavior and make it easier to reason about the application's dependencies. They ensure consistency between development and production environments, avoiding issues caused by missing environment variables or mismatched Python paths.
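A minimal illustrative sketch of the pattern (the base image, paths, and entry point are assumptions, not taken from Hynek's article):

```dockerfile
# Sketch only: a virtual environment inside a Docker image.
FROM python:3.13-slim

# Create the venv in a well-known location and put it first on PATH,
# so "python" and "pip" inside the container always mean the app's venv.
RUN python -m venv /app/venv
ENV PATH="/app/venv/bin:$PATH"

COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt

COPY . /app
CMD ["python", "-m", "myapp"]
```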

What are Hynek's thoughts on Docker for smaller teams and web hosting companies?

Hynek believes Docker is a trade-off between complexity and benefits like development velocity and reproducibility. While debugging can be harder, Docker provides isolation and consistency, which are crucial for smaller teams. He uses Docker for containerized applications but avoids Kubernetes due to its complexity.

How does Hynek make Docker builds faster?

Hynek uses multi-stage builds to separate the build process from the production container, ensuring no unnecessary tools like C compilers are included. He also employs judicious layering and build cache mounts to speed up dependency installation during builds.

What is UV, and why is it gaining popularity?

UV is a re-implementation of Python packaging tools like pip and virtualenv, written in Rust. It aims to simplify and speed up Python dependency management. UV's single binary approach eliminates many common packaging issues, making it faster and more predictable.
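A hedged sketch of the kind of single-binary workflow UV enables, discussed later in the transcript: a single-file script that declares its dependencies inline (PEP 723), which `uv run demo.py` executes in an environment it creates on the fly. The script body and dependency list are illustrative assumptions.

```python
# demo.py -- illustrative script; run with `uv run demo.py`.
# The block below is PEP 723 inline metadata that uv reads to build an
# ephemeral environment before running the script.
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "httpx",
# ]
# ///
import httpx

resp = httpx.get("https://example.org")
print(resp.status_code)
```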

What challenges does Python's packaging ecosystem face, according to Hynek?

Hynek describes Python's packaging as a complex Jenga tower, with many legacy workflows and tools that need support. UV's success is partly due to the team's full-time focus on solving these long-standing issues, which volunteers couldn't achieve.

How does Hynek feel about UV being venture-backed and written in Rust?

Hynek is generally positive about UV's venture backing and Rust implementation. He acknowledges the community's initial skepticism but believes the benefits of faster packaging and a well-funded team outweigh the downsides. He also notes that many Python developers are now familiar with Rust.

What is Hynek's take on free-threaded Python?

Hynek is excited about free-threaded Python, seeing it as a significant opportunity to improve Python's performance and enable better threading APIs. He acknowledges potential challenges but believes the benefits outweigh the risks, especially with Meta's support in developing the feature.

What does Hynek think about the impact of LLMs on programming?

Hynek believes LLMs will benefit experienced developers by reducing trivial tasks but may hinder junior developers' growth. He worries that LLMs could lead to more poorly understood code being deployed, as users rely on AI-generated solutions without fully understanding them.

What is Hynek's opinion on Python's type annotations?

Hynek supports type annotations as they make code easier to reason about and help catch logic errors. He believes they improve API design and provide instant feedback on code quality. However, he acknowledges that typing is not a perfect fit for all use cases in Python.

What is Hynek's stance on using Homebrew Python on macOS?

Hynek advises against using Homebrew Python, as it can break virtual environments when Homebrew updates. Instead, he recommends the official Python.org installer, which provides predictable behavior and supports both Intel and ARM architectures on macOS.

Chapters
This chapter explores the advantages of employing Python virtual environments within Docker containers. It emphasizes predictability, ease of understanding file interactions, and consistency between local development and Docker environments. The discussion highlights the benefits over alternative approaches like `pip install --user`.
  • Using virtual environments in Docker offers predictability and easier reasoning about application behavior.
  • It provides consistency between local development and the Docker environment.
  • It avoids the complexities and potential issues associated with methods like pip install --user.

Shownotes Transcript


Hynek has been writing and speaking on some of the most significant topics in the Python space, and I've enjoyed his takes. So I invited him on the show to share them with all of us.

This episode really epitomizes one of the reasons I launched Talk Python nine years ago. It's as if we run into each other at a bar during a conference and I ask Hynek, "So what are your thoughts on...?" and we dive down the rabbit hole for an hour. I hope you enjoy it. This is Talk Python to Me, episode 481, recorded October 8th, 2024.

Are you ready for your host? You're listening to Michael Kennedy on Talk Python to Me. Live from Portland, Oregon, and this segment was made with Python.

Welcome to Talk Python to Me, a weekly podcast on Python. This is your host, Michael Kennedy. Follow me on Mastodon, where I'm @mkennedy, and follow the podcast using @talkpython, both accounts over at fosstodon.org. And keep up with the show and listen to over nine years of episodes at talkpython.fm.

If you want to be part of our live episodes, you can find the live streams over on YouTube. Subscribe to our YouTube channel over at talkpython.fm slash YouTube and get notified about upcoming shows.

This episode is brought to you by WorkOS. If you're building a B2B SaaS app, at some point your customers will start asking for enterprise features like SAML authentication, SCIM provisioning, audit logs, and fine-grained authorization. WorkOS helps you ship enterprise features on day one without slowing down your core product development. Find out more at talkpython.fm slash WorkOS.

And this episode is brought to you by Bluehost. Do you need a website fast? Get Bluehost. Their AI builds your WordPress site in minutes and their built-in tools optimize your growth. Don't wait. Visit talkpython.fm slash bluehost to get started.

Hynek, welcome back to Talk Python to Me. Good to see you, man. Thanks for having me. Yeah, it's great to have you back on the show. I've been enjoying a lot of your articles and your videos lately. And you know what? Maybe we should just get together and have an opinion piece by Hynek.

I can do opinions. Yeah, yeah, I know. You're great at them. And so we're going to talk about a whole bunch of different things. A lot of the things that you've been talking about, I'm also super interested in. Some cool articles you've written on Docker, UV. Last time you were on, we talked about running in production, and maybe we can review that a little bit and just...

Hi, my last name is Schlawack.

There are really not a lot of Hyneks around, so I just stick to the first name. So as you can hear, I'm a Czech living in Germany — I live in Berlin — and I work for a rather small web hosting company and domain registrar called Variomedia. You probably haven't heard of us unless you speak German, because we only serve the German-speaking market. We do speak English, but our stuff is in German.

I guess you would call me an individual contributor there. We don't have managers — we're rather small. Stability is the key thing when I'm building systems, and that informs a lot of the opinions you'll probably hear from me today. Also, I like traveling, and I don't want to be paged in the middle of my vacation while I'm having margaritas on a beach. Nobody wants to have to put down the margarita because someone took down the server. Oh.

Also, I'm sure we're going to speak a lot more about that. But also nowadays, I'm also a YouTuber. So that's also very exciting. And thanks for the shout out in the last Python Bytes. I really appreciated it. Yeah, absolutely. You're very welcome. Awesome. And I think it's interesting to have people who work on small teams, especially smaller companies.

Because that's the common theme of what people do. I don't have it pulled up right now, but if you go to the PSF survey and you look at some of the demographics stuff, it'll say, well, what size of a team do you work on? Is it one person, two to five?

to a thousand. The vast majority is the two to ten different people. It's not Google, Facebook, Microsoft, right? So a lot of the advice that comes from these huge places is just, I don't know, it's not super relevant. Call it like Google cosplay and...

And I feel like my opinions about this whole production thing are informed by my distinct dislike for these kinds of things — big tech influencers having opinions on stage and everybody wanting to mimic them. It just doesn't make any sense, because, as you said, the long tail is huge. Most people work for small companies; it's just like that. The things you should do are just different. And no disrespect to those companies — they have a lot of employees — but in the general population, it's a very small group of

people who have hyperscaling clouds and whatnot that they're worried about, right? I think I want to start this conversation off here by talking about some of the articles you've written. So I think this one's interesting. Let's talk about virtual environments. Love them. I do too. And you wrote this article called Why I Still Use Python Virtual Environments.

in Docker. And I do as well. When I didn't use Docker, I had them on my server as well. And I feel like they're a really interesting, predictable way to package Python. And you hear about them, obviously, oh, well, you're going to create a virtual environment so you can

have multiple projects in your computer and so on. But Docker, there's only one app running. What's the deal with these virtual environments in Docker? Yeah, I mean, you just said it, right? It's a lot more predictable in its behavior. It's a technology that we use everywhere too.

I just understand what this directory is doing. I find it very important that when I go onto a server or into a Docker container, I know what a file is doing and how things interact with each other, and I don't have to hunt them down somewhere else. Another popular thing, which I write about in the first paragraph, is that people use pip install --user and then set the PYTHONUSERBASE directory. And sure, you can do all these things,

but why would you, right? What are you gaining? And it's a lot of vibes — I admit that in the article too — but I just find it easier to reason about. And we don't only use the things we strictly need, right? We use affordances in other places too. So why not this as an abstraction over

an application artifact because we don't have anything better in Python. Yeah, and also one of the things that you point out in your article that I think is really nice, that's already the way you're working on your computer. And so when you go to your Docker container, it's the same. You don't have to think about, well, if it's misbehaving in production, that's because, well, maybe we've changed the Python path or whatever, right? It's exactly the same as everyone else is working with. Yeah, we forgot to set an environment variable. Oh no, nothing works now.

Yeah.

Exactly. That's been interesting. Let's talk about Docker just a little bit as well. I think that alone is a pretty interesting choice. You said you work on smaller teams and maybe web hosting companies give this a little bit of a plus plus slant on Docker or not. But Docker, no Docker, right? A lot of people I think see containers and they think complexity. They think it's hard to debug. If something goes wrong in the container, it's just

it's opaque to me. What would you say to people that feel like containers are too complicated? Obviously, the answer is, as always, it depends, right? It's a trade-off like everything else. We don't give Docker to our customers. Like, our customers, we sell mostly LAMP, like mostly WordPress, which is very interesting these days. Oh my gosh. I don't know if we want to go down that one, but it is super, super turmoil right now. Yeah, yeah. No, we don't. Let's not go down it. Matt doesn't know we exist. It's fine. Yeah.

Yes, you're trading off the things you just said, like the debuggability and everything, against development velocity and reproducibility. So we, for example, don't run Kubernetes because it's just too complex for us to run. And we run everything on premises. We do not use any external services whatsoever.

Wow, okay. So everything runs in our own data center. We are European; some of the things we have to do, other things we do because Germans like it that way. We can say, look, our data center is right there. You can look at it. No data leaves it. Yeah, if I have to, I can come look at my code through the little...

rack window. There it is. You can try to get inside and see how fast you get tasered by security. It's all good. We use quite a bit of HashiCorp software. We had Consul and Vault before for configuration, service discovery, and secrets. So we use Nomad because it was a logical

add-on to run our applications. So that's where most of our web applications run. Everything that can run in containers usually runs in our Nomad cluster in Docker containers. And yeah, sometimes they are a bit of a pain to debug, but you can SSH into a Docker container, right? First hot tip of the day: there's this thing called BusyBox, which is a single binary, and depending on what name you call it as, it does something different.

And it's just very few bytes. You can look at it, and it's static. And it gives me things like ping and...

I think traceroute is in there too. Yeah, I can do all these basic things in those containers too. So it's not like it's a complete black box. You can SSH into them, no problem. A couple of thoughts. One, I agree with you. One more thing, one more thing before that: we do not run SSH daemons in those Docker containers. This is an affordance that we get from Nomad, and I'm sure other cluster

managers have that too: they allow you to enter containers from the outside. So you SSH into the host, and from there you get into the container. Right, right, right — without an SSH service. Yeah, I do that as well on my stuff. I've recently moved...

maybe a year or two ago to running, I used to have a bunch of smaller servers. I don't know, maybe I bought into the cloud computing as a bunch of commodity boxes and you can just make a bunch of little ones and that's the way you do it. And eventually I'm just like, you know what? I'm tired.

I'm tired of dealing with all these servers. Putting them all in one thing, running everything in Docker, keeping it isolated. And it's been really excellent. So similar access model there. I'm not sure this makes sense for you guys, but I think for folks who may be expected still to SSH into their server and mess with it, maybe this doesn't make sense for your customers, but maybe it does for you guys.

individually behind the scenes. What I like to do is have a couple of tools — a little bit like you were saying with BusyBox — already installed in my Docker container. So if I need to get into it and ask it questions, I can actually log in through some sort of docker exec — you know, docker exec -it, then zsh or bash or whatever — and

ask it questions. What are your thoughts? Do you think if you put in something like Oh My Zsh, so you have better command history while you're poking around, is that sacrilegious in a Docker container? How do you feel about this? You're asking me if we use fancy shells? Well, we are sacrilegious in a different way. Our normal servers that we SSH into, we run fish on them, because most of us like fish. Yeah, sure. We have very nice fish prompts running there. Everybody who

claims they have to use Bash on their notebooks because they have to SSH into servers: look at us, it's fine. You can do whatever you want; it's your servers, man. I'm with you. I don't have such strong opinions here because, as I said, we run everything on prem, so we have our own Docker registry, so I'm not losing sleep over how big my containers are. Obviously, I try to keep the surface small, because everything you install can probably be misused in some way, can be a problem — like XZ, for example, right? Like there's...

Things can be dangerous, but if it helps you to get things done, whatever, right? Yeah. When you hear the security folks talk about living off the land, that's what they mean. They're like, oh, I somehow got into this machine and it turns out that this tool that I needed to access the database was just laying here and off it goes, right? They didn't have to get anything on the server. So that, in a sense, you kind of want to eliminate.

I do think the trade-off is, are you pushing to a Docker registry and shipping the thing all around, or is it just being built in some local-ish story, right? Interesting. Okay. So the reason that I found out about your article, your...

virtual environment article is I was actually looking around at your UV article because, wow, does UV speed up Docker builds? Oh, yeah. Oh, yeah. Jay Miller made a comment just yesterday. I saw that. It was pretty funny. He said, UV is the new AI technology.

And I suppose AI is the new blockchain. Everyone's excited about it. There's some minor level of controversy about it. But I'm a big fan. What are your thoughts on UV these days? First of all, for people who don't know — we're throwing around a lot of two-letter acronyms. You said XZ earlier, now UV. Forget XZ, that was bad. But what's UV? Well, UV is a

re-implementation of Python packaging, roughly speaking, right? In Rust. Yes, Rust. They started at the bottom. They implemented uv pip to replace pip, uv venv to re-implement virtualenv, and uv pip

compile, I think, was the thing. Yeah, yeah. To do our good old requirements.txt files, and they did it very fast. And already back then, I said in a video, it's nothing new under the sun. It's like, we've got nothing new

But this is how you approach this problem, because what I think most people don't understand — and this is a bit of a tangent — is what an incredible Jenga tower Python's packaging is. We've been building these things for 30 years, and there are so many loose parts, so many workflows that need to be supported. And oh my God, just look at UV's bug tracker and what kinds of workflows people need to have supported. Then you understand why...

It's hard. And why nobody moved the needle before: we did not have eight people working on it eight hours a day to really move the needle.

This portion of Talk Python is brought to you by WorkOS. If you're building a B2B SaaS app, at some point your customers will start asking for enterprise features like SAML authentication, SCIM provisioning, audit logs, and fine-grained authorization. That's where WorkOS comes in, with easy-to-use APIs that'll help you ship enterprise features on day one without slowing down your core product development.

Today, some of the fastest growing startups in the world are powered by WorkOS, including ones you probably know like Perplexity, Vercel, and Webflow. WorkOS also provides a generous free tier of up to 1 million monthly active users for AuthKit, making it the perfect authentication layer for growing companies. It comes standard with useful features like RBAC, MFA, and bot protection.

If you're currently looking to build SSO for your first enterprise customer, you should consider using WorkOS. Integrate in minutes and start shipping enterprise plans today. Just visit talkpython.fm slash WorkOS. The link is in your podcast player show notes. Thank you to WorkOS for supporting the show.

With 0.3.0, they've now added proper workflow tools, which is very exciting, right? Now they are attacking Poetry, PDM, and their own Rye, actually, which they adopted from Armin Ronacher. And now we have cross-platform lock files, which is very exciting for people like myself who develop on Macs

and an ARM Mac, no less, and deploy to Linux. Just asking for trouble there. Yeah. Just as you say, this is so incredibly fast. Like uv sync — you can call it after every command. The thing that makes sure that all your dependencies are at the state written in the lock file is faster than git status. And when I realized that, I realized what Charlie Marsh meant by

what he said on your podcast the other day, that UV is categorically faster. It just enables a completely different lifestyle — let's call it lifestyle. You behave differently when all those commands are basically free. That's UV. It's a workflow tool. It also installs Python. And I think probably the thing I was most interested in, your take — there was, over on Python Bytes,

We covered this thing by Simon Willison where he kind of summarized a Mastodon thread about UV and whether it being written in Rust is detrimental to the Python ecosystem or not and all of those things. But I think it was there. Your take was, look, fast is interesting. But one of the really powerful things, I think, is here is a single binary that if it's on your computer, you can do all things Python.

And right now, without UV, it's been really challenging. Maybe I want to use pip-tools, or I want to use pip to install something. All of those things are predicated on several steps. Do you have Python? Do you have the right version of Python? Have you realized you've got to create a virtual environment because you don't have write access

to where there's just, before you can get started, like, well, here's a whole set of conversations you need to have about not terribly complicated things, but things that people might not care about. And now with UV, it's just UV.

run — and you can even put a comment at the top saying these are the three libraries I need to run it, and it'll just run. Also, they don't break. That's what I meant with the one binary, right? I think most of Python's bad reputation around packaging is that things just break — because Homebrew updated your Python, or because

you didn't activate your virtualenv and accidentally installed something into your global environment, or you used pip install --user and now it's in all your virtualenvs and you don't know why. There's so much unpredictability around these things, and now suddenly we have this one thing

that behaves in certain ways that people understand and that people expect it to behave. As I said before, this is not necessarily the way I would like it to behave, but I understand why it's so important to just narrow the envelope further

of packaging, of the behaviors that we expect and that we as a community endorse. Totally agree. There's certain things I would like to see different as well, but I'm really happy that Charlie and team are working on this and pushing it forward and maybe breaking some paradigms or expectations saying, look, you don't even have to have Python installed. We can base it off of, gosh, I forgot the name of the project.

Not deadsnakes, but python-build-standalone, yeah. That's it, exactly. And we'll just grab it, right? If you don't have Python installed, we'll just grab it. Now a side issue, or maybe a tangent, parallel issue: I would love to see something in Python itself that gives you a binary that you can then deploy.

UV is great for getting your machine up and running and getting Python dependencies on it and stuff. But "Python, build my folder, my package, with an entry point or something" — like Go, like C++, whatever: here's a binary, you just run it, it doesn't care what's on your machine. I know there's some packaging-type stuff like PyApp — I have an app that's based on that — but those are still kind of

trying to piece things together in a funky way that Python will allow, rather than something truly... It's the same as with the standalone builds: Python really fights you when you try to do these encapsulated things. And my dream outcome would be if these standalone builds were upstreamed into proper CPython components,

because this is really what I would like to have from Python.org — these directories that I can put anywhere and they just work. But it's not trivial. And the original author of those standalone builds — it's been taken over by Astral now too, but the original author was Gregory Szorc, I think. And he also did this PyOxidizer project, which was very exciting to me,

but he kind of burned out completely on working with Python at all because it was so frustrating for him to deal with these things and how Python behaves and fights him when he tries to do these things. Yeah, these kind of changes would have to come from Python.

proper. Yes, it would definitely have to come from Python proper. Well, maybe someday. There are some interesting things coming along. We might even have Python in the browser. Weirder things have happened. Yeah, and out in the audience, Tony agrees that it's super powerful — he hadn't thought much about it, and it wouldn't bother him to do everything with it. It's amazing. Indeed. So, do you want to talk about the controversies, or... let me just ask you a question. So here, I think the controversy is twofold, and

I accept that it's somewhat controversial, but I wouldn't say that it's controversial to me. I think there are two things. One, UV is written in Rust. Ruff and uv are Rust, yeah. I see the challenge here. Yeah. Written in Rust. It's not Python, so that makes it harder. I don't know any Rust. I might be able to look at Rust code and think, that's Rust, but I'd have to do some learning to actually do anything with it. And the other is venture-backed versus just open source. Yeah.

Like, is that healthy? I think for the Rust part, for me it's like, look, a lot of libraries are written in C++, and I used to do C++ or C, but I'd rather not go back to C. If there's a bunch of C and a bunch of Rust, they're kind of similar to me: I'd rather not mess with those, but if I had to, I could learn it, is my feeling. So I didn't feel terribly worried about Rust in that regard. What are your thoughts? Just some quick takes on this. Yeah, the quick take is, I think,

it's much more a matter of context right now. And that's actually how the thread on Mastodon started. We were just starting to get away from having everything written in C, so we didn't have to learn C to contribute to certain projects, because we ported them to Python. Like the new PyREPL in Python

3.13, which I'm sure we'll be talking about, is now written in Python, and suddenly we have more contributions, right? Now, people feel it is a regression when we start moving back to Rust, which in certain ways it is. But then again, I think people underestimate how many Rust-savvy people we have in the community now. So

I think the upsides in this case outweigh the downsides. But of course, everybody has to decide it themselves. Yeah. And it's open source. If something goes terribly wrong. Yes. MIT Apache 2. Yeah. Yeah. There's already 646 forks. I'm sure there'll be some more.

I'm happy to see Charlie and team working on this. I told Charlie that a few weeks or a couple of months ago when he was on the show to talk about some of these ideas. I think what I would really like to stress here is that it's not like Astral came along and just forbade everybody from doing open-source packaging. We had

at least 10 years of trying to tackle this problem with the resources that we have as open-source developers. And it turned out that we cannot do it. It's just too complex a problem. Just a few months ago, I was literally saying that I would like to see someone put their money where their mouth is and bankroll a good packaging tool. And this is literally what just happened. So I'm

more on the relaxed side of that because I realized we just couldn't do it. We tried. We failed many times. There's a whole graveyard of packaging tools. Now we have a good one. One day we will probably have to maintain it because VC is going to VC, but it's

It's fine. It's interesting, that is for sure. I definitely know if you want to get VC funding, having Rust as your foundation seems to be the trick right now. But another thing I want, while we're on this topic, you made a point in one of the YouTube videos, which I'll give a shout out to in a bit, but you said something to the effect of,

be kind to all the people who have worked so hard on this problem before, and don't just trash the other projects. Six months ago, this thing was mission critical to you and it was the way that you did everything, and now

people are saying, oh, it's crap. I think it's important to keep some perspective, you know? Yeah. I mean, I've been on the receiving end of such excitement, so I know exactly how it feels. You can be excited about something without saying, "finally I can stop using X" —

ideally while tagging the maintainer of X, so they really know that you finally don't have to use their stuff. Yeah, exactly. You don't have to be like that. Come on now. So real quick, give us some of the tips for what you're doing to make your Docker builds faster. I know this build cache mount thing is pretty interesting. I adopted some of the ideas you have in here and some of my own, and Docker builds are so much faster. Yeah.

I mean, the most important thing is to have multi-stage builds, which means that you have a separate...

container that builds your application — whether it's a virtualenv that you ship, as we said before, which I like better, or you just build into a directory and then somehow get it over — because you don't want to have a C compiler in all your production containers. It's just a waste of space, and, as you said, living off the land — it's not good. You want to drop the hackers into a desert, not a lush forest. Exactly. I mean, they're still going to figure it out, but...

It's just nicer and easier to reason about. Then judicious layering makes a big difference: be very clear about when you build what, and basically sort your commands by how likely they are to change. So in a build container, that means your last step is copying the application, and the step before that is copying the dependencies. Yeah, absolutely. And so I'll link to one of your articles here. But the other thing that I think is pretty

interesting, that people maybe don't realize or haven't used a lot... let me see if I can find it... and here you go: when you run Docker commands, you can say --mount=type=cache,target=/root/.cache. Yeah, that's the build cache mounts that you mentioned. Yeah, yeah, yeah. And

what that'll do is — it doesn't matter in this case, the way you've got it set up, whether it's pip or uv — a lot of times when you pip install something, they'll say, oh, that's already cached locally, we'll just install it instead of downloading it, right? Yeah. And that'll store all the cache stuff somewhere on the host machine, right? It depends, it depends. This is something I didn't spend a ton of time on, because you can configure where this is stored. And in the simplest case, it is just

pulled from the last container that you built. That is the simplest configuration that you can use. So now I tag a latest container, which I never did before, because for deployments I usually use numbers. But now there's always a latest tag, and it will take the cache from there. So it's most likely pretty good.

Because it was literally the last build. Yeah, yeah. Okay, interesting. I see that it's part of the build cache, the Docker layer cache, not some folder laying around. I think it's actually part of the container, if I'm not mistaken, completely. Okay, yeah. So definitely lightens the load on PyPI a little bit and makes your stuff faster. So that's all.
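A hedged sketch of the multi-stage, cache-mounted build being described (base images, paths, and the requirements layout are illustrative assumptions, not lifted from Hynek's article):

```dockerfile
# Stage 1: build stage -- has compilers and build tools, never ships.
FROM python:3.13-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
RUN python -m venv /venv
ENV PATH="/venv/bin:$PATH"

# Dependencies first (change rarely), application code last (changes often),
# so the earlier layers stay cached across most builds.
COPY requirements.txt .
# The cache mount keeps pip's download/wheel cache between builds
# without baking it into any image layer.
RUN --mount=type=cache,target=/root/.cache \
    pip install -r requirements.txt

COPY . /src
RUN pip install /src

# Stage 2: runtime stage -- no compiler, just the venv and the app.
FROM python:3.13-slim
COPY --from=build /venv /venv
ENV PATH="/venv/bin:$PATH"
CMD ["python", "-m", "myapp"]
```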

That's all good. All right. What is next on my list here? Let me have a look. All right. Installing Python. That's probably maybe a segue that keeps us in the same realm, honestly. How do you get Python on your machine these days? If you're going to work on something, do you brew install it or do you download it from python.org? No. As my friend Justin once wrote, Homebrew's Python is not for you.

That Python is for Homebrew to use in their applications. You notice it very fast, because whenever they upgrade their Python, all your virtualenvs break. So...

This is not great. So I'm on Mac OS and I'm on an Apple Silicon Mac, which makes things a bit more complicated because I need to develop some of my projects on Intel because I need to use a driver for a very shitty proprietary database. And of course, they don't have that for ARM. And it's written in C or some sort of binary layer. Yeah, yeah, yeah. It's a binary blob.

So I have like these two types of projects. And so the official Mac OS installer from python.org has this amazing property that you get Python 3.13-intel64. And when I create a virtual env with that, I know this virtual env is Intel. And if not, it's not. And this is very nice and predictable for me. So I really like this behavior. And on...

servers, we use deadsnakes. Yeah, very cool. Lately I've been switching to just uv venv and giving it a Python version, which I guess indirectly is using python-build-standalone, right? They have a bunch of quirks because of the thing we talked about before — Python fights it a little bit.

I cannot use those in production, at least not for everything, because I just had some... some of my packages just won't install on them. That will probably be fixed at some point, but yeah. Python.org is a safe bet. And I have to shill my friend Glyph's tool, MopUp, which allows you to upgrade those official installers very easily, especially when you use uvx, because then it's just one line. Yeah, MopUp is an interesting project from Glyph, as you pointed out, that will...

Update your Python, right? Yeah. Yeah, if you've installed it from python.org. All right, another thing that's interesting if you install Python from python.org is in the installer, you can customize it and you can check a checkbox that says, give me the free threaded Python.

Let's talk about free-threaded Python. What do you think about this? This is one of the more significant changes to Python and one of the bigger PEPs. You know, we went through the 2-to-3 controversy, or challenges, so much. And I think that was largely because strings were changed. I know there's a bunch of other changes, but it seemed to me that the real challenge of a lot of the upgrades of a lot of the packages has been, well, that used to be bytes or strings — now, what's the difference?

We've got to treat them differently and we've got to change that. That's the part that really took the work. And threading seems harder than strings. I don't know, maybe it's not. But is this going to be a big challenge for people or what do you think? Yeah, I guess we will see, right? Like normal code is probably not going to be affected a lot, but...

For example, it's perfect timing, since I'm maintaining argon2-cffi. As the name says, it depends on the CFFI project to do the bindings. It does not support free threading yet, so I cannot build free-threaded wheels with that yet. Of course, people are already complaining in my issues about that.

Why isn't your puppy parallel? Come on now, get this going. Yeah, so there are probably more projects than just CFFI that are still stuck in some way, because they are not important enough for someone from Meta to be parachuted into the project to fix everything. I personally am very excited about free threading, and I was one of the... I mean, there was... I don't know what to call it, but...

a bit of an election, right? To decide whether or not. And it was like this: we made public statements about how we feel about this whole thing. And I personally felt good about it back then. I still do. I think it's a big chance — the one shot we will have — because either this one works out, or it's not going to happen ever. We will never have free-threaded Python,

ever, because this is like the perfect constellation of stars; this is not coming back. That's a really good point. We've had the Gilectomy and other attempts before, but this is it.

It's fast. It's accepted. It's already shipped. You're right — if this doesn't work... And we have Meta, who also put several paid people on the job to get it done. Again, this is one of those things that just cannot be done by volunteers on the beer money from GitHub sponsors. This is a big thing that needs serious money to be solved.

This portion of Talk Python to Me is brought to you by Bluehost. Got ideas, but no idea how to build a website? Get Bluehost. With their AI design tool, you can quickly generate a high-quality, fast-loading WordPress site instantly. Once you've nailed the look, just hit enter and your site goes live. It's really that simple. And it doesn't matter whether you're a hobbyist, entrepreneur, or just starting your side hustle.

Bluehost has you covered with built-in marketing and e-commerce tools to help you grow and scale your website for the long haul. Since you're listening to my show, you probably know Python, but sometimes it's better to focus on what you're creating rather than a custom-built website and add another month until you launch your idea. When you upgrade to Bluehost Cloud, you get 100% uptime and 24-7 support to ensure your site stays online through heavy traffic.

Bluehost really makes building your dream website easier than ever. So what's stopping you? You've already got the vision. Make it real. Visit talkpython.fm slash bluehost right now and get started today. And thank you to Bluehost for supporting the show.

I'm excited, but I don't think it's going to be that big of a problem. And I think it's going to enable some very exciting things that we are not thinking about, because a lot of the arguments against it were: oh, threads are bad, it's hard to write multi-threaded code — which is all true. The problem is that we just don't have good APIs for that. You can build good APIs for threading; you just have to look at what Go is doing with channels. We can have channels in Python once we have good threading. Right.

But we never bothered to build those kinds of APIs, because it doesn't make any sense with the GIL — it's just better to go async. That's a really good point. And I think there's an analogy with async. Remember, asyncio shipped in Python 3.4, but async and await, the keywords and the programming model, shipped in 3.5. So for a while it was like, well, you guys figure it out. And then the programming model got way nicer. We might not have gotten this shot if asyncio didn't happen back then, because

It just showed how we can do like a test balloon in a community. We don't have to do it perfectly on the first time. I think we had even breaking changes in minor releases of 3.5.

Because we had to fix things up. So, yeah, I'm sure it was a bit of an inspiration. I remember a breaking change somewhere in Python, further down the road, around the async stuff. The way to say a function was async — you could use a decorator before async and await came along. And my code, which was running on MongoDB using the Motor async driver —

they were using that older style to get their async. And then when it got taken out of Python, well, stuff doesn't run anymore — there is no such decorator, async-something, I can't remember exactly what it was. It took a week or two before I could run the new Python, but that wasn't a big deal. You know, I just

didn't use it, but it did break. There were plenty of breaking changes in asyncio. I was quite involved with that whole thing, and I remember things breaking a lot in the beginning. But it's good. It's good that we don't carry that technical debt around with us now. Yeah, absolutely. I want to give a quick shout-out to a pytest plugin called pytest-freethreaded from Anthony Shaw and friends.

And this one basically lets you run your tests in the free-threaded Python mode. And

and run them with parallelism and stuff like that. Because if you're going to have a library and you have tests for it, and it's supposed to support free threading, you should probably test it with free threading actually running at some point. People are trying to move their libraries along and I think they said, look, we tested this on one of the libraries that said it was free threaded, supported, and it segfaulted Python. So maybe it wasn't quite ready. So run your tests if you're going to support this. I think it needs to be stressed that

The trick to not break your code is to just not mutate global data in multiple threads at once. Just avoid doing this and you'll be fine. And most people don't. So I don't expect like this large scale breakage. The problem is more in the lower level things like CFFI and those kind of things that are going to take a long time to figure out, I think. Yeah, I agree. Like, does your web framework support free threading? Well, you're going to find out pretty quickly. You definitely will.
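An illustrative Python sketch of that advice (the counter example is hypothetical, not from the episode): shared mutable state is what bites under free threading, and the usual fix is to protect it with a lock — or not share it at all.

```python
import threading

# Hypothetical example: a shared counter mutated from many threads.
counter = 0
counter_lock = threading.Lock()

def work(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, "counter += 1" is a read-modify-write race;
        # with free threading (no GIL), such races surface much more readily.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, because every increment happens under the lock
```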

I'm personally excited about it as well. I think we're going to have a lot of good times. What else about 3.13 are you excited about? I feel like Python 3.12 and 3.13 was more like planting trees than a good harvest, you know? There's a lot of work involved

a lot of very, very hard work went into these two releases, and it doesn't benefit us immediately. They built a foundation for great things to come — free threading is an example, sub-interpreters are an example. Internally — I mean, I shouldn't admit it, but I'm also a Python core developer, I'm just not very active because I have a lot

of my own stuff going on nowadays. But internally, 3.13 is kind of called a secret 4.0, which everybody who deals with the C API has noticed, because unlike the releases before, like 3.12 and 3.11, it took quite a while for the ecosystem to

catch up with all the changes to the C API, because there have been a lot of changes. I mean, the PyREPL is great. Pablo has made a big push for better error messages, which is also nice, and they continue to get better. Yeah, it's great. These are the things that are truly user-facing, but things that I don't really

care about that much. I mean, I use IPython, so I will probably keep using it. I just hope that maybe pdb will get better now that we have a better REPL, because my favorite debugger, pdb++, has been broken for two releases, and

I'm not really excited about anything from 3.12 and 3.13 themselves. I'm just excited for what those releases are going to give us in the future. I'm excited for the entire Faster CPython initiative as it works out across releases, you know. 3.11 was probably the biggest one of those, I guess. Although we'll see about this free threading. If you've got a lot of cores, that's a massive difference — if you can take advantage of it. Yeah, there's a catch, right?

Because I think what was said is that they would merge it if the penalty for non-threaded code is less than 10%, which they hit. But single-threaded code is going to be slower in a free-threaded Python build, and that's something we have to keep in mind. Right. Yeah. Well, I guess it depends on your workload, what you're doing, right? But with a little dev tooling, I fixed this: What's New in Python 4? Here we go. Ship it.

Although I've got a few more, a few more spots — I've got to do a little find-and-replace in this document. Looking at the What's New in 3.13. Okay. Also, I think the new REPL is quite nice. Using the traditional REPL, I think, has been really challenging. Did you write a loop? No.

Oop. You want to go back and edit it? Well, you've got to remember that was four lines up for the for part, and then up again to get... like, it was just challenging to do multi-line edits in there. Now it's a lot nicer. Well, just simple things like key bindings sometimes not working, or it just printing weird stuff into your console. It's

Yeah, but these are the things, kind of like the virtual environment: get your virtual environment set up, get your dependencies installed. It doesn't sound like that big of a deal, but it is friction for people who come to the ecosystem. Every paper cut less is good. Yeah, that's 100% right. Definitely good. YouTube.

I mean, look, you're on YouTube right now, but we're talking about you being on YouTube on YouTube, so it's kind of meta. But yeah, you're doing a really good job with your YouTube videos lately. Thank you so much. Yeah, you're welcome. I think I've done a lot of YouTube videos, but even more, I've sat down by myself behind a camera for course videos. I have like 270 hours of course videos. And you know, the editing...

I'm excited to talk to nobody. All of these things are not that easy to do. I want to just talk about what you're up to over there a bit. Maybe let's start with the silly reason why I started doing this. I'm very aware and self-conscious about how bad my spoken English is. So it's

It's my third language, and I don't speak it often enough. There's this joke among my friends at a PyCon: the Hynek who just arrived, jet-lagged, who has only spoken English to the US immigration officer while trying not to get detained, is very different from the Hynek a week later who has spoken English with his friends for a week. Right. So my hope was that maybe it would force me to speak a lot more English in my everyday life.

It turned out I'm way too slow at producing videos, so that part didn't work out at all. I mean, you say like 300 hours, but for me, the 25-minute video that I've just published — that's like one hour of film material that I'm cutting down to 25 minutes. Yeah. So that's a lot of work, a lot of editing. And yeah, the more serious reason, the one that actually worked out — and this is going to get a bit touchy-feely —

I do have a lot of people who follow me across my platforms and know me from PyCon. And I've heard the term "PyCon cinematic universe" put out there, which I'm probably a part of. But this universe is tiny. The Python community is huge — people don't understand how huge it is. And...

I mean, just look at these Twitter accounts that post these trivial things that are sometimes even wrong, and they have hundreds of thousands of followers. And the same is true on YouTube. I haven't found any Python content on YouTube that I really liked, for the things that I care about. Maybe there's amazing data science stuff there, but not the things that I care about. So I thought, with the usual programmer's hubris: sure, I can do that.

And I'm trying to... We're going to build our own. We're doing... How hard can it be? And so I'm trying to do content that I wish existed for me 15 to 20 years ago. But to be honest, like the ultimate trigger is that

Before COVID, I had this nice lifestyle where I wrote one "amazing" conference talk per year — scare quotes — and I could just travel the world and give it. And this doesn't work anymore. The conference world has more or less collapsed. Conferences have no money anymore; it's much harder to get invited. And I told you I work for a small company, so my

self-actualization and creativity come mostly from the community, and I felt cut off. I couldn't do my creative work anymore because I couldn't pay for all those flights. The same happened to writing. I mean, with whatever Google and Twitter are doing right now, unless your blog post lands on the Hacker News front page, you basically get no traffic anymore.

it has been completely obliterated. Like my blog has like 40% less traffic than a year ago. And I don't think it's that my content got stale. It just changed. I thought I should try a new avenue. I should go where the people are. I've read somewhere that like YouTube is the second biggest search engine or something. So I'm trying my hand.

It's an interesting outlet. And you know, you look at, the numbers are ridiculous, right? I look at, oh, this person just posted a video on this and six hours later, it has more views than if it was a major television series or something like this. It's a weird world. It's a weird world for sure. I'm a niche in a niche, so I don't expect ever to blow up like this. But yeah, we'll see how it goes. It's still great. Let's talk a little bit about some of your projects now.

You talked about Argon2. I think maybe one of the things you're best known for might be attrs, would you say? It depends who you ask. I think attrs has had the biggest impact on the community, because it is the direct predecessor of data classes, which I designed with Guido and Eric back then, and which go back to attrs because people wanted something like attrs in the standard library. All right, well, tell people about attrs for those who don't know. Imagine data classes, but more powerful, right?

People sometimes try to compare it to Pydantic, but that's not it. attrs is not trying to be a validation thing. attrs is there for creating classes that you would have written yourself — I call it a class-writing toolkit. And data classes keep catching up with what we do, but we are still ahead: we have more features, we work better. And yeah,

Especially for data science-y things, because we allow people to define comparisons in a custom way, so you can compare NumPy arrays and everything. And you don't need to use type annotations if you hate them. That's also a big feature we have. Yeah, so when you create a

class, you know, you could just put some fields on it and that's great. But there are probably some other things you might want to do, like define a hash, an equals and a not-equals, a str and a repr, and all these kinds of things, right? Yeah. You just get that out of the box, right? And maybe you want the class to be frozen. Maybe you want slots and those kinds of things. Slots, interesting. I don't know if everyone knows about slots. Slots are kind of a cheat code for low memory, fast access. Yeah.
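A small, hedged sketch of what attrs generates for you out of the box (the Point class and fields are made up for illustration; see the attrs documentation for the authoritative API):

```python
import attrs

@attrs.define(frozen=True)  # __init__, __repr__, __eq__, and slots come for free
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0) -- generated __repr__
print(p == Point(1.0, 2.0))   # True -- generated __eq__
# p.x = 5.0                   # would raise: the class is frozen (immutable)
```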

Tell people about slots, Dunder slots. When you create a class in Python, by default, it's a so-called dictionary class because it has a Dunder dict

where all the attributes are defined. And you can do whatever you want with those classes — you can just add more attributes. But if you define a slotted class, which doesn't have this dunder dict but dunder slots, you enumerate all the attributes that this class will have, and that's then baked into the class. So this is the big upside: if you try to assign an attribute that doesn't exist, it will throw an error, which is good, because it probably means that you mistyped something.

This is like the thing that you see from the outside. From the inside, it's much more interesting because those classes are more baked.

So they are just faster. They're faster and use less memory. Yeah, they save quite a bit. I think named tuples use more memory, you know, and you'd think tuples are about as bare-bones as it gets. But yeah, it's quite interesting. My experience is that even attribute access is a tiny bit faster than in regular classes. Yeah, and this is why in the new APIs it's the default in attrs. Data classes grew slots too, but there it's opt-in; in attrs, it's now

opt-out, because it's really good. Yeah. Yeah. Because why not? And my experience is, it's cool to have the dynamic nature of classes and types, and you can do all sorts of stuff to change them — but most of the time you don't. Yeah. And if you're not going to, like you said, you get the "I mistyped something" catch, because it won't let you do it. And the better memory and better performance, right? Yeah. I think this is something that

Rust has changed too, the influence of Rust on Python has been that we got a little less dynamic with time, which I personally think is a good thing because I started moving away from that sooner. Like using dictionaries and tuples and all these things, it's

It's just very hard to reason about when you come back a year later and try to understand what this code is doing. What is this **kwargs thing? What are we doing here? There are still so many APIs like that. You have a nice IDE, you press F12, you jump to the function — and it's **kwargs. Thank you. Yes, exactly.

You know, if you're lucky, there might be a big docstring that says these are the seven things that can actually go in these keyword arguments and here are their types — and it's actually true and actually comprehensive. I feel a lot of times when you're doing that kind of stuff, if you end up back in the documentation, something's kind of gone a little bit wrong. You should probably be able to tell, at least,

Does it take arguments? What might they possibly be, you know? But then you go to the documentation, it's auto-generated. You're like, well, no, not getting far on this one. All right.
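For reference, an illustrative plain-Python sketch of the slots behavior described a moment ago (the class is hypothetical):

```python
class SlottedPoint:
    # Enumerating the attributes up front removes the per-instance __dict__,
    # so instances are smaller and attribute access is a bit faster.
    __slots__ = ("x", "y")

    def __init__(self, x: float, y: float) -> None:
        self.x = x
        self.y = y

p = SlottedPoint(1.0, 2.0)
p.x = 3.0          # fine: "x" is declared in __slots__
try:
    p.z = 4.0      # typo / unknown attribute
except AttributeError as exc:
    print(exc)     # 'SlottedPoint' object has no attribute 'z'
```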

Well, since this is the opinions-of-Hynek episode, what are your thoughts on types? What do you think? Like I just said, I was moving to more static things in general, because it's just easier to reason about, easier to understand. And it informs the design of APIs too — I find the APIs that come out nowadays

are wildly more accessible than what we used to have. And there's Łukasz Langa's keynote from a few years ago at PyCon US: if something is hard to type, maybe it's just bad design. That's an interesting take, yeah. It's not always true, by the way. There are legitimate use cases for Python where typing is a bad fit. It's completely true, and it's not great for everything. But I personally just take any help that I can get that helps me make my code more understandable and

helps me ensure that I'm writing what I think I'm writing. Types have helped me the most with finding logic errors between what I thought I was passing around and what I was actually passing around, because the things that work by accident — because something gets auto-converted somewhere — are the most dangerous things: they explode when you are on the beach with your margarita.

Yeah, they do. And that's not good, as we already pointed out. You know, another thing that I really like about them, I'm kind of on your team with this one as well. And I really like that I hinted at before, you don't have to go to the documentation and go like, what is this? I know it says it takes an options. What is an options? You know, like, I have no idea what goes here. Still, this doesn't tell me.

But if it says it's this class or it's a dictionary, well, at least you've got some hints — this is where I start knowing what I'm supposed to do here. And also with the editor, right? If I hit dot, it gives you a way better chance of just continuing to type instead of going off to look things up. And also — and that's something people don't talk about that much — it forces you to give things names. Sometimes when you have trouble giving a type a name, that's

a sign that you don't quite understand what the thing is doing, or maybe it's doing too many things. And it just...

That way you have this instant feedback about your design and how you think about things, and I find that very useful. And also, I have to point out: if typing were just marking things that could be None or not, that would already be like 99% of the value of types — just helping to avoid NoneType errors. Yeah. Optional or not optional. What are we going to do here? Yeah.
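A tiny, hypothetical sketch of that None point: annotating what may be None lets a type checker (mypy, pyright, and so on) flag the unguarded access before it explodes at runtime. The function and data are made up for illustration.

```python
def find_user_email(user_id: int) -> str | None:
    # Hypothetical lookup that may fail; None signals "not found".
    users = {1: "hynek@example.org"}
    return users.get(user_id)

def send_report(user_id: int) -> None:
    email = find_user_email(user_id)
    # A type checker flags this next line: "email" might be None here.
    # print(email.upper())
    if email is not None:        # narrowing -- now the call is safe
        print(email.upper())
```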

I do like that it's optional, that it's progressive. I came from static languages to Python, and the fact that it didn't have types, I kind of wanted something. But I also played around with TypeScript. I battled with the TypeScript type system where it's like, well, we don't know what this thing is, but you're supposed to pass in something that's known in the type system. I know this will work, just get out of my way and let me do it. I think there's a pretty good balance there.

I have a whole keynote in my head about how the true Python superpower is its graduality. Because you can start with the biggest trash, shitty code in a Jupyter notebook and just work yourself up: putting it in a file, running the Ruff linter, then adding types and so on. And you end up with production code that's

the best kind, but you started with complete shit. And that's how my brain works. I like to try things out and then just polish the turd, so to say. Polish it when it's proven that it's worth putting your time and energy into, right? Exactly. And I find this very frustrating with languages like Rust or Go, and I read a lot of Go. Go won't even compile a thing if you have an unused import. And that is just so frustrating. Yeah.

Oh my gosh. Just eat my trash and make it work. I will fix it later, I promise. That's a pretty interesting take. All right, well, let's maybe give a shout-out to Stamina also. What's the story of Stamina? That's a good one. Yeah, of my latest projects it's kind of the most popular one, because it solved a problem we all have: we all used Tenacity, which is a great package.

But it has no good defaults, because it's also a very old package, and everybody's just copy-pasting the same standard settings around to make it behave in a good way. And, at least back then, it didn't do proper typing. So I wrote this tiny shim around it so I could stop copy-pasting the same things over and over again. Turns out a lot of people have the same problem. Yeah, so if you've got a function, for example, like your example here, that might...

crash because of an HTTP error. Like, you know what? If I just tried this again, I bet it would work, right? Just add stamina.retry, specify the error. Yep. It's been a while since I checked it out, but it's got exponential backoff and other things like that. And jitter. Oh, and jitter. Okay, nice. Yeah.
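In code, that looks roughly like the example in stamina's docs; the URL and function name here are made up:

```python
import httpx
import stamina


# Retries on transient HTTP errors, with exponential backoff and jitter
# provided by stamina's defaults, giving up after three attempts.
@stamina.retry(on=httpx.HTTPError, attempts=3)
def fetch_status(url: str) -> int:
    resp = httpx.get(url)
    resp.raise_for_status()
    return resp.status_code


print(fetch_status("https://example.org"))
```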

I use it in certain things. It's not very often that you need something like this, but when you do, you're like, oh yes, this is it. In my experience, you need it whenever you talk to external systems, because if you're doing API calls, they will fail at some point. Just put it on there and be done with it.

Nowadays it has a lot more features. You can now pass a callable that decides whether you really want to retry. For example, you get an HTTP error, but in HTTPX there's no difference between an HTTPError because you got a 500 back and one because you got a 401 back. If you keep retrying on a 401 or 403, you will get banned from the API, right? On a 500, you probably want to retry, and these kinds of things. Yeah, absolutely. Yeah.
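A sketch of that 500-versus-401 distinction, assuming a stamina version where `on=` accepts a predicate, as described above; the endpoint and function names are illustrative:

```python
import httpx
import stamina


def retry_only_server_errors(exc: Exception) -> bool:
    # Retry 5xx responses and plain network failures, but never 4xx --
    # hammering an API with a bad token is how you get banned.
    if isinstance(exc, httpx.HTTPStatusError):
        return exc.response.status_code >= 500
    return isinstance(exc, httpx.HTTPError)


@stamina.retry(on=retry_only_server_errors, attempts=5)
def fetch_report(url: str) -> dict:
    resp = httpx.get(url)
    resp.raise_for_status()  # raises HTTPStatusError carrying the response
    return resp.json()
```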

Yeah, the whole rate limiting thing. That's also tricky to work with. But I suppose if you said, okay, we'll retry it, but put some kind of delay in, maybe you'll be all right. The delay is built in. That's the whole point. It's exponential backoff. One thing I do want to talk about as well is Argon2. Yes. Real quick: very cool algorithm. It's one of these so-called memory-hard algorithms

that is resistant to brute-forcing via massive parallelism and so on. Want to talk about that real quick? What do you want to hear? Well, just tell people what it is. You know, what's your project about, and why should people consider using it? Yeah, so many, many years ago, I think it was like 2016 or so, there was this Password Hashing Competition, because we needed to find a way to...

hash our passwords in a way that is, as you just said, memory-hard. If you hear about rainbow tables, you shouldn't be using MD5; all these things are part of that. We've got to get past that, right? This is the next level, because people are running ASICs or programmable boards built specifically for cracking passwords in parallel. The only way...

to fight this is to make it memory-hard, so checking these passwords, this hashing algorithm, also needs a lot of memory to work. There's another famous one called scrypt, or yescrypt, which is nowadays also part of the standard library. So there's that, but Argon2 won the Password Hashing Competition. And yeah, it works; it's in wide use.
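For reference, password storage with Hynek's argon2-cffi looks roughly like this; a minimal sketch using the library's defaults:

```python
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # memory-hard defaults out of the box

# At signup: store only the hash, never the password itself.
stored_hash = ph.hash("correct horse battery staple")

# At login: verify() raises on mismatch rather than returning False.
try:
    ph.verify(stored_hash, "correct horse battery staple")
    print("login ok")
except VerifyMismatchError:
    print("wrong password")

# If the recommended parameters have changed since the hash was created,
# rehash while the plaintext password is still in memory.
if ph.check_needs_rehash(stored_hash):
    stored_hash = ph.hash("correct horse battery staple")
```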

I've been maintaining it for, like, six years now, or I think longer. So if you're storing your own passwords, this is certainly one of the good options. Do you know what the story is with, oh gosh, what is it, passlib, I believe? Yeah, that's it. Passlib is throwing a lot of warnings about things it's using that are being deprecated or even removed.

I feel like it was really great for a while, but now you go to it, it's like... Yeah, last edited four years ago, that's a problem. Yeah, it's not great. And I feel like it was on, almost on SourceForge or something for a while, which is really... I can't remember exactly where I saw it. Bitbucket. It was in Bitbucket. Bitbucket's fine. Yeah, okay, Bitbucket's fine. No, it's not. It's gone. Oh, is Bitbucket gone? Yeah, more or less. Yeah.

You can see the URL. This is Heptapod, which is like the only Mercurial hosting that open source projects can use nowadays. There you go. Bitbucket deprecated Mercurial support, yeah. All right, but do they also have Argon2 support?

They do, which is nice. They use my library. Yeah, that's what I thought. Super cool. Awesome. All right, let's close this out with one more take. I think we're pretty much out of time, but it's been a lot of fun. LLMs, LLMs writing code for us. Are we just going to be out of work in a year or two? What do you think about this? I'm not super worried about it, but it's definitely changing things.

When you say we, it can mean we, the two of us; the answer is clearly no. I think we as in Michael and Hynek will benefit from LLMs. But we as programmers in general, and especially junior programmers, that's going to be a different thing. And I wouldn't dare to guess how that will play out.

I personally use LLMs. I use Anthropic's, but I use it in a way where I feel like you have to be very senior to actually use it well. Because when you ask it for code, you just get code with old libraries, because it's trained on old code from GitHub, right? Copilot once gave me a

SQL injection code suggestion. What is the most common security problem in code? Yeah, SQL injection. Of course it's going to be in those LLMs, and it's suggesting it to me. It is a useful tool, though. It usually doesn't give me exactly what I need, but it gives me enough to help me ask a search engine for what I actually need. That's what I'd say. What it's actually really good for, in my experience, is writing SQL, because

my brain somehow cannot write CTEs (common table expressions). The syntax, I don't know, I just can't. So I usually ask an LLM to write the CTE for me and then I write the rest of the query. Well, for me, that's regular expressions. That's why I go to LLMs. I'm like, you need to write me a regular expression with some capture groups, because that's too much for me. Nice.
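For the curious, here's the sort of thing being asked for, as a plain-Python sketch with invented field names:

```python
import re

# Named capture groups make the result self-documenting -- handy when an LLM
# (or a colleague) hands you the pattern.
LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<message>.*)$"
)

match = LOG_LINE.match("2024-10-17 12:34:56 ERROR database connection lost")
if match:
    print(match.group("level"))  # -> ERROR
    print(match.groupdict())     # -> {'timestamp': ..., 'level': ..., 'message': ...}
```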

I've been using LM Studio, which is really nice. It lets you download a bunch of different models. I've been running Llama and Mistral lately, and they've both been pretty good. Yeah, but they run locally, right? It's free. It's super nice. One thing that's pretty interesting here, and you've talked about this as well, and I don't know what this trend means: if you go to Stack Overflow Trends and look at the most popular languages, and you look right around...

late 2022, early 2023, there is a ridiculous drop in the popularity of questions on Stack Overflow for Python and for JavaScript.

And interestingly, not a whole lot for the others. But that's when ChatGPT came out. So I don't know what it's going to mean, but it's clearly having some kind of effect somewhere. But that might just be making people more productive, so they're not on Stack Overflow. That might be a Stack Overflow problem, not a developer ecosystem problem. The friction to ask an LLM is much smaller than asking actual people and then...

get trolled by some moderator or whatever. So I can see that people are going to prefer LLMs to talking to people. And also most questions on Stack Overflow are quite trivial, to be honest. So I think that makes sense. How do I connect to a database using this library, right? How do I disable SSL because I'm getting this certificate error all the time? The classics. Yeah.

But it also means that a lot of trash code is being written right now, right? Like the things I just told you about, where you and I look at this code and say, okay, this is in principle useful, but I cannot put this into production because the error handling is wrong, or there's some security issue, or whatever. This is something that actually concerns me. A lot of people are writing code they don't understand and putting it into production. And I think that's going to have

consequences that are not going to be pretty. Yeah, I totally agree with you. And one of my concerns, which you touched on as well, is: how do you go from beginning developer to slightly beyond junior developer? I don't know if LLMs are going to help people make that jump quicker, or if they're just going to turn that jump into a wider chasm, because people won't need juniors; they'll just ask the LLM to do the thing, you know? Yeah. This is the problem. Like,

bean counters are going to look into Excel and wonder what makes more sense, right? And the fact is, they're eating their seed corn for next year, and suddenly it's, oh my God, why are there no new senior software engineers? Well, we didn't make them. Yeah, it's going to be like COBOL: fewer and fewer people who are really, really needed to keep things working. We will never retire, Michael. Exactly. Exactly.

Yeah, we're going to be programming on the beach with our margaritas in our 70s. Well, let's leave it there. Is that a positive take? I don't know. I really enjoyed our conversation, though. And a lot of good advice for people. So thanks for being here. Yeah, thanks for having me. Yeah, you bet. See you later. This has been another episode of Talk Python to Me. Thank you to our sponsors. Be sure to check out what they're offering. It really helps support the show.

This episode is brought to you by WorkOS. If you're building a B2B SaaS app, at some point your customers will start asking for enterprise features like SAML authentication, SCIM provisioning, audit logs, and fine-grained authorization.

WorkOS helps ship enterprise features on day one without slowing down your core product development. Find out more at talkpython.fm slash workOS. And this episode is brought to you by Bluehost. Do you need a website fast? Get Bluehost. Their AI builds your WordPress site in minutes and their built-in tools optimize your growth. Don't wait. Visit talkpython.fm slash bluehost.com.

to get started. Want to level up your Python? We have one of the largest catalogs of Python video courses over at TalkPython. Our content ranges from true beginners to deeply advanced topics like memory and async. And best of all, there's not a subscription in sight. Check it out for yourself at training.talkpython.fm.

Be sure to subscribe to the show, open your favorite podcast app, and search for Python. We should be right at the top. You can also find the iTunes feed at slash iTunes, the Google Play feed at slash Play, and the direct RSS feed at slash RSS on TalkPython.fm. We're live streaming most of our recordings these days. If you want to be part of the show and have your comments featured on the air, be sure to subscribe to our YouTube channel at TalkPython.fm slash YouTube.

This is your host, Michael Kennedy. Thanks so much for listening. I really appreciate it. Now get out there and write some Python code.