
2.5 Admins 230: Pool of Theseus

2025/1/16

People
Alan
Jesse
Jim: a technical expert focused on IT automation and network security
Joe: dealing with an underwater car loan and looking at several options to ease the financial burden
Patrick
Topics
@Jim: OpenZFS 2.3 has been released, and the most noteworthy feature is RAIDZ expansion, which took nearly 10 years to develop; a RAIDZ2 can now grow by adding a seventh disk. The fast dedup improvements are also worth a look: they reduce deduplication's performance overhead and introduce a dedup quota and dedup pruning to better control memory and storage use. I wrote an article benchmarking fast dedup against legacy dedup, and the results show fast dedup's performance penalty is roughly half that of legacy dedup. In practice, fast dedup is useful in plenty of situations, especially where some performance loss is acceptable, and unlike legacy dedup its overhead doesn't climb sharply as the dedup table grows, which makes it far more practical in large deployments. Another important improvement in OpenZFS 2.3 is JSON output for the most common commands, which will make scripting much easier. The release also supports file names up to 1023 characters, mainly to better support non-English character sets.

@Alan: I think raising the file name length limit may be a mistake; I don't like users using excessively long file names. Although the longer limit is meant to support non-English character sets, I'd have kept the 255-character limit, because 255 Chinese or Japanese characters can already say quite a lot. OpenZFS 2.3's Direct IO feature lets users flag files to bypass the cache, which can improve NVMe performance in some cases, especially at high throughput where single-core performance easily becomes the bottleneck; multithreading can push NVMe performance further.

@Joe: On availability, OpenZFS 2.3 supports Linux kernels from 4.18 to 6.12, but in practice most users will need to wait for distribution packages, for example Ubuntu 25.10. Since the main part of OpenZFS 2.3 is the kernel module upgrade, it may well be backported to 24.04 soon after 25.10 ships.

Chapters
OpenZFS 2.3 is released with many new features, including RAIDZ expansion, fast dedupe, optional JSON output, support for longer file names, and DirectIO. The improvements are significant for home users and larger deployments alike, offering enhanced performance and flexibility.
  • RAIDZ expansion allows adding drives to a RAIDZ pool
  • Fast dedupe significantly improves deduplication performance
  • Optional JSON output simplifies scripting
  • Support for longer filenames accommodates non-English character sets
  • DirectIO enhances performance by controlling caching

Transcript


Two and a half admins episode 230. I'm Joe. I'm Jim. And I'm Alan. And here we are again. Before we get started, you've got a plug for us, Alan. Managing and tracking storage performance. Yeah, this is an article on using a feature of ZFS that gives you counters per dataset. So when you have multiple different file systems, you can actually get IO stats from each individual dataset.

So if your system is busy, it can help you track down which applications or which of the datasets in your system is the one that's making it busy. Is it actually these three different things that are busy? Is it that one VM that's causing the system to be busy, or what? It can really help you track down where all the usage is coming from. We've also got some pretty good tips on interpreting zpool iostat histograms in there, which is a very useful tool that I don't think a lot of folks really know is there or what they can do with it. Yeah, a couple of those flags

add a lot of very interesting output, and with the help of Jim's article you can really understand what all those different numbers mean. Right, well, link in the show notes as usual.
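For reference, the flags in question look roughly like this; "tank" is a placeholder pool name, and the per-dataset counters the article covers are exposed on Linux as kstats (paths may vary by platform and version):

    # per-vdev throughput and IOPS, refreshed every 5 seconds
    zpool iostat -v tank 5
    # latency histograms per vdev
    zpool iostat -w tank
    # request-size histograms
    zpool iostat -r tank
    # per-dataset read/write counters on Linux live under the objset kstats
    cat /proc/spl/kstat/zfs/tank/objset-*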

Let's do some news then. OpenZFS 2.3 has been released with quite a few cool features. The headline one, obviously, is RAIDZ expansion. This one's been in the works for almost 10 years. I remember Chris Moore and I collabing on how to get this started and going, but it is now completely done, works, and is part of OpenZFS 2.3.

Which means if you have a RAIDZ2 of six drives, you can add a seventh drive and it will reflow things and you'll get the extra space and it'll work.
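As a rough sketch, that six-to-seven-drive expansion looks something like this; pool and device names are placeholders, and attaching a disk to the raidz vdev itself is the new 2.3 behaviour:

    # existing layout: a single 6-wide RAIDZ2 vdev, typically named raidz2-0
    zpool status tank
    # attach a seventh disk to that vdev; the reflow runs in the background
    zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK
    # watch expansion progress and the new capacity
    zpool status -v tank
    zpool list tank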

So that's a really big win for the kind of home user use case. Next up is fast dedupe. So making deduplication not suck as bad. It's still definitely not for every use case, but fast dedupe opens up a lot more possibilities than there were before. Partly because it also introduces the idea of a dedupe quota. So you can constrain the size of the DDT and have it stop deduping when you're going to be using too much memory or too much dedicated space off a

you know, an NVMe you're using to store the dedupe table. And then, to be able to cope with that, a dedupe prune feature, which allows you to delete older entries that haven't deduped anything from the dedupe table, so that you can continue to add new entries and get the maximum value out of the dedupe, but without needing an unbounded amount of memory. If you're curious about the hairy details, I wrote an article testing the performance of fast dedupe versus legacy dedupe for Klara.

And you can find it there. If you're unwilling to click and read all the interesting hairy details, I'll give you the TLDR. Essentially, it sucks half as much as legacy dedupe. There's a significant performance penalty involved in enabling dedupe, essentially. And with legacy dedupe, that penalty was so high, I never found a

real use case for it. I heard for years and years and years the apocryphal story of my cousin's ex-boyfriend's lover who worked at this one place where dedupe totally worked for them, but I never actually saw the place. Fast dedupe introduces almost exactly half the performance penalty, and at that level of performance penalty, I

I can think of quite a few places where it would be useful, again, if you can live with the performance penalty. The big difference is that that performance penalty doesn't hockey stick up as badly as it does in legacy dedupe. With legacy dedupe, as the dedupe table gets bigger, the performance would get worse and worse and worse, very quickly. And with the log-based fast dedupe, it doesn't really hockey stick up like that, which means that on a bigger deployment, it will suck significantly less.
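A sketch of how those knobs are exposed, assuming the property and subcommand names from the fast dedup work; "tank" and the sizes are placeholders, and the exact spellings are worth checking against the zpool-ddtprune and zpoolprops man pages on your build:

    # dedup is still enabled per dataset, as before
    zfs set dedup=on tank/data
    # cap how large the dedup table is allowed to grow
    zpool set dedup_table_quota=10G tank
    # prune unique (never-deduplicated) table entries older than 90 days
    zpool ddtprune -d 90 tank
    # or prune the oldest 25 percent of unique entries
    zpool ddtprune -p 25 tank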

The biggest new feature in OpenZFS 2.3 is optional JSON output on the most used commands. So zfs list and zpool status now have a -j or --json flag and will output the data in JSON for you. And all of your scripts are going to be so much easier in the future.
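For example, something along these lines; jq is only there to pretty-print, and the exact JSON schema is whatever your 2.3 build emits:

    # machine-readable pool status and dataset list
    zpool status -j | jq .
    zfs list -j | jq .
    # the long-form spelling works too
    zfs list --json | jq .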

One of the new features is support for file and directory names up to 1023 characters, so 1023. Yes, this is up from the old limit of 255. That's interesting, 255 and 1023. Is that because you support zero characters as well? No, it's because you have to put a zero byte at the end to mark that as the end of the file name. Right.

It's not EOF, it's end of file name. Right. Yeah, so it's actually that there's an invisible null byte that gets added at the very end, so that takes up the 256th or 1024th byte. I've got to be honest, I'm not sure I think this is a feature. I think this might be a fucking bug.

I really don't like it when users just like make stream of consciousness file and folder names on my freaking storage systems, man. When you need a horizontal scroll bar to read the file name, like you, you fucking did it wrong, man. Find a better way to name your file. That's, it's just not good enough. Yes. Although the use case for this is actually non-English character sets, where each letter

can actually take up to four bytes. So the file names will still be the same 255 letters long on the screen, but because they're not the normal ASCII characters, they take more bytes in memory to actually store. So if you're writing in Russian or Chinese or Japanese or something, you're no longer limited to a quarter of the length you'd get if you wrote the file name in English. Cool.
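To make the bytes-versus-characters point concrete, in a UTF-8 locale something like this shows the difference (wc -m counts characters, wc -c counts bytes; most CJK characters take three bytes in UTF-8, some take four):

    # a five-character Japanese name...
    printf '%s' 'ストレージ' | wc -m    # 5 characters
    # ...needs three times as many bytes to store
    printf '%s' 'ストレージ' | wc -c    # 15 bytes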

Call me an asshole if you will, I'd have capped it at 1023 bytes and you can fit as much Unicode or non-Unicode ASCII as you would like in there. That might actually be technically a misspeak in the release notes. It is 1023 bytes, not characters.

Because it will actually be 255 characters if they're Unicode characters. Right. And the call me an asshole part is like I would also retain the 255 character cap. So it's like you stop typing once you hit the first one, whichever one that is. I don't care. Whatever language. If you're not done in 255 characters, go away. Especially, oh my God, in Chinese or Japanese languages.

Like, those characters are entire words most of the time, man. Do you realize how much you can say in 255 kanji? Yep. Anyway, back to the feature list of OpenZFS 2.3. The other big one is Direct IO. This allows you, when doing reads or writes, to flag that write or that file as asking ZFS not to cache it.

So if you're writing a file and you know, I'm not going to read this again anytime soon, you can set this direct flag on it and it won't go in the arc and therefore it won't push other useful data out of the arc to make room for this new file you just wrote because you're saying, hey, I'm not going to read this again soon. I'm just writing it now. Or the same way where you can read it and say, don't bother putting this in the cache because I don't plan to read it again anytime soon.

So you can do that with, say, when taking a backup in order to make sure that you don't pollute your cache with the files there. You know, you're just reading every file. And at some of the government labs, they found this can help improve the performance on NVMe, where being able to make room in the cache was slower than the speed of just reading it off the NVMe. And so it was rate limiting how fast they could read off the NVMe.
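A sketch of how that looks in practice, assuming the 2.3 dataset property is called direct with standard, always, and disabled values; dataset and file names are placeholders:

    # default: honour O_DIRECT only when an application asks for it
    zfs set direct=standard tank/backups
    # or force all I/O on a scratch dataset to bypass the ARC
    zfs set direct=always tank/scratch
    # application-level example: write a big file without polluting the cache
    dd if=/dev/zero of=/tank/scratch/bigfile bs=1M count=1024 oflag=direct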

I'd love to see those benchmarks done again with a new pull request that Klara's opened that allows evicting from the arc to be multithreaded to try to solve the same problem, but where we definitely still want this data in the cache. There it is, Alan. You just said the magic word, multithreaded. I've been sitting here like my teeth have been chattering. I wanted to interrupt you so bad because I'm like, well, of course it can increase it, you know, with really high throughput stuff on NVMe because that stuff bottlenecks on single-threaded CPU performance.

So if you get rid of the stuff that messes with the arc that's happening on that same CPU thread, you can get more storage throughput per what amount of work you can get out of a single thread on your processor. But again, like Alan was saying, you know, as much as we can break that reliance on single-threaded performance and be able to spread things out over multiple threads, with modern CPUs, that is an enormous difference in performance. Now, the other thing about that is, you know, if you're talking about storage, you've just got to go over a network.

Well, if it's a 1 gig network, your processor is going to be fine. It can keep that 1 gig saturated as long as your storage devices can keep up. But once you step up to 10 gig, it again becomes really, really easy to bottleneck on the performance of a single core. Yeah, not to mention when you get to 40 gig and beyond.

So when am I going to get this? When am I going to get all the cool new features? Ask Shuttleworth, man. You're an Ubuntu user. You could have them today. If you're on Ubuntu, technically, I guess you're waiting for 26.04? Pretty much. Yeah. But other distributions will ship this soon. This has been available as the release candidate in FreeBSD 15 for a while now.

If you build your ZFS from DKMS or whatever, then you can have it today. It came out on Monday. Yeah, because it says supported Linux kernels 4.18 to 6.12. So that's going way back. Well, sure, but how are you going to build it, Joe? Are you going to go build your own from source? I assume you actually want a package, which is why I said ask Shuttleworth.

I mean, for me, but for other people who really, really want it, it is supported in pretty old distros. Yeah, well, it still has to work with the oldest supported version of like CentOS or Red Hat Linux.

But the big deal is actually that they have support all the way up to 6.12, which was not available, I don't think, in the 2.2 branch. Also, I think odds are pretty good. You probably won't need to wait until 26.04. There are no promises whatsoever. Don't think I'm talking from, like, inside information. I've not had any conversations with anybody at Canonical about this, but I would gently expect this probably to show up in 25.10.

And even if you don't want to install 25.10 yourself, the odds are usually pretty good, because the majority of what we're talking about here really comes down to the kernel module upgrade. That's the really important part.

So I wouldn't be real surprised if, not too long after 25.10 drops, assuming 25.10 makes the new release available, you may very well get the option of a backport to 24.04 using the HWE kernels. But we'll just have to see. I don't think we saw that with 22.04 shipping 2.1.5 or 20.04 shipping 0.8. We eventually did, from what I recall, yeah. But also, if you want it...

Klara offers it as a service: as part of our ZFS support subscription, we will give you a repo with a working OpenZFS 2.3 for Ubuntu 24.04. Cha-ching. Well done, Alan. Good plug. Okay. This episode is sponsored by ServerMania. Go to servermania.com slash 25A to get 15% off dedicated servers recurring for life.

ServerMania is a Canadian company with over a decade of experience building high-performance infrastructure hosting platforms for businesses globally. ServerMania has up to 20 Gbps network speeds in 8 locations worldwide for optimized global reach. They have flexible custom server configurations tailored to unique needs as well as a personal account manager offering free consultations.

With 24-7 live chat support and support ticket response times under 15 minutes, you can always keep your systems running smoothly. Alan's been a happy ServerMania customer for over seven years, so support the show and join him at a hosting provider that truly delivers. Get 15% off dedicated servers recurring for life at servermania.com slash 25A and use code 25ADMINS. That's servermania.com slash 25A and code 25ADMINS.

Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com slash support. And remember that for various amounts on Patreon, you can get an advert-free RSS feed of either just this show or all the shows in the Late Night Linux family. And if you want to send any questions for Jim and Alan or your feedback, you can email show at 2.5admins.com.

Another perk of being a patron is you get to skip the queue, which is what Jesse has done. He writes: "I accidentally partially wrote an ISO over a backup USB drive and I'm hoping there's a way to recover the data. I'd been writing different Linux ISOs to a thumb drive and after plugging in a Western Digital USB drive, whatever software I was using to write the ISOs switched destination drive from the thumb drive to the Western Digital and I noticed a little too late.

I stopped the program right away, but my very novice recovery attempts have not been fruitful. I think the file system was either ext4 or fat32 or maybe exfat. So it depends a bit. Any data that actually got overwritten by the ISO, you can't really get that back. You know, it's been overwritten. There's different data there now instead. But most of the data that didn't get overwritten might still be there. It's mostly a matter of if the file system is recoverable or not.

The first thing will be that when whatever program you were writing with started putting the ISO over top, it will have overwritten the partition table. And so that means you can't even find where the file system starts. But if it was a modern disk, it likely has a GPT partition table, which keeps a backup copy of the partition table at the end of the disk. And so using gdisk or whatever normal utilities you'd use on Linux, you can recover that GPT partition table and...

take the copy from the end of the drive and put it back at the beginning. And kind of to the same degree, file systems like ext4 have a superblock near the beginning that's critical to be able to read the file system. But they also have a second copy, usually saved near the end. And programs like fsck that are designed to recover, you know, in case the first superblock was being written to when the power went out or something and got corrupted, will

be able to recover using that backup superblock and give you a mountable file system. But it really will depend on how much data got overwritten. You know, you say you stopped it right away, but did you overwrite a couple of megabytes or a couple hundred megabytes? That'll make a big difference to how likely it is that you'll be able to get it back.

Not knowing which file system it is will make that harder. Although if you can get the partition table back, then the labeling there might actually tell you if it was FAT or exFAT or if it was actually, you know, if it's marked Linux dash data, then it's probably ext4. Like Alan was saying, with ext4, you've got a backup superblock, so you can usually recover from accidentally overwriting. I mean, you can at least recover most of your files from accidentally overwriting, you know, the early part of a drive.
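A hedged sketch of that recovery path, with placeholder device names, and ideally run against an image of the drive rather than the drive itself; the gdisk menu letters are from current versions, so check its prompts as you go:

    # take an image first so mistakes aren't fatal (ddrescue keeps a map file)
    ddrescue /dev/sdX usb.img usb.map
    # rebuild the primary GPT from the backup copy at the end of the disk
    gdisk /dev/sdX        # r (recovery menu), then b or c to use the backup header/table, then w
    # for ext4: list where the backup superblocks would live, without writing anything
    mke2fs -n /dev/sdX1
    # then check the file system using one of those backup superblocks
    e2fsck -b 32768 /dev/sdX1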

However, with FAT16, FAT32, or exFAT, the file allocation table, which is what FAT stands for,

is unfortunately located at the head of the partition. If you overwrite even the first few sectors of a FAT file system, it's going to be effectively unusable without some fairly heavy-duty reconstructive surgery. And if you write more than the first few sectors, it's just done, because you're going to take out the directory table along with it. And if you nuke the directory table, there is literally nothing to tell anybody which sectors belong to which files or even how many files you actually have. And that's...

In that case, your last ditch is a program called PhotoRec. That's photo, R-E-C.

It can basically scan through the raw data on the disk and look for headers that say, like, this is a JPEG, and then keep reading until it stops being a JPEG or it finds the next this-is-a-JPEG thing. So it can recover stuff off corrupt file systems. Although originally this was written more for the case of the flash memory card in my camera got zorched by the camera being low on battery or something. So it's really focused towards cameras

and photos, but it can find a bunch of stuff and often get stuff back for you. Like you can find zip files, office documents, PDF files, JPEGs, and a bunch of other file formats. And

often be able to pick out a bunch of the files that were in the file system without having to actually try to interpret the file system. Because, like Jim said, if you've overwritten that bit of the FAT table and the directory table, you're not going to get back even the file names, but you might be able to get back a bunch of the files, if that's the concern. But it says in your question here that this was the backup, so if it's just the backup, just make a new backup, right?
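PhotoRec itself is mostly menu-driven; a minimal invocation might look like this, with placeholder paths (the /d flag, as I recall it, just points it at an output directory, and pointing it at the image rather than the raw drive is the safer habit):

    # carve recoverable files (contents, not names) into ~/recovered
    photorec /d ~/recovered/ usb.img
    # its sibling tool attempts partition and boot-sector repair
    testdisk usb.img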

You have at least three copies of all your files, right? One correction to what Alan was saying: PhotoRec is not the last stop, it's the penultimate one. There's one further, and you probably want nothing to do with it. But if it's worth about 800 bucks to you to find out whether or not you can get all the stuff off that drive and get it back...

The two big drive recovery places that I'm aware of are Drive Savers and Gilware. I've personally always been more of a Gilware person myself. They charge $800. That's a minimum.

The way it works is you send your drive off to them. You don't have to give them any money. They look at it and they will see what they can recover and they'll give you a good idea of how much data, if anything, they can get back. And then they'll quote you a price. Now that price is minimum 800 bucks. In my experience, that minimum usually tends to also kind of be the maximum for single drives, as opposed to, like, RAID arrays or pools.

And in your case, this was a simple file system, so it really ought to be. Now, if it had been a ZFS pool, they tend to charge way more for ZFS or btrfs recovery because trying to piece the file system back together from random pieces is a lot more difficult with copy-on-write file systems.

But again, if it's worth $800 to you to get the data back, the way it works is you send the drive off. They look at it. They tell you what they can get. And then either you say, yes, I'm willing to pay you $800 or whatever they quote, in which case they'll send you the data back on a new drive. Or you say, no, I don't want to do that. And you have the option of either them disposing of the drive in place or sending it back and you pay for the shipping.

I have used Gilware quite a few times and they are considerably better at getting data back off of screwed up drives and systems than I am. And for people who aren't data recovery professionals, I consider myself pretty damn good at it. So on the chance that you are willing to spend 800 bucks to get some data back, I would recommend that you try Gilware. Now, again, probably most of the folks listening to this show, I hope none of you are ever willing to do that because you all have backup.

I have never used it for my own stuff because I back up my stuff. I have experience with these services because I very frequently get clients who just before hiring me were not backing everything up as well as they should have. Jesse wasn't sure...

About the file system, there's a very slight chance it might have been NTFS and people listening might be in the same situation with NTFS. So is there anything there that they could do? The one kind of saving grace there with NTFS is it uses the MFT as in master file table that it keeps in the middle of the drive for some reason. So it turns out if you overwrite the beginning of the drive, you're not going to have killed that. Although I don't know if there's certain metadata and structural stuff at the very beginning that would make it

impossible to try to actually mount it without that, but it doesn't keep its equivalent to the file allocation table at the beginning of the drive like FAT does, so it might be, you might have a slightly better chance of recovery if it's NTFS and you overwrote the beginning of the drive. Patrick, who's also a patron, skipped the queue. He writes, based on your recommendation, I updated my DMARC settings to start getting the reports and I've started reading them. My question is now, what am I supposed to do with that information?

I just learned that some IP in Hong Kong has been annoying some Japanese webhoster I've never heard of by pretending to be me. My DMARC policy is already reject, but other than shaking my fist in the general direction of this IP, is there anything I can do?

Yes and no. Is there anything that you can do that practically is likely to give good results? That's a tougher question. So basically what you would do here is you would actually look up the abuse contacts for the actual IP address that the abuse is coming from. You would reach out to the people that administer that network and you would lodge a complaint that, you know, somebody on their network has been using their network to abuse another network.
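Looking up that abuse contact is usually a one-liner; the address here is from the documentation range, so substitute the real offender:

    # the whois record for the netblock normally carries an abuse mailbox
    whois 203.0.113.45 | grep -i abuse
    # RDAP returns the same registry data as JSON, if you prefer that
    curl -s https://rdap.org/ip/203.0.113.45 | jq .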

Now, if they're on whatever in Hong Kong is the equivalent of, like, a Linode or a DigitalOcean or Azure or, you know, what have you, then that's pretty much all it will take. Like, you'll pretty quickly be able to get them delisted. They will lose access to whatever resources they're leasing. However, usually what you see here is it's either going to be a case of a compromised server somewhere that's effectively abandoned. Nobody's really maintaining it,

which makes the odds really good that the network segment is in pretty much the same condition and it's going to be really hard to get anybody to care about anything going on. Or it's a, you know, quote unquote bulletproof host where like the entire network knows perfectly well that its purpose in life is to abuse everybody else in the world. And they'll laugh up their nose at you if you complain to them that one of their IPs has been abusing other IPs because, you know, well, yeah, that's what they give us money for.

The most effective thing to do with those DMARC reports usually is just round file them. It really depends on whether you've got the time to try to chase down those abuse reports and do something about it. And either just the aggression you got to work out somewhere or, you know, the sense of civic duty, the feeling that you know how to do it and you can do it and you have time and therefore you should.

To be honest with you, I did that last thing a lot more when I was younger. These days, I usually feel like I don't have enough time for as little success as you get chasing down bulletproof hosts. The last time I really felt at all effective about making bulletproof hosts nervous was when I worked for Ars Technica and I could scream about them on the front page of Ars. That will sometimes get results.

Sometimes. Yeah, really, the more useful thing for DMARC is especially when you're just first getting set up, getting any reports where your mail is being rejected, like your legitimate mail is being rejected because your DKIM and so on aren't working correctly. Once it's up and running and your mail is flowing correctly, yeah, it's mostly just going to be noise.
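For reference, the reporting piece is just the rua tag in the DMARC record; example.com and the mailbox are placeholders:

    # inspect the published policy for a domain
    dig +short TXT _dmarc.example.com
    # a reject policy that sends aggregate reports to a mailbox you actually read
    # "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"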

Bostjan, who's a patron, also skipped the queue. He writes: "I've got a 6-drive 8TB usable ZFS pool organized into 3 2-disk mirrors. I'd like to replace all 6 current disks with 2 10TB drives and keep the pool birthdate the same. How can I do that?"

Well, in theory, you could do that by a combination of re-silvering and using the device evacuation feature. You would first need to re-silver both drives in one of your mirror VDEVs with the two 10 terabyte drives and let that auto expand. That would give you the capacity that you needed to be able to then remove the other two mirror VDEVs from the pool. However, don't do that. That would be a very poor idea.

The thing is, when you use the device evacuation feature, which is relatively new in ZFS, it's only applicable if you're only using single drive or mirror VDEVs, and if every drive in the pool has the same ashift,

then you can remove a VDEV that you accidentally added. But this feature is intended to help you recover from, oops, I forgot to type mirror when I did my zpool add to put a couple of new drives in the system. So that allows you to remove those two new drives you accidentally added as single VDEVs, rather than having to expand both of them into two-drive mirror VDEVs in order to maintain your redundancy.
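A sketch of the oops case it was built for, with placeholder pool and device names:

    # what you meant to type: add a new mirror vdev
    zpool add tank mirror /dev/sdc /dev/sdd
    # what you actually typed: forgetting "mirror" adds two single-disk vdevs with no redundancy
    zpool add tank /dev/sdc /dev/sdd
    # device evacuation lets you walk that back
    zpool remove tank sdc sdd
    # status shows the evacuation progress and any indirect vdevs left behind
    zpool status tank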

When you do it that way, it's fine. But what the tool is not designed for is to take a pool that you've been using for years with data distributed equally amongst a bunch of VDEVs and nearly filling the VDEVs, and then you decide to change the topology by removing VDEVs. Will it allow you to do that? Yes. Unfortunately, the way that it works is all the blocks that were on those VDEVs that you've removed...

They still essentially look like they're on the missing VDEVs at first, but when you go to look for it, then ZFS sees, oh no, those VDEVs are gone. There's a lookup table that tells me where those blocks have moved to. You might think of it also as kind of like sending mail, postal mail, to somebody whose address has changed. You send it to the post office. The post office has a note on file that mail to that person at that address should go to the new house, and it eventually gets there, but there's a delay.

Now, again, if we're only talking about a few blocks here or there, this is really not a big deal and you shouldn't be concerned about it. If you added a drive or two and like they're there for a couple of days and, you know, there's maybe a few hundred megs of data that gets added and you're like, oh, well, this needs to go now. That's fine.

But for what you want to do, no, you really need to just go ahead and create a new pool with your 10 terabyte drives and migrate all your data to it. And then, Alan, do you know of any way to keep a zpool birth date? Really, the date is just cosmetic. So you can just set your computer clock wrong before you run the zpool create command. And you'll get, you know, in zpool history and the ZFS properties of the dataset and so on, it will...

Put what it thinks the date is when you created it. And, you know, if that happens to be some date from 2015 that you're putting in from your original pool, then you could do that. I don't know what the value of keeping the birth date of the pool really is. It's, again, purely cosmetic. It doesn't actually matter for much.
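A sketch of the recommended path, plus the purely cosmetic clock trick, with placeholder names throughout; the send/receive pattern is one common way to replicate a whole pool, not the only one:

    # (optional, cosmetic) wind the system clock back before creating the pool
    date -s '2015-03-14 09:26:53'
    # build the new mirror from the two 10 TB drives
    zpool create newtank mirror /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2
    # put the clock right again
    chronyc makestep    # or however your NTP setup resyncs
    # replicate everything, snapshots and properties included
    zfs snapshot -r oldtank@migrate
    zfs send -R oldtank@migrate | zfs receive -Fu newtank
    # the creation command, with its (faked) date, stays at the top of the history
    zpool history newtank | head -n 3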

But yeah, to Jim's point, the biggest problem with using the device evacuation, in addition to the fact that you're going through that lookup table, it's going to make everything slower. That lookup table has to live in RAM because of the way indirect VDEVs work. And so you're basically wasting a whole bunch of RAM until every block of that data has been overwritten. Like the table will shrink over time as you overwrite the data and it writes it directly to the right VDEV, not indirectly to the removed VDEV.

But I'm guessing a lot of this data on here is not data you're going to change. And so it's going to just stay in this indirect table, taking up a bunch of your RAM forever. And yeah, just making a new pool is a much, much better idea. I think that the idea behind keeping the birth date, it's kind of like uptime bragging, you know, back in the day. I think Boston just wants to be able to say, look, I've maintained this data successfully for this entire amount of time with no failures. And here's the proof. Here's my pool birth date.

And that's not really going to be accurate here because you have gone from one pool to the other. There are two ways to look at that. One is instead to say, well, look, all the metadata on your files is going to remain intact. So the real measure of like, you know, how great a storage administrator you are is like, look at your oldest data. Like, how long have you maintained that completely intact?

And along those lines, I've still got files on my machine that have intact date stamps from like 1988. As they've, you know, gone from one machine to the next, the next, the next. On the other hand, you know, if you're just really focused specifically on the pool birth date and whatever, you know,

There's two things to note here. One is that, yes, you can get the same pool birthdate by looking up what it was and setting your computer's clock to that and then creating the new pool. However, that's lying. So either you accept the lie or here's my suggestion. If you really just kind of want to be kind of crazy with this and, you know, kind of note that, you know, you're doing the thing.

I would be really tempted to set my system time to Unix time, like the birth date of Unix time, January 1st, 1970, and create your pool then. Let somebody do the zdb -u and look up your pool birth date and see that it was born on Unix day. That's pretty cool. Yeah, you don't have to use zdb to do that. If you do zpool history and then the pool name,

It's a ring buffer that shows the last so many things that happened in your pool, but it always keeps the original creation command at the top of that ring. So it'll have the date and the exact command you used to create the pool. Right, well, we'd better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or feedback. You can find me at joeressington.com slash mastodon. You can find me at mercenariesysadmin.com. And I'm at Alan Jude. We'll see you next week.