Two and a half admins, episode 234. I'm Joe. I'm Jim. And I'm Alan. And here we are again. And before we get started, you've got a plug for us, Alan. Controlling your core infrastructure, DNS. Yeah, so in this article, Kyle talks about why controlling your own DNS and not just relying on your upstream provider can give you a lot more control over your network and...
can also improve the performance of your entire network. Most times at an office when people complain that the internet is slow, it's actually the DNS that is slow, not the internet. Right, well, link in the show notes as usual. Let's do some news then. Fraud with Seagate hard disks: dealers swap them, Seagate investigates. Now this is kind of interesting, because I feel like after the last several years, we basically always expect, when we see fraud and drives in the same sentence,
Sorry, Western Digital, we expect to see your name in the rest of the headline, and this time it was Seagate. That kind of made me sad because we have so few hard drive manufacturers now, and Seagate has been doing such a great job of, like, aggressively being the good guys for the last several years that my heart kind of fell. But this isn't Seagate's fault. It's not Seagate who's committing the fraud. It is...
resellers who are committing fraud. And basically what's going on here is several years ago, you may recall there was a particular cryptocurrency called Chia and Chia tended to bottleneck on hard drives rather than on, you know, RAM or CPU. It was a file coin. It was specifically about selling storage as the thing backing the coin.
So while Chia was popular, the demand for large hard drives spiked through the roof. It had a similar effect on the hard drive market as, you know, AI nonsense (and crypto for that matter) normally has on the GPU market.
And the thing about that is when Chia became much less popular, all of a sudden these massive Chia farms started finding themselves with huge inventories of still relatively large hard drives that weren't making them money anymore. So what do you do with them? Do you unleash all these used hard drives on the market with thousands of hours of operation?
Well, you could do that, and it wouldn't be fraud if you did. But if you're the kind of person who's running a massive cryptocurrency farm, you're probably not going to take that option. Instead, maybe you think about hacking the firmware, resetting all the SMART data, and reselling all those drives as new. So you're selling supposedly new drives as a third-party vendor on a platform which allows third-party vendors to list their products online.
And frequently you then wind up with another layer beyond that. Maybe you're wholesaling these as a third-party vendor to somebody else, who buys them wholesale and sells them on to a retailer, who then sells them to a retail purchaser. It just goes on and on and on.
Regardless, what you wind up with is a huge glut right now of these Seagate drives that were used for many thousands of hours in Chia farms, with the firmware hacked to make them appear to be new. However, they are not reliable in the way that new drives would be, because they actually have thousands of hours of operation on them.
As this story has come to light, it has required some digging. Seagate did immediately get onto it and determined that, no, we're not selling these; these things are coming from Chia farms. And then beyond that, you're looking at the various big retailers to see, well, what are their policies? If a retailer accidentally, hopefully, sells you one of these bogus not-new drives, how will they handle it? And so Seagate
has issued a statement saying, you know, we take this matter very seriously and are conducting a thorough investigation. In an earlier statement, Seagate said it did not sell or distribute these drives to resellers. Basically, somebody else sold these drives into the channel somewhere. And it sounds like a bunch of these kind of drop shippers in Germany thought they were getting reputable drives and were not, because of all the various levels of middlemen and so on.
But anyway, Seagate's looking into that and making sure that suppliers have a way to confirm they're getting real drives from Seagate, not from somebody in between. They say: we encourage anyone who suspects that they received a used drive marketed as new to help this investigation by reporting it directly to Seagate at fraud@seagate.com. Additionally, customers who question whether the products they purchased are as advertised by the seller can use the
online warranty checking tool that Seagate provides. Any suspect drives and/or sellers can also be reported anonymously through Seagate's ethics hotline. The other thing we should probably mention is, you know, if you just bought a whole bunch of IronWolf drives and now you're really nervous, you probably don't need to be. These types of scams almost always involve the data center branded drives.
There's not a whole lot of real difference under the hood between, say, a 16 terabyte Exos and a 16 terabyte IronWolf Pro. For the most part, they're basically the same drive under the hood, but nobody's buying IronWolves for, you know, data center type stuff. Although people will frequently buy refurbished Exos to use for home labs and whatnot. The actual difference when you move from the Exos line onto the IronWolves tends to revolve around the IronWolf drives being a lot more likely to be a bit quieter, because that is one of the things that retail purchasers tend to value and that people who are throwing a drive in a data center won't even notice. Whether that drive is whisper quiet or screaming like Ariana Grande in your ear, you won't know the difference in a data center with 10,000 drives all running. But yeah, if you're concerned, use the Seagate warranty checker on new drives that you buy and make sure that they
aren't already out of warranty because it turns out they weren't new. You know, you want to check that early because most of these places will only let you return it for a short period of time. But yeah, this reminds me of the story we talked about a while ago with, was it Water Panther and these other companies that were, yeah,
selling used drives, not really pretending they were new, but fudging the SMART data to be different than it was originally. Oh, Water Panther was absolutely pretending they were new, because Water Panther was not selling them as refurbished Seagate drives; they were selling them as brand new Water Panther branded drives. Yeah, and of course the choice of WP as their acronym is like, hmm, I wonder if that's meant to trick people into thinking they were buying Western Digital drives. Yeah.
We'll also link in the show notes to a tool on GitHub called OpenSeaChest. This is a set of open source utilities, actually released by Seagate themselves, which will allow you to check. There is essentially a lower-level set of metrics you can access on the drives, called FARM (Field Accessible Reliability Metrics), rather than SMART.
You can access some FARM values using smartmontools itself, but it's going to be better to use the OpenSeaChest utilities. You can grab those from GitHub and run them directly. Again, although Seagate released them, they are open source, so if you don't like proprietary stuff and want to be able to review everything the tool is doing, you still get to use these. Yep, and they're available in most package repos, so it's in Ubuntu, FreeBSD, Debian, Gentoo, Nix.
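For anyone who wants to check a drive they already own, here's a rough sketch of the kind of thing to run. Treat the FARM-related bits as from memory rather than gospel; the FARM log support is a newer addition to smartmontools, so check the help output for your versions, and the device paths are just placeholders.

    # What does SMART claim? Attribute 9 is power-on hours, which the scammers reset.
    smartctl -A /dev/sda | grep -i power_on

    # Recent smartmontools can also dump Seagate's FARM log, which keeps its own
    # hour counters that the firmware hack generally doesn't touch. If the FARM
    # hours and the SMART hours disagree wildly, the drive has been tampered with.
    smartctl -l farm /dev/sda

    # Seagate's own OpenSeaChest tools can pull the same data; start with a scan
    # to find the right device handle, then check each tool's --help for the
    # FARM and SMART options.
    openSeaChest_Info --scan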
All kinds of different upstream package repos carry this tool in their package sets as well. As we get closer to October this year, when Windows 10 goes out of support, something that's been doing the rounds is an old Microsoft article,
Windows 11 on devices that don't meet minimum system requirements, and it was updated towards the end of last year. But the bottom line is Microsoft are doubling, tripling, quadrupling down on this, saying: don't hack Windows 11 onto devices that we don't support. We've talked about this before, but I think it is worth talking about at least one more time to say:
If it's your own machine, maybe, but if it's for anyone else, just forget about it. It's not worth it. At some point, Microsoft somewhat blessed a registry entry hack; they published it as a support doc on their website, and it was how you could upgrade a Windows 10 box that wasn't officially supported to Windows 11. And then later they took that page down and said, if you used this, you should roll back to Windows 10.
And it really underscores the point we've made in the past that if you have to do hacks to be able to run it, you probably don't want to do it because eventually it's going to break or they're going to even purposely try to force you to go back. And rolling back never works as well as you would hope. So we don't really have a whole lot new to cover here. We've already told you, don't do it. If you want to do it as like, you know, a toy VM or something, you know, on something you control and you don't mind the whole thing blowing up,
then fine. But even if it's your own system and it's important to you, don't do it. You will end up having problems if you hack an unsupported version of Windows onto a device that Microsoft does not want it running on. And so the only news today is that Microsoft themselves have come back and essentially, I'm not saying that Microsoft knows who 2.5 admins is, but essentially Microsoft went and updated that article to say, hey, those guys on 2.5A were right. Don't do it.
So when we're telling you not to do it, and Microsoft is telling you not to do it, please don't. If you want to continue using hardware that Windows 11 will not support, you need to upgrade, if not to Windows 11, then to some other operating system that your hardware does support. Yeah, we're not saying that this is a good thing. Microsoft should not have done this, but they've done it, and there's nothing you can do about it. I don't feel like that quite covers all the nuance. There are always going to be legitimate reasons
that a software vendor, including an operating system vendor, might say, look, we can't support this old hardware anymore. It doesn't have features that we really care about. This is important to the direction moving forward. I think all three of us on this show might argue that there really is no truly necessary and compelling hardware feature that justifies Microsoft doing this.
But that's kind of orthogonal to the actual point, which is that it's Microsoft's show. They're the ones who control what Windows does and what hardware it will run on. So when they tell you, don't do it, well, you probably better not. If you don't like that, if you don't like the idea that Microsoft can just arbitrarily force you to go buy new hardware, or can arbitrarily force you to go from Windows 10 to Windows 11 when you don't like Windows 11, whatever, then your response should not be, well,
look at me, you know, I have more coding resources than Microsoft can bring to bear, so I'll have it my way. Your response should probably be to pivot away from Microsoft and towards some competing alternative that you feel offers you more control over your experience. That might be a Linux distribution. It might be FreeBSD. I don't know. Maybe you want to go be one of those weird glowing fruit people. I don't care. But the point is,
You don't fight City Hall and you don't fight Microsoft. If you don't like what Microsoft is doing, you need to get off of their turf.
Okay, this episode is sponsored by SysCloud. Companies big and small rely a lot on SaaS applications to run their businesses. But what happens when critical SaaS data is lost due to human errors, accidental deletions, or ransomware attacks? That's where SysCloud comes in. It's a single pane of glass to back up all critical SaaS applications, such as Microsoft 365, Google Workspace, Salesforce, Slack, HubSpot, QuickBooks Online, to name a few.
SysCloud also helps with ransomware recovery, identifies compliance gaps in your SaaS data, monitors for unusual data activity, and can automatically archive inactive SaaS data to save on storage costs. Plus, it's SOC 2 certified, so data remains secure in the cloud. Over 2,000 IT admins already trust SysCloud to protect their SaaS data.
Head to syscloud.com for a 30-day free trial, and for a limited time, use code 25admins to get 50% off your first purchase. That's syscloud.com.
Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com slash support. And remember that for various amounts on Patreon, you can get an advert-free RSS feed of either just this show or all the shows in the Late Night Linux family. And if you want to send in your questions for Jim and Alan or your feedback, you can email show at 2.5admins.com.
Another perk of being a patron is you get to skip the queue, which is what F has done. They write, I see multiple web service platforms allow you to attach block storage to a VPS. Would it be a bad idea to configure one of these as a ZFS backup host similar to an rsync.net or zfs.rent? I know ZFS should ideally own the entire drive, but I'm not sure how block storage from a SAN affects this.
I see Azure docs mention that their hard-drive-class managed disks are actually backed by three separate drives. And this is going to be a backup host specifically for ZFS receive. This kind of thing is not ideal; we definitely prefer for ZFS to have access as close to the bare metal as humanly possible.
But with that said, this isn't a terrible idea. To a large degree, you are still relying on the data integrity features of whatever the cloud backend is, which you have no control over and just have to hope is all good. But it should be good, or you wouldn't be able to use it as block storage in the first place. ZFS will layer on some additional safeguards: in addition to whatever data integrity safeguards the cloud host offers,
you will still have all the checksumming and everything else that ZFS offers on top of that. So this is all a good thing. Are there possible scenarios where something hinky in this really complex cloud storage backend, at a level beneath what you can see or touch,
could interfere with your data integrity? Yes, that's possible, but that's just kind of part and parcel of what you signed on for. Yeah, so in general, comparing it to zfs.rent, where you're going to have one hard drive, having a virtual hard drive out of the SAN here is not going to be any worse, because either way ZFS has one device. And if that device returns the wrong data, ZFS can tell you that it did, but it can't fix it. But it's going to be no worse than something like zfs.rent.
So I think the biggest thing is if their block storage is inexpensive enough for this to make sense, then yes. The fact that it's backed by at least three hard drives or, you know, they have two levels of redundancy or whatever might mean that the pricing makes it less interesting. But in general...
Yeah, SAN will work fine. Like, for example, Delphix, the company that did a lot of work on ZFS for the last decade or so, their whole product almost only ever runs off SANs. And they're running like Oracle databases and dealing with big commercial data. And all the storage comes from SANs and it works perfectly fine.
Safety-wise, I don't really see a big issue. If you don't provision the pool that's on this block storage with any redundancy at your own pool level, then of course you are entirely at the mercy of your cloud host's data integrity features. And if they corrupt a single block, it will not be recoverable for you there; you'll have to get it from some other place that's hopefully still intact.
And I think most of the time, just the way that cloud storage prices are, you're probably not going to be real interested in setting it up redundantly. You probably are just going to do it singly. But if this is a backup and it's not the only source, well, then you just consider that an additional layer. And is it bulletproof? No, but it's an additional layer and additional layers are good.
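As a rough sketch of what that might look like on the VPS side, assuming the provider's volume shows up as a normal block device (the device path, pool, dataset, and hostname below are all placeholders), run as root or as a user with delegated receive rights, which we get into later in the episode:

    # Create a single-vdev pool on the attached volume, using the stable by-id path
    zpool create -o ashift=12 backup /dev/disk/by-id/scsi-0EXAMPLE_block_volume_01

    # One dataset per machine you're backing up
    zfs create backup/laptop

    # From the machine being backed up, push snapshots over SSH;
    # -u keeps the received datasets from being mounted on the VPS
    zfs send tank/home@2025-06-01 | ssh backupuser@vps.example.com zfs receive -u backup/laptop/home

    # Subsequent runs only send the incremental difference between snapshots
    zfs send -i tank/home@2025-06-01 tank/home@2025-06-08 | \
        ssh backupuser@vps.example.com zfs receive -u backup/laptop/home

Tools like sanoid and syncoid can automate the snapshotting and the incremental sends on top of exactly this kind of setup.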
In all these cases, if a VPS has this kind of block storage, it's basically over the network. As far as the VM is concerned, it shows up as a local drive, but unlike the main drive that's hosting your VM on the VPS, it's probably not storage that's near the hypervisor. It's a separate storage node on a different network that's further away, and so the latency will be higher.
If you can't generate enough concurrency to pull many, many blocks over the network to keep the network busy, then that latency is going to make it a lot worse. But for a backup, it's totally fine. Tony, who's a patron, also skipped the queue. He writes, how do you recommend picking hardware to run virtual machines?
RAM is easy since my workloads ask for a certain amount of RAM. How about CPU speeds and generations, etc.? How can I know how many IOPS I need? How can I know what ZFS config will give me these IOPS? Thanks. Yes, there's kind of a lot that goes into this. On the CPU front, for speed and generation, etc., generally you don't have that many options when you're buying the hardware. It's generally the newest and maybe one back.
because the very newest one maybe is not available in full quantities yet. And so even if you want the newest, maybe you have to accept one older than that. Otherwise, you have to wait like three months before you actually get the hardware or something. But in general, most hardware suppliers aren't going to have much older generation stuff laying around because they're trying to offload that inventory as soon as they can because the value of it keeps going down and they will make less money if they don't sell it.
So CPUs, speeds, and generations, you don't have that much choice. Although when doing a hypervisor machine, you often do have the choice between a large number of slower cores or a smaller number of faster cores, usually for about the same money. And it kind of depends on your workload.
If you're going to run a lot of VMs, more cores is probably better. Or if you're doing a lot of workloads that can only really use one CPU at a time, then having fewer but faster CPUs can be a lot better. But you know, you'll notice with the Intel SKUs that the servers with the really, really high clock speeds and fewer cores are a lot more money, because they know you're only going to buy that if you actually need that extra clock speed per core. And they're
going to milk you for all the money they can. I don't know that the comment about not many choices in CPUs is really accurate because we weren't really given a lot of details about, you know, what scale and scope we're looking at here. And if you're talking about, you know, you might be buying a used machine or, you know, buying whatever parts your local PC store has around, you know, what have you.
The only real hardcore thing that you have to have is your CPU must support hardware virtualization. Most modern CPUs do. Basically all AMD CPUs from the last decade support all the hardware virtualization features you need. On the Intel side, it gets a little dicier and you need to be careful. Intel has been known to release i7s that don't support hardware virtualization. They're not doing that right now, but you need to be careful about that.
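If you're eyeing a used box or whatever parts the local shop has, a quick sanity check on Linux looks something like this:

    # Non-zero means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # And once the flags are there, make sure KVM is actually usable
    # (virtualization can still be disabled in the BIOS/UEFI firmware)
    ls -l /dev/kvm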
Beyond that, like Alan said, you need to figure out what it is that you need. Do you need a lot of parallelization? It doesn't necessarily mean a lot of VMs. It means do you need a lot of cores in your VMs, whether it's one VM that you want to allocate 16 virtual cores to or whether it's eight different little VMs that you want to have two cores apiece. Either way, ideally, I want at least 16 cores allocated.
I should say really at least 16 threads, but the ideal is still 16 full cores, because now we're getting into the difference between core count on a processor and thread count. Most modern processors have hyperthreading, and hyperthreading basically allows one core to potentially execute two threads
sort of pseudo-simultaneously. Basically, if one thread blocks on something that the CPU only has one instance of, like a floating-point unit, a thread might block because it needs the floating-point unit and can't access it right now, and hyper-threading might allow a different thread that doesn't need that unit to go ahead and execute right now.
If you enable hyper-threading, it absolutely will increase the total amount of work that your CPU can do. However, the latency involved may go up significantly, because you may wind up with threads getting stalled for long periods of time due to hyper-threading, SMT, whatever you want to call it, allowing some other workload through while the first one stalls. So,
kind of be careful about that. Ideally, you want all the cores, and you may want to consider actually disabling hyperthreading, because the additional latency may not be worth the small amount of additional work you can get out of the CPU. Typically, you're only looking at about a 15 to 20% bump from having hyperthreading enabled versus turned off, even in an essentially ideal workload for hyperthreading.
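If you want to test that for yourself on a Linux hypervisor, SMT can be toggled without a trip into the BIOS, so benchmark your actual workload both ways. A rough sketch:

    # Is SMT currently on?
    cat /sys/devices/system/cpu/smt/active

    # Turn it off at runtime (does not survive a reboot)
    echo off > /sys/devices/system/cpu/smt/control

    # Or make it permanent by adding "nosmt" to the kernel command line,
    # e.g. in /etc/default/grub, and then regenerating the grub config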
So that pretty much covers CPUs. RAM: you said that this is easy because you know how much you need for your workload. But I would add, I mean, this is this show, so I assume you want ZFS underneath all this, in which case I would recommend that you only allocate about half of your available physical RAM to VMs and leave the other half available for the host and for ZFS. If you're not running ZFS, do remember you still need some RAM for the host; at least two to four gigs is a rule of thumb, but more is always better if you can afford it. Don't skimp on RAM. I would generally recommend ZFS, though.
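On Linux, the ARC already defaults to roughly half of RAM, but if you want to pin that split down explicitly so the VMs and the ARC don't fight, it's a single module parameter. A sketch, with the numbers obviously depending on your box:

    # Cap the ARC at 32 GiB on, say, a 64 GiB host (the value is in bytes)
    echo "options zfs zfs_arc_max=34359738368" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u     # Debian/Ubuntu; other distros rebuild the initramfs differently

    # Or change it on the fly without rebooting
    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max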
This brings us down to drives, and this gets really, really important. If it's just, you know, some home lab whatever, and it doesn't really matter, you're not doing much with it, and it's not that important, then sure, whatever consumer SSDs you can find are probably going to be okay. Although even then, I would advise people not to get too excited about NVMe M.2.
Most NVMe M.2 consumer drives perform very, very well on unrealistic, super easy workloads and horribly on really tough workloads with a lot of concurrency and saturation for long periods of time. They'll fill the write cache and the performance will just fall off a cliff. The NVMe M.2 drives specifically also have thermal problems, because it's this little itty
teeny tiny stick, like a stick of RAM, and it can't dissipate heat very well. And so if you have a long running, really heavy workload, it can thermal throttle itself to the point that it drives you batty. Frequently, SATA drives actually outperform M.2 NVMe for virtualization workloads. But moving on beyond that, if this is something that you take seriously and you want to have predictable performance, good performance all the time, you've got several VMs, you depend on them,
get data center grade SSDs. It makes an enormous difference. They may not look faster; again, on these really, really simple, unrealistic benchmarks that vendors like to use to get the biggest number and get you all excited, the data center grade drives frequently don't look very good.
But again, when you hit them with real workloads, they perform much better than prosumer SSDs that are faster on paper, which are really good at light workloads that don't go on for too long but fall off a cliff once you ask them to do too much.
Whereas with a true data center grade SSD, for one thing, you're probably going to have about double the write endurance per terabyte that you would on a prosumer level drive. That can be really important. And on the good ones, you'll also have hardware QoS, which minimizes the latency you experience when you've got a lot of concurrency. And finally, you should have power loss protection, which will allow your sync writes to complete orders of magnitude faster, even as a log VDEV.
Or if you're not using ZFS, you know, just period. Again, sync writes go way faster, because with power loss protection the drive accepts the write into cache and can say, hey, this write is safe now, and not be lying; even if it hasn't made it to the actual metal of the drive and is only in the DRAM cache, the power loss protection will allow that cache to get written out to the metal when the drive is powered back up again after a power event. So,
Again, you're looking at massively higher performance on sync writes, much lower latency on really nasty workloads, a lot of concurrency, and probably double the write endurance of the same size prosumer drive.
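If you want to see that sync-write difference for yourself rather than trusting the spec sheet, a quick fio run with an fsync after every write will expose it immediately; a drive with power loss protection posts dramatically lower latency here. This is just a sketch, so adjust the directory and sizes for your own setup:

    # 4K random writes with an fsync after each one, the pattern databases and
    # VM guests generate. Point it at a directory on the pool you're testing.
    fio --name=syncwrite --directory=/tank/fiotest --size=4G \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=1 --numjobs=1 \
        --fsync=1 --runtime=60 --time_based --group_reporting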
As far as how many IOPS, more is always better. It's really going to come down to how much can you afford and how much does your workload really justify. How to configure ZFS for the most IOPS? The obvious thing is using mirrors rather than something like RAIDZ. RAIDZ is really terrible for a VM-type workload. If you need...
a lot more tuning than that to try to get the most, you probably should just reach out to Klara and get a performance analysis, rather than something I could provide on the podcast, just because the answer is different for every use case. And it's not even just VMs versus not; it's really what you're running in the VM that's going to change what the right tuning is. I would say here that you're looking at things in the wrong direction if you're asking how many IOPS do I need,
because the metrics that you see written on drives for how many IOPS they'll provide are unrealistic in the same way all their other benchmarks are unrealistic. And they don't map very well to like what you actually want in your actual workload. Essentially, it really more comes down to keep adding IOPS until you're happy.
If you want to maximize IOPS for any given number of drives, the more VDEVs you can cram into that number of drives, the more performance you will get in the real world out of that same number of drives. So if you have, for example, eight drives,
The highest IOPS would be running those eight drives as eight individual single drive VDEVs. Of course, you have zero redundancy, so don't do that. Your next highest performance is going to be four two-wide mirror VDEVs because you've got four VDEVs. Essentially, every VDEV is going to offer you a given number of IOPS associated with the type of drive that you're using. And for the most part,
This gets complicated. I don't want to get too deep into it, but for the most part, you can think of a VDEV as providing roughly the same number of IOPS as any one drive within that VDEV. There are some exceptions. RAID-Z VDEVs will be a little slower,
and mirror VDEVs get a bit crazier, because read IOPS can, depending on your workload, scale up to the number of drives in the VDEV rather than just one drive's worth. But again, in general, think of a VDEV as providing the amount of IOPS that one disk would. So if you want to multiply your IOPS, you multiply your VDEV count.
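To make that concrete with the eight-drive example, a pool built for VM-type IOPS would look something like this (the disk IDs are placeholders, obviously):

    # Eight disks as four two-wide mirrors: roughly four drives' worth of IOPS,
    # versus roughly one drive's worth from the same eight disks in a single RAIDZ2 vdev
    zpool create tank \
        mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B \
        mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D \
        mirror /dev/disk/by-id/ata-DISK_E /dev/disk/by-id/ata-DISK_F \
        mirror /dev/disk/by-id/ata-DISK_G /dev/disk/by-id/ata-DISK_H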
Yeah, although I've been pleasantly surprised with the Micron data center SSDs. We throw our fio torture test at them and they do exactly what it says in the manual. And I was very pleased to see, obviously these are U.2, server-grade, hot-swappable NVMe drives, but when they say they can do 600,000 IOPS, we actually got fio to pull 600,000 IOPS without having to run a crazy number of jobs or anything. I was very surprised.
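We're not reproducing the exact torture-test job file here, but something in that spirit looks roughly like this: small random reads, deep queues, direct I/O, and a runtime long enough to blow through any cache. Device path and numbers are placeholders.

    # 4K random reads straight off the device (read-only, so non-destructive),
    # with enough outstanding I/O to actually saturate a data center NVMe drive
    fio --name=torture --filename=/dev/nvme0n1 --rw=randread --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=64 --numjobs=4 \
        --runtime=300 --time_based --group_reporting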
Which is lovely, but the odds that you know, or can reasonably determine, how many IOPS you need for your specific workload ahead of time, not attached to any given drive topology, are like zero. Good point. IOPS as a concrete number are useful as a very, very vague rule-of-thumb idea of the level of performance you'll get,
but it's not something where you can start out and say: I need X IOPS, so I'll buy this many of this kind of drive, this is this many IOPS per drive, and that gets me what I need. That's a lovely idea, and I wish it worked that way, but unfortunately, it just doesn't. Yeah, I can definitely second that. I think the most reasonable use of IOPS as a metric for most people
is to start out with a workload and say, I'm not happy with how well this workload is doing on my storage right now. Then you look at your storage and say, okay, I've got, for example, four rust-based VDEVs.
Those are going to offer me roughly 200 IOPS per VDEV. So I've got roughly 800 IOPS on this pool. If I want things to go twice as fast as they are now, then I'm probably going to need to double that IOPS count. That's useful. That's achievable. At that level, everything makes sense and the rules of thumb are great and you can use it usefully. Just like I said, don't try to get fancier than that.
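And the nice thing about staying at that rule-of-thumb level is that the fix is mechanical: want roughly double the IOPS, add roughly double the vdevs. A sketch with placeholder disk names; just remember ZFS won't rebalance existing data onto the new vdevs, so the full benefit shows up as data gets written or rewritten.

    # Four rust mirrors at ~200 IOPS each is ~800 IOPS; adding four more mirrors
    # takes the pool to roughly ~1600
    zpool add tank \
        mirror /dev/disk/by-id/ata-DISK_I /dev/disk/by-id/ata-DISK_J \
        mirror /dev/disk/by-id/ata-DISK_K /dev/disk/by-id/ata-DISK_L \
        mirror /dev/disk/by-id/ata-DISK_M /dev/disk/by-id/ata-DISK_N \
        mirror /dev/disk/by-id/ata-DISK_O /dev/disk/by-id/ata-DISK_P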
Stuart, who's a patron, also skipped the queue. He writes, I heard a passing mention of being able to delegate datasets to a container on a previous episode. It sounds like exactly what I need to provide my friends a way to SSH into a container and replicate datasets from their ZFS pools onto mine, and vice versa.
If this is indeed possible with this feature, how would I go about using it with LXC on Proxmox? I tried to look around and I could find no information gluing the two halves together. Any help or pointers to resources would be excellent. It just so happens that, as of the recording of this episode today, Klara published an article about isolating containers with ZFS and Linux namespaces, which describes how to use LXD to
delegate different ZFS datasets into containers, and then be able to run Docker and have it use the native ZFS driver, or whatever your workload might be, like SSH in your case. And so hopefully this recipe will be enough to get you started on Proxmox as well. I do wonder a little bit how you're planning to do this: if you're going to have multiple containers, how is somebody going to SSH into container one versus container two?
I don't know if you're doing a VPN or what, but you don't have to do separate containers to achieve what you're after, although there are some advantages to doing it that way. So there's two different types of delegation in ZFS. There's user delegation, where you can give certain commands over a certain data set to a user. So for example, if you're just even doing your own backups with Sanoid and Syncoid, you can delegate it so that the
syncoid on the backup box can run as not-root and still be able to read and write the datasets, or even do that on both sides, so that you don't have to have root SSH keys laying around on your machines. That can work pretty well, and you can do that even to give your friend the ability to zfs receive into pool/friendname but not into any of the other datasets.
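A minimal sketch of that kind of user delegation, with placeholder user and dataset names; note that on Linux, actually mounting as a non-root user has some extra wrinkles, so the mount permission doesn't always buy you as much as it does on FreeBSD:

    # On the backup box: let a non-root user receive into just their corner of the pool
    zfs create tank/friends/stuart
    zfs allow -u stuart receive,create,mount tank/friends/stuart

    # On the sending side: let a non-root syncoid/send user work without root SSH keys
    zfs allow -u backupuser send,snapshot,hold tank/data

    # Check what's been granted
    zfs allow tank/friends/stuart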
However, because of the way ZFS works, they would be able to see the other datasets by running zfs list and so on, and depending on your file system permissions, walk around and look at stuff, and you might not want that. So then there's container delegation, which you can do with jails on FreeBSD or with namespaces on Linux, like LXC or LXD.
You delegate a dataset to that container, and then inside that container when they run zfs list, they will see the datasets you've delegated to them and all the children of those, and the parent datasets up to the root of the pool, but they won't be able to see any of the other datasets.
So if you have, you know, poolname/friends/Joe and poolname/friends/Jim, and you delegate those to two separate containers, when they run zfs list, they'll each see their own dataset and the parents, but they won't see each other's datasets and won't even know that they exist. And then, yeah, you'd be able to do your ZFS send and receive from there. And depending on how you set up the containers, they can even have a fake root inside that container and be able to do whatever they need to do.
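A rough sketch of the container side, with container and dataset names made up. On FreeBSD this is the jailed property plus zfs jail; on Linux it's the newer zfs zone mechanism from OpenZFS 2.2, pointed at the container's user namespace. Exactly how you dig out the container's init PID depends on Proxmox/LXC, so treat that part especially as an assumption and check zfs-zone(8) and the Klara article for the details.

    # FreeBSD: mark the dataset as jail-managed and hand it to a running jail
    # (the jail also needs allow.mount.zfs and a relaxed enforce_statfs in its config)
    zfs set jailed=on tank/friends/joe
    zfs jail friendjail tank/friends/joe

    # Linux with OpenZFS 2.2+: attach the dataset to the container's user namespace
    zfs set zoned=on tank/friends/joe
    zfs zone /proc/$(lxc-info -H -p -n 101)/ns/user tank/friends/joe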
Sounds like yet another argument for not just dumping all your stuff in the root dataset of a pool, because everybody will be able to see it no matter how you delegate it. Well, they'll be able to see the dataset in zfs list; they won't actually be able to reach it, because you're doing a mount namespace in Linux. So they won't
have access to anything beyond what you set up the LXC to be able to see. But yes, it was a design mistake at the very beginning of ZFS, too late to fix now, to allow you to use the root of the pool as a dataset. It just shouldn't have been a mountable dataset. Or having a way to manipulate the root dataset in the same way you manipulate child datasets would also have solved the issue. I think someone's almost got a rename feature that lets you just swap the root with a new empty dataset to
make the root into a normal dataset and get the root back to being empty. That would make recovering from a lot of people's poor decisions a whole lot less painful. Yeah, it would have made my recent cleanup a lot less painful as well. Right, well, we better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or feedback. You can find me at joeress.com slash mastodon. You can find me at mercenarysysadmin.com. And I'm at Alan Jude. We'll see you next week.
Bye.