Two and a half admins, episode 237. I'm Joe. I'm Jim. And I'm Alan. And here we are again. And before we get started, you've got another article to plug, Alan. Why FreeBSD is the right choice for embedded devices. Yeah, so we talk about...
what the advantages are of FreeBSD, and specifically the BSD license, when you're building an appliance or embedded device that you're going to want to sell to people, and the advantages of starting with an operating system that was designed to fit that use case. If only TiVo had read this, eh? Yeah, then there might not even be a GPLv3. Right, well, link in the show notes as usual.
HP briefly implemented and then rolled back a 15-minute wait time for people phoning them up for support. Sure, it was brief. Sure, this was the first time it ever happened. Sure, they've never artificially inflated wait times to get people the heck off the phone before. The surprising thing about this story is not that a tech giant was artificially inflating their wait times to try to get people to not call, which is a more expensive form of providing support than...
just having them chat with an algorithm in a text box on a website. The surprising thing was that HP actually admitted to it. And I would have to assume it's because the internal documents that were leaked gave them no other option than to fess up at least to some degree.
Now, HP's public relations response to why they did this was, oh, you know, we just wanted to make everybody's experience better. We figured if we upped the wait time, more people would realize that we have these great self-serve, you know, web support options.
The problem is, the self-serve web support option is where people are getting the phone number to call in the first place. So this idea that, you know, oh, people just didn't realize it was there. I'm sorry, I'm not buying it. How did they get the number? Did they look you up in the Yellow Pages? I don't think so. Yeah, and that's definitely been the thing, not with HP, but other companies like...
When you're on hold waiting for a person and you're constantly being told, oh, you can just do all this stuff on the website. It's like if the website was working, I wouldn't be waiting on the bloody phone for you, would I? Yeah, because what I really want to do is speak to some random human about a technical issue on the other end of a phone somewhere. No, no. Like Alan said, if you give me the option to just like tappity tap, tap, tap and open a ticket and like go on about my day, you'd better believe that's what I'm doing.
Yeah, but this is something of a boomer filter, right? For printer stuff, maybe. I will admit, I wouldn't want to try to support anybody using a printer. But also, if I had a problem with a printer, I wouldn't bother calling the phone number. If I couldn't figure it out from what was on the website, I would just bin the bloody thing. But, you know, I'm not a good person.
It's funny that HP got their hand caught in the cookie jar right now, but this is not an HP problem. This is the entire tech industry. And even that's really narrowing the scope a bit. I don't think it's just tech companies either. Everybody is trying to get people not to call in for support because staffing support centers is expensive and companies would much rather just throw algorithms at the problem. Here, talk to our chat bot. Surely that will make everything good enough that you'll stop bothering us and we won't have to pay people.
But it can get a lot worse than this. Matter of fact, I just had to deal with a Microsoft issue. A client of mine has Microsoft 365. They set up the trial program and they wanted to move their email hosting from their current provider over to Office 365. So far, so good.
But when they set up the account, they only had a single admin account. And that admin was not me. It was somebody at the client. And he made the rookie mistake of only setting up a single form of the mandatory MFA. You can't do an Office 365 setup right now without multi-factor authentication. And by default, that's Microsoft's authenticator.
Microsoft's authenticator doesn't work like the Google Authenticator or Authy or, you know, the million other essentially stateless ones where you get a number from your authenticator and you type it in on a website, and you don't actually need any communication between the two. Microsoft's actually does a push to your device, and you have to accept the push on your mobile device in order to unlock the website you're trying to log into. And for whatever reason, the Microsoft Authenticator was not getting the push. This was not an issue with my client's phone. It wasn't an issue that could be resolved by uninstalling and reinstalling the authenticator or anything else. It simply wasn't getting the push.
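For contrast, the stateless style works by deriving the code purely from a shared secret plus the current time, so nothing ever has to be pushed to the phone. A minimal sketch with oathtool, using a made-up base32 secret:

```sh
# Hypothetical example: a six-digit TOTP code computed from a shared
# base32 secret and the system clock. No push channel involved.
oathtool --totp -b JBSWY3DPEHPK3PXP
```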
So obviously you have to talk to Microsoft 365 support, right? Well, there's a problem. When you go to look up this issue, what you find is an endless sea of threads where people ask Microsoft support community, you know, hey, I have this issue. I can't log into my admin account. What do I do? And these Microsoft employees and volunteers with the MVP flag come in and give you a phone number and say, call this phone number and they'll help you with that. And you call that phone number. And what does that phone number tell you?
It tells you, log into the admin portal and open a ticket.
And all those endless threads are full of very angry people telling these Microsoft employees, when I call that phone number, it doesn't give me any support options other than log on and create a ticket, which I can't do because I can't log on because the MFA isn't working. And it's just, it's endless loops of this nonsense. It took me about three hours to get past this issue. And if anybody else has this issue, I'll tell you right now what the secret is. The secret is you don't call for technical support. You call for billing support, right?
Now, when you call the billing support number and you demand to speak to a human, you get to speak to a human. And let me tell you what, that human is in no way surprised when you do not have a billing question, you have a technical support issue. They know exactly what to do with that. They will open a ticket for you right away, and then you get to wait for the team to call you back. You can't get connected directly, but you do at least get the bloody ticket opened, which is impossible unless you know to drill down through the freaking accounts receivable department to get to it. None of this is a mistake. I refuse to believe that nobody at Microsoft is aware that they have created this issue. They just don't want to deal with it. Yeah, well, you notice there's never been a way to call Amazon for support when you buy things. There is, actually. It's very difficult to find that number, but if you can find it...
Those people are badass. Once you find the voice support number for Amazon purchases, those people can and will do anything to make you happy. In the UK, you just go into the customer support thing and get them to call you. And it happens within like 20 seconds usually. I've never had enough of a need for Amazon. It takes some drilling in. I found out the hard way when a local delivery company was screwing me over really badly delivering a 65-inch TV.
and basically just telling me to go pound sand. They didn't care. And I wasn't getting anywhere with the online support with Amazon. And I got really upset and I took to social media about it. And some friends on social media were like, click here, here, here, here, here, here, and here to find the voice number and call them. And they're very helpful. And like, I didn't want to believe it, but then I did finally find the number and I called them. And you don't even have to get escalated. Like I got connected to the very first tier, the person who answered the phone, and explained to them this delivery company was screwing me over and had no-showed me twice, two separate delivery windows. And she immediately pulled like $300 off the price that I'd paid for the television and called that company and gave them a tongue-lashing that had them call me back in 30 seconds, very politely apologizing and asking, could they bring the TV over right now? And an hour later, I had it in my possession. So like,
Yes, there is a human that you can get hold of for support at Amazon. You just got to drill down to it. That Microsoft shit though, man, that sounds Kafkaesque to me. Yeah, I definitely felt like I woke up in a cockroach body. You're right about that one.
At risk of getting a little bit political, it was quite funny to see that the doge.gov website, when it first went up at least, was not brilliantly secure. It wasn't secure at all, really. The website had not been online for long at all before people started investigating its API endpoints and discovering that the database was completely unsecured and you could push whatever you wanted to into it. So a couple of people made rogue posts. The stories went live all over the internet.
And on the one hand, it's kind of a nothing burger of a story in that people fail to properly secure new websites all the freaking time. On the other hand, one of the things that really made this stand out for me is it had been less than a week since the same Mr. Elon Musk had rather famously, probably not notoriously enough, made the statement on X that,
This R word thinks the U.S. government uses SQL in response to concerns about Doge's access to various databases. And, you know, the guy who thinks that the United States government isn't using SQL anywhere probably isn't the guy that you want to be making decisions about security or databases or architecture or software.
Much of anything else, really. Well, and anybody who thinks that they fixed it now entirely and it's definitely secure is definitely fooling themselves. Like the amount of planning that's required to actually make the types of changes they're trying to make, it very clearly has not happened.
What website was it? There was some website. It was in 2017 that the FCC left some things unsecured on a public comment website. And as a result, for a while, anybody could post their own proclamation on behalf of the FCC, complete with a watermark and the whole nine. With the most famous example of that particular hack being somebody issuing an apology to the American people from the FCC that read, and I quote, Dear American citizenry, we're sorry Ajit Pai is such a filthy spineless cock.
What this clearly shows, though, is how much of a rush this whole thing was, right? Like the whole Doge thing and, oh, let's just chuck up a website and not worry about the security of it, seems to be indicative of the whole Doge endeavor to me. Like it's not even a startup mindset. It's literally just a YOLO mindset. It's hard to come up with one coherent takeaway from all this, beyond, I think, the one thing that you really have to keep in mind:
When an organization doesn't take information security seriously, what are they taking seriously? And does it matter? If you've got somebody running a government organization that is so sloppy that they're allowing actions to be taken in that organization's name automatically because they've left APIs unsecured, what does that tell you about how well it's being run? What other kind of mistakes are being made?
Okay, this episode is sponsored by Factor. Ready to optimize your nutrition this year? Factor has chef-made gourmet meals that make eating well easy. They're dietitian approved and ready to heat and eat in two minutes. Factor arrives fresh and fully prepared, perfect for any active busy lifestyle.
With 40 options across 8 dietary preferences on the menu each week, it's easy to pick meals tailored to your goals. Choose from preferences like Calorie Smart, Protein Plus or Keto. Factor can help you feel your best all day long with wholesome smoothies, breakfasts, grab-and-go snacks and more add-ons. Reach your goals this year with ingredients you can trust and convenience that can't be beaten.
Jim tried Factor and said the meals were quick and easy to prepare and liked that there was plenty of variety. Eat smart with Factor. Support the show and get started at factormeals.com slash factorpodcast and use the code FactorPodcast to get 50% off your first box plus free shipping. That's code FactorPodcast at factormeals.com slash FactorPodcast to get 50% off plus free shipping on your first box.
I recently decided that I needed to sort out my backups. I had various USB drives and a Wyse thin client, and things were just not properly organized. And so I thought, right: I'm going to have my NAS, which I've had for ages, I'm going to have a proper on-site machine, and I'm going to have a proper off-site machine, not a NUC connected to a USB drive and a Wyse connected to another USB drive. And I got some help from you guys and I worked a little bit of it out on my own as well.
And I've now got a situation that I'm pretty happy with, I would say. This is the part where, if I had a soundboard, I'd be pressing the "and the peasants rejoiced" button from Monty Python and the Holy Grail. Yay!
So really, the first thing that I had to do was sort out my data sets on the NAS because like so many people who first get into ZFS, I created the pool and just started dumping shit into the root of the pool, which you should never, ever do, right? It just limits your flexibility. It's not... Nothing's wrong. It's not going to lose your data, but it just...
You're missing the point of using ZFS, yeah. Yeah. So I had some datasets and then I had just some directories in the root. And so I needed to clean it up first. And because I'm on a slightly older version of ZFS, it was just a case of manually copying and deleting stuff.
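For anyone doing the same cleanup, it might look roughly like this sketch; the pool and dataset names are made up for illustration:

```sh
# Convert a plain directory in the pool root into its own dataset.
zfs create tank/invoices-new                 # new dataset, own mountpoint
cp -a /tank/invoices/. /tank/invoices-new/   # copy the data across
rm -rf /tank/invoices                        # remove the old directory
zfs rename tank/invoices-new tank/invoices   # take over the original name
```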
So once I did that, I had my 16 datasets and nothing in the root of the pool. So everything could now be replicated properly. Yeah, you know, when I made this change, when I first got more into ZFS, I kind of set a rule for myself of keeping the size of each dataset small enough that I could play Jenga with them when I needed to move to different size hard drives.
Because at one point I had a four-drive RAID-Z1 and I wanted to go to a six-drive RAID-Z2. But that meant it would be really helpful if I could move a bunch of the data to some of the new drives early. And that required them being small enough that they would fit on just one disk or something. And so kind of having them be a more manageable size was my guiding principle for deciding what goes in its own dataset versus what doesn't.
The big things are, A, whether you're going to want different settings on it, different ZFS properties, and, B, keeping each dataset small enough that you're able to actually work with it. So I wanted to end up with an offsite backup that was encrypted. And the advice that you two gave me was, if your NAS is unencrypted to start with, then send it to your onsite backup, encrypt it while you're sending, and then do a raw send of that
to your offsite target, which means that that will be encrypted and won't have the key ever loaded. And so it will just be completely encrypted forever. Right. And technically, the encryption actually happens in the ZFS receive process on your onsite backup.
What happens is when you do the zfs send and pipe it to zfs receive, you're receiving it to the child of an encrypted dataset on your on-site backup. And so this is important to understand. It's one of the things that people have trouble getting their brains wrapped around: how to encrypt
the backup of a formerly unencrypted dataset, because that first full send has to be able to happen and you can't really set the option afterwards. It needs to be encrypted from the get-go. So the way you get around that is you encrypt a parent dataset on the target, then you do the ZFS receive from your unencrypted source
during the receive process, it becomes encrypted. And at that point, you can even get rid of the encrypted parent if you want, because you've already properly set the encryption property with the right key on your target data set. Although I prefer to do a thing where like I'll have a data set that's literally just named encrypted and all of my encrypted data sets will be beneath that root with that key. And so it makes it easy to manage.
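Put together, the recipe Jim and Alan describe might look roughly like this; the dataset names, key path, and hostname are all hypothetical:

```sh
# On the onsite backup box: create the encrypted parent once.
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=file:///root/backup.key \
           onsite/encrypted

# First full replication from the unencrypted NAS. The received child
# inherits encryption from the parent, so the data lands encrypted.
zfs snapshot nas/invoices@migrate
zfs send nas/invoices@migrate | ssh onsite-box zfs receive onsite/encrypted/invoices
```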
Exactly what I did. So the pool name is onsite, and the pool name is offsite for the offsite one. And then it's onsite slash encrypted slash invoices, for example, or slash podcast-current. Yeah, in my setup, it's very much the same except for...
There's a hostname in between there. It'll be encrypted, then the hostname, and then the datasets, because I'm backing up more than one machine. But you're only doing one, so it didn't make sense to have that extra level in there. But one thing that I had to do was make sure that the encryption key was always loaded on the onsite box,
so that I could copy from the NAS and encrypt it as it receives it on that on-site box. Yeah, there's a couple of different approaches to that, but let's talk about yours first. Well, what I did was I just created a script
which is just zfs load-key -L and then the path to a text file with the encryption key, the password effectively. And so that loads the key, but then you also have to mount your datasets as well. So I did an && and then zfs mount -a.
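As a sketch, Joe's script amounts to something like this; the key path and dataset name are hypothetical:

```sh
#!/bin/sh
# load-keys.sh: load the encryption key from its file, then mount
# every dataset that's now unlockable.
zfs load-key -L file:///root/backup.key onsite/encrypted && zfs mount -a
```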
And so then I just put that script in my crontab with @reboot. And so now every time I reboot the machine, the key is loaded and ready to go. Well, if you're just going to have the password in a file on the machine, you can just set the keylocation to file:// and the path to the file. And it can be done automatically by the system without needing a script or anything. Oh, really? Yeah.
If your keyformat is passphrase, yeah, you can still set the keylocation to a file and the file will just have the passphrase. Right, so I just kind of did an extra step that I didn't need to. Yes, but there are other things people can do. If your ZFS is compiled with support for curl, you can also have it be an HTTPS URL. So, you know, if you're thinking of something more for a production environment, you might actually have a server that decides whether Joe's machine that just booted up is allowed to get the key to decrypt its data or not.
So maybe that machine proves that it's a blessed machine and then it gets the key and gets to keep going or whatever. Or I've seen people who want to do something like this but are trying to use, say, a rented dedicated server from somewhere. They'll have a small operating system that boots up and is just enough for them to SSH in and then run ZFS load key and ZFS mount and allow the encrypted stuff to happen without the password ever being stored on that remote server.
Pipe curl to bash is old and tired. Pipe curl to ZFS is the new wired. Well, with this one it's the ZFS mount of the dataset that would actually make the curl request and go and get the key, but yes.
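As a sketch of those alternatives, with made-up paths and URLs:

```sh
# Record the key location on the dataset itself, so the system's normal
# ZFS boot services can load it without a custom script:
zfs set keylocation=file:///root/backup.key onsite/encrypted

# Or, if your ZFS build has curl support, fetch the key over HTTPS:
zfs set keylocation=https://keys.example.com/onsite.key onsite/encrypted

# Either way, load all keys from their configured locations and mount:
zfs load-key -a && zfs mount -a
```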
So one thing that's really important to make this work is raw sending from the onsite to the offsite. So it's encrypted already on the onsite, and so it's crucial to do a raw send to the offsite to maintain that encryption. Yeah, by default if you do zfs send of an encrypted dataset, it will decrypt it and send it in plain text, and that's not what you want in this case. That's what the -w flag does for you instead.
Or if you're using syncoid, which I do, Jim's tool, then it is syncoid --sendoptions=w. Yes, because that's the argument for zfs send, and the --sendoptions argument for syncoid is for passing arguments directly, unparsed, straight down to zfs send. There's also a --recvoptions argument if you need to do the same thing to pass raw arguments directly to zfs receive on the target side.
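Concretely, the two equivalent forms might look like this, with hypothetical hosts and dataset names:

```sh
# Raw send: -w ships the blocks still encrypted, so the offsite box
# never needs the key loaded.
zfs send -w onsite/encrypted/invoices@snap | \
  ssh offsite-box zfs receive offsite/encrypted/invoices

# The same replication via syncoid, which also manages snapshots and
# incrementals; the w is passed straight through to zfs send.
syncoid --sendoptions=w onsite/encrypted/invoices \
  offsite-box:offsite/encrypted/invoices
```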
So one of the things you can do when setting up this replication is have it be delegated to a regular user. So instead of needing to have root on either onsite or offsite, you can do it as a special user that isn't allowed to do anything else on the system, right? Doesn't have sudo, doesn't have access to anything, just has access to do the ZFS send and receive. And so you can do a zfs allow of send to bob, and now bob, the backup user, will be able to do a send of a dataset.
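A sketch of that delegation; the username and the exact permission list are illustrative, so check zfs-allow(8) for what your platform actually requires:

```sh
# On the source: let the backup user snapshot, hold, and send.
zfs allow bob send,snapshot,hold onsite/encrypted

# On the target: let the backup user receive into the backup tree.
zfs allow bob receive,create,mount offsite/encrypted
```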
If you have any commercial interest in that feature, please do get in touch with Klara, because one person on Jim's forum did, and they're looking for somebody to split the cost with them. Ah, right. And I suppose it's worth mentioning that I use Tailscale to do the send to the offsite. But I was a little bit concerned about someone getting physical access to the offsite box and then getting into my network, effectively my tailnet. But it turns out there's a very handy feature in Tailscale ACLs
where you just go in and tell it, "This box can talk to that box, but it can't talk back to me." So on-site can SSH into off-site, but off-site cannot SSH into any of my other devices. That can also generally be accomplished simply by not having any secrets on the target box that allow it access to the source.
If you're using a key to log into the target, that doesn't give you any access back into the source. You can see what the IP was, sure. You can see what the user account is that connected, but it doesn't give you any privileges to actually get in.
Your Tailscale ACLs are a nice additional touch in that you're not exposing, you know, any kind of network functionality. But it's not something that's required just to keep somebody from being able to SSH, you know, back up the backup stream. Oh, yeah. Yeah. And it can have other uses, in that if there's a lot of other stuff on your VPN, you might just not want to bother sending all of that traffic all the way to the offsite backup when it's not for the backup.
And just reducing the amount of traffic going across the slowest part of your link can be a win on its own, aside from the security aspects.
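The rule Joe describes might look something like this in the tailnet policy file, with hypothetical tags; Tailscale ACLs are default-deny, so with only this rule onsite can open SSH to offsite and nothing grants a path back:

```jsonc
{
  "acls": [
    // onsite may reach offsite on the SSH port; there is no rule in
    // the other direction, so offsite cannot initiate anything.
    { "action": "accept", "src": ["tag:onsite"], "dst": ["tag:offsite:22"] }
  ]
}
```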
So Joe, have you tested doing a restore yet? Not yet, but I am somewhat monitoring to make sure that they are arriving properly. I do need to up my monitoring game significantly, as in automate it, but I've been manually checking and it seems to be all good so far. sanoid --monitor-snapshots? More like SSHing in and checking sizes and stuff, but...
I'll look into that one. No, I think Jim was giving you a recommendation rather than... Yeah, you can literally just run sanoid --monitor-snapshots on the command line and it will let you know whether you've got fresh snapshots according to the policies that you've got defined there on your target. You'd still be shelling into your target to do that at the simplest level. Then the next level beyond that would be scripting it to run from your crontab and piping the output to healthchecks.io. And then the next step beyond that is setting up a proper Nagios server that actually, you know, polls and uses NRPE to run the monitor-snapshots command for you. All definitely in my future.
But --monitor-snapshots in particular will be really nice for you, Joe, because it's a single command you can run on the target that will check every single dataset and let you know in one fell swoop whether or not they're all up to date. And if one of them is not, it'll tell you exactly which one is stale.
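The middle option Jim mentions could be as simple as this sketch; the healthchecks.io UUID is a placeholder:

```sh
#!/bin/sh
# check-snapshots.sh: sanoid --monitor-snapshots exits nonzero when
# snapshots are stale, which skips the ping and lets healthchecks.io
# raise the alarm.
sanoid --monitor-snapshots && \
  curl -fsS -m 10 --retry 5 https://hc-ping.com/YOUR-UUID-HERE > /dev/null
```

Run it from cron every half hour or so, e.g. */30 * * * * /root/check-snapshots.sh.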
Okay, this episode is sponsored by SysCloud. Companies big and small rely a lot on SaaS applications to run their businesses. But what happens when critical SaaS data is lost due to human errors, accidental deletions, or ransomware attacks? That's where SysCloud comes in. It's a single pane of glass to back up all critical SaaS applications, such as Microsoft 365, Google Workspace, Salesforce, Slack, HubSpot, QuickBooks Online, to name a few.
SysCloud also helps with ransomware recovery, identifies compliance gaps in your SaaS data, monitors for unusual data activity, and can automatically archive inactive SaaS data to save on storage costs. Plus, it's SOC 2 certified, so data remains secure in the cloud. Over 2,000 IT admins already trust SysCloud to protect their SaaS data.
Head to syscloud.com for a 30-day free trial and for a limited time, use code 25admins to get 50% off your first purchase. That's syscloud.com.
Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com slash support. And remember that for various amounts on Patreon, you can get an advert-free RSS feed of either just this show or all the shows in the Late Night Linux family. And if you want to send any questions for Jim and Alan or your feedback, you can email show at 2.5admins.com.
Nick writes: "What is the best way to utilize NVMe drives next to a bunch of SATA drives in a storage server? Let's say there are 4 slots for NVMe and 6 for SATA. Isn't it a bit overkill to run 4 drives for read/write cache? Would I be better off to have a separate pool? Or does it depend on the use case? I guess half of the rack could be flash and half rust."
Nick didn't specifically say ZFS, although he did say pool, but pool is a term that applies to more than just ZFS. So we're going to try to answer this one as generically as possible, if only to avoid raising Joe's hackles because he doesn't want this to become the all ZFS all the time show for some reason. This is a question that I see a lot. And the first thing that I always want to ask when people start asking about NVMe versus SATA is which form factor are we talking about? Now, in this case, Nick specifically is talking about bays in a storage server.
So I feel like we're talking about U.2, which is hot-swappable and, you know, goes into bays just like SATA or SCSI or whatever, as opposed to M.2, which is the consumer form factor where you have the little teeny tiny, you know, basically a stick of gum that plugs directly into your motherboard.
Now, I always caution people, be really, really careful about your expectations with NVMe M.2 because although it looks really, really super, super fast and it may be on some workloads, it's usually terrible for difficult workloads of the kind that you might be expecting to perform in a server that's, you know, you've got 10 freaking drives in.
But we're talking about U.2, so I'll assume that our NVMe U.2 drives are genuinely tremendously faster than our SATA drives in the server. And the answer is going to largely depend on your storage system. What kind of tiered access does it support? How good at it is it? You know, what kinds of things are you going to do?
For the most part, I would tend to recommend separate storage pools, whether we're talking ZFS or not, rather than doing, you know, some kind of a tiered access strategy with faster versus, you know, slower storage. People always have really, really high expectations of tiered performance storage. And I very rarely see them borne out in the real world, because sure, it's easy to say, oh, well, you know, we'll put all of our new writes onto this super, super fast NVMe and we'll, you know,
We'll slowly move that down to the slower SATA later if it's not accessed too much. And we'll just kind of bubble things down to the slow pool or up to the fast pool as it requires. But in real life, if that server is busy, well, where are you coming up with the extra latency and throughput to actually move things back and forth from the fast tier to the slow tier to begin with?
This is a strategy with some real limits to it. And particularly, I think most of the folks that are listening to this show are probably not going to be asking about $5 million servers with a couple of hundred drives in them. They're going to be talking like, Nick, you know, I got 10 bays I want to fill in, you know, what do I do? And at that scale, the better answer is almost always going to be you want to segregate your workload for yourself.
Don't ask an algorithm to do it for you. The algorithm is stupider than you are. You're the smart one. You're the human. You're hopefully the storage administrator or, you know, system administrator. You're an engineer of some kind. So say, hey, these VMs over here, they're running SQL databases and they need all the IOPS in the world. And I know that I've got this cluster of, you know, extremely high IOPS, U.2 drives, right?
that the performance is not gonna fall off a cliff if the onboard cache on each individual drive gets filled up. They're gonna be reliable and super duper fast. So my toughest workloads, I'm gonna put on this relatively small pool of the fastest drives I've got.
And these other workloads, maybe I've got bulk file storage. Maybe I want to run, even if it's still virtualization, maybe I want to have one VM that does the database stuff and one VM that does bulk file service to deliver, I don't know, streaming videos, just regular files, what have you. Your IOPS needs are a lot lower. So that's the stuff you put on the Rust or on the SATA SSDs or whatever.
But yeah, I would advise you to segregate your own workload for yourself intelligently rather than just hoping that some algorithm built into a storage system can do a better job of it than you can. Because in real life, it can't. Yeah. Oftentimes when I see this, it seems like people are just taunted by the fact that they have these extra bays, whether they're M.2 or U.2 or whatever, and they just want to use them for something and make things faster. And it's like, if you don't really have the use case for it, like if you have four NVMe bays, just use all NVMe and don't use the SATA bays if you're really after the speed. And if you're not, then use the SATA bays. Don't get sucked into trying to do some hybrid of the two just because you have a mix of slots. If you have a use case where it makes sense, maybe, but otherwise, probably not. The one thing we've seen be very helpful is having metadata vdevs.
So putting that on NVMe... Specifically, data center grade, high performance NVMe, not just any old NVMe you found lying around. Yes. Not your Evo 990s or whatever. Not any consumer targeted Samsung whatever. I don't care if it says pro on it. It's not appropriate for that use. Right. And so...
If you're going to do metadata drives, you do need more than one. They have to be a mirror. So I would recommend using like three of your slots or maybe even all four as one really big mirror of the metadata. But you're not going to have that much metadata if you only have six drives. That server connected to a JBOD with like 100 hard drives and then four NVMe to hold the metadata could be really good. Like if you have a workload that involves a lot of something like rsync where you're going to be
statting all these files all the time, but not actually reading the data for them, or always reading the data sequentially but needing to be able to randomly seek through the metadata, that can make a big difference. But the ratio doesn't make sense in this case. If you have four NVMe and six SATA slots, you're not ever going to have enough non-metadata to need that much separate metadata storage. And so in that case, you would just use all the NVMe and just leave the SATA slots empty. So yeah, mixing the two, as Jim said, probably doesn't really make sense. Either segregate them or just pick the one that's the better fit for your use case, right? If you want lower cost and don't need as much performance, go SATA. And if you really need the performance, ignore the SATA drives and buy the biggest NVMe drives you can afford that are of good enough quality.
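For the JBOD scenario Alan sketches, adding a metadata special vdev might look like this; the device names are hypothetical, and the mirror matters because losing the special vdev loses the pool:

```sh
# Add a three-way NVMe mirror as a metadata special vdev to an existing
# pool. Small file blocks can optionally be steered there too, via the
# special_small_blocks dataset property.
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1
```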
And if that's the case and performance isn't a big deal and what you have is six SATA bays and four NVMe U.2 bays, it's perfectly fine just to fill all the bays and put them all into one big undifferentiated pool. That pool will behave essentially as though you had 10 SATA bays. You'll spend more money than if you had 10 SATA bays and bought 10 SATA drives, but probably not a whole lot more if you're buying data-center-grade stuff on either side to begin with. So if that's what you need, that's fine. Just, you know, like I said, don't expect it to be running a whole lot faster because your pool has some NVMe and some SATA. You instead expect that, yes, whether I'm talking about ZFS or whether I'm talking about conventional RAID or whatever, it's absolutely fine to mix these faster drives in there, if you can make all the capacities, you know, line up nicely. You just treat it as though it were 10 SATA bays rather than six and four.
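As a sketch of that last option, with made-up device names and assuming the capacities line up:

```sh
# One big undifferentiated RAID-Z2 vdev across six SATA and four U.2
# drives; performance tracks the slowest members, so it behaves like
# ten SATA bays.
zpool create tank raidz2 sda sdb sdc sdd sde sdf nvme0n1 nvme1n1 nvme2n1 nvme3n1
```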
Right, well, we better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or your feedback. You can find me at joeress.com slash mastodon. You can find me at mercenarysysadmin.com. And I'm @allanjude. We'll see you next week.