Two and a half admins, episode 236. I'm Joe. I'm Jim. And I'm Alan. And here we are again.
And before we get started, you've got an article to plug for us as usual, Alan. Isolating Containers with ZFS and Linux Namespaces. Yeah, so this article goes into the details of how Linux namespaces work, and in particular, how we implemented support for them in ZFS so that you can delegate a subtree of your ZFS pool to, for example, an LXD container and have the root inside that container be able to manipulate all those datasets while not being able to see the rest of the ZFS pool.
And so you could use this to have multiple tenants that aren't aware of each other, or even just to keep applications separate. And we talked about some of the advantages of that, especially when applications need different versions of the same dependency or something. It's really hard to have all that on one system. But with the namespaces and containers, you can have these kind of conflicting app stacks running at the same time. Right. Well, link in the show notes as usual.
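As a concrete sketch of what that delegation looks like: this assumes OpenZFS 2.2 or newer (which added the zfs zone and zfs unzone subcommands), and the pool, dataset, and PID below are placeholders, not taken from the article, so check the details against the zfs-zone man page.

```python
# Minimal sketch: hand a ZFS subtree over to a container's user namespace.
# Assumes OpenZFS 2.2+ (`zfs zone` / `zfs unzone`); names are placeholders.
import subprocess

POOL_SUBTREE = "tank/containers/tenant1"  # subtree the tenant will own
CONTAINER_PID = "12345"                   # PID of the container's init

def zfs(*args: str) -> None:
    subprocess.run(["zfs", *args], check=True)

# Create the subtree and mark it zoned so the host stops managing mounts.
zfs("create", "-o", "zoned=on", POOL_SUBTREE)

# Attach the subtree to the container's user namespace. Root inside the
# container can now create, snapshot, and destroy datasets under this
# subtree, but cannot see or touch the rest of the pool.
zfs("zone", f"/proc/{CONTAINER_PID}/ns/user", POOL_SUBTREE)

# To take it back when the container goes away:
# zfs("unzone", f"/proc/{CONTAINER_PID}/ns/user", POOL_SUBTREE)
```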
Arm to launch its own chip in a move that could upend the semiconductor industry. In a move set to enshittify the industry. I mean, let's be honest here. You know, we're talking about another situation where what was formerly an open market is beginning to get reshaped into a vertical silo.
And that's not ever really to anybody's benefit downstream. It means that you start seeing, you know, the space shrink, the margins increase, the useful new features tend to show up less frequently because you've got fewer minds at the helm. You know, it's just...
You've left the attract phase and you've begun the extract phase on that particular market. What about competition? Isn't it great to have another competitor to the likes of Qualcomm and...
It's not another competitor. It's all the competitors getting squashed into one, essentially. I mean, obviously, it's not happening today. It's not happening next week. But you can't really be like, you know, the host of this open forum with, you know, freely competing vendors when you're running the show, but you are also a quote unquote competitor yourself. Right.
The only way that ever works at all in the real world is basically when there's an actual government that's mandating that it happens and enforcing rules of so-called fairness. And they still always heavily favor the incumbent. I mean, we've seen this with like telecom split ups in the USA. You know, the U.S. government says, you know, hey, AT&T, you actually –
have to allow these people to compete with you. You know, the incumbent local exchange carriers versus the competitive local exchange carriers: you have to resell them access into your data centers over some of your copper and allow them to compete. And because the government mandates that that has to happen,
It does. But if you think AT&T is just like a super friendly, happy kumbaya, you know, share and share alike with all the CLECs, you have not worked in that industry. Yeah, this one's interesting in that, you know, we saw late last year with the lawsuit between ARM and Qualcomm about the licensing of the ARM cores and so on.
It seems like ARM felt they were being cut out by companies like Qualcomm, NVIDIA, and I think to some degree even Apple, kind of making their own chips and using their own cores and basically not relying on ARM to provide most of the IP anymore. And so ARM was feeling that they weren't getting a slice of every ARM chip to the same degree anymore. And so they've decided that they want to compete in the server market, especially around the data center. And I don't think it is...
an advantage to the data center space to basically quash the chance of a bunch of different competing CPUs, versus getting down to where the official one from ARM is basically what there will be. I'll tell you who this is really good news for, and that's RISC-V fans. Alan's kind of got that, you know, bit-into-an-unripe-persimmon look on his face when I say that, but...
The reason I say this is good news for RISC-V fans is RISC-V has represented for a while the more open alternative looming on the horizon for folks who feel hemmed in in the ARM space. And when ARM itself moves to encircle its arms and take that whole space kind of back into itself and say, hey, we are the premier vendor of this, then by extension, there's just no way to avoid the implication that other people trying to create their own ARM properties are going to be second-class citizens. How can they not be? They have to compete with the firm that owns the IP and is producing their own IP based on it. How do you do that? So, you know, for folks who that bothers, well, RISC-V is the logical next step. Obviously, of course, there are going to be other companies that don't want to make that move.
Maybe they're not uncomfortable with having to compete with ARM's own designs directly, and they stay where they are. I don't think that's a good move because I think, again, history has given us plenty of examples of what happens when formerly open platforms begin to close, but that's where we are. Yeah, I guess my only reaction to the RISC-V part was there's still a ways to go before we have a RISC-V CPU that's kind of on par with a 192-core ARM server.
Oh, absolutely. But this is where the pressure comes from to get RISC-V there because now you've got all this pressure because the formerly open platform becomes less so. You have more people who become ideologically interested in pushing RISC-V because it's open and that's important to them. And so therefore now they want those pragmatic challenges resolved where before they might not have cared so much.
Which is why I say, again, this is really great news for RISC-V fans because this is going to bring a lot of that pressure that that platform needed to make it mature faster. Yeah, I agree. It's just...
I don't know that everyone will be as pleased with the results on the RISC-V side as they might expect in that most of the really interesting parts of the RISC-V cores that eventually come out of this are not going to be open source. Like the really deep tech stuff that companies spend a billion dollars creating, they're not going to just be open source because it's based on RISC-V. But as long as everybody's aware of that, then yeah, I do think it is
the best chance we have of getting something that's a little more usable for everyone. It's not even that RISC-V is open that interests me. It's the fact that it is a third architecture. It's diversity within architectures, which has to be a good thing, surely. Listen to Joe Ressington and his woke DEI nonsense.
So I think the thing that really underlines how scary this is, is we're not just talking about Arm announcing that it's entering the fray. We're also talking about Arm having acquired Ampere, which is pretty much the biggest and most impressive
ARM-based server CPU vendor. Correct, Alan? Yeah, small clarification. It's SoftBank that's acquiring Ampere, and SoftBank is one of the major holders of ARM, but it's not ARM directly buying Ampere. But it's the same thing, but technically not. Yeah.
But yeah, so Ampere is definitely the hardware I've been the most interested in on the ARM side. I've actually used their 80- and 160-core servers and have been looking very much forward to playing with their 192-core machines and really seeing how they can beat the pants off an equivalent Intel machine, even though
the cores aren't necessarily the same, but the power is so much better and they have a lot more memory bandwidth and more memory channels and so on. They were really pushing the envelope on a bunch of this stuff and having PCIe 5 a lot sooner than it was available in most of the other platforms and so on. It was very,
very interesting. And yeah, so if ARM via SoftBank is basically using Ampere as its starting point for making ARM-based server chips, then it's going to be a much more interesting place. And I think one of the main reasons ARM would do that is because of the inroads Ampere managed to make.
Ampere's machines were being used at Oracle for their cloud, at Microsoft for their cloud, and elsewhere. Basically everywhere that wasn't Google and Amazon who had built their own ARM CPUs specifically for their clouds. And I think ARM seeing that trend that
All the hyperscalers were building their own CPUs and only licensing the bare minimum of that from Arm meant that when Meta showed up as one of the last hyperscalers that hadn't made their own chip yet, Arm's like, yes, we will make chips for you. And while that makes sense for Arm, I don't know that it's good for the rest of the industry. It's funny that you mentioned Meta because when you made the clarification that it's SoftBank acquiring Ampere and not Arm themselves,
Obviously, that's completely valid. But the whole time I was thinking, you know, this feels like complaining, oh, you know, Facebook did that thing, but Meta did the other thing we're talking about. Like, yes, there's a distinction, but it's kind of hard to be sure how much a normal person cares about that distinction.
The WordPress.com 100-year domain. New day, same damn grift. And to be fair, grift is maybe a little strong, but I don't think it's that strong.
This is not the first time somebody's offered a hundred-year domain registration. Network Solutions did that way back in the day, or at least they claimed to. The problem is you can't register a domain for longer than 10 years, which is the current maximum, because ICANN is who you actually register the domain with, and they only accept registrations for up to 10 years.
So what you're actually doing when you pay a registrar for a, quote, 100-year domain is you're entering into a trust with them that says they will keep renewing your domain as many times as necessary until your agreement expires. So this is not just the one-time thing. You're relying that that company is actually still going to be around and in business and honoring its obligations. And relying on that for the next century is just pants-on-head crazy, right?
Consider the current average lifespan of a company listed on the Standard & Poor's 500. So this is like the public-companies-only equivalent of the Fortune 500, you know, 500 very, very large companies. The average lifespan of those companies is 15 years right now. At its maximum, in I believe it was 1965, the average life expectancy of a company on the S&P 500 was about 60 years.
You may notice that in neither case is this even as much as a single human lifespan. And at this point, it's not even close to a single human career. Heck, a lot of these companies aren't around for as long as some people keep their jobs. So how much sense does it make to fork over a giant wad of cash to somebody and just trust that they'll keep your website online for the next century? Yeah.
Yeah. And knowing the rest of the history of what's going on with WordPress in general, it smacks very largely of: we need to raise a whole bunch of money quick. Can we get people to pay us for the next hundred years, and we'll have all this money and just sit on it? Because the other thing you're relying on, in addition to WordPress.com still being around, is that
the price of renewing the domain every year isn't going to go up by some amount where they decide it's not worth it anymore. Or that they don't lose their accreditation to do domain registrations, right? Or ICANN changes the rules and decides, we don't like you anymore, or whatever. There are so many things that could happen over the course of 10 or 20 years, let alone 100, that I don't see why you would ever do this. And looking at the historic trend, back when domains were $70 for two years at the beginning,
We kind of saw this trend where they got cheaper every year, in which case getting someone to pay up front maybe almost made sense. But since then, I don't think I've seen the trend where domains weren't getting more expensive every year.
And if they're going to keep going up in price, why would you want to prepay for it? Because it's not like you're locking in the lower price. You're only, as Jim said, locking in a promise from this company that they're going to keep doing it. And to Jim's point, since they're not able to register for 100 years, they can at most do it for 10. But in previous versions of this...
people have seen that they didn't even register it for the 10 years. They just did one year at a time to save the capital up front, which really shows that they were trying to do this to get a bunch of money now instead of actually trying to make the most money. Because if you know the price of domains is going to go up over time, you'd want to buy as many of the years as you can now at the lower price, as long as the future value of that money is less than the level of inflation. Keep in mind that even if we're just talking about simple inflation,
a century gives you either three or four doubling periods based on expected inflation alone.
So if the average cost per year, if you take the cost of the 100-year domain registration and you divide by 100 and you come out with, you know, they're charging you, you know, whatever, $5 a year, $7 a year to register that domain, and maybe they can pay that price right now. But even just based on simple inflation, their costs are going to triple or quadruple before that century is up. What are the odds they're still going to be honoring that?
that far down the line, when literally nobody is left alive from when that deal was struck? Nobody's going to care. Nobody's going to honor that deal. You're wasting your money. Yeah. Even at 2% inflation per year over 100 years, prices end up at about 725% of where they started.
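For the curious, here is the compounding math spelled out, assuming a flat 2% annual rate purely for illustration:

```latex
% Price multiplier after a century of constant 2% annual inflation:
\[
  1.02^{100} \approx 7.24
\]
% So a registration that costs the registrar $5/year at signing runs
% roughly $36/year by year 100. Rule of 72: the doubling period is about
% 72 / 2 = 36 years, i.e. close to three doublings per century (2^3 = 8).
```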
And yeah, at some point the company is going to cut you off. It gets back to a thing we covered months ago on the show about how you would keep some data around for 100 years. It's like, well, you'd have to build this whole society and a group of people that care about it and a trust and all this other stuff. And it's like, yeah, nobody's doing that for your domain name for this little money. Hang on, we're talking about WordPress here, which is fostering a great community at the moment. Yeah.
Over there at Late Night Linux, I know you folks like your predictions. So I'm going to go ahead and give you some hot takes for 100-year predictions. Three things that will not be around in 100 years.
Windows, Linux, and FreeBSD. None of the three will be production operating systems in 100 years' time. So what are the odds that WordPress.com will still be around and operational in its current form? Will domain names be around in 100 years? Probably not. Yeah, the IPv4 network almost certainly won't be. The domain registry concept itself is...
I mean, there probably will be transitions, but there's certainly going to be a major update or two between now and 2125. There will be some massive changes to the domain registration system as a whole. Like I said, I'm sure there'll be transitions. I'd be very surprised if the current one just got completely torn up and scattered to the winds and something entirely new came in its place. But is it all going to look the same?
No. Are the costs going to be the same? How could they be? They might go up. They might go down. But just simple inflation alone, as we've already covered, this is ridiculous. It's nonsense. Don't fund whatever it is that WordPress.com is trying to come up with a bunch of capital for right now.
Okay, this episode is sponsored by SysCloud. Companies big and small rely a lot on SaaS applications to run their businesses. But what happens when critical SaaS data is lost due to human errors, accidental deletions, or ransomware attacks? That's where SysCloud comes in. It's a single pane of glass to back up all critical SaaS applications, such as Microsoft 365, Google Workspace, Salesforce, Slack, HubSpot, QuickBooks Online, to name a few.
SysCloud also helps with ransomware recovery, identifies compliance gaps in your SaaS data, monitors for unusual data activity, and can automatically archive inactive SaaS data to save on storage costs. Plus, it's SOC 2 certified, so data remains secure in the cloud. Over 2,000 IT admins already trust SysCloud to protect their SaaS data.
Head to syscloud.com for a 30-day free trial and for a limited time, use code 25admins to get 50% off your first purchase. That's syscloud.com.
Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com slash support. And remember that for various amounts on Patreon, you can get an advert-free RSS feed of either just this show or all the shows in the Late Night Linux family. And if you want to send in your questions for Jim and Alan or your feedback, you can email show at 2.5admins.com.
Another perk of being a patron is you get to skip the queue, which is what Joshua has done. He writes: "Can you please ask the guys about simple geo redundancy using VPSs? Currently our internal tooling is a bunch of docker containers on a single Linode VPS. We are relying on said tooling more and more and I would like a way to have HA and geo redundancy. Everything I find seems to mention using the Linode load balancers but that doesn't solve the geo aspect.
I don't understand how to have a load balancer that spans data centers. Is there a straightforward and cost-effective way to achieve what I'm after? Before we get started, I would just like to point out: simple geo-redundancy? No, you can't have that. You can have geo-redundancy, and we'll talk about that (mostly Alan will), but simple? No. Actually, it's not that complicated.
Okay, well, I thought it might be fun to put this to the Hybrid Cloud Show guys. And sure enough, they have answered this question on episode 24. Now, you two have not heard that episode, so you're going in cold on this, so it'll be very interesting to see how your approaches differ. I imagine, if I had to guess what the cloud answer to this was, it's when you have something like the Linode load balancer, and then
in the cloud, while it's the same location, they have this concept of availability zones, which are often, you know, in the Amazon case anyway, separate buildings that are just generally in the same area. And the idea is that if something goes down in one of them, the others probably don't. Although we've seen from the history of the cloud, that's not always how it works.
But yeah, as you mentioned with a load balancer, the main problem is a load balancer basically accepting the incoming connection and then proxying it to one of multiple backends and either trying to balance the load across them or doing failover of switching over to the other backend if the first one goes down. But for geo-redundancy, that doesn't help because then you need that load balancer to be in a place that's not down when the data center is. And in general, you want the load balancer to be in the same place as the backends because you don't want to add more latency by proxying. So
To solve this exact problem for my previous company where we did video distribution, we used software called gdnsd, which was originally written by a developer who worked for Logitech at the time, to do geo load balancing of driver downloads from the Logitech website. And so with this tool, you can install it and use it as your DNS server instead of BIND, or, in our case, you can delegate just a subzone to it. And you give it
a list of servers and a GeoIP database, like the one from MaxMind; the GeoIP lookup libraries are probably available in your package manager.
And it will look at the IP address of the incoming requests for DNS and use that to geolocate the people and then route them to whichever server is closer. Although in your case, you probably don't actually need that feature. And it also has what's called failover mode, where it will be pinging all the servers all the time or actually making HTTP requests to them to make sure not only does it ping, but the web server is up and it's giving like a 200 response, not a 500 or a 403 or whatever.
And it basically will dynamically change the IP address of the host name. So when you connect to service.mydomain.com, it will by default go to the first host, your Linode VPSs. And if ever that's not working, it would just automatically change the DNS to point to the VPS you have at some other provider or at a different Linode location with a different IP address. And that way, when one goes down, it automatically switches over to the other.
And then depending on your configuration, if the first one comes back up, it can automatically switch back, or it can stay where it is. Because, you know, if there's a database or something, once you fail over, you might not want to fail back until you manually resynchronize or something like that.
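To make that failover mode concrete: the following is not gdnsd's actual configuration syntax, just a rough Python sketch of the logic Alan is describing, probing each backend over HTTP and handing out the first healthy address, with made-up hostnames and IPs.

```python
# Sketch of DNS-based failover logic (the job gdnsd's failover mode does
# for you): probe each backend over HTTP and serve the address of the
# first healthy one. Hosts and addresses are made-up placeholders.
import urllib.request

# Ordered by preference: primary first, then the fallbacks.
BACKENDS = [
    ("linode-primary", "203.0.113.10"),
    ("other-provider", "198.51.100.20"),
]

def is_healthy(ip: str) -> bool:
    """A 200 response counts as up; a 500, a 403, or no answer does not."""
    try:
        with urllib.request.urlopen(f"http://{ip}/health", timeout=5) as r:
            return r.status == 200
    except OSError:  # connection refused, timeout, or HTTP error status
        return False

def address_to_serve() -> str:
    """The A record service.mydomain.com should currently resolve to."""
    for name, ip in BACKENDS:
        if is_healthy(ip):
            return ip
    return BACKENDS[0][1]  # everything down: fall back to the primary

# A real implementation runs this continuously and keeps DNS TTLs short,
# so clients pick up the new address within minutes of a failure.
```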
I think it's probably also worth mentioning that the dead simplest form of load balancing is literally just round robin DNS. You set up multiple DNS servers. Each DNS server has multiple possible resolutions, A records, for the hostname in question, and those go to whatever data centers you happen to have working machines at.
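From the client's side, round robin looks like this minimal standard-library sketch (the hostname is a placeholder): the resolver hands back every A record, and a well-behaved client walks the list until one connects, which connects to the browser caveat that comes up just after this.

```python
# Round robin DNS from the client's side: getaddrinfo returns every
# address record, and a well-behaved client tries each until one answers.
import socket

HOST, PORT = "service.example.com", 443  # placeholder hostname

def connect_any() -> socket.socket:
    """Try every returned address in turn until one data center answers."""
    last_error: OSError = OSError(f"no addresses for {HOST}")
    for *_, sockaddr in socket.getaddrinfo(HOST, PORT, type=socket.SOCK_STREAM):
        try:
            # sockaddr is (ip, port) for IPv4; slice works for IPv6 too.
            return socket.create_connection(sockaddr[:2], timeout=5)
        except OSError as exc:
            last_error = exc  # that data center is down; try the next record
    raise last_error
```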
The reason that I say simple geo-redundancy is an oxymoron is yes, that part is simple so far. It's easy to have round robin DNS and to be able to balance things between different data centers, but that completely elides the question of how you actually keep your application in sync behind the scenes so that you've got the same information available at all these redundant data centers in the first place. And you'll notice that
Neither Alan nor I have touched that at all, because that's the part at which it stops being simple. Yeah, the main problem with round robin is generally the browser's not good at it. And, you know, half of the attempts to connect will go to the one that's not working, and it'll break. Sure, but I feel like it's worth bringing this up, because if you're not looking for really advanced algorithms, and particularly if, you know, you're okay with the old school version
of saying, hey, you know, I'm a human and I respond to emergencies when they happen. In a lot of cases, it's sufficient to say, okay, so yeah, there's not going to be an automatic failover with just doing simple round robin DNS. However, if I set my TTLs to five minutes, that means that, you know, when my monitoring system lets me know, hey, this node went down, I have the option of responding to that however I would like to, whether it's to go in and
remove the failed node from the round robin or, you know, just try to bring it back up; whatever, you have those options. Are you going to get intelligent load balancing that takes into account geographic proximity or, you know, the amount of load on an individual node or what have you? No. If you want all that stuff, then you go into the more complicated solutions like Alan's gdnsd, or, I assume, whatever cloud-based stuff the Hybrid Cloud Show guys came up with. But if you literally just want
geo-redundancy, round robin DNS is the simplest and most old school way to do it.
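If you go the human-in-the-loop route Jim describes, the remediation step can be a tiny script. This sketch targets a made-up DNS provider HTTP API: the endpoint, token, and record layout are entirely hypothetical, so substitute your provider's real interface.

```python
# Manual-failover helper: when monitoring pages you, pull the dead node's
# A record out of the round robin. The API endpoint and token below are
# hypothetical placeholders, not any real provider's interface.
import urllib.request

API = "https://dns.example-provider.test/api/zones/mydomain.com/records"
TOKEN = "REPLACE_ME"

def remove_a_record(name: str, dead_ip: str) -> None:
    req = urllib.request.Request(
        f"{API}/{name}/A/{dead_ip}",
        method="DELETE",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    urllib.request.urlopen(req, timeout=10)

# With a 300-second TTL, clients stop resolving to the dead node within
# about five minutes of running this:
# remove_a_record("service", "203.0.113.10")
```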
Like I said, it's just... I hate hearing questions about simple this, that, or the other, because that's not usually the real problem. The real problem is distributing the application in the first place, and we haven't touched that. So if that was your question, sorry, we don't have an answer for it. But if you just wanted a simple way to route incoming requests to multiple data centers, that's where we have you covered. Yeah. So again, like I said, with gdnsd, you can automatically fail over, but...
especially as Jim was talking about, the real complexity is keeping the application in those different locations in sync. And so you could be using ZFS and replication and keeping a copy of it at the second place, and then only flipping that one to live, instead of read-only, once you're doing this DNS change, but that probably
means doing it like Jim said and doing it manually, because you have to take the backup out of read-only mode and make sure it stops trying to sync with the primary that's now dead, so that when the primary comes back up, you can reverse the thing and send the changes back and so on, and possibly have to deal with things getting out of sync. So there's a lot more complexity to it. But for just the, if this VPS goes down, please send those requests to a different VPS.
Especially in something basic like, say, the website for 2.5 Admins: if the VPS goes down, you can have it just go to a copy of the WordPress running somewhere else. But you do have to figure out something to keep the MySQL in sync there, so that the backup copy isn't missing the latest three episodes because that's the last time you synced it or whatever.
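A minimal sketch of that warm-standby replication, assuming ZFS on both ends and SSH between them; the dataset, host, and snapshot names are placeholders, and the exact receive options are worth checking against your OpenZFS version.

```python
# Incremental zfs send/recv to a standby that stays read-only until
# failover. Dataset and host names are placeholders.
import subprocess

DATASET = "tank/www"
STANDBY = "standby.example.com"

def replicate(prev_snap: str, new_snap: str) -> None:
    """Push the increment between two snapshots to the read-only standby."""
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{DATASET}@{prev_snap}", f"{DATASET}@{new_snap}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", STANDBY, "zfs", "recv", "-o", "readonly=on", DATASET],
        stdin=send.stdout,
        check=True,
    )
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

# At failover time, on the standby:
#   zfs set readonly=off tank/www
# and stop replicating until the dead primary is rebuilt and resynced.
```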
But if setting up a whole bunch of this is too much for you, like Jim said, you can just edit the DNS manually. Or if you're using something like Amazon Route 53, you get a web GUI to go in and modify the DNS records, and you're just going to either change it from the primary to the backup, or, if you were load balancing, you would have both and you would just remove the broken one. If you want to have the automatic failover and you don't want to have to set up something like gdnsd by yourself, you can do that.
One of the companies that I use is called DNS Made Easy, and they have a service where, you know, as part of your subscription to use their servers for your DNS, you create a special kind of record in your DNS file that basically says, here's a list of hosts in the order I want you to try them, and here's how I want you to monitor them. So it'll, like, try to load a certain URL from the website, and if it works, that one's up, and it will choose the servers in a certain order so that if one goes down, it automatically starts routing the traffic somewhere else.
Usually what you see this kind of thing used for is at the application server level. So you're eliminating a single point of failure at the application server level. The part that nobody wants to talk about is the single point of failure that usually still exists at the database level because all of those individual application servers, which may very well be geo-distributed, they're all relying on back-end connections to the same database or database cluster. And if that database or cluster goes down, then...
All of your geographically distributed whatever is no longer doing you any good because it no longer has a single source of truth to pull from. That's the can that isn't really getting kicked very far down the road that usually nobody wants to talk about. But, you know, as long as you understand that, you understand that we're really just talking about geographically distributing typically, you know, front end application servers, then, yeah, these are the answers that get you where you need to be.
Yeah, and so with our thing, when we're using DNS Made Easy, we have three locations, and they use MariaDB and circular replication. So when you do an insert on A, it then replicates to B, and then C, and back to A, and then A ignores it because it came from itself.
And with certain settings in MariaDB, you can say, if you're on A, your automatically generated IDs go up in steps of three. And you can have them offset so that the insert IDs are different on each machine. So they'll go up by three each time, with a different offset on each node. And that way, technically, you can have inserts happening on all three databases at the same time. And once they catch up with the replication, you won't ever use the same ID twice.
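The settings Alan is referring to are MariaDB's auto_increment_increment and auto_increment_offset. A quick simulation (the three-node layout is illustrative) shows why the ID streams can never collide:

```python
# Simulating MariaDB's auto-increment settings for a three-node ring:
#   auto_increment_increment = 3   (the same on A, B, and C)
#   auto_increment_offset    = 1 on A, 2 on B, 3 on C
# Every node steps by 3 from its own offset, so even with concurrent
# inserts on all three, the generated ID streams never overlap.
INCREMENT = 3

def ids_for(offset: int, count: int) -> list[int]:
    """IDs a node with auto_increment_offset=offset will generate."""
    return [offset + INCREMENT * i for i in range(count)]

a, b, c = (ids_for(o, 5) for o in (1, 2, 3))
print(a)  # [1, 4, 7, 10, 13]
print(b)  # [2, 5, 8, 11, 14]
print(c)  # [3, 6, 9, 12, 15]
assert not (set(a) & set(b) or set(a) & set(c) or set(b) & set(c))
```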
But that gets pretty hairy. And obviously, if one goes down, the ring is broken and replication will go so far and then hit the broken link and won't come all the way around again.
And so if you have A, B, and C, and A goes down, if you make a change on B, it'll show up on C, but any change made on C won't be able to make it back round to B, because A is down. And that's why we use DNS Made Easy's failover in this specific order. So if A is down, go to B, and everything stays on B. And if that's down too, it goes to C, because the database can only catch up in one direction until we actually get things back up and restore the loop.
This is probably a pretty good spot for us to go ahead and wrap this up and move on, but I just want to get the last word, which is: this is why I said you can't have simple. Exactly. Paul, who's a patron, also skipped the queue. He writes: I have a Windows PC, my homelab, and my VPS backed up to both my backup Pi and Backblaze B2 using restic.
I'd like to implement a solution where I could use B2 as a backup target and potentially my Raspberry Pi, too. My question is: what software would you recommend? I'd prefer a free/FOSS or at most pay-once solution. I would also like to have some options to do health checks using healthchecks.io to make sure the backups actually complete on a somewhat regular basis. So the major difficulty here is the Windows part of it. You basically are wanting to tie
the most consumerish possible desktop operating system to, you know, the most sysadmin-y type backup procedures, which I get. I feel that, you know, myself every time I have to touch Windows, which is why I try not to touch Windows. Unfortunately, I can't give you a solution that I have a lot of personal direct experience with, but I can tell you the solution that I would absolutely be diving into if I were in your shoes right now. There's an open source tool called Kopia, K-O-P-I-A.
It's essentially another Borg- or restic-like backup application. It is snapshot-based and does fulls and incrementals. It's got support for Windows, Mac, and Linux, which I thought was really nice. You know, potentially you can maybe tie together some more things and do things the same way. Maybe you wind up doing your server stuff the way you do your mom's PC backups rather than the other way around. I don't know. But either way, Kopia directly supports backing up to
S3 or also to Backblaze. So it's got you covered that way. And as far as your desire to do your tie-in with healthchecks.io,
you'll need to do some scripting, but you absolutely can do that. Kopia has a snapshot verification command, so you can put together a little batch file or small shell script to have Kopia verify a particular snapshot and then report the results of that command to healthchecks.io, which gives you the monitoring tools that you're looking for.
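A minimal sketch of that scripting, assuming an already-configured Kopia repository and an existing healthchecks.io check; the ping UUID and backup path are placeholders, and Kopia's subcommand names are worth verifying against your installed version.

```python
# Nightly backup-and-verify with a healthchecks.io heartbeat.
# Assumes a configured Kopia repository; the ping UUID is a placeholder.
import subprocess
import urllib.request

PING_URL = "https://hc-ping.com/00000000-0000-0000-0000-000000000000"
BACKUP_PATH = r"C:\Users\mom\Documents"  # what we're protecting

def backup_and_verify() -> None:
    subprocess.run(["kopia", "snapshot", "create", BACKUP_PATH], check=True)
    subprocess.run(["kopia", "snapshot", "verify"], check=True)

try:
    backup_and_verify()
    urllib.request.urlopen(PING_URL, timeout=10)             # success ping
except subprocess.CalledProcessError:
    urllib.request.urlopen(PING_URL + "/fail", timeout=10)   # failure ping
    raise
```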
Doing the snapshot verify, or the equivalent thing in a different tool if you choose that instead, is a good thing for your health check. You also probably want to check a couple of other things. You want to make sure, obviously, that the size of the backups is what you're expecting. If the amount of data backed up suddenly goes up by a large amount or down by a large amount, then it might be a configuration problem in one way or the other. And especially, you know, if you're backing it up to Backblaze and you're paying based on the amount of data you're backing up, then you might not want it to be
backing up stuff it wasn't meant to or whatever. But the other thing is, you know, while having health checks for your backups is very good, you want to make sure the backups are happening. That doesn't exempt you from actually having to test restores and make sure that everything actually works when you're restoring the things as well.
And it really gets into deciding, especially with the Windows PC, what are you backing up? Are you backing up just your mom's data files, like the stuff under her My Documents in a couple places? Or are you trying to make sure that you can restore and all the applications work from the restore, which is a much bigger problem? Yeah, Alan's suggestion about monitoring for the...
large or small incremental sizes, that's a good idea in general because it's not even necessarily that you're looking for a misconfiguration issue. While that can uncover misconfiguration, that can also be your first indicator that something big happened that you might want to be aware of on the system that you're backing up.
If you're normally seeing about, you know, a gig or so of changed data per day and all of a sudden you see 30 or 40, that might be an indication of anything from a ransomware attack to just, you know, somebody. Like, if it's a production situation with multiple employees, you could have somebody that decides to have Dropbox and Google Drive on the same folder, and they start dumping the same files into each other. You know, there's all kinds of problems there
that can suddenly result in backup sizes either getting much larger or much smaller. And it's not a thing of like, oh no, I know for a fact this is broken. It's more just like, this is unusual enough. I should probably take a look at it.
And seeing those backup sizes is really good for that first indicator. Yeah, and I've heard of more than one place where that giant jump in the size of the backup or snapshot was how they first became aware that ransomware had hit some machine on their network.
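As a sketch of that sanity check (the threshold, and how you obtain per-run sizes, depend entirely on your backup tool; only the comparison is the point):

```python
# Flag a backup whose incremental size is far outside the recent norm.
# How you obtain the sizes depends on your tool; the comparison is the point.
from statistics import mean

def is_unusual(latest_bytes: int, recent_bytes: list[int],
               factor: float = 5.0) -> bool:
    """True if the newest increment is 5x bigger or smaller than average."""
    baseline = mean(recent_bytes)
    return latest_bytes > baseline * factor or latest_bytes < baseline / factor

# ~1 GiB of change per day, then suddenly 40 GiB: time to take a look.
history = [1_000_000_000, 900_000_000, 1_100_000_000]
print(is_unusual(40_000_000_000, history))  # True
```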
For more on helping decide what to back up and some of the common pitfalls that people run into when taking backups, you should check out the webinar we did a couple of weeks ago, RAID Is Not a Backup and Other Hard Truths About Disaster Recovery, because it talks a lot about the pitfalls of deciding exactly what to back up. Maybe it doesn't apply directly to your mom's Windows PC, when we were talking about
Do you back up the installed applications or do you assume with your package manager you're going to be able to reinstall the same application with the same version that's going to take your existing config file and database? And well, the same thing really does apply to Windows. Our solutions maybe don't, but you really have to decide are you backing up just the files and you'll do a fresh install of Windows and reinstall the apps and then just restore the files or are you actually going to try to restore the whole system as it was? And depending on how
production-important the machine is, your answer to that might be a lot different. It's about time we started charging you for all these Klara plugs, Alan. I'm not charging people that go to the webinar, so... Fair enough, fair deal.
Right, well, we better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or your feedback. You can find me at joeress.com slash mastodon. You can find me at mercenarysysadmin.com. And I'm @alanjude. We'll see you next week.