
2.5 Admins 228: Century-Scale Storage

2025/1/2

People
Alan, Jim, and Joe
Topics
@Joe: To make sure data is still accessible 100 years from now, you have to consider a range of mechanisms and factors. Century-scale data storage is not just a technical problem; it is also a social challenge, such as keeping later generations interested in the data and maintaining the technology around it. @Jim: There is no proven answer for century-scale data storage, because digital data has not yet existed for a century. Most companies have short lifespans, so relying on a single company to maintain century-scale storage is unrealistic. @Alan: Century-scale data storage is a social challenge, not just a technical one. The short-term design of hardware and software makes long-term storage difficult, and it requires generational infrastructure and organizations to maintain it.


Key Insights

What are the key challenges of storing data for 100 years?

The main challenges include ensuring generational infrastructure, managing human resources over decades, and maintaining hardware and software compatibility. Data must be migrated to newer formats and hardware regularly, and institutions must be built to sustain interest and funding over a century.

Why is generational infrastructure critical for century-scale storage?

Century-scale storage exceeds the lifespan of a single human career, requiring organizations to recruit and train new talent over generations. This ensures continuity in managing and maintaining the data, as well as adapting to technological changes.

What is the 3-2-1 rule for data storage?

The 3-2-1 rule recommends having three copies of data, stored in two different formats, with at least one copy in a separate location. This strategy mitigates risks like hardware failure, natural disasters, or format obsolescence.

Why are open standards preferred for long-term data storage?

Open standards are preferred because they are not tied to a single commercial entity, reducing the risk of format obsolescence. They are also more likely to be maintained and supported over long periods, ensuring data accessibility.

What are the limitations of using hard drives for century-scale storage?

Hard drives have a limited lifespan and require frequent replacement. Additionally, maintaining the hardware, software, and expertise to read the data over decades is challenging. Funding and institutional support are also critical to sustain the effort.

How does RAID technology help in long-term data storage?

RAID technology, particularly with ZFS, helps mitigate data loss by providing redundancy and error correction. Regular scrubbing ensures data integrity, but the hardware must still be replaced periodically to maintain reliability.

What role does cold storage play in century-scale data preservation?

Cold storage, such as tapes or optical media, can serve as a backup in a multi-factor strategy. While it cannot be monitored or updated as easily as warm storage, it provides an additional layer of protection against catastrophic failures.

Why is institutional funding crucial for century-scale storage?

Institutional funding ensures the continuous financial support needed to maintain hardware, software, and human resources over a century. Without sustained funding, the effort to preserve data is likely to fail as priorities shift over time.

What are the risks of using proprietary file formats for long-term storage?

Proprietary file formats risk becoming obsolete if the supporting company goes out of business or stops maintaining the format. This can make it difficult or impossible to access the data in the future.

How does the concept of failure modes apply to century-scale storage?

Failure modes in century-scale storage involve planning for decades-long risks, such as weak management or institutional neglect. Redundancy and resilience must be designed to survive multiple failures over extended periods.

Transcript


Two and a half admins, episode 228. I'm Joe. I'm Jim. And I'm Alan. And here we are again. There's an interesting essay called Century-Scale Storage by Maxwell Neely-Cohen for the Harvard Law School Library Innovation Lab. It's very interesting.

Very long, but it's very interesting. Yeah, so it's a 12,000-word essay on, basically, if you wanted to store some information and definitely be able to access it 100 years from now, what mechanism would you use? And it kind of walks through a number of the different options that exist today, and the pros and cons of trying to use them, and just some of the other considerations that go into how we make sure we can keep this data for 100 years.

And they start with the story of the RAMAC drive, one of the first hard drives that IBM invented 70 plus years ago. And not that long ago, the Computer History Museum managed to find one of like four that still exist and get it working. And when they read the data off of it, they found random data from like an insurance company and a car manufacturer and so on, and were still able to read it.

And, you know, we look at today's hard drives and you really don't expect that 70 years from now you'll be able to get that hard drive to read much. But they go on to point out some of the problems they faced doing that. The parts are no longer being manufactured. The machine to actually make those parts doesn't exist anymore. And so basically it took the collaboration of a bunch of different institutions and companies to fabricobble the necessary materials and

machinery to make the hardware to make recovering this data even possible. And that was just to read data off a single unit that happened to have survived. Out of all the ones that were made, only, I think, three specimens are actually known to exist. And this, as far as I know, is the only one that works. Fabricobble is good. I like fabricobble. One of the other things that the article points out early on is that if you're asking the question, how do I store digital data in such a way that it will survive for 100 years?

It's always a guess, because we haven't had digital data to keep around for a full century yet. So there is no proven answer yet. And this builds up eventually towards the climax of the piece. But the thing that really resonated with me is when he starts talking about the fact that really, if you want to guarantee century-scale storage, the reason that century scale specifically is so interesting is because while it's

Certainly within the plausible reach of a single human lifetime, it is thoroughly outside the scale of a single human career, which means that if you want to keep data available for a century, you actually need generational infrastructure. You need more people than just yourself and more people than just your friends that are the same age as you. You need an organization that can recruit new talent that will then step in and replace people as they retire out of older jobs.

And that's where the chaos really comes in because it turns out that managing humans at scale for long periods of time is really, really difficult. And you can't just fix it in code either. Yeah, as ever with a lot of tech things,

It's as much a social challenge as a technological one. Yeah, and they go on to talk about even if you have a company set up for it, most companies only last about 15 years, and 50 is really kind of stretching it. Even back before, you know, 60 years was the average, but the average keeps going down currently. So even if you built a whole company around trying to keep this one set of data going,

it's likely it wouldn't survive. And so you really have to think a lot, to Jim's point, about not just how are we going to keep the data for now, but how are we going to get other people to be interested in keeping this data around, and have them learn how the tools for this work, and keep it going. And it's going to have to go through multiple generations of people. The point of this can't get lost. The desire to do this can't get lost. And those can be some really interesting challenges.

And you also, you have to think about failure modes in four dimensions. Think of it kind of like building a RAID array, right? So if you build an array with, you know, RAID 5 topology, diagonal stripe parity, single parity, you can survive one failure and recover, right?

So think about this in terms of like decades rather than drives. Like if you have one decade of weak management that just kind of completely forgets really what they're supposed to be doing and aren't enforcing policies and making sure that new hires are happening, you know, can your organization survive that one failure? Well, can it survive two in a row?

You're now looking at uptime across decades and centuries rather than just talking about uptime like from day to day within a single business. But the principles are the same and how you have to be thinking in terms of failure mode is pretty much the same.

And when you start considering human beings that you haven't even met and who may not exist during your own lifetime as like replacement components that are just supposed to fit into place, you start to see just how challenging this really is. Yeah. Like we've talked on the show before about how the current generation of young people

Young students don't really understand the concept of, like, directories and subdirectories and file systems so much as, like, I save a file in a program in a cloud and that's it, it's just there. And so if you're talking about this data, the concepts that we use to even talk about the data aren't likely to exist in a couple of generations, let alone have someone that understands how it works today in order to keep it going.

And the other thing they point out, you know, the hardest thing with the RAMAC and so on was, if you're trying to keep a specific thing working, you need to keep the hardware for that, which might involve keeping the machinery to make that hardware working. And then also the software. And right now, both hardware and software are really tailored for the short term, right? They're disposable and replaceable. But if you don't take care of it, then you end up with hardware that doesn't work that way. Like,

nuclear power plants that were designed on VAX, and then they stopped making the hard drives to replace the failed drives. And then eventually, they managed to kind of make it work with some virtualization on normal x86 machines, but they're still pretending to keep this old thing working rather than actually having modernized it. And so there are these different approaches to how you would do it. And so they looked at some museums and how archivists that do this professionally deal with

non-digital stuff as well. And the professional archivists recommend making and storing multiple copies of anything in multiple formats, right? Your typical 3-2-1 rule. You need three copies of it, in two different formats, and at least one of them in a different place. Because, you know, what if there's an earthquake or a hurricane or a fire that takes out the building? If that's where all three of your copies are, then you have no copies now, and so on. And then when storing digital data, they recommend...

Try to use file formats that are widespread and not kind of esoteric. And hopefully they don't depend on a single commercial entity. So if you're using a file format that depends on one company, if that company's gone, nobody's probably going to maintain that file format. So try to use something that's more widespread and likely open source so that it has a much longer longevity. Yeah, an open standard is the key here. Not necessarily open source, but an open standard. Yeah.

And if you're looking at things from an archivist's perspective, you not only want to look for the most open protocols and most open standards and most widely used, you also, ideally, like, the first thing that you want to store with your collection is a Rosetta Stone of sorts. You want to start with the absolute easiest possible thing to read and interpret and

even if technology changes, even if human language changes, that will then lead you through all the steps to enough knowledge to decode the actual data once you get to that part. Yeah, and so even if you're relying on something open source, maybe if you're archiving this, you want to archive a copy of the source code for the open source thing that can read this file.

You're still likely to have trouble 30 years in the future getting it to compile on an architecture that doesn't exist anymore or whatever. But archiving more of the pieces and tools to be able to make use of it will...

make more sense there. Yeah, archive the compiler as well. Basically, you need a whole self-hosting operating system, but even that requires something to bootstrap it, and it gets very complicated very quickly when you're trying to be like, how would we reimplement an operating system and then a compiler and the source code from scratch if we don't have a working compiler and source code?

And while we will still have an operating system, I'm sure, 50 years from now, it'll probably be in a different language for a different architecture than what we have now. And I'm sure we will still have emulators for x86 because of how many emulators we have for Commodore 64s and 6502s and so on. But, you know, you don't want to have to count on that. We'll see. Alan, the first stage of that bootstrap process is getting them to type in a bit of code from the back pages of a 1983 Byte magazine. Yeah.

But the other thing they recommend, basically, is use non-proprietary formats that are platform independent, don't rely on one specific operating system or CPU architecture, and that are unencrypted,

lossless, and uncompressed. Because don't assume you're going to have the same decompressor in the future. And you definitely want lossless because the technology is going to be better in the future. We don't want to have lost some of the data for no reason. And so you want all of those factors built into the file format. And then you're going to need hardware that can read it, software that can actually understand it, and many copies of it for it to hopefully survive.
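
[Editor's note: as a minimal sketch of the "open, self-describing format, many verifiable copies" advice above, you could keep a plain-text manifest of file sizes and SHA-256 checksums alongside the archive, so a future reader needs nothing more exotic than a hash function to check a copy. The paths and manifest layout below are illustrative assumptions, not anything prescribed in the essay or the episode.]

```python
# Sketch: write and verify a plain-text manifest for an archive directory.
# Uses only the Python standard library; the MANIFEST.txt layout is made up.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(root: Path) -> None:
    """Record checksum, size, and relative path for every file, as UTF-8 text."""
    lines = []
    for p in sorted(root.rglob("*")):
        if p.is_file() and p.name != "MANIFEST.txt":
            lines.append(f"{sha256_of(p)}  {p.stat().st_size}  {p.relative_to(root)}")
    (root / "MANIFEST.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")

def verify_manifest(root: Path) -> bool:
    """Re-hash every listed file; report anything missing or changed (bit rot)."""
    ok = True
    for line in (root / "MANIFEST.txt").read_text(encoding="utf-8").splitlines():
        digest, _size, rel = line.split("  ", 2)
        target = root / rel
        if not target.is_file() or sha256_of(target) != digest:
            print(f"MISMATCH: {rel}", file=sys.stderr)
            ok = False
    return ok

if __name__ == "__main__":
    root = Path(sys.argv[2]) if len(sys.argv) > 2 else Path(".")
    if len(sys.argv) > 1 and sys.argv[1] == "verify":
        sys.exit(0 if verify_manifest(root) else 1)
    write_manifest(root)
```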

So then they talked about what are the problems with doing this with hard drives? And they say, well, hard drives don't last very long, but if you use a RAID array, and especially something like ZFS where you're going to be able to do a scrub and ensure the bit rot is taken care of,

then you can keep it for a while, but you're going to have to figure out how we're going to fund replacing this hardware every four or five years to keep it going, have people who know how to use it and will care enough to replace the hard drives, and, importantly, migrate the data to newer, bigger drives that will last the next five years, and the five years after that.

Ideally, you'd possibly also look at converting the data into current formats, right? Maybe you don't have to do it every five years, but eventually whatever file format you're using is probably going to be deprecated and you're going to want to convert the data to something that's going to last longer. It's probably not going to last all the way to the 100-year mark, so you're going to have to convert it a couple of times over the course of the century. But that means having somebody who knows how this stuff works, how to read the old thing, and how to construct the new thing so it's going to be good enough.
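
[Editor's note: the routine maintenance Alan describes for a warm copy is mostly scrubbing and watching for failing disks. A hedged sketch of automating that, assuming a ZFS pool named "archive" exists on the host and the zpool utility is installed; both are assumptions, not details from the show.]

```python
# Sketch: start a ZFS scrub and flag a pool that reports problems.
# Assumes the zpool command is on PATH and a pool called "archive" exists.
import subprocess
import sys

POOL = "archive"  # hypothetical pool name

def scrub_and_check(pool: str) -> int:
    # Kick off a scrub; this fails harmlessly if one is already running.
    subprocess.run(["zpool", "scrub", pool])
    # "zpool status -x <pool>" prints a short "is healthy" line when all is well,
    # and full details (degraded vdevs, checksum errors) when it is not.
    status = subprocess.run(["zpool", "status", "-x", pool],
                            capture_output=True, text=True, check=True)
    if "healthy" not in status.stdout:
        print(status.stdout, file=sys.stderr)  # something needs a human
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(scrub_and_check(POOL))
```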

If we started this 30 years ago with small hard drives, it would have been on like FAT16 and then we would have upgraded it to FAT32 and then we would have moved it to something like ZFS.

And then eventually to whatever comes after ZFS in 50 years, and so on and so on. So in order for it to last, especially using any digital thing, you're going to have to keep migrating it to newer hardware and keep migrating it to newer software, and having people around that are going to do that. And like Jim said, you're going to have to design redundancy so that one or two of the people who are supposed to do that not doing it,

two or three times in a row, is still not enough to lose all the copies of the data. Some of us did start doing that 30-something years ago, Alan, and some of us still have files that were originally saved on Apple II ProDOS. Very nice. If you need a couple of those old Nagel, like, line drawing images from the 80s that they used to put in, like, all the barbershops and stuff, if you needed that in, like, you know, a raw bitmap file format, I got you, buddy. Yeah.

You can just hex dump it straight to your terminal and there you go. As long as your terminal happens to, you know, have Apple II video display RAM architecture. Well, at one point I had to buy a SCSI card to plug in an old hard drive from some ancient Mac that my dad had done a lot of writing on.

And it turns out that those files were ClarisWorks files. I don't know if you remember that word processor. And that proved relatively tricky for me 10 or 15 years ago to convert into something that was usable. I'm not sure I'd be able to do it today. WordStar was a right bugger to get stuff out of in the mid-2000s. I started having clients who had done things in WordStar in like the late 80s or early 90s and still had floppies and wanted access to the data.

The first challenge is, can I get this data off of a 20-year-old floppy drive that's just been moldering in a cabinet somewhere? And then the second question is, how close to intact can I extract this data from this ancient, very proprietary file format? And that particular piece of software was one of the gnarliest. The other one was, it was very difficult to get stuff out of abandoned versions of Microsoft Works.

Because in the early days of Windows, Microsoft was making a new version of Works with a file format incompatible with the last one about every other Thursday or so. And nobody ever really used Microsoft Works for anything, quote, serious, unquote. Well, nobody except the people who had actually been using it.

So you would find a ton of, like, little mom and pop small businesses that had been very earnestly, like, doing everything in Works because it came with the computer and it said Microsoft on it. And it sounds a lot like Word. So they thought that was, you know, the right thing to do. And then you wind up orphaned and abandoned. Yeah. And so currently, to do a hard drive based century-scale thing, you'd want to have multiple separate machines

full of hard drives using good RAID, so probably ZFS, in multiple different locations managed by multiple different people and replicating the data and scrubbing it regularly. And then every couple of years, replacing the hard drives with new ones and keeping that up to date. So that's upgrading ZFS and eventually copying it off to whatever replaces ZFS or even just newer ZFS or whatever it is and constantly keeping that data alive and refreshed across

all those copies with all those people and

Like you've said, designing it from RAID all the way up. So this server is going to have some components that fail, but even if we lose this whole server and that whole server, we still have these other three. And if we lose those three people, we have these other people. And it really depends on how much data you need to store. If it's a very small amount of data, there are some more interesting formats. But as soon as you're talking about any scale where etching it into something isn't really an option, then I still think hard drives are the best way because...

Cold storage is really not possible to do at century scale. And so it has to be warm. You want something where you can constantly check it and verify it and synchronize it. You know, the interesting thing here is, like, when you get bit rot so bad that one of these copies is damaged, can you get a good reference copy to get one of the broken copies back to working again? The main reason why we have printed material that's lasted so long is we have the ability to reprint it.

We don't have to have a book from 400 years ago; we just have somebody print a new copy of it every 50 years. And as long as we have a copy and we haven't, you know, mistranslated it or changed the text, then we still have a good copy. And so being able to synchronize those is one of the biggest reasons that doing it with hard drives is, in my opinion, much better than doing it with tape.

because you can randomly access it and check all of it much more efficiently than with tape. And tape drives are a bigger upfront investment. For huge scales of data, they have some interesting trade-offs, but it really depends on the scale you're looking at whether that makes any sense. I think that something that we've not talked about is how to finance this. If you can build an institution of people that are interested in keeping this alive, then maybe you can get fresh

contributions of money all the time. Which ultimately is what you have to do, because again, it's supposed to last a century. You're not going to be around to pay attention to it. Even if you have an unlimited budget to throw at it, once you're dead, you have to have done something to keep all the people who are still alive (and you're not) from deciding to just use all of your money for something different, now that you're gone and can't say anything about it anymore.

So ultimately, there is no way around it. At century scale, it's all about the institution, really. Yeah, and so that was really what they said in the article, was that, quote, attaining century-scale storage using hard drive systems is less of a question of the technology than one of institution building, funding, real estate, logistics, culture, and a commitment to digitally preserving everything surrounding and interfacing with your storage system.

If you're going to keep an operating system working and up to date, then that has to work too. And like we talked about, CPU architectures change, hardware changes, all these things change. And if you can't continue to roll that data forward so that it stays on a modern thing, then you have to keep all of the not modern things working as well. And that very quickly webs out to we have to have our own chip foundry that still works.

or something like that in order to keep this going. And that's the main reason why I think hard drives are the way to keep this rolling forward. Because if we have the data and we can keep rolling it forward every five years, not trying to do this only once a generation or something, we stand a much better chance of continuing to have it.
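
[Editor's note: one concrete way to keep several warm copies rolling forward, offered only as a hedged sketch: snapshot the archive dataset and replicate it incrementally to a second machine with zfs send/receive. The dataset, hostname, and snapshot naming below are made-up placeholders, and the script assumes SSH key access to the replica.]

```python
# Sketch: incremental ZFS replication of an archive dataset to a second machine.
# Dataset, remote host, and snapshot naming scheme are hypothetical.
import datetime
import subprocess

DATASET = "archive/library"        # local source dataset (assumed to exist)
REMOTE = "replica.example.org"     # machine holding another warm copy
REMOTE_DATASET = "backup/library"

def replicate(previous_snap: str | None) -> str:
    snap = f"{DATASET}@{datetime.date.today():%Y-%m-%d}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Full send the first time; incremental (-i) against the last common snapshot after that.
    send_cmd = ["zfs", "send", snap] if previous_snap is None \
        else ["zfs", "send", "-i", previous_snap, snap]
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap

if __name__ == "__main__":
    # In practice, the name of the last replicated snapshot would be recorded somewhere durable.
    replicate(previous_snap=None)
```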

There are at least some interesting ideas to explore along the lines of cold storage kind of being like one of your three in the 3-2-1, right? So I don't think it's really questionable that if you really want to be certain that you have the data intact for a long period of time, you have to be storing it warm.

at least in one copy. But what if you said, okay, so we always have it warm because that's going to be the one that we can detect failures the most quickly when it's warm because we can, you know, scrub it constantly and we can see what's going on. We can find out instantly if something's up, we can replace components, all this good stuff. But what if one of our forms of backup was something along the lines of, you know, like Microsoft Glass or tapes or whatever, you just...

periodically you do make a cold backup from your hot archive. And that's one of your three. So you say, okay, well we can't monitor this cold one and we can't update it as rapidly. And there's, you know, all these other issues with it, but it is still another factor. Basically, you know, it's a multi-factor backup and this is one of your factors and it would suck as a standalone backup method, but as part of a greater strategy, you know,

There might be some worthwhile cost savings in there, and, you know, kind of like you try to arrange things so that different classes of failure are unlikely to happen at the same time. And just kind of the failure cycle, like what the bathtub curve looks like on the cold storage cycle,

is different enough from the warm storage that it seems like it might be useful as part of a deliberately staggered strategy. And like what I was talking about before with, can you survive one decade? You know, when the institution loses its way, if it picks it up again later, can you survive two decades? You know, that kind of thing we're talking about. Well, maybe that's kind of where the cold storage comes in. Okay. So everything just completely sucked for 20 years straight and your entire hot archive is

caught on fire, everything's gone, blah, blah, blah. But maybe the complete idiots didn't manage to destroy the treasure trove of, like, Microsoft Glass that you had in a separate location. And when the institution picks up its policies and its will to continue again the next decade, they can bring all that back onto hot storage and kind of pick up where they left off.
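
[Editor's note: the "cold copy made from the hot archive" idea can be as simple as periodically writing a dated, uncompressed tarball plus a plain-text checksum to whatever offline medium is on hand. The source and mount-point paths below are assumptions for illustration, not a recommendation from the episode.]

```python
# Sketch: dump a dated cold copy of the warm archive onto offline media.
# /srv/archive and /mnt/cold are hypothetical paths.
import datetime
import hashlib
import tarfile
from pathlib import Path

SOURCE = Path("/srv/archive")   # the warm, regularly scrubbed copy
COLD = Path("/mnt/cold")        # mounted optical / tape / glass staging area

def cold_export() -> Path:
    stamp = datetime.date.today().isoformat()
    tar_path = COLD / f"archive-{stamp}.tar"
    # Uncompressed tar: one less tool a future reader has to reconstruct.
    with tarfile.open(tar_path, "w") as tar:
        tar.add(SOURCE, arcname=f"archive-{stamp}")
    # Record a checksum next to it in plain text so the cold copy can be verified later.
    h = hashlib.sha256()
    with tar_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    (COLD / f"archive-{stamp}.sha256").write_text(f"{h.hexdigest()}  {tar_path.name}\n")
    return tar_path

if __name__ == "__main__":
    print(cold_export())
```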

Yeah, I think if you consider things like, what if there's an EMP that will take out all the computers? And so having that cold, in-a-bunker copy of it might be how you go and regenerate those warm copies again. So yeah, I think that's important. And I think the biggest thing is that, you know, the 3-2-1 rule is the minimum. Yeah. So maybe we need a lot more than three copies, in more than two formats, in more than two locations. And

to your point about classes of failure, maybe we add even more things: like, on top of storing with RAID, we actually generate parity for it, like those old .par files that we used to use when we had really bit-noisy internet connections that might corrupt data as you were transferring it. Hey, if we add this extra parity data in exchange for 10% more storage, we'd be able to survive this many bit flips and get the original data back. And maybe part of the answer from this study where they looked at

hard drives, tape, removable media, optical-type media, and so on, is that, well, if you're wanting it to last at least 100 years, maybe you want to try all of those. Because hopefully they're not all going to fail at the same time, right? Maybe burned Blu-rays will last 100 years,

or maybe they won't last as long as we want. But if we have that and hard drives and tape, it's likely that their deaths won't all line up with each other. And we'll be able to regenerate whatever replaces Blu-ray as our removable media, and whatever the next generation of tape is, and whatever the replacement for hard drives is, and be able to keep rolling all of those forward so that when we get to 100 years, at least one of these is still usable.
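
[Editor's note: the ".par files" Alan mentions above live on as par2. A hedged sketch of adding roughly 10% recovery data to an archive directory and later verifying or repairing it, assuming the par2cmdline tool is installed; the paths and redundancy percentage are illustrative.]

```python
# Sketch: wrap par2cmdline to add extra parity data to archived files and
# repair from it later. Assumes the "par2" binary is installed; paths are made up.
import subprocess
from pathlib import Path

ARCHIVE = Path("/srv/archive")           # hypothetical archive directory
PARITY_SET = ARCHIVE / "archive.par2"

def create_parity(redundancy_percent: int = 10) -> None:
    files = [str(p) for p in sorted(ARCHIVE.iterdir())
             if p.is_file() and ".par2" not in p.suffixes]
    # -r sets how much recovery data to generate, as a percentage of the input.
    subprocess.run(["par2", "create", f"-r{redundancy_percent}",
                    str(PARITY_SET), *files], check=True)

def verify_or_repair() -> None:
    # "par2 verify" exits nonzero when damage is found; "par2 repair" then
    # reconstructs the damaged files, provided enough recovery blocks survive.
    result = subprocess.run(["par2", "verify", str(PARITY_SET)])
    if result.returncode != 0:
        subprocess.run(["par2", "repair", str(PARITY_SET)], check=True)

if __name__ == "__main__":
    create_parity()
```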

But that just brings us right back again to the real point, which is that the institution is the most important part of all this. Because as we enumerate different ways to additionally store the data and more copies in different formats, the cost of doing that and maintaining it keeps going up. Every time we mention another way to technically improve this process and make it more recoverable and less likely to lose, it costs more and more. And again, we have to convince somebody to keep paying for that.

And Alan and I can tell you right now, today, even in businesses that are very active day to day, where somebody is extremely concerned about the bottom line and, you know, keeping the mission moving forward, it's already hard to get them to invest in, like, any kind of backup at all, because it's just not how the typical human brain works most of the time.

Okay, this episode is sponsored by ServerMania. Go to servermania.com slash 25A to get 15% off dedicated servers recurring for life. ServerMania is a Canadian company with over a decade of experience building high-performance infrastructure hosting platforms for businesses globally. ServerMania has up to 20 gigabits per second network speeds in eight locations worldwide for optimized global reach.

They have flexible custom server configurations tailored to unique needs, as well as a personal account manager offering free consultations. With 24/7 live chat support and support ticket response times under 15 minutes, you can always keep your systems running smoothly.

Alan's been a happy ServerMania customer for over seven years, so support the show and join him at a hosting provider that truly delivers. Get 15% off dedicated servers recurring for life at servermania.com slash 25A and use code 25ADMINS. That's servermania.com slash 25A and code 25ADMINS.

Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com slash support. And remember that for various amounts on Patreon, you can get an advert-free RSS feed of either just this show or all the shows in the Late Night Linux family. And if you want to send in your questions for Jim and Alan or your feedback, you can email show at 2.5admins.com. Sten writes...

I'm using TrueNAS SCALE as my home NAS, and I'm using its inbuilt Cloud Sync feature to send backups to Wasabi. This is not quite what I'd call a backup. Under the covers it runs rclone sync, and if I delete all my data on my NAS, the next run of Cloud Sync will happily delete all the data on the remote end. I would love it if TrueNAS replaced rclone for this use with restic. Obviously I could set that up on my own,

But I was thinking about using Wasabi's object versioning behind the bucket that Cloud Sync writes to.

Are there examples you've seen where people have configured retention that way? Am I missing any obvious concerns? How should I configure the bucket? People often mistake a copy of their data for a backup. And so, like you said, doing rclone sync is going to make a copy of all your data in the object storage at Wasabi. But if you have no snapshots or something, like you talked about with the retention, if you don't keep multiple copies of your data, then

all you have is one copy of what your data looked like. And if I accidentally deleted a file or this file got corrupted by the application crashing, your backup has a corrupted copy and is no more useful than the original copy that was broken. And so a real backup, you're going to have history, not just a copy. And that can be the problem with something like rclone sync. Obviously, the downside is keeping...

versions is going to take more space and cost more money and be more to manage. So you have to decide what your threat model is. So if your concern is your NAS dies and you want to be able to restore the files, then that sync feature makes sense. And if you want to make sure that you're not paying to store copies of files you've deleted,

Again, that can make sense. But if your threat model is I might accidentally delete a file or I might get hit with ransomware and not notice soon enough to stop my backup from erasing the good copy in the cloud with the encrypted copy, then you want something that's a bit different. We're looking at exactly the reason why I frequently advise people that the answer to their question about backing ZFS up to cloud services like Amazon, my answer is usually no because of exactly problems like this. It's just...

it doesn't tick all the boxes I need it to tick. And you can't really make it behave the way I want it to behave, which is like a nice, simple replication target on another pool where you can just replicate your snapshots nice and cheaply and everything block level deduplicated at the snapshot to snapshot level. And like everything's just done and manageable and cheap and easy. It just, once you have to leave that model and go back to something that is based more on concepts of the old tape drive style backups, it's,

everything gets clunky and expensive and a giant pain in the ass, just like old school tape drive backups. Yeah, so the object versioning will help you, especially in the case of a file got corrupted or accidentally changed or something. But it's not clear what happens to those previous versions of the object when you delete an object. You know, if you only ever had one version of the file and then you delete it, can you configure the cloud to do what you want and maybe retain that for an amount of time before it gets expunged?

In theory, as long as you can mark the objects in Wasabi immutable, and as long as you can keep track of which objects belonged to which backup run, there's certainly a way to turn it into something like what you want.

you would basically have to re-implement ZFS snapshots on top of Wasabi by saying, okay, Wasabi, this group of objects is all in exactly the condition it's in right now. They all have to be immutable because they're all part of what we reassemble into this backup that I made on this date.

But how much donkey work that's going to be on your end, I don't know for certain, but I suspect it's going to be pretty immense unless somebody has already given you a framework to work within to easily associate objects within Wasabi with one another, make them immutable, set expiration dates, like all this kind of stuff that you would need in order to reimplement that functionality.
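
[Editor's note: as a rough illustration of the bucket-side pieces Jim is describing, and only as a sketch: Wasabi speaks the S3 API, so boto3 can enable versioning and list the versions hiding behind a key. Whether that gets anywhere near restic-style snapshots is exactly the open question above. The endpoint, bucket name, prefix, and credentials are placeholders.]

```python
# Sketch: enable bucket versioning on a Wasabi (S3-compatible) bucket via boto3
# and inspect the versions behind one prefix. All names and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",        # assumed Wasabi S3 endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

BUCKET = "nas-cloudsync-bucket"  # hypothetical bucket that Cloud Sync writes to

# Keep old versions around when rclone sync overwrites or deletes an object.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# List what exists behind a prefix; deletions show up as delete markers,
# with the older versions still retrievable by VersionId.
resp = s3.list_object_versions(Bucket=BUCKET, Prefix="documents/")
for v in resp.get("Versions", []):
    print(v["Key"], v["VersionId"], v["LastModified"], v["IsLatest"])
for d in resp.get("DeleteMarkers", []):
    print("delete marker:", d["Key"], d["VersionId"])
```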

There's also the question, and I don't know the answer, of, you know, how large are these objects? Because when ZFS does what we're talking about doing, it does it at the block level, and blocks are usually rather small. By default, they're going to be 128 kilobytes for just a random block out of a random file on a random data set with, you know, nothing else configured, or 4 kilobytes or maybe even just 512 bytes for metadata blocks or small files.

So this is all very, very efficient. But if Wasabi, for instance, is, I don't know, maybe it's creating one gigabyte individual objects that it reassembles together. Well, then that's only going to be the level of granularity that you have in terms of deduplication from one, quote, snapshot to the next, quote, snapshot. So how well it's all going to work depends.

I don't know all the answers to be able to tell you that, and I suspect it's going to be a lot of work to find out. But I'd be very interested in your results if you get started putting it together. Yeah, but to your point, if a tool like RESTIC works better, you can just install and run that in the Docker containers or whatever that TrueNet scale provides and be able to back up your data using your own tool. You just won't have the TrueNet GUI managing it for you.

Right, well, we'd better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or feedback. You can find me at joeress.com slash mastodon. You can find me at mercenarysysadmin.com. And I'm at Alan Jude. We'll see you next week.