
2.5 Admins 215: Still no VLANs

2024/10/3

People

Allan

Jim
A technical expert focused on IT automation and network security.

Joe
Facing an underwater car loan, seeking multiple solutions to ease the financial burden.
Topics
Joe argues that for long-term data storage you should choose warm storage over cold storage: only regular checks and copying to newer media can keep the data safe. He stresses that cold storage is a gamble, because you don't know whether the data will still be usable when you need it. He also points out that recovering data from a damaged hard drive is expensive, around $1,000 per drive.

Jim argues that long-term data storage, especially on magnetic media, requires regular checks and copying to newer media, or the data is easily lost. He points out that warm storage is more cost-effective than cold storage, because verifying cold storage is expensive. He also stresses that if you must use cold storage, tape is better than hard drives, because hard drives are machines and fail more easily.

Allan argues that long-term data storage should use warm storage, with regular checks and copying to newer media. He points out that all storage media degrade over time: entropy always wins. Even cold storage requires periodically reading the data and writing it to new media, and once you're checking and copying that regularly, warm storage makes more sense than cold. He also notes that stored data should be kept in file formats with long-term compatibility and converted to modern formats periodically, that physical damage to media can also cause data loss, and that mechanisms like checksums should be used to detect corruption. In his view, ensuring long-term preservation requires warm storage, checksums, and redundancy.

Deep Dive

Key Insights

Why are 1990s hard drives failing to preserve music industry data?

About 20 percent of hard drives from the 1990s stored in temperature and humidity controlled vaults by the music industry no longer spin, making the data unreadable. This highlights the need for regular data checks and refreshing to newer media to ensure long-term data preservation.

Why is cold storage not the best solution for long-term data preservation?

Cold storage involves parking and stacking data without regular checks, leading to potential data loss due to media degradation. Warm storage, where data is regularly checked and refreshed, is more reliable for ensuring data integrity over decades.

Why are FAA air traffic control systems so outdated and unsustainable?

The FAA's air traffic control systems are outdated due to a lack of funding, technology refresh, and spare parts. Many systems are over 20 years old, and some are even 50 years old, with no clear plans for replacement. This has led to critical safety and operational issues.

Why is it difficult to replace air traffic control systems?

Replacing air traffic control systems is challenging due to the high risk of disruptions and the complexity of these systems. The FAA is often risk-averse and hesitant to modify or update systems that are deemed to be working, leading to long planning and deployment cycles.

Why is it important to have a maintenance and replacement plan for critical systems?

Critical systems, such as air traffic control, require regular maintenance and planned replacements to avoid technical debt and ensure long-term reliability. Without a plan, systems can become unsustainable and pose significant risks.

Why is it recommended to use a router to isolate devices from a roommate’s network activities?

Using a router to isolate your devices from a roommate’s network activities is recommended because it creates a separate network, protecting your devices from potential intrusions and security risks. This is a straightforward solution compared to configuring VLANs.

Chapters
The podcast discusses the unreliability of cold storage for data archiving, highlighting the music industry's experience with failing hard drives. It emphasizes the importance of warm storage, regular data checks, and using appropriate media like tapes for long-term data preservation.
  • Cold storage is a gamble; regular data checks are crucial for long-term preservation.
  • Hard drives are less reliable than tapes for cold storage due to mechanical failure.
  • Warm storage is more cost-effective and reliable than cold storage in the long run.

Transcript

Two and a half admins, episode 215. I'm Joe. I'm Jim. And I'm Allan. And here we are again.

Music industry's 1990s hard drives, like all hard drives, are dying. What a shocker. A lot of hard drives don't survive for 30 plus years. You've got to be kidding me. So when the music industry goes to archive some stuff, they eventually switch from tape to hard drives because they're more reliable. So they take their master material and write it to a hard drive. And it sounds like from what they're talking about here, they actually write it to two hard drives because they're not entirely without sanity.

So they write it to these two hard drives and then send it to Iron Mountain, where it's kept in a temperature and humidity controlled vault. That means that it should be OK. But, you know, these hard drives are just parked and stacked and put in boxes or whatever and put in a vault. And so currently when they go to access them, they find that about 20 percent of the ones from the 90s don't spin anymore. And so they can't read the data off of them.

And if that happens to the primary and the backup, what you have is two bricks instead of a copy of your all-important original master recording of a music album or whatever. And it really goes to show that if you want your data to survive, it needs to be stored online and actually checked on a regular basis and copied to newer media and refreshed. Otherwise, it's not going to keep.

Yeah, this episode is sponsored by the monthly ZFS scrub. Yeah, in short: warm storage, not cold, if you really want the data to be there. As soon as you consign data to cold storage, it's a gamble. You are gambling that it will still be there when you need it. You don't know, literally, because that's what it means for the storage to be cold.

You've essentially abandoned it. You don't know what its condition is. You've Heisenberged your data. Whereas if you just keep it in a pool online, usually by the time you go through all the hassle of saying, well, we have to verify our cold storage, you know, every so often, and that means breaking it out and connecting it and, you know, loading it up and verifying all the individual bits of the data,

it would have been cheaper just to keep it all warm in the first place. With that said, even if the industry doesn't want to pay for warm storage, the real takeaway here is that if you must do cold storage, tape is a better idea than hard drives. And the reason for that is that hard drives aren't just the medium the data is recorded on. Hard drives are also machines. And if anything to do with the motor that needs to spin a platter up to several thousand RPM

goes wrong while it's sitting in cold storage for decades, well, you might still be able to get the data off. But you've got a really expensive and dicey forensic recovery proposition in front of you, where you need to find a working drive with the same platters, which may be really difficult if this is, you know, a 30 or 40 year old drive model, and transfer the platters over in, you know, a surgically clean environment, you know, all this nonsense.

If you have it done even to a brand new hard drive that failed, you'll discover DriveSavers or Gillware or whoever is probably going to charge about $1,000 a drive for it.

So we recommend warm storage. If you must do cold storage, the Iron Mountain part, or a similar climate-controlled environment, is not optional. You need it. You need to take that seriously. And if you're using magnetic media, you want tape because it's simpler. There's not a motor, and there's not platters that need to be spun up to a certain speed. You've just got tape wound on a reel. And even if something goes wrong with the reel mechanism, it's a far less cumbersome procedure

to basically prototype out the same kind of cartridge to rewind your tape on than it is to try to surgically repair a hard drive with a frozen motor. Is cumbersome one of those words you've only ever read, Jim? Cumbersome can be pronounced either way. Okay. Yeah, there was a great comment from a user on Hacker News, your abracadabra, saying: all media rots, right? Optical media rots, magnetic media rots and loses magnetic charge, bearings seize, flash storage loses its charge, etc.

Whatever you do, entropy always wins. And we've talked about this before. Even if you're going to do the cold storage, tape can have problems as well, where the media can rot. In all those cases, if you're going to put it in cold storage, part of that process has to be actually reading it. And when you're going to read it anyway, it probably makes sense to write it to fresh media. So at least every 10 years, and probably even more often than that, you want to make sure it all still works and that

you're writing it to fresh media, so that it's never old media that's continuing to degrade, right? And really, once you're doing that much to it, warm media probably makes a lot more sense, because now you're checking it much more frequently. But no matter what you're doing, you need at least some parity or replication or copies or something. If you're going completely cold, you want to spread that across media.
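As a concrete illustration of that verify-and-refresh pass, here's a minimal sketch using standard tools; the archive path and manifest name are hypothetical, and a real refresh cycle would also copy the verified data onto fresh media:

```sh
# When the archive is first written: record a checksum for every file.
cd /mnt/archive
find . -type f -print0 | xargs -0 sha256sum > ~/archive-manifest.sha256

# Every refresh cycle: re-read every byte and compare against the manifest.
# --quiet prints only failures; any file listed has rotted or changed
# since it was written.
cd /mnt/archive
sha256sum --check --quiet ~/archive-manifest.sha256
```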

But the other thing they talk about in the article is also the problems they have with file formats, right? If you saved the original finished track in some proprietary file format, you now need to go to some archive and get a version of the software from the 90s that can actually open this file.

which means an operating system and a computer that can run software from the 90s in order to read this file format, and then try to convert it without losing any quality to a modern format.

And if there's analog stuff in there, you're not going to get bit-for-bit perfect copies. And so you also have to think about, you know, when you're refreshing your stored data to newer media, making sure the format is one that's going to survive as well. Yeah, in the case of music, you want to have the raw audio. The other one they mentioned that I found kind of humorous was

People who've archived a bunch of data or music or whatever on CD-Rs or DVD-Rs in those binders, and have now found that the binder has resulted in the media bending so much that it's not actually possible to spin it properly in a drive to read it. Not one I had considered before. But yeah, the other big problem here is obviously, if you're just storing on hard drives, especially with a 90s file system, you don't have anything resembling a checksum to tell whether there's any distortion or damage to any of

these original files. But again, you know, if all you have is a second copy on the same model of hard drives that you bought from the same batch at the same store, then it's likely most of the failure modes are going to affect both those drives, not just one. And if it's the first time they've checked since the 90s, I feel like they've done themselves a disservice here. This all really just comes back to the fact that if you want to absolutely know for certain that your data will survive for several decades,

Cold storage is not the answer. Cold storage is the answer when you want a pretty good gamble that your data will survive. If you need to know that it survives intact, you need several things. One of them is warm storage. Another is not just a checksum for the whole file, but hopefully lots of checksums for individual blocks of the file. And the final thing that you need is some form of redundancy, whether it's parity or, you know, replication.

You may recognize that I've just described basically three fundamental building blocks of something we talk about on this show a whole lot. Can you guess what that is, Joe? ButterFS. You're fired. Out of a cannon.
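For anyone who wants to see those three building blocks spelled out, here's a minimal ZFS sketch; the pool name, disk names, and schedule are hypothetical placeholders:

```sh
# Redundancy: a raidz2 pool keeps parity, so it survives any two disk failures.
zpool create archive raidz2 da0 da1 da2 da3 da4 da5

# Block-level checksums are on by default in ZFS. A scrub re-reads every
# block, verifies it against its checksum, and repairs damage from parity.
zpool scrub archive

# Warm storage means doing that on a schedule, e.g. a monthly cron entry:
#   0 3 1 * * root /sbin/zpool scrub archive
zpool status -v archive    # review the results of the last scrub
```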

The thing is though, if you imagine how much data do you think we were talking about in the 90s? You couldn't even buy hard drives more than, like what were the top end ones in the mid 90s? Like maybe a gigabyte? Not even, probably. And I think about 20 years ago, I made a bunch of electronic music, right? And I had an 80 gigabyte hard drive and I thought, I'm never going to fill this up. Eventually I did fill it up and I bought another one.

But you're talking about less than 100 gigabytes for basically all of the source files and everything to make the music that I made 20 years ago. Now I've got two 18 terabyte drives in my NAS.

It's really not expensive to archive stuff long term in warm storage, because storage keeps getting bigger and bigger and cheaper and cheaper. And 18 terabytes now, in 20 years, is going to seem like nothing, probably. Right. But also, as part of that, when you're replacing those drives with bigger ones and keeping enough parity and so on, you also have to consider: is the file format this is in something I'm going to be able to open 20 years from now?

and start considering formatting or reformatting stuff as it makes sense. And that's where open source stuff and stuff that's not encumbered by proprietary things and patents means that it's more likely you're going to be able to have a program that can play back that media in the future. And Joe, in answer to your question, in 1995, a premium PC would typically be sold with a roughly 200 megabyte hard drive. Right.

But what about if you were Hollywood rich? Maybe like four or five hundred? Well, the thing is, yes, they're Hollywood rich, but they're also Hollywood stingy. They wouldn't have been just chucking random hard drives in a vault to begin with. In theory, if you were Hollywood rich and wanted to spend the money, one gig hard drives were available as early as 1980. Now, they weighed 500 pounds each, but you could get them. Yeah, so we're talking about hundreds of megabytes of data. Yeah.

that would just be so easy to have sitting on a NAS somewhere and replicate it off to two off-sites, let's say. But, you know, in general, Iron Mountain specializes in storing physical files and the stuff you have to keep for legal reasons for seven years because of the government or whatever. It is not a cloud backup. It's not really meant for primary storage.

And I understand why Hollywood wants to keep this stuff in a vault, but Iron Mountain has data centers specifically for storing warm data because it turns out that's the only way to do it reliably. Well, it's like we talked about a couple of episodes ago with my city cat picture and how long it was going to last. Someone needs to be taking care of this data in one form or another, or it's just going to die. Yeah, like we say, if there aren't three copies of it, it doesn't really exist. And if you're not checking those copies, you might as well not have them.
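As a sketch of what "three copies that actually get checked" can look like in practice with ZFS, assuming hypothetical pool, dataset, snapshot, and host names:

```sh
# Snapshot the archive and replicate it to an off-site machine.
zfs snapshot archive/masters@2024-10
zfs send archive/masters@2024-10 | ssh offsite1 zfs recv -u backup/masters

# Later runs send only the changes since the previous snapshot,
# so keeping the off-site copies current stays cheap.
zfs snapshot archive/masters@2024-11
zfs send -i @2024-10 archive/masters@2024-11 | ssh offsite1 zfs recv -u backup/masters
```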

Okay, this episode is sponsored by 1Password. Imagine your company's security like the quad of a college campus. There are nice brick paths between the buildings. Those are company-owned devices, IT-approved apps, and managed employee identities. And then there are the paths people actually use, the shortcuts worn through the grass that are the actual straightest line from point A to B. Those are unmanaged devices, shadow IT apps, and non-employee identities like contractors.

Most security tools only work on those happy brick paths, but a lot of security problems take place on the shortcuts. 1Password Extended Access Management is the first security solution that brings all these unmanaged devices, apps and identities under your control. It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible. 1Password Extended Access Management solves the problems traditional IAM and MDM can't touch.

It's security for the way we work today, and it's now generally available to companies with Okta and Microsoft Entra, and in beta for Google Workspace customers. So support the show and check it out at 1password.com slash 25A. That's 1password.com slash 25A.

FAA air traffic control modernization efforts are a mess. Yeah, so the Government Accountability Office published a report looking at the FAA's air traffic control systems, of which there are 138 separate pieces, and found that at least 51 of them are

completely unsustainable, including lack of spare parts, shortfalls in funding to sustain them, lack of technology refresh funding to actually replace them, and so on. And another 54 of them are potentially unsustainable for similar reasons. And looking at some of these systems, which for security reasons are just labeled as System A and System B and so on: System A is now 30 years old, it's completely unsustainable, and it's

critical to safety and efficient operations. And the completion date for the associated investment to actually replace it with something modern is 2035. System B may be in slightly better shape, as it's only 21 years old and will actually complete its replacement in 2034. You move down the list and we see a couple that are only six or seven years old and are also listed as unsustainable. One of them, in fact, is only two years old.

And it's still listed as unsustainable. Now, I have some questions about why a system that was only put in place two years ago was already unsustainable, but there we go. And on the other side of that, we have a couple of systems that are 50 years old in that list. That is fantastic.

Five zero, dear listener. Yeah, and a lot of this came out of the story we covered months and months ago where the NOTAM or Notice to Airmen system went down because somebody accidentally deleted a file they shouldn't have or, you know, they were manually editing an XML file or something and it made the software just not work.

And we see that, you know, a lot of this is custom-built software where the consultants and company that built it don't exist anymore. And any attempt to improve it is basically, oh, we'll just build a whole new system based on current technology. But it's the government, so we're going to spend eight years planning it before we execute it. And then we're going to

use the design we came up with at the beginning. So it'll be, you know, 10 years old by the time we start building it and 15-plus years old by the time we actually deploy it. And that's how you end up with 36-year-old unsustainable systems that won't be replaced until 2031. Unfortunately, Allan, I really wasn't listening to all that. I'm sure it was great, but I was too busy thinking about how much you could charge an hour if you're the guy that's keeping the system designed for air traffic control in 1974. Yeah.

still operational in 2024. I think that guy's probably doing okay. But don't worry, it's only going to cost $8 billion to fix all of this. And that's only for the ones they've actually planned to make an investment in. Some of them, like that two-year-old system, they're not planning to make any investment in.

And there's a couple that are over 30 years old that they're currently not planning to actually replace. Is this not just par for the course when it comes to the public sector, though? I'm not sure that I would actually blame this on the public sector because it's no different than what we see in the banking industry, which largely is still running on either literal ancient mainframes and minicomputers or on more modern computing devices that have been configured to pretend that they're ancient mainframes and minicomputers.

I think that once you get into a situation where you're handling more than a certain value of, let's just say, assets. Now, in the bank's case, that's literal money. The bank is handling so much money that they are incredibly risk averse. They don't want to modify anything. If they think the system is verified working, they do not want it touched, period. And I think you're seeing a lot of the same thing when it comes to these devices, because, I mean, these are air traffic control systems.

If you screw up, it's actually even worse than the screw up at the bank. If you screw up at the bank, you might misplace $100 million. If you screw up with the air traffic control system, you might crash a couple of fully loaded jumbo jets in midair. When do you want to take that down? When do you want to do minor refreshes and updates and like little bitty bug fixes as you go? Like you really want that system super duper stable.

It's just the downside of any super duper stable system is, you know, eventually you do have to actually pay all that accrued tech debt. Yeah, I think a lot of that goes into when we're designing the replacements for these, maybe as part of that, we should also design their replacement plan, right?

Like, okay, we're going to install this new system and 15 years from now we're going to plan this refresh, or probably maybe even a shorter timeline than that. But like Jim was saying, we've seen any system that kind of needs to be on 24/7 and is super change-averse

has kind of a different life cycle to it. We saw a lot of this with the airlines a couple years ago, where they were like, well, it turns out when we book a ticket, we're actually taking a JSON message from the website and then turning that into some XML and then wrapping that in the old telegram format, like the typewritten things that we used to send around in the 70s, to actually buy a ticket. And we're just doing that over X.25. Yeah.

over analog phone lines to actually make a reservation with an airline. And so it takes a lot to change these systems and the companies are very change-averse because when you have a disruption, it can be huge. Like we saw with the NOTAMs, things like, okay, every plane just has to stay on the ground for eight hours while we sort this out.

We can't have that on a regular basis. But at the same time, we can't just keep running the old stuff and hope it works. We need to come up with better plans. And it shouldn't take eight years to come up with a plan on how to replace a system. And I think a lot of that has to go into: we have to plan to sustain and replace these systems as part of building the replacement. Because when we don't have a plan for how we're going to continue to service it and replace it over time, that's when we run into this problem.

And we've talked about it before, even in smaller-scale stuff. If you're just building out a data center for your office or whatever, or picking your servers, you know when you buy the server that you're going to have to replace it when it's five years old or seven years old. And you should plan that into your budget and know that's what you're going to have to do. And if you can defer it a year, maybe that's a bonus to you. But if you don't plan for it, it's a surprise when the hardware dies.

That's not what you want to do. And as the FAA, you don't want to find out, hey, the computer we're running on, we can't get spare parts for. In some of these systems, it's not just computers and software. They're talking about actual radars where they can't get replacement parts, and they have to switch to a new radar. And that means they need a different computer and whole different controllers. And you're talking about physically taking the top off of a radar tower and putting something else on there. And that means it's going to be offline for a long time.

All these systems actually do need to be managed and have a life cycle. I wonder how modern the management practices are, not just the ones being in use now, but, you know, we're talking about these replacements and what we should be doing going forward. You know, we talk about the potential issues when you replace a system with a newer system. And, you know, what if there's a bug you didn't know about in the newer system? And we're talking about replacing things that are in some cases literally half a century old, right?

And beyond hardware and software, there are management practices that aren't that old that we should probably be applying. Do we actually have just a set of unit tests for this? I mean, we should be able to say, hey, we have these defined unit tests that we can run against any air traffic control system to make sure it's responding in the way that it should to the types of inputs that it ought to be handling. Do we have that? Do we have a plan to create that if we don't already have it? Because that's going to be critical going forward.
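Those defined tests don't have to be exotic; a golden-file harness that replays known inputs and diffs the responses would already be a start. Everything in this sketch (the atc-query binary and the test data layout) is hypothetical:

```sh
#!/bin/sh
# Replay each recorded input against the system under test and compare
# the response with a stored known-good answer.
fail=0
for input in tests/inputs/*; do
  name=$(basename "$input")
  ./atc-query < "$input" > "/tmp/$name.out"
  if ! diff -u "tests/expected/$name" "/tmp/$name.out"; then
    echo "FAIL: $name"
    fail=1
  fi
done
exit "$fail"
```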

I don't know that we can necessarily plan up front for like, well, this is when we're going to replace this system, but we should definitely have regularly scheduled non-negotiable audits for that. You know, hey, every five years we're going through all this. We're talking about, you know, what's going well, what's going badly, what it looks like for five years down the road. You know, there should be an actual report with recommendations. Is it going to be time to replace this system at the next five year mark or not?

And, you know, we need to be paying attention to those reports and reading them as well as writing them. Yeah. And like the thing here says, a lot of this has to do with the fact that the FAA isn't sexy, and the government chose to defer investment in it over and over and over again until it was falling apart. And now we need to spend a lot more money all at once. But it's why we have to have plans in place for how we're going to deal with this and

realize that every single thing we install is going to have to be replaced at some point. And we need to be thinking about that, not just that's future somebody else's problem. Well, I hear from various listeners about the CentOS rug pull. A lot of people are on CentOS 7, and they are now having to jump from CentOS 7 to something completely different. Alma, Rocky, CentOS Stream, RHEL. And this has caused chaos in a lot of industries,

where people are just drowning in technical debt. And like we've said before, Jim, I think you said, pay your technical debt down, the interest payments are ridiculously high, or whatever. Exactly. That's why I was suggesting, you know, something like a five-year report. Honestly, I think annual would be a better idea, but I just, I know how slow our government works. And the thing that concerns me about saying we ought to be looking at this annually is, at the scale of the government and

the level of ability that politicians have to ignore things that they don't think will get them immediately reelected, I'm a little worried about producing too much data that will then just get ignored, as opposed to producing enough to act on. But that's a lot of what I'm talking about here, because that five-year audit is going to uncover issues like that. Now, these systems, because they are very hardened, very offline systems,

they're not necessarily going to get the kind of, like, you know, rapid update schedule that a normal system does these days, you know, that's connected to the Internet, where you absolutely must be patching it constantly because you don't know how many different people are touching it and how many different ways, and you have to plug every single possible hole the second you find it. It's not necessarily the way you're going to handle these things. They're going to be massively air-gapped, and only very trusted people are going to be able to work on them to begin with.

I mean, I guarantee you, it's not like you've been getting apt updates daily on that 50-year-old system, right? That kind of gives you some idea of what you're working with. But if you're doing that five-year audit, you look at things like that. You say, oh, okay, well, we've been using CentOS, but CentOS went away.

So we're going to need to find a replacement for that. Well, what does that look like? I don't know. Do we want to go to Rocky? Do we want to go to this? Do we want to scrap the whole thing and say Linux sucks, we want to do BSD? Do you want to say, hey, open source sucks and we want Windows systems for all this stuff? Whatever. You uncovered the problem on your five-year audit, and now you've got five years to get that crap figured out. And hopefully you've got those unit tests I was talking about beforehand solved.

So when you stand the whole system up on whatever the different things are that you have to replace, whether it's new hardware, new software, new operating system, whatever, you can take a CI/CD approach to it. You can run all the unit tests against it, and you can come out on the other end of it saying, yeah, this definitely should be good. And now you can start doing a piecemeal replacement.

Now that's the unfortunate part is, you know, when you're saying, well, we're going to replace them just one system at a time to kind of, you know, minimize how many eggs we've got in one basket.

You are still talking about human lives in those baskets. So you want to be as sure as you possibly can before you replace anything. But regardless, we know how to do these things now. We didn't necessarily know how to do them right in the 1970s. Yeah. But I do wonder, kind of to Joe's point, if the long-term support promises of some of those things like RHEL are a contributor here.

If we force people to upgrade more often, like every four years instead of the seven or eight years that you can extend out... kind of the problem we've seen is that vendors have taken advantage of the fact that most companies are willing to pay extra to continue to get support and put off their technical debt. It's kind of like predatory payday loans for computer software. Well, Canonical is now offering 12 years for their LTS releases.

Just selling technical debt, as we titled that episode, I think. I would argue that if you're an organization the size of the FAA, with an organization the size and scale of the United States government behind you, managing something as important as what we're talking about here, which is, you know, over time, millions of human lives,

You shouldn't pay anybody for long-term support of anything. You should be capable of doing the amount of support that you need on a day-to-day basis, and you should be prepared to rip out the whole damn system and put something new in if and when you need to.

We shouldn't be saying the lives of millions of American citizens depend on a private company continuing to uphold their end of a contract. That's... no, don't do that. Right. I'm saying not so much that part as that the availability of

"you can just pay to keep having the old thing" is part of the problem, and that people should be designing life cycles to be a lot shorter. Yeah, I was actually agreeing with you. I was just saying that, you know, yes, I agree with you. And my response to that is the government shouldn't pay for those contracts ever, for anything this important, because that's a dependency now. That's a dependency, and it's not a strong enough link in the chain when you're talking about what needs to be ensured by that.

Right, and we've seen, you know, well, CentOS wasn't something anybody was paying for, and they were perfectly happy to rug pull there. Or, you know, we saw with Broadcom, they're like, oh, that perpetual support agreement you had? We're ripping it up, sorry.

Okay, this episode is sponsored by people who support us with PayPal and Patreon. Go to 2.5admins.com slash support for details of how you can support us too. Patreon supporters have the option to listen to episodes without ads like this. And it's not just this show. There's Late Night Linux for news, discoveries, audience input, and misanthropy. Linux Matters for upbeat family-friendly adventures. Linux After Dark for silly challenges and philosophical debates.

Linux Dev Time about developing with and for Linux, Hybrid Cloud Show for everything public and private cloud, and Ask the Hosts for off-topic questions from you. You can even get some episodes a bit early. We've got a lot going on, and it's only possible because of the people who support us. So if you like what we do and can afford it, it would be great if you could support us too at 2.5admins.com slash support.

Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. And if you want to send in your questions for Jim and Allan or your feedback, you can email [email protected]. Another perk of being a patron is you get to skip the queue, which is what Bosschan has done. He writes: My friend and roommate is installing Kali Linux, with plans to learn things using the internet and ChatGPT.

This makes me a bit nervous, as he's on my network and I have no idea what things he'll decide to do based on forum advice, let alone ChatGPT.

I'm not worried about him breaking his own stuff, but I am concerned about network intrusion, since we share a LAN and many of the tutorials he might find could lead him down dangerous paths. What can I do to make certain I'm safe from any of his network activities while still sharing an internet connection and letting him learn as much as he can about Kali and InfoSec? So there's two routes that you can take here, and the...

more traditionally business-IT answer would be: well, you're going to need a smart switch that supports VLANs, and you need to port-lock all the VLANs so that you can't set at your own computer which VLAN you're going to be using; that's determined by which port you're plugged into on the switch. And you need to make sure that all of his switch ports are on a different VLAN from all of your switch ports on the main VLAN, and make sure that you don't route from one VLAN to the other VLAN, and then you'll be good.
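Switch CLIs vary by vendor, but on the router side of that setup (assuming a Linux-based router, with hypothetical interface names, VLAN IDs, and subnets), the separation looks roughly like this:

```sh
# One subinterface per VLAN tag arriving on the trunk port from the switch.
ip link add link eth0 name eth0.10 type vlan id 10   # your ports
ip link add link eth0 name eth0.20 type vlan id 20   # roommate's ports
ip addr add 192.168.10.1/24 dev eth0.10
ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
# Each VLAN gets its own DHCP scope and a default route to the internet,
# and, crucially, no forwarding rules between the two subnets.
```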

That's one way you could do that. However, I have a simpler recommendation.

Just go buy yourself a cheap router and put your own stuff behind that router. If your router is doing NAT and your stuff is behind your router, whatever nonsense he's doing out there on the main LAN is not going to be able to get through that network address translation to mess with any of the stuff on your side of the network. Yeah, with that, you have the drop that comes from the switch or whatever that's for your whole house into your room, and you plug your router into that. You now treat

the hostile LAN as if it was the internet, right? That's the outside. That's where the badness is. And then everything on the inside of that router is now your safe network.
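What that cheap router is doing for you is ordinary source NAT. A minimal sketch of the same thing on a Linux box with nftables, with hypothetical interface names (wan0 facing the shared house LAN, lan0 facing your own devices):

```sh
# Masquerade everything leaving toward the hostile house LAN. Unsolicited
# inbound connections from that side have no mapping, so they go nowhere.
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "wan0" masquerade
```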

And that is easier. It has double NAT, but that's not a big deal, especially since both the routers are probably more than performant enough. And that will mostly solve the problem. Whereas if you do try to go the VLAN route and configure it on the switches, you also have to get your router to realize that, hey, there's two different VLANs coming in here, and I need to not mix them together and make sure I'm going to have different DHCP in these different things. And at that point, you practically need a separate router anyway, and then you're just...

one VLAN going to router A and one VLAN going to router B, and you still have to figure out how to connect both of those to the internet. And so Jim's solution does seem more straightforward. And depending on what else is going on in your house, and, you know, if you have a TV in the living room that needs to be able to get to your media server, that can get a little more complicated. But in most cases, it probably does make sense to just separate the network by chucking a router in there and keeping their stuff and your stuff separate.

Another potential solution is to just buy a router and put yourself behind your router and leave him connected directly to the ISP's gateway router.

That's a pretty much guaranteed-to-work-no-matter-what solution. If you don't like that, another potential solution: you might check to see, some ISPs will offer more than one public IP address if you want it, for not too much more money a month, or in some cases no more money a month. So you might be able to spend somewhere between zero and five bucks a month to get two different public IPs and be able to hook up

two routers behind the ISP's gateway, each of which gets a different public IP. And at that point, they are really entirely separate networks. There's no double NAT. It's just essentially as though each of you has your own internet connection and there is no tie between the two at all. However, if it were me, I wouldn't bother. I'd just do the double NAT. Yeah, it is a much more straightforward solution.

Right. Well, we'd better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or feedback. You can find me at joeress.com slash mastodon. You can find me at mercenarysysadmin.com. And I'm @AllanJude. We'll see you next week.