It’s paraphrasing Torvalds himself though. It’s a cheeky title.
“… and I have absolutely no excuses to delay the v6.6 release any more, so here it is,”
I didn’t understand that by “willing” you meant “wanting”.
I’ve seen something similar to this before on remote desktop servers, where user-redirected printers end up bloating registries to the point that login times exceed processing limits, so not all of the configuration in the registry or group policy gets processed. Each redirected printer gets created and never cleaned up, and it’s unique to that RDP session, so they duplicate to infinity over time. Glad you found it. My only point about the complexity was that being complex doesn’t mean it won’t be robust, as long as it’s implemented without conflicts, so you can rule that out (if you’ve ruled out conflicts). Sounds like you found the culprit in the end! Good work.
When the horses have all bolted, BBC is the one to close the barn door.
Hey, sorry to say, but I’m not seeing this at all. About 60 customers, each with 30–200 staff, in the Australia region. Almost all of them have reasonable conditional access policies: maximum sign-in session lengths per app, device-compliance requirements for data sync, geo-restrictions, longer session lifetimes for known sites, and standard MFA requirements.
I’d say there’s something else in your stack. We monitor many of our customers with third-party tools too, including Arctic Wolf for SIEM/SOC alerting, triage, and isolation if AAD accounts are breached, and SentinelOne with AAD integration as well. Though personally I feel most small and medium businesses would be better served by the already-included Defender for Business. A topic for a different day.
But we have no unusual need to clear caches and the like to make the policies we configure behave as expected.
I’ve seen different tenants behave differently in the past, of course, but there’s nothing I can suggest right now. I’d personally start A/B testing and reviewing all the relevant logs, comparing before and after each change to see its impact.
Anyway, sounds frustrating, so good luck.
I’m not in America, but NIST now recommends it in its guidance, and it’s getting backing from the NSA:
https://www.zdnet.com/article/nsa-to-developers-think-about-switching-from-c-and-c-to-a-memory-safe-programming-language/
https://www.malwarebytes.com/blog/news/2022/11/nsa-guidance-on-how-to-avoid-software-memory-safety-issues
I can see this becoming required in the future for new projects when working on new government solutions. The drum is certainly beating louder in the media about it.
Not possible without owning a domain, even just a cheap “something.xyz”.
The way it works is this: your browser trusts a site when the certificate’s name matches the hostname you typed, the certificate is within its validity dates, and it chains up to a certificate authority your machine already trusts.
Now, to get that experience, you need to meet those conditions. The machine browsing to your website needs to trust the certificate that’s presented, and you have a few ways to achieve that, as I previously described.
Note there’s no reverse proxy here. But HTTPS also isn’t just a toggle on a web server.
So you don’t need a reverse proxy. Reverse proxies enable some cool things, but here are two things they solve that you may need solved:
- serving multiple sites from a single IP and port, routed by hostname
- terminating TLS in one place, so certificates live on the proxy instead of on each backend
But in this case you don’t really need either, since you have plenty of IPs: you’re not offering this publicly, you’re offering it over Tailscale, and both web servers can be accessed directly.
It’s possible to host a DNS server for your domain inside your tailnet and serve DNS responses like: yourwebserver.yourdomain.com = tailnetIP.
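As a minimal sketch, assuming dnsmasq as the internal resolver (the hostname and tailnet address are made up):

```
# /etc/dnsmasq.conf -- answer for your own name,
# forward everything else upstream
address=/yourwebserver.yourdomain.com/100.64.0.10   # hypothetical tailnet IP
server=1.1.1.1
```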
Then, using certbot with a Let’s Encrypt DNS challenge and your public DNS provider’s API, you can get a trusted certificate and bind it automatically.
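For example, a sketch assuming Cloudflare hosts the public DNS (other providers have equivalent certbot plugins):

```
# request a cert via the DNS-01 challenge; the credentials file
# holds a scoped API token for your DNS provider
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d yourwebserver.yourdomain.com
```

The nice part of DNS-01 is that the web server never has to be publicly reachable, which is exactly the situation behind Tailscale.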
Your tailnet users, if they use your internal DNS server, will resolve your hosted service to its private tailnet IP, the bound certificate’s name will match the hostname, and everyone is happy.
There’s more than one way, but that’s how I’d do it. If you don’t own a domain, then you’ll need to host your own private certificate authority and install the root authority certificate on each machine if you want them to trust the certificate chain.
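If you go the private CA route, a rough openssl sketch (names and validity periods are placeholders):

```
# create the CA key and self-signed root certificate
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"

# create a key and signing request for the web server
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=yourwebserver.internal"

# sign it with the CA, including the SAN modern browsers require
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out server.crt \
  -extfile <(printf "subjectAltName=DNS:yourwebserver.internal")
```

ca.crt is the file you’d install into each machine’s trust store.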
If your family can click the “Advanced > Continue anyway” button, then you don’t need to do anything but use a locally generated cert.
It’s totally fine to bulk-replace specific sensitive details with “replace all”, as long as it doesn’t break parsing, which happens when the replacement is inconsistent. If you have a server named “Lewis-Hamiltons-Dns-sequence”, maybe bulk-rename it to something that’s still clear, like “customer-1112221-appdata”.
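For instance, a consistent replace-all with sed (the file and names are hypothetical; eyeball the output before sharing):

```
# replace every occurrence with one consistent placeholder,
# writing to a new file so the original stays intact
sed 's/Lewis-Hamiltons-Dns-sequence/customer-1112221-appdata/g' \
  server.log > server-redacted.log
```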
But try to differentiate between “am I ashamed?” and “this is sensitive, and leaking it would create a PII-exfiltration or security risk”, since only the latter is legitimate.
Note: if I can find that information with a DNS lookup and DNS scraping, it’s not sensitive. If you’re my customer and you’re hiding your name from me, someone who already invoices you, that probably only makes me suspicious about whether those logs are even yours.
Just FYI, as a sysadmin, I never want logs tampered with. I import them, filter them, and the important parts will be analysed no matter how much filler debug- and info-level stuff is there.
Same with network captures. Modified pcaps are worse than garbage.
Just include everything.
Sorry you had a bad experience. The customer service side is kind of unrelated to the technical practice side though.
I love the hand gesture at the end!
Start thinking of the mouse wheel you’re used to scrolling with as a cog between you and the surface it’s moving. Then you realise you were using “natural” scrolling all along; it was the early touchpads that were wrong and nonsense.
ZFS is excellent. It’s enterprise-grade, designed to suit the whole “I’ve got 60 disks filling up a 4RU top-loaded SAN; if we expand we have to buy another 60-disk expansion” scenario, and because of that it works perfectly for expansion. You don’t make a single raidz holding 60 disks. You make them in groups of, say, 6 or 8 or 10, whatever suits your needs for speed, storage, and resilience. When you expand, you drop another whole raidz vdev into the pool, maybe another 6 disks in the new storage shelf.
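A sketch of that pattern with hypothetical device names:

```
# create a pool from one 6-disk raidz2 vdev
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# expand later by dropping in a second whole raidz2 vdev
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
```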
But since your article in 2016, the OpenZFS project has promised us expansion of individual raidz vdevs. The 2021 announcement: https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/
A 2022 update, with the feature design complete but no code: https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/
The actual pull request is here: https://github.com/openzfs/zfs/pull/15022
And the last update was announced in June 2023 in the leadership meeting recorded here: https://m.youtube.com/watch?time_continue=1&v=2p32m-7FNpM
You might think this is slow, and yeah, it’s a snail’s pace. But it’s not from lack of work; it’s truly because it’s part of the whole strategy of making sure ZFS, and every update and every feature, stays just as robust.
I’m a fan, even after having hardware fail and disks fail both in the enterprise and at home. zpool import is so agnostic you can just pull in the pool; it doesn’t matter whether it came from BSD or Linux.
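For example (pool name hypothetical):

```
# on the old FreeBSD box: cleanly release the pool
zpool export tank

# on the new Linux box: scan attached disks and pull the pool in
zpool import tank
```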
Hi, I’ve been running Pop!_OS for about a year on a 2012 MacBook Pro. My biggest hassles: Bluetooth audio sucks (glitchy), and I had to install a wireless driver to get Wi-Fi to work at all. Other than that, it works exactly as expected. Can recommend. It can’t game, and it can’t play videos well because the built-in speakers suck (and the Bluetooth audio is glitchy), but it’s plenty performant for my actual tasks. Runs smooth. I’m sure most distributions will.
I think that’s a good idea, good luck with it!
I can guess at some things, but let me start with what I think is happening:
You have a gateway set. Your device sends a broadcast ARP message asking “who has <gateway IP>?”, and the device with that IP is supposed to send back “me, with this MAC address!”.
That device is either not answering, or answering so slowly, that your machine decides “I can’t go past the gateway, the gateway isn’t responding”, which in your error message shows up as “no route to host”.
Assuming that you have no custom manual network route in play.
So the usual causes are link-layer and layer-2 issues, and sometimes duplicate IPs: two devices with the gateway IP.
You should watch your MAC address table and ARP table (arp -a) and see whether the router gateway disappears or changes MAC addresses.
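A quick way to keep an eye on that from a Linux machine (the gateway address here is hypothetical):

```
# dump the current ARP/neighbour table
arp -a

# watch the gateway's entry; a changing MAC, or a stuck
# INCOMPLETE/FAILED state, points at a layer-2 problem
watch -n1 'ip neigh show 192.168.1.1'
```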
Don’t feel bad because you’re really good at using a tool that doesn’t follow your values. I use Windows during the work week and I use Linux for gaming on the weekend where I literally can’t work even if I wanted to.
For me, Windows is a toolbox of proprietary tools that have no Linux compatibility. That’s OK for me. People get emotionally invested, but that’s neither healthy nor helpful. There’s no point being angry at work; it’s like being angry that your work uniform is made by one textiles vendor instead of another.
You get to choose what you use at home in your own time. If you feel good using Linux then, do it!
The bypass is to run your own router and hand out locally hosted DNS servers (either the router itself or a Pi-hole), with those DNS servers doing their upstream lookups over DNS over HTTPS (port 443). Your provider can’t intercept that, since it looks like regular encrypted web traffic, just as they shouldn’t be able to inspect your netbank.
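One way to wire that up is cloudflared as a local DoH forwarder in front of Pi-hole (the port and upstream here are assumptions):

```
# local stub resolver forwarding queries over DoH (TCP 443)
cloudflared proxy-dns --port 5053 \
  --upstream https://1.1.1.1/dns-query

# then point Pi-hole's custom upstream DNS at 127.0.0.1#5053
```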
Australia is different, but the ISPs that do this generally have a +$5 per month plan for a static, routable public IP (instead of CGNAT) and unfiltered internet. The filtering is usually more about letting mum and dad filter the web so their kids can’t get too far off track. Maybe just double-check your ISP portal settings, but I’m going to assume you’re not in Aus.
100%. Or set hosts-file entries on each endpoint to resolve mail.domain.com to your internal IP that’s available only over VPN. Not going to be easy on mobiles, though.
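E.g. one line per endpoint (the address is hypothetical):

```
# /etc/hosts -- force mail.domain.com to the VPN-only internal IP
10.10.0.25  mail.domain.com
```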
There is an assumption though that the mail server has an internal IP address wherever you are hosting. That might not be true. I would always put the public IP on the firewall and then NAT with specific port 25 in to the private IP of the server, but who knows what this particular OP has done.
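As a sketch of that NAT rule with iptables (addresses hypothetical):

```
# forward inbound SMTP hitting the firewall's public IP
# to the mail server's private IP
iptables -t nat -A PREROUTING -p tcp --dport 25 \
  -j DNAT --to-destination 10.10.0.25:25
```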
I don’t think that works on my Samsung TV, or my partners iPad though. :)
Although not especially effective on the YouTube front, it actually increases network security just by blocking API access to ad networks on those kinds of IoT and walled-garden devices. Ironically, my partner loves it not for YouTube but apparently for all her Chinese drama streaming websites. So when we travel and she’s subjected to those ads, she’s much more frustrated than when she’s at home, lol.
So the little joke, while not strictly true, is pretty accurate if you just say “streaming content provider”.