• 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • What? Tech companies the world over have people on 24/7 on-call rotas, and it’s usually voluntary.

    Depending on the company, you might typically do 1 week in 4 on-call, get a nice little retainer bonus to make up for not having much of a social life for that week, and then get an additional payment for each call you take, plus time worked at x1.5 or x2 the usual rate, plus time off in lieu during the normal workday if the call-out takes a long time (I’ve sketched some illustrative numbers at the end of this comment). If you do on-call in tech and the conditions are worse than this, then your company’s on-call policies suck.

    I used to do it regularly. Over the years, it paid for the deposit on my first house, plus some nice trips abroad. I enjoyed it - I get a buzz out of being in the middle of a crisis and fixing it. But eventually my family got bored of it, and I got more senior jobs where it wasn’t considered a good use of my energies.

    Your internet connection, the websites and apps you use, your utilities - they don’t fix themselves when they break at 0300.

    If TSMC’s approach to on-call is bad, then yeah, screw that. I don’t see anything in the article that says that one way or the other. But doing an on-call rota at all is a perfectly normal thing to do in tech.
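
    For anyone trying to put numbers on the structure above, here’s a rough back-of-the-envelope sketch. The rates are made-up illustrative figures, not anything from the article or any particular employer.

    ```python
    # Rough on-call pay sketch with made-up illustrative rates - plug in your own.
    WEEKLY_RETAINER = 300        # flat bonus for carrying the pager for a week
    PER_CALL_FEE = 40            # payment per call taken
    HOURLY_RATE = 35             # normal hourly rate
    OVERTIME_MULTIPLIER = 1.5    # x1.5 (or x2) for time actually worked on a call-out

    def on_call_week_pay(calls_taken: int, hours_worked: float) -> float:
        """Extra pay for one on-call week, on top of normal salary."""
        return (WEEKLY_RETAINER
                + calls_taken * PER_CALL_FEE
                + hours_worked * HOURLY_RATE * OVERTIME_MULTIPLIER)

    # e.g. a quiet-ish week: 3 calls, 4 hours of actual incident work
    print(on_call_week_pay(calls_taken=3, hours_worked=4))   # 630.0
    ```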


  • Do we have to bring this up again? It’s just boring.

    systemd is here and it isn’t going anywhere soon. It’s an improvement over SysV, but the core init system is arguably less well-designed than some of the other options that were on the table 10 years ago when its adoption started. The systemd userspace ecosystem has significantly stifled development of alternatives that provide equivalent functionality, which has led to less experimentation and innovation in those areas. In many cases those systemd add-on services provide less functionality than what they have replaced, but are adopted simply because they are part of the systemd ecosystem. The core unit file format is verbose and somewhat awkward, and the *ctl utilities are messy and sometimes unfriendly.

    Like most Red Hat-originated software written in the last 15 years, it valiantly attempts to solve real problems with Linux, and mostly achieves that, but there are enough corner cases and short-sighted design decisions that it ends up being mediocre and somewhat annoying.

    Personally I hope that someone comes along, takes the lessons learned, and rewrites it, much like PulseAudio has been replaced by PipeWire. Perhaps if someone decides it needs rewriting in Rust?


  • The WiFi card is probably a Realtek 8852AE, which has become very common in laptops since 2021. Unfortunately Realtek driver support tends to lag quite a bit.

    If you want to run Ubuntu Desktop 22.04, then you’re probably best off waiting a few weeks for the Ubuntu Desktop 22.04.4 point release. It’s due sometime this month. It will boot and install an “HWE” (Hardware Enablement) kernel and drivers based on the newer kernel from Ubuntu 23.10, which should work out of the box with your WiFi card.

    While it’s possible to upgrade an existing Ubuntu 22.04 installation to the latest HWE kernel, doing it by downloading the relevant packages on another machine and moving them across on a USB stick is going to be somewhat frustrating if you’ve not done it before. You’ll certainly learn a few things, but it may not be an enjoyable experience. I’m a grizzled Linux veteran, and I’m pretty sure I’d end up forgetting to download one or more packages and having to swap back and forth between machines (the sketch at the end of this comment is one way to avoid that).

    In the meantime, I would just continue to use Ubuntu 23.04. In fact, if it were me, I would probably stick with 23.04, upgrade to 23.10 and then to 24.04 when they become available. What you do once you’re on the 24.04 LTS release is up to you. By that time, other distros will probably work out of the box too.
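
    If you do end up going the offline-download route, one way to avoid the “forgot a package” dance is to have apt work out the full download list for you on the offline machine first. A rough sketch, assuming python3-apt is available (it is on a default Ubuntu install) and the 22.04 machine’s package lists are reasonably up to date:

    ```python
    #!/usr/bin/env python3
    # Run on the offline 22.04 machine: list every .deb URL that installing the
    # HWE stack would pull in, so you can fetch them all elsewhere in one pass.
    import apt

    cache = apt.Cache()
    # linux-generic-hwe-22.04 is the HWE kernel metapackage; add the graphics
    # stack (xserver-xorg-hwe-22.04) as well if you want the newer X/Mesa too.
    cache["linux-generic-hwe-22.04"].mark_install()

    urls = [pkg.candidate.uri for pkg in cache.get_changes()]
    with open("hwe-debs.txt", "w") as out:
        out.write("\n".join(urls) + "\n")

    print(f"{len(urls)} .debs to fetch - take hwe-debs.txt to a connected machine,")
    print("download them (e.g. wget -i hwe-debs.txt), copy back, then dpkg -i *.deb")
    ```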


  • I’m a big fan of Kubernetes, and for larger projects the flexibility and power it brings is unrivalled. But for smaller projects, assuming equal levels of competence, delivery teams using managed Kubernetes are almost universally slower to deliver and have more issues than teams that use simpler solutions. Container-as-a-service offerings like GCP Cloud Run or AWS Fargate help somewhat, but are not cheap for a given amount of compute time.

    Terraform (or IaC in general) absolutely has a place, because even if you use Kubernetes, most projects have more infrastructure to manage than just the cluster - at the very least, lemmy.world has a Cloudflare proxy to manage - and clicking buttons in a management portal is not a repeatable way of deploying that, or of deploying the Kubernetes clusters themselves (see the sketch at the end of this comment).

    Ansible also has a place, particularly if you’re deploying onto bare metal. I wouldn’t pick it for new deployments unless I had bare metal to configure and maintain, but as I understand it lemmy.world is deployed onto a bare metal server. Plus, the most effective tooling is generally whatever your team already understands.
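
    The repeatability point is really the whole of it: whatever the tool, you declare the state you want and let the tooling work out the diff. A toy sketch of that declare-and-reconcile idea, not tied to Terraform, Ansible or any real provider - just to show the shape:

    ```python
    # Toy "desired state vs actual state" planner - the core idea behind
    # Terraform/Ansible-style IaC, with dicts standing in for real resources.
    desired = {
        "dns_record": {"name": "lemmy.world", "proxied": True},
        "server":     {"size": "8cpu-32gb", "image": "ubuntu-22.04"},
    }

    actual = {
        "dns_record": {"name": "lemmy.world", "proxied": False},   # drifted
        # "server" missing entirely
    }

    def plan(desired: dict, actual: dict) -> list[str]:
        """Work out what would need to change - the `terraform plan` idea."""
        actions = []
        for name, spec in desired.items():
            if name not in actual:
                actions.append(f"create {name}: {spec}")
            elif actual[name] != spec:
                actions.append(f"update {name}: {actual[name]} -> {spec}")
        for name in actual:
            if name not in desired:
                actions.append(f"delete {name}")
        return actions

    for action in plan(desired, actual):
        print(action)
    # Run it again once "actual" matches and the plan is empty - that
    # idempotency is what clicking around a web portal can never give you.
    ```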



  • Apple users have been sending text messages interchangeably between their phones and computers/tablets for years.

    As have Android users. Microsoft Phone Link (formerly Your Phone Companion) and KDE Connect have supported this for years on their respective PC platforms. The Android companion app (Link to Windows) is even preinstalled on Samsung devices. There’s a teensy bit of setup but nothing complicated. KDE Connect even supports things like using the phone as a touchpad, remote keyboard, or media/presentation controller.

    If your PC is a Chromebook then you don’t even need these. If you sign into the phone and Chromebook with the same Google account, the integration just works, much as it does on Apple devices.

    Most of your arguments can be boiled down to “everything is really slick if you use an all-Apple ecosystem”. Which is fine, but the same can be said about Android - if you use an all-Google ecosystem with Pixels, Chromebooks and Google Workspace then most, if not all of your complaints about Android go away. Pixel Android is more consistent and less buggy than most vendor versions of Android. Integration with Chromebooks works out of the box. Google Workspace MDM is simple and straightforward, and you don’t really need to buy a separate MDM solution.

    The difference is that Android at least makes a decent effort to cater for a heterogeneous ecosystem. With Apple, if you’re not entirely onboard with an all-Apple ecosystem then it starts getting messy quickly.


  • At least for me, there is a big difference between naming things at home and naming things for work.

    Work “pet” machines get systematic names based on function, location, ownership and/or serial/asset numbers. There aren’t very many of them these days. If they are “cattle” then they get random names, and their build is ephemeral. If they go wrong or need an upgrade, they get rebuilt and their replacement build gets a new random name. Whether they are pets or cattle, the hostnames are secondary to tags and other metadata, and in most cases the tags are used to identify the machines in the first instance, because tags are far more flexible and descriptive than a hostname.

    At home, where the number of machines is limited, I know them all like the back of my hand and it’s mostly just me touching them, so whimsical names are where it’s at.
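
    For the “cattle” side, a minimal sketch of what I mean - the hostname is a throwaway random label, and everything you actually care about lives in the tags (the tag keys here are just made up for illustration):

    ```python
    # Cattle naming: random, disposable hostname; identity lives in the tags.
    import secrets

    def new_cattle_host(role: str, env: str, location: str) -> dict:
        hostname = f"node-{secrets.token_hex(3)}"          # e.g. node-a1b2c3
        return {
            "hostname": hostname,
            "tags": {                                      # what you actually search on
                "role": role,
                "env": env,
                "location": location,
            },
        }

    print(new_cattle_host(role="web", env="prod", location="eu-west"))
    ```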


  • Ungulates. Because who doesn’t like a hoofed animal? (in reply to “What is your machine naming scheme?” on Selfhosted@lemmy.world)

    My client machines are even-toed ungulates (order Artiodactyla) and my servers/IoT machines are odd-toed (order Perissodactyla). I’m typing this on Gazelle. My router is called Quagga, both after the extinct zebra subspecies and the routing protocol software (I don’t use it any more but hey, it’s a router).

    Biological taxonomy is a great source of a huge number of systematic (and colloquial) names.


  • I wouldn’t say the Pixel line’s hardware is rubbish, more that Google is focused on having a polished “it just works” experience rather than trying to differentiate themselves by having the fastest, biggest, newest hardware in the Android market.

    The mobile market hit the “diminishing returns” point quite a while ago and for a lot of people - probably the majority - the only reasons to upgrade are security updates ending, or because a non-replaceable battery is getting to the end of its life.

    I used to upgrade every 12-18 months religiously, but now my Pixel 5 is coming up on 3 years old and I’d happily keep it another few years with a battery replacement, if the updates weren’t going to end shortly.


  • I could well be wrong about the AAC passthrough, and I should have hedged that statement with “allegedly” as I’ve not tested it myself.

    To your other point though, I disagree - there are plenty of ways you could pass through an unchanged AAC bitstream, but still mix in other sounds when required. For example, having the sender duck the original bitstream out temporarily and send a mixed replacement bitstream while the other sound is playing. Or (and this would only work if you control the firmware on the receiver, but if you’re using Apple headphones with an Apple device, that’s not a problem) sending multiple bitstreams to the receiver and letting the receiver mix them.
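
    To make the first option a bit more concrete, here’s a toy sketch of the sender-side logic: bit-exact passthrough of the original frames, switching to a decoded/mixed/re-encoded stream only while the other sound is playing. The codec calls are stand-in stubs, not a real AAC implementation:

    ```python
    # Toy "duck and substitute" sender: pass the source AAC frames through
    # untouched, and only decode/mix/re-encode while a notification is active.
    from typing import Iterable, Iterator, Optional

    def decode_stub(frame: bytes) -> list[float]:
        """Placeholder for an AAC decoder: bytes in, PCM samples out."""
        return [b / 255.0 for b in frame]

    def encode_stub(samples: list[float]) -> bytes:
        """Placeholder for an AAC encoder: PCM samples in, bytes out."""
        return bytes(max(0, min(255, int(s * 255))) for s in samples)

    def sender(frames: Iterable[bytes],
               notification: Optional[list[float]] = None) -> Iterator[bytes]:
        for frame in frames:
            if notification is None:
                yield frame                       # untouched original bitstream
            else:
                music = decode_stub(frame)
                mixed = [0.6 * m + 0.4 * n for m, n in zip(music, notification)]
                yield encode_stub(mixed)          # temporary replacement bitstream

    # e.g. list(sender([b"\x10\x20", b"\x30\x40"], notification=[0.2, 0.2]))
    ```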


  • I can only comment on my experience with my own equipment and ears, but in my experience, 990Kbps LDAC is noticeably more transparent than 256Kbps AAC for Bluetooth audio.

    I can fairly reliably guess whether or not I remembered to switch my Sony XM4s out of multipoint mode the last time I used them (when in multipoint pairing mode LDAC is not supported and 256Kbps AAC is usually what gets negotiated). The difference is small, but over a few minutes of listening, the sonic signature when it’s using AAC is just a little bit “off” and my ears don’t like it as much.

    Could I ABX the difference using the usual ABX setup with short samples of music I’m not familiar with? Probably not. Can I tell the difference over an extended period using music I know well, and that I often listen to uncompressed? Yes, pretty easily.

    LDAC is not a particularly sophisticated codec, but it doesn’t have to be when it has a 990Kbps bitrate. It’s also possible that the FDK-AAC codec that I think both PipeWire and Android use for real-time AAC encoding is not especially well tuned for 256Kbps CBR. AIUI in 256Kbps CBR mode, FDK-AAC applies a hard low-pass filter at 17kHz, and I can still hear above 17kHz.
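
    If you want to sanity-check the “above 17kHz” part on your own ears and gear, here’s a quick stdlib-only way to generate a test tone just above the filter (17.5kHz picked arbitrarily - keep the volume modest):

    ```python
    # Write a few seconds of a 17.5 kHz sine to a WAV file using only the stdlib,
    # so you can check whether your ears/gear resolve anything above the filter.
    import math, struct, wave

    RATE, FREQ, SECONDS, AMPLITUDE = 44100, 17500, 3, 0.3   # keep amplitude low

    with wave.open("tone_17k5.wav", "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for n in range(RATE * SECONDS):
            sample = int(AMPLITUDE * 32767 * math.sin(2 * math.pi * FREQ * n / RATE))
            frames += struct.pack("<h", sample)
        w.writeframes(bytes(frames))
    ```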


  • Yeah, I agree.

    I bought them for their noise cancelling primarily, and they’re excellent at that, but otherwise they’re not great. The un-EQed frequency response is terrible for headphones in their price range: flabby, wildly over-exaggerated bass and no mids at all. Running without EQ I can barely hear lyrics - every singer sounds like they’re mumbling underwater. I’ve had $20 IEMs with better tonal balance. They respond well to EQ but the on-board EQ doesn’t have enough frequency bands to even come close to fixing them. Wavelet on Android doing EQ duty makes them listenable. Even when you do EQ them properly, they still sound a bit dull and lifeless.

    No idea how they got so much praise when they were launched. The power of marketing budgets, I guess. For a while I was gaslighting myself into thinking I had a faulty pair, or that something was going wrong with my hearing, but having heard another pair, and doing comparisons with my other headphones - most of which are far cheaper - I realised that no, they’re just not very good as headphones.


    It is worse than uncompressed, but 990Kbps LDAC is the closest to fully transparent I’ve heard from a Bluetooth codec. aptX HD is nearly as good to my ears, and is better than 660Kbps LDAC. The differences are very small though, especially when compared with the differences on the analog side, e.g. the amp, and particularly the headphone design.

    Apple side-steps the problem by, at least when you’re listening to Apple Music, simply sending the AAC stream as-is to the headphones and having them decode the audio. I don’t know why that isn’t a more common approach.

    I’m still somewhat bemused that we’re talking about Bluetooth codecs at all. It surely can’t be that difficult technically to get 1.5Mbps actual throughput on Bluetooth and simply send raw 16-bit/44.1kHz PCM. 2.4GHz WiFi is capable of hundreds of times that speed. Bluetooth has been stuck at the same speeds for decades.
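
    The arithmetic behind that, for anyone curious:

    ```python
    # Raw CD-quality stereo PCM bitrate - comfortably under 1.5 Mbps.
    sample_rate = 44_100          # samples per second
    bit_depth = 16                # bits per sample
    channels = 2
    bitrate = sample_rate * bit_depth * channels
    print(bitrate)                # 1_411_200 bits/s, i.e. about 1.41 Mbps
    ```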


    I have a Radsone ES100 Bluetooth DAC/headphone amp, which supports LDAC and multipoint, and doesn’t compromise the LDAC bitrate when multipoint is enabled. You can even leave it plugged in as a USB DAC and still use multipoint Bluetooth with LDAC, and it switches smoothly between sources depending on which device most recently started playing a stream.

    I was distinctly underwhelmed by the Bluetooth implementation when I got my Sony XM4s; it’s kinda weak by comparison.


  • Converted-to-Bluetooth Stadia controller.

    It’s actually a really nice controller. The ergonomics are great for my big meaty hands, and it’s got some weight to it and feels solidly built. The heft means the vibration really has some kick to it. The battery life is great too - it was specced for having Wi-Fi on all the time, so now that it’s only running a little Bluetooth LE radio, the battery is massive. Even when it runs down, the charge rate is quick - full in about half an hour, and then good to go for weeks. Again, probably because it was specced for Wi-Fi, the radio circuitry is way above average and the range is stupid - I can control a Steam Deck from two rooms away, through two solid brick walls, something none of my other controllers can do.

    The sticks are accurate and don’t drift, the buttons are pretty good, and the D-pad is a bit stiff but perfectly serviceable. My one significant complaint is that the spring-back on the triggers is way too light, which makes it difficult to be subtle with them - a little annoying for driving games.

    Still, if you see one at a sensible price, they’re a steal.


  • Bit of a nitpick, but the comparison with the reversing of the MS Office formats is a bit tenuous, and somewhat revisionist.

    Competitors and open-source applications were reverse-engineering the Office file formats long before Apple iWork was a thing, and arguably no-one really gets it right, because to get it perfect you’d have to reproduce the Office layout engine exactly, bug-for-bug. Even Microsoft doesn’t get it 100% right from release to release.