• 0 Posts
  • 44 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Sorry, but this is completely wrong.

    Windows has ACLs; they are an important part of Windows administration and are used extensively for managing file permissions.

    Windows has supported ACLs on NTFS since Windows NT and NTFS were released in 1993 (possibly influenced in part by AIX ACLs from the late 80s, which were themselves influenced by VMS ACLs introduced in the early 80s).

    ACLs didn’t reach the POSIX world until the POSIX.1e drafts of the late 1990s (c.1998), and even those drafts were ultimately withdrawn rather than ratified; NFS and Linux filesystems didn’t get ACLs until around 2003. In fact, the design of the NFSv4 ACL standard was heavily influenced by the NTFS/Windows ACL model – a specific decision by the designers to model it on NTFS rather than AIX/POSIX.

    Technically, at the filesystem level, exFAT also provides support for ACLs, but I am not sure if any implementation actually makes use of this feature (not even Windows AFAIK, certainly not any desktop version).
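
    To make that concrete, here’s a minimal sketch of POSIX-style ACL management on Linux, shelling out to the standard getfacl/setfacl tools (the file path and the user “alice” are made-up examples, and it assumes the acl userspace tools are installed and the filesystem is mounted with ACL support). The rough Windows equivalent would be something like icacls report.txt /grant alice:(R,W).

    # Sketch: grant one extra user access to a file via a POSIX ACL entry,
    # then dump the resulting ACL.
    import subprocess

    path = "/srv/shared/report.txt"   # hypothetical file

    # Add an ACL entry giving user "alice" read/write access,
    # beyond the usual owner/group/other permission bits.
    subprocess.run(["setfacl", "-m", "u:alice:rw", path], check=True)

    # Show the full ACL, including the new entry and the mask.
    print(subprocess.run(["getfacl", path], capture_output=True, text=True).stdout)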




  • Correct me if I’m wrong

    Well actually, yes, I’m sorry to have to tell you that you are wrong. Shannon-Fano coding is suboptimal among prefix codes, and Huffman coding, while optimal for prefix-based coding, is not necessarily the most efficient compression method for any given data (and often isn’t).

    Huffman can be optimal given certain strict constraints, but those constraints don’t always occur in natural/real-world data.

    The best compression method (whether lossless or lossy) depends greatly on the nature of the data to be compressed. Patterns and biases can make certain methods much more efficient (or more practical) in some cases, when they might be useless elsewhere or in general. This is why data is often transformed before compression, using a reversible transformation that “encourages” certain desirable statistical characteristics in the data, so the compression method can better exploit them.

    For example, compression software often transforms the data before applying Huffman coding to get a better compression ratio: bzip2 performs a Burrows-Wheeler transform (plus move-to-front and run-length encoding) first, and gzip’s DEFLATE applies LZ77 match-finding first. If Huffman coding were an optimal compression method for all possible data, those extra stages would be redundant! Often, e.g. in medical imaging or audio/video data, the data is best analysed in a different domain (frequency domain instead of time/spatial domain) to better reveal the underlying patterns and redundancies so they can be easily exploited by compression.
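
    To illustrate the point, here’s a small sketch (the toy symbol distribution is made up purely for demonstration): it builds a Huffman code and compares its average code length to the Shannon entropy. Huffman is optimal among prefix codes, yet for a skewed distribution like this it still uses nearly twice the theoretical minimum bits per symbol, because each codeword must be a whole number of bits.

    # Build a Huffman code for a toy distribution and compare the average
    # code length against the Shannon entropy (the lower bound for any
    # lossless code on this source).
    import heapq
    from math import log2

    probs = {"a": 0.9, "b": 0.05, "c": 0.03, "d": 0.02}   # assumed toy distribution

    # Min-heap of (probability, tie-breaker, node); leaves are symbols,
    # internal nodes are (left, right) pairs.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, counter, (left, right)))
        counter += 1

    # Walk the tree to assign prefix-free codewords.
    codes = {}
    def assign(node, prefix=""):
        if isinstance(node, tuple):
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    assign(heap[0][2])

    avg_len = sum(probs[s] * len(codes[s]) for s in probs)
    entropy = -sum(p * log2(p) for p in probs.values())
    print(codes)                                    # the assigned prefix codes
    print(f"average bits/symbol: {avg_len:.3f}")    # ~1.15
    print(f"entropy lower bound: {entropy:.3f}")    # ~0.62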




  • zero_iq@lemm.ee to Linux@lemmy.ml · Circles do not exist (edited, 1 year ago)

    Damn, so what’s the name of the shape that’s a flat donut with inner and outer circular perimeters? i.e. a filled circle with a smaller concentric circular area subtracted from it. Or the 2D cross-section of a torus viewed perpendicular to the plane that intersects the widest part of the torus. A squished donut, or chubby circle, if you like.


  • zero_iq@lemm.ee to Linux@lemmy.ml · Circles do not exist (1 year ago)

    And many “circles” aren’t circles either, but 2D torus approximations. The edge of a true circle is made of infinitesimally small points, so it would be invisible when drawn. And even if you consider a filled circle, how could you be sure you aren’t looking at a 1-torus with an infinitesimally small hole? Or an approximation of the set of all points within a circle?

    Clearly, circles are a scam.



  • Unfortunately, it’s not as simple as that. Theoretically, if everyone was using state-of-the-art designs of fast-breeder reactors, we could have up to 300,000 years of fuel. However, those designs are complicated and extremely expensive to build and operate. The finances just don’t make it viable with current technology; they would have to run at a huge financial loss.

    As for uranium from sea-water – this too is possible, but it has rapidly diminishing returns that quickly make it financially unviable. As uranium is extracted and removed from the oceans, exponentially more sea-water must be processed to continue extracting uranium at the same rate. This becomes infeasible pretty quickly. Estimates are that it would become economically unviable within 30 years.

    Realistically, with current technology we have about 80-100 years of viable nuclear fuel at current consumption rates. If everyone were using nuclear right now, we would fully deplete all viable uranium reserves in about 5 years. A huge amount of research and development will be required to extend this further, and to make new, more efficient reactor designs economically viable. (Or ditch capitalism and do it anyway – good luck with that!)
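
    For what it’s worth, the scaling behind that “about 5 years” figure is just simple arithmetic; here’s a rough back-of-the-envelope sketch where the 80-100 year range comes from above, and the ~5% share of global primary energy currently supplied by nuclear is my own assumed round number:

    # Back-of-the-envelope: scale the "years of fuel at current consumption"
    # estimate by nuclear's current share of total energy demand.
    years_at_current_rate = (80, 100)     # figure quoted above
    nuclear_share_today = 0.05            # assumption: ~5% of global primary energy

    for years in years_at_current_rate:
        if_all_nuclear = years * nuclear_share_today
        print(f"{years} years at today's rate -> ~{if_all_nuclear:.0f} years if nuclear supplied everything")
    # prints roughly 4-5 years, consistent with the figure above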

    Personally, I would rather this investment (or at least a large chunk of it) be spent on renewables, energy storage and distribution, before fusion, with fission nuclear as a stop-gap until other cleaner, safer technologies can take over. (Current energy usage would require running about 15,000 reactors globally, and with historical accident rates, that’s about one major nuclear disaster every month.) Renewables are simpler, safer, and proven, and the technology is more-or-less already here. Solving the storage and distribution problem is simpler than building safe and economical fast-breeder reactors, or viable fusion power. We have almost all the technology we need to make this work right now; we mostly just lack the infrastructure and the will to do it.

    I’m not anti-nuclear, nor am I saying there’s no place for nuclear, and I think there should be more funding for nuclear research, but the boring, obvious solution is to invest heavily in renewables, with nuclear as a backup and/or future option. Maybe one day nuclear will progress to the point where it makes sound sense to go all-in on, say, fusion, or super-efficient fast-breeders, etc., but at the moment that’s basically science fiction. I don’t think it’s a sound strategy to bank on nuclear right now, although we should definitely continue to develop it. Maybe if we had continued investing in it at the same rate for the last 50 years it might be more viable – but we didn’t.

    Source for estimates: “Is Nuclear Power Globally Scalable?”, Prof. D. Abbott, Proceedings of the IEEE. It’s an older article, but nuclear technology has been pretty much stagnant since it was published.


    The modern definition we use today was cemented in 1998, along with the founding of the Open Source Initiative. The term was used before this, but it did not have a single, well-defined meaning. What we might call Open Source today was mostly known as “free software” prior to 1998, amongst many other terms (sourceware, freely distributable software, etc.).

    Listen again to your 1985 example. You’re not hearing exactly what you think you’re hearing. Note that the phrase used in your video example is not “Open-Source code” as we would use it today, with all its modern connotations (that’s your modern ears projecting modern meaning back onto the past), but simply “open source-code” – as in “source code that is open”.

    In 1985 that didn’t necessarily imply anything specific about copyright, licensing, or philosophy. Today it carries with it a more concrete definition and cultural baggage, which it is not necessarily appropriate to apply to past statements.


  • In the latest version of the emergency broadcast specification (WEA 3.0), a smart phone’s GPS capabilities (and other location features) may be used to provide “enhanced geotargeting” so precise boundaries can be set for local alerts. However, the system is backwards compatible – if you do not have GPS, you will still receive an alert, but whether it is displayed depends on the accuracy of the location features that are enabled. If the phone determines it is within the target boundary, the alert will be displayed. If the phone determines it is not within the boundary, it will be stored and may be displayed later if you enter the boundary.

    If the phone is unable to geolocate itself, the emergency message will be displayed regardless. (Better to display the alert unnecessarily than to not display it at all).

    The relevant technical standard is WEA. Only the latest WEA 3.0 standard uses phone-based geolocation. Older versions just broadcast from cell towers within the region, and all phones that are connected to the towers will receive and display the alerts. You can read about it in more detail here.
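
    If it helps, here’s a minimal sketch of that display decision in Python. This is not the actual WEA 3.0 API (the function and type names are made up for illustration); it just encodes the behaviour described above: display if the device is inside the target boundary, store/defer if it’s outside, and display anyway if the device can’t geolocate itself.

    from typing import List, Optional, Tuple

    Point = Tuple[float, float]   # (latitude, longitude)

    def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
        """Ray-casting test: is the point inside the alert target polygon?"""
        x, y = p
        inside = False
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    def handle_alert(target: List[Point], location: Optional[Point], stored: list) -> str:
        if location is None:
            return "display"          # can't geolocate: better to over-alert than to miss one
        if point_in_polygon(location, target):
            return "display"          # device is inside the target boundary
        stored.append(target)         # outside: keep the alert, re-check if the device moves
        return "defer"

    # Hypothetical alert polygon and device positions:
    stored: list = []
    area = [(35.0, -97.5), (35.0, -97.0), (35.5, -97.0), (35.5, -97.5)]
    print(handle_alert(area, (35.2, -97.2), stored))   # display (inside)
    print(handle_alert(area, (36.0, -97.2), stored))   # defer (outside, stored)
    print(handle_alert(area, None, stored))            # display (can't geolocate)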


    I understand the concerns about Google owning the OS; that’s my only worry with my chromebook. If Google start preventing the use of adblockers, or limiting freedoms in other ways, that might sour my opinion. But the hardware can run other OSs natively, so that would be my get-out-of-jail option if needed.

    I’ve not encountered problems with broken support for dev tools, but I am using a completely different toolchain from yours. My experience with linux dev and cross-compiling for android has been pretty seamless so far. My chromebook also seems to support GPU acceleration through both Android and Linux VMs, so perhaps that is a device-specific issue?

    I’m certainly not going to claim that chromebooks are perfect devices for everyone, nor a replacement for a fully-fledged laptop or desktop OS experience. For my particular usage it’s worked out great, but YMMV. My main point is that ChromeOS isn’t just for idiots, as the poster above seemed to think.

    Also, a good percentage of my satisfaction with it is the hardware and form-factor rather than ChromeOS per se. The same device running Linux natively would still tick most of my boxes, although I’d probably miss a couple of android apps and tablet mode support.


  • People who use Chromebooks are also really slow and aren’t technically savvy at all.

    Nonsense. I think your opinion is clouded by your limited experience with them.

    ChromeOS supports a full Debian Linux virtual machine/container environment. That’s not a feature aimed at non-tech-savvy users. It’s used by software developers (especially web and Android devs), linux sysadmins, and students of all levels.

    In fact I might even argue the opposite: a more technically-savvy user is more likely to find a use case for them.

    Personally, I’m currently using mine for R&D in memory management and cross-platform compiler technology, with a bit of hobby game development on the side. I’ve even installed and helped debug Lemmy on my chromebook! It’s a fab ultra-portable, bulletproof dev machine with a battery life that no full laptop can match.

    But then I do apparently have an IQ of zero, so maybe you’re right after all…