  • I’d actually be surprised if Apple pays anything to OpenAI at the moment. Obviously running some Siri requests through ChatGPT (after the user confirms that’s what they want to do) is quite expensive for OpenAI, but Apple Intelligence doesn’t touch OpenAI servers at all (just Siri has ChatGPT integration).

    Even then, there’ll obviously still be a lot of requests, but the problem is that OpenAI isn’t really in a strong negotiating position. Google owns Android, so most Android phones default to Gemini, instantly giving Google a huge advantage in market share. OpenAI doesn’t have its own platform, so Apple, with the second-largest install base of all smartphone operating systems, is OpenAI’s best chance at that kind of reach.

    Apple might benefit from OpenAI, but OpenAI needs Apple way more than the other way around. Apple Intelligence runs perfectly fine (well, as “perfectly fine” as it currently does) without OpenAI; the only functionality users would lose is the option to redirect “complex” Siri requests to ChatGPT.

    In fact, I wouldn’t be surprised if OpenAI actually pays Apple for the integration, just like Google pays Apple a hefty sum to be the default search engine for Safari.


  • Apple Intelligence isn’t “powered by OpenAI” at all. It’s not even based on it.

    The only time OpenAI servers are contacted is when you ask Siri something it can’t handle with Apple Intelligence, and even then it explicitly asks the user first whether they want to send the request to ChatGPT.

    Everything else in Apple Intelligence runs either on-device or on Apple’s “Private Cloud Compute” infrastructure, which apparently uses M2 Ultra chips. You then have to trust that Apple’s privacy claims are true, but you kind of do that when choosing an iPhone in the first place. There’s actually some pretty interesting tech behind it.








  • CUDA is a proprietary platform that (officially) only runs on Nvidia cards, so making projects that use CUDA run on non-Nvidia hardware is not trivial.

    I don’t think the consumer-facing stuff can be called a monopoly per se, but Nvidia can easily force proprietary features onto the market (G-Sync before they adopted VESA Adaptive-Sync, DLSS, etc.) because they have such a large market share.

    Assume a scenario where Nvidia has 90% market share and its cards still only support adaptive sync via the proprietary G-Sync solution. Display manufacturers obviously want to cater to the market, so most displays ship with support for G-Sync instead of VESA Adaptive-Sync, and 9 out of 10 customers will likely buy a G-Sync display because they have Nvidia cards. Now everyone has a monitor supporting some form of adaptive sync.

    AMD and Nvidia then release their new GPU generation, and viewed in isolation (in this hypothetical scenario), the AMD cards are 10% cheaper for the same performance and efficiency as their Nvidia counterparts. The problem for AMD is that even though they have the better cards per dollar, 9 out of 10 people would need a new display to get adaptive sync working with an AMD card (because their current display only supports the proprietary G-Sync). Say the Nvidia card costs $500 and the AMD card $450 (made-up numbers): the $50 saved doesn’t come close to covering a new monitor, and AMD can’t possibly undercut Nvidia by enough that it would. The result is 9 out of 10 customers going for Nvidia again.

    To be fair to Nvidia, most of their proprietary features are somewhat innovative. When G-Sync first came out, VESA Adaptive-Sync wasn’t really a thing yet. DLSS was way better than any other upscaler in existence when it released and it required hardware that only Nvidia had.

    But with CUDA, it’s a big problem. Entire software projects just won’t (officially) run on non-Nvidia hardware, so Nvidia can charge whatever they want (unless what they’re charging exceeds the cost of switching to competitor products plus, importantly, porting over the affected software projects).
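
    To make the lock-in concrete, here’s a minimal, made-up sketch of what CUDA code looks like (not taken from any real project): even a trivial kernel is written against Nvidia’s runtime API and launch syntax, and a real codebase has thousands of such call sites that would all have to be ported (e.g. to AMD’s HIP) before it could run on anything else.

    ```cuda
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Trivial kernel: element-wise addition of two float arrays.
    // __global__ and the <<<...>>> launch syntax below are CUDA-specific;
    // they only compile with Nvidia's nvcc and only run on Nvidia GPUs.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        // Every one of these calls belongs to Nvidia's proprietary runtime API;
        // porting means translating them (e.g. to hipMalloc/hipMemcpy in HIP).
        float *da, *db, *dc;
        cudaMalloc((void**)&da, bytes);
        cudaMalloc((void**)&db, bytes);
        cudaMalloc((void**)&dc, bytes);
        cudaMemcpy(da, a.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b.data(), bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(c.data(), dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }
    ```

    AMD’s HIP tooling can translate much of this kind of code fairly mechanically, but anything that depends on CUDA-only libraries (cuDNN, cuBLAS and the like) tends to be where the real porting cost sits.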








  • To be fair, USB-C, especially with Thunderbolt, is much more universal. There are adapters for pretty much every “legacy” port out there, so if you really need FireWire you can have it, but it’s also clear why FireWire isn’t built into the laptop itself anymore.

    The top MacBook Pro is also the 2016+ pre-Apple-Silicon chassis (which was also used with M-series chips, but sort of as a leftover), while the newer MacBook Pro chassis at least brought back HDMI and an SD card reader (plus MagSafe as a dedicated charging port, although USB-C still works fine for that).

    Considering modern “docking” solutions only need a single USB-C/Thunderbolt cable for everything, these additional ports only matter on the go. HDMI comes in handy for presentations, for example.

    I’d love to see at least a single USB-A port on the MacBook Pro, but that’s likely never coming back. USB-C to A adapters exist though, so it’s not a huge deal. Ethernet can be handy as well, but most use cases for that are docked anyway.

    I like the Framework concept the most: also “only” 4 ports (on the 13" at least, plus a built-in combo jack), but with the adapter cards you can configure them to whatever you need at that point in time, and the cards slide into the chassis instead of sticking out like dongles would.

    I usually go for one USB-C/Thunderbolt on each side (so charging works on either side), a single USB-A, and video out in the form of DisplayPort or HDMI. Sometimes I swap the video out (which also works via USB-C, obviously) for Ethernet, even though the Ethernet card sticks out. For a (retro) LAN party, I used one USB-C, USB-A (with a 4-port hub for wired peripherals), DisplayPort and Ethernet.