Any cryptography you’re likely to encounter uses fixed size primes over a residue ring for performance reasons. These superlarge primes aren’t relevant for practical cryptography, they’re just fun.
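To make the "fixed size primes over a residue ring" idea concrete, here's a toy sketch of modular exponentiation over one fixed prime, the basic operation behind schemes like Diffie-Hellman. The prime and secrets below are illustrative only and far too small for real security.

```python
# All arithmetic happens in the residue ring mod a single fixed-size prime,
# so every intermediate value stays bounded -- that's the performance win.
p = 2**127 - 1            # a fixed Mersenne prime (illustrative choice)
g = 3                     # base for the demo

# Toy Diffie-Hellman exchange: both sides do fixed-size modular exponentiation.
a_secret = 123456789      # Alice's private exponent (made up)
b_secret = 987654321      # Bob's private exponent (made up)

A = pow(g, a_secret, p)   # Alice's public value: g^a mod p
B = pow(g, b_secret, p)   # Bob's public value: g^b mod p

# Each side combines its own secret with the other's public value and
# lands on the same shared secret, g^(a*b) mod p.
shared_alice = pow(B, a_secret, p)
shared_bob = pow(A, b_secret, p)
assert shared_alice == shared_bob
```

Python's three-argument `pow` does the modular reduction at every step, which is why even huge exponents stay fast; the superlarge primes in the article would make every one of these operations monstrously expensive.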
A highly compressed, global base map at 1m resolution is somewhere on the order of 10TB. MSFS is probably using higher resolution commercial imagery, and that’s just the basemap textures, most of which you’ll never see.
MSFS implements optimizations on top of that (progressive detail, compression, etc), but that’s how almost all map systems work under the hood. It’s actually an efficient way to represent real environments where you don’t have the luxury of procedural generation.
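A sketch of how that tile-pyramid "progressive detail" works, using the standard Web Mercator slippy-map scheme that most tiled map systems follow (I'm not claiming MSFS uses exactly this scheme internally):

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Map a lat/lon to tile (x, y) at a zoom level in the common
    Web Mercator 'slippy map' tiling scheme."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom                      # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# Zoom level z holds 4**z tiles, so each zoom-in step quadruples the
# tile count -- that's the cost pyramid progressive detail manages.
print(deg2tile(47.6062, -122.3321, 12))   # Seattle's tile at zoom 12
```

The client only ever fetches the handful of tiles covering the current view at the current zoom, which is why a multi-terabyte basemap is usable over a normal connection.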
Cleanroom RE is how you prove that’s what you did to a court. The point is to avoid ending up in a courtroom with Nintendo at all, which makes that proof moot.
The thing is, Steam’s market dominance is a product of user choice rather than anticompetitive strategies or a lack of alternatives. Steam doesn’t do exclusives, doesn’t charge you for external sales, doesn’t prevent you from selling Steam keys outside the platform, and doesn’t stop users from launching non-Steam games in the client. The only real restriction is that access to Steam services requires a license in the active Steam account. Even Valve-produced devices like the Steam Deck can install from other stores.
Sure, dominance is bad in an abstract theoretical way and it’d be nice if Gog, itch.io, etc were more competitive, but Steam is dominant because consumers actively choose it.
Glaciers actually do retreat and advance seasonally or on even longer cycles. Some have termini that move back and forth by literal miles. One of the key indicators of climate change is the fact that globally, glaciers are retreating more than they’re advancing on average.
Other than Apple Music and iCloud, they’re generally less intrusive about popups than Microsoft. Their tactic is to prevent competitors from integrating with the system at all rather than nagging you to change a setting. For example, there’s no way to use Google Maps or Spotify in all the ways you can use Apple Maps or Apple Music.
A torque converter is part of the whole transmission system even if it’s a separate housing. When you buy a new transmission, it comes with a torque converter.
Torque converters also create the majority of heat in automatic transmissions and are why automatic transmissions get coolers in the first place. How many manuals have you seen with transmission coolers?
The CSB doesn’t regulate and it can’t issue fines. They also don’t show up unless you’ve already had an incident. When they do show up, it’s simply to document and investigate the root causes, so they can issue recommendations to one of the regulatory agencies that actually enforces things. You need to have really fucked up for an agency with literally 40 staff overseeing one of the largest industrial economies in the world to notice you.
There is independent government oversight. That’s NHTSA, the agency doing these investigations. The companies operating these vehicles also have insurance as a requirement of public operating permits (managed by the states). NHTSA also requires mandatory reporting of accidents involving these vehicles and has safety standards.
The only thing missing is the fee, and I’m not sure what purpose that’s supposed to serve. Regulators shouldn’t be directly paid by the organizations they’re regulating.
Just for context, a large chunk of “top tech talent” at the companies in the study are going to be making 200-400k. While there’s still going to be issues with pay, it’s a pretty different situation than fast food workers or similar.
I’m not assuming it’s going to fail, I’m just saying that the exponential gains seen in early computing are going to be much harder to come by because we’re not starting from the same grossly inefficient place.
As an FYI, most modern computers are modified Harvard architectures, not Von Neumann machines. There are other architectures being explored that are even more exotic, but I’m not aware of any that are massively better on the power side (vs simply being faster). The acceleration approaches that I’m aware of that are more power-efficient (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/Von Neumann architectures.
ML is not an ENIAC situation. Computers got more efficient not by doing fewer operations, but by making what they were already doing much more efficient.
The basic operations underlying ML (e.g. matrix multiplication) are already some of the most heavily optimized things around. ML is inefficient because it needs to do a lot of that. The problem is very different.
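A back-of-envelope sketch of what "a lot of that" means. The shapes below are illustrative, not from any specific model; the point is that even one modest matrix product is hundreds of millions of already heavily optimized operations.

```python
def matmul_flops(m, k, n):
    """FLOPs for an (m x k) @ (k x n) product: each of the m*n outputs
    takes k multiplies and k-1 adds, so roughly 2*m*k*n total."""
    return 2 * m * k * n

# Assumed toy dimensions: a 512-token sequence with 768-wide activations.
seq_len, d_model = 512, 768
flops = matmul_flops(seq_len, d_model, d_model)
print(f"{flops:,} FLOPs for one {seq_len}x{d_model} projection")
```

A real model runs thousands of such products per forward pass, billions of passes during training. The per-operation efficiency is already near hardware limits; the volume is the problem.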
TCP has been amended in backwards incompatible ways multiple times since 1993. See e.g. RFCs 5681, 2675, and 7323 as examples.
Plus, speaking TCP/IP isn’t enough to let you use the web, which is what most people think of when you say “Internet”. That 1993 device is going to have trouble speaking HTTP/1.1 (or 1.0 if you’re brave) to load even the most basic websites, and no, writing the requests by hand doesn’t count.
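To illustrate the gap above raw TCP, here's the minimum an HTTP/1.1 client has to say, sketched as the bytes you'd write to a connected TCP socket (the hostname is just a placeholder):

```python
# Minimal HTTP/1.1 request. The Host header is mandatory in 1.1 -- it's
# how one IP address serves many sites -- and strict servers answer a
# request without it with 400 Bad Request. That requirement alone is
# something no 1993-era stack or tooling knows anything about.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"    # required by HTTP/1.1 (optional in 1.0)
    "Connection: close\r\n"    # ask the server to close when done
    "\r\n"                     # blank line ends the header block
)

# These are the exact bytes you'd send over the TCP connection.
print(request.encode("ascii"))
```

And that's before TLS, which nearly every modern site requires and which a 1993 TCP stack definitely can't negotiate.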
eGPUs aren’t supported on Apple silicon.
It’s pretty unintuitive because we’re not used to dealing with ocean sized bodies of water in day to day life. Part of the explanation is just that the prevailing winds pile all the water in the Pacific up against the coast, causing higher sea levels on the West Coast. The lower salinity of the Pacific also causes lower water density, which translates to higher sea levels.
I haven’t explained what the differences are because almost everything is different. It’s like comparing a Model T to a Bugatti. They’re simply not built the same way, even if they both use internal combustion engines and gearboxes.
Let me give you an overview of how the research pipeline works, though.

First is fundamental research, which outside of semiconductors is usually funded by public sources. This encompasses things like methods of crack formation in glasses, better solid state models, improved error correction algorithms, and so on.

The next layer up is applied research, where the fundamental research is applied to improve or optimize existing solutions or create new partial solutions to unsolved problems. Funding here is a mix of private and public depending on the specific area. Semiconductor companies do lots of their own original research here as well, as you can see from these Micron and TSMC memory research pages. It’s very common for publicly funded researchers here to take that research and use it to go start a private company, usually with funding from their institution. This is where many important semiconductor companies have their roots, including TSMC via ITRI.

These companies in turn invest in product / highly applied research aimed at productizing the research for the mass market. Sometimes this is easy, sometimes it’s extremely difficult. Most of the challenges of EUV lithography occurred here, because going from low-yield academic research to high-yield commercial feasibility was extremely difficult. Direct investment here is almost always private, though there can be significant public investment through companies. If this work is published (it often isn’t), it’s commonly published as patents. Every company you’ve heard of has thousands of these patents, and some of the larger ones have tens or hundreds of thousands. All of that is the result of internal research.

Lastly, they’ll take all of that, build standards (e.g. DDR5, H.265, 5G), and develop commercial implementations that actually do those things. That’s what OEMs buy (or try to develop on their own, in the case of Apple modems) to integrate into their products.
You have no idea how modern technology is produced. Any particular product is usually the result of dozens to thousands of iterations, some funded with public money and many not. Let’s take an example from your chart: DRAM. I actually don’t know when DARPA “developed” DRAM (since DARPA usually funds private companies to do development for them), but it must have been before 1970 when Intel designed the 1103 chip that got them started. Do you think that pre-1970s design is remotely similar to the DRAM operating on your device today? I’ll give you a hint: it’s not.
And no, modern device development does not consist of gluing a bunch of APIs together. Apple maintains its own compilers, languages, toolchains, runtimes, hardware, operating systems, debugging tools, and so on. Some of that code had distant origins in open source (e.g. WebKit), but that’s vastly different from publicly funded, and those components are usually very different today.
They’re failing to produce competitive modems because modern wireless is one of the closest things humans have to straight up black magic. It’s extremely difficult to get right, especially as frequencies go up, SNR goes down, and we try to push things ever faster despite having effectively reached the Shannon limit ages ago.
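For a sense of what that limit means, here's the Shannon capacity formula worked through with illustrative numbers (not any specific real radio standard):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon limit for an AWGN channel: C = B * log2(1 + SNR).
    No coding scheme can reliably exceed this rate."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 20e6                        # assumed 20 MHz channel
snr_db = 20                     # assumed 20 dB signal-to-noise ratio
snr = 10 ** (snr_db / 10)       # 20 dB -> 100x in linear terms
C = shannon_capacity_bps(B, snr)
print(f"~{C / 1e6:.0f} Mbit/s ceiling at {snr_db} dB over 20 MHz")
```

Because the SNR term sits inside a logarithm, once your coding is near the limit the only real levers left are more bandwidth (higher, harder frequencies) and more spatial streams, which is exactly why modern modems are so brutally difficult.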
Can’t be Air Canada, and repressed trauma prevents me from acknowledging WestJet’s existence, so I’m going to guess the good one is Harbour Air. They run the cute little seaplanes you see around Vancouver and Victoria. I hear that boarding one when the system clock is set to 3am unlocks a special area where you can catch spirit bears.