Steam has voice channels? 🤯
Ahh OK that makes sense thanks
Hol up, Kagi is indexing discord servers?
Nintendo does have sales from time to time, they’re just rarely great discounts. If you have a Switch and you wishlist games, they’ll email you when a wishlisted game goes on sale.
Mostly you can’t - the FSD options I’m aware of are largely in the robotaxi space. Here’s a website that tracks where they’re available.
Otherwise, the best bet I know of today would be Mercedes Drive Pilot, which is certified for Level 3 autonomy.
Agree that passkeys are the direction we seem to be headed, much to my chagrin.
I agree with the technical advantages. Where passkeys make me uneasy is when considering their disadvantages, which I see primarily as:
There’s no silver bullet for the authentication problem, and I don’t think the passkey is an exception. What the passkey does provide is relief from credential stuffing, and I’m certain that consumer-facing websites see that as a massive advantage. So I expect passwords will eventually be relegated to the tomes of history, though it will likely be quite a slow process.
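For anyone unfamiliar with why passkeys take credential stuffing off the table, here’s a toy sketch of the public-key idea underneath them - this is just the concept, not the actual WebAuthn/FIDO2 API, and the flow below is my own simplification:

```python
# Toy sketch of the challenge-response idea behind passkeys (concept only,
# not the WebAuthn API). Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Registration": the device keeps the private key; the website only ever
# stores the public key, so a breach of the site leaks nothing reusable.
device_key = Ed25519PrivateKey.generate()
stored_public_key = device_key.public_key()

# "Login": the site sends a fresh random challenge, the device signs it,
# and the site verifies the signature against the stored public key.
challenge = os.urandom(32)
signature = device_key.sign(challenge)
stored_public_key.verify(signature, challenge)  # raises InvalidSignature if forged

# There's no shared secret for a user to reuse across sites, which is where
# the relief from credential stuffing comes from.
```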
What is your suggestion for a superior solution to the problems passwords solve?
What an absolute failure of the legal system to understand the issue at hand and appropriately assign liability.
Here’s an article with more context, but tl;dr the “hackers” used credential stuffing, meaning they tried username and password combos that had been breached from other sites. The affected users were reusing weak passwords, and 23andMe only ever saw what looked like legitimate login attempts with accurate username and password combos.
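To make the mechanics concrete, credential stuffing is essentially this - a purely hypothetical sketch where the breach dump and ‘try_login’ are stand-ins I made up, not anything from the actual incident:

```python
# Hypothetical sketch of credential stuffing. 'breached_combos' stands in for a
# username/password dump from some other site's breach; 'try_login' is a stub,
# not a real endpoint or anything specific to the 23andMe incident.
breached_combos = [
    ("user@example.com", "hunter2"),
    ("another@example.com", "Password123"),
]

def try_login(username: str, password: str) -> bool:
    """Stub standing in for an ordinary login request to the target site."""
    return False

for username, password in breached_combos:
    # From the target site's point of view, each attempt is a normal login with
    # the correct username and password for anyone who reused credentials,
    # which is why it's so hard to flag as an attack.
    if try_login(username, password):
        print(f"reused credentials worked for {username}")
```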
Arguably 23andMe should not have built out their internal data sharing service quite so broadly, but presumably many users are looking to find long-lost relatives, so I understand the rationale for it.
Thus continues the long, sorrowful, swan song of the password.
The website makes it sound like all of the code being bespoke and “based on standards” is some kind of huge advantage, but all I see is a Herculean undertaking with too few engineers and too many standards.
W3C lists 1138 separate standards currently, so if each of their three engineers implements one discrete standard every day, with no breaks/weekends/holidays, then having an alpha available that adheres to all 2024 web standards should be possible by 2026?
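Back-of-the-envelope, using the numbers above (the 1138 count is just what W3C lists today):

```python
# Rough math on the "one standard per engineer per day" scenario.
standards = 1138   # W3C's current count, per the comment above
engineers = 3
days = standards / engineers            # one standard per engineer per day
print(f"{days:.0f} days (~{days / 365:.1f} years)")  # ~379 days, ~1.0 years
```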
This is obviously also without testing but these guys are serious, senior engineers, so their code will be perfect on the first try, right?
Love the passion though, can’t wait to see how this project plays out.
Since we’re telling people to Google things, try “anecdotal fallacy” and let us know if it helps you to understand the source of the downvotes.
The OP is about survey data that directly contradicts your position. It’s fantastic that you’ve found a position where you have work/life balance that works so well for you, but it simply doesn’t match the experience of many commenting in this thread or those who were surveyed.
Be as obstinate as you like, it won’t change the lived experiences of others in the industry.
Is there any chance you’re at a kbbq or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.
Jokes aside, I see the comparison you’re making and it’s not a bad one. I’d counter by giving the example of a menu - when you get to a restaurant you’re given a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let’s say that a meal on the menu is like the starting point of the workflow I described.
Based on that you have an idea of what the output will be when you order - but let’s say you don’t like mushrooms and you prefer your sauce on the side. When you make your order you provide those modifications - this is like inpainting.
Certainly you’re not a ‘chef’, but if the dish you design is both bespoke and previously unimaginable, I’d argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.
Not exactly the same but I don’t think it’s entirely different.
Not OP but familiar enough with open source diffusion image generators to be able to chime in.
Now I’d argue that being an artist comes down to being able to envision something in your mind’s eye and then reproduce it in the real world using some medium, whether it’s a graphite pencil, oil paint, a block of marble, Wacom tablet on a pc, or even through a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I’m thinking about it.
The workflow for an AI-generated image can have a few steps before it feels like it sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you’re after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a basis you can work with. Once you have the basic image, you can then use inpainting tools to mask specific areas of the image and change particular details, colors, etc. This again can take many, many generations before you land on something that sufficiently matches your vision.
This is all also after you go through the process of reviewing and selecting one of the hundreds of models that have been trained specifically for different types of output. Want to generate anime-style art? There’s a model for that. Want something great at landscapes? There’s a different one for that. Sure, you can use an all-purpose model for everything, but some models simply don’t have the training to align with your vision, so you either choose to live with ‘close enough’ or you start downloading new options, comparing them against your existing workflow, etc.
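For a sense of what that workflow looks like in code, here’s a rough sketch using Hugging Face’s diffusers library - the model names, prompts, and mask are illustrative examples only, and in practice both steps get repeated many times before anything matches the image in your head:

```python
# Rough sketch of the txt2img -> inpaint workflow with Hugging Face diffusers.
# Model names, prompts, and the mask rectangle are illustrative only.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

# Step 1: generate the basic outline of the image from a text prompt.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base = txt2img("a misty mountain lake at dawn, oil painting style").images[0]

# Step 2: inpaint - mask the region you want changed (white = regenerate,
# black = keep) and re-prompt just that area, leaving the rest intact.
mask = Image.new("RGB", base.size, "black")
ImageDraw.Draw(mask).rectangle([200, 300, 380, 430], fill="white")

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
revised = inpaint(
    prompt="a small wooden rowboat on the water",
    image=base,
    mask_image=mask,
).images[0]
revised.save("revised.png")
```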
There’s certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly represent a transparent veil in graphite, but as with other formats, I have a hard time suggesting that when someone represents their vision in the real world, it’s automatically “not art”.
It sounds like someone got ahold of a 6-year-old copy of Google’s risk register. Based on my reading of the article, it sounds like Google has a robust process for identifying, prioritizing, and resolving risks that are identified internally. This is not only necessary for an organization their size, but is also indicative of a risk culture that incentivizes self-reporting of risks.
In contrast, I’d point to an organization like Boeing, which has recently been shown to have provided incentives to the opposite effect - prioritizing throughput over safety.
If the author had found a number of issues that were identified 6+ years ago and were still shown to be persistent within the environment, that might be some cause for alarm. But, per the reporting, it seems that when a bug, misconfiguration, or other type of risk is identified internally, Google takes steps to resolve the issue, and does so at a pace commensurate with the level of risk that the issue creates for the business.
Bottom line, while I have no doubt that the author of this article was well-intentioned, their lack of experience in information security / risk management seems obvious, and ultimately this article poses a number of questions that are shown to have innocuous answers.
Well to be fair the OP has the date shown in the image as Apr 23, and Google has been frantically changing the way the tool works on a regular basis for months, so there’s a chance they resolved this insanity in the interim. The post itself is just ragebait.
*Not to say that Google isn’t doing a bunch of dumb shit lately; I just don’t see this particular post from over a month ago as being as rage-inducing as some others in the community.
Wait, you don’t inherently trust pictures of text posted by anonymous strangers online? Clearly this sentiment deserves downvotes. /s
On a less sarcastic note, I’ve noticed this a lot with my Gen Z friends - instead of using the share button that’s built into pretty much every website and app these days, I get a screenshot of a headline from an article and am left to find the source on my own. Infuriating.
You’re right, it’s not an insurmountable obstacle, I think I was just feeling petulant about seeing another product with a sign next to it saying basically, “you must be this invested in the Apple ecosystem to ride”.
Let’s be real though, it’s already a better option than what Apple is offering for $3500, so I’m sure they will get some traction before being bought out.
Lastly, because you underscored the point I was making, fuck iPhones.
This requires an Apple iPhone XR or newer, as the face scan utilizes the TrueDepth sensor.
Am I wrong in my reading that this hardware product is only available for people who already own and use an iPhone XR or newer? It seemed neat until I got to that bit…
This is really just stereotypical Tesla driver behavior. They are far and away the most entitled drivers.
FTFY
I have one of these, and while the switch tech is certainly neat, I haven’t really come up with many good ways to use it.
Their implementation doesn’t seem to support changing resistance or sensing multiple levels of pressure on the key. One way I do use it, though, is by changing the activation distance for certain keys I tend to press by mistake when gaming, like Caps Lock, so you really need to bottom out the key to activate it. This seems to help a bit, but I suspect that if I wanted to get the most out of it, I would probably need to be a much more intense gamer.
Yeah Susan, I’m sure Microsoft TOTALLY learned their lesson from the CrowdStrike incident. Y’know, since they’ve never had an anti-malware company cause worldwide outages because of a configuration error before.