• 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Agree that passkeys are the direction we seem to be headed, much to my chagrin.

    I agree with the technical advantages. Where passkeys make me uneasy is when considering their disadvantages, which I see primarily as:

    • Lack of support for disaster recovery - let’s say you have a single smartphone with your passkeys and it falls off a bridge. You’d like to replace it, but you can’t access any of your accounts because your passkeys are tied to that phone. Now you’re basically locked out of the internet until you can set up a new phone, sufficiently validate your identity with each identity provider, and get new passkeys.
    • Consolidating access to one’s digital life into the hands of a small subset of identity providers. Most users will probably allow Apple/Google/etc. to become the single gatekeeper to their digital identity. I know this isn’t a requirement of the technology, but I’ve interacted with users long enough to see where this is headed. What’s the recourse when someone uses social engineering to reset your passkey and an attacker is then able to fully assume your identity across a wide array of sites?
    • What does liability look like if your identity provider is coerced into sharing your passkey? In the past this would only provide access to a single account, but with passkeys it could open the door to a collection of your personal info.

    There’s no silver bullet for the authentication problem, and I don’t think passkeys are an exception. What passkeys do provide is relief from credential stuffing, and I’m certain that consumer-facing websites see that as a massive advantage, so I expect passwords will eventually be relegated to the annals of history, though it will likely be quite a slow process.
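    To make the credential-stuffing point concrete, here’s a toy sketch in plain Python. Random per-site secrets stand in for the per-site keypairs a real passkey uses (real passkeys are asymmetric WebAuthn keypairs; this only illustrates the per-site isolation property, not the cryptography, and the site names are made up):

```python
import secrets

def enroll(vault: dict, site: str) -> str:
    """Mint a fresh per-site credential; return what the site's server keeps.

    Toy model only: unlike a reused password, each site gets a credential
    that is useless everywhere else.
    """
    cred = secrets.token_hex(16)
    vault[site] = cred  # stays on the user's device / in their vault
    return cred

vault = {}
site_a_record = enroll(vault, "site-a.example")
site_b_record = enroll(vault, "site-b.example")

# An attacker who breaches site A learns only site A's record.
stolen = site_a_record

# "Stuffing" the stolen credential into site B fails: the per-site
# credentials are independent random values, so nothing transfers.
print(stolen == site_b_record)  # False
```

    With reused passwords, `stolen` would be valid at both sites - that is the entire attack surface credential stuffing exploits, and the one passkeys close off.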



  • What an absolute failure of the legal system to understand the issue at hand and appropriately assign liability.

    Here’s an article with more context, but tl;dr: the “hackers” used credential stuffing, meaning they tried username and password combos that had been breached from other sites. The affected users were reusing passwords across sites, so 23andMe only ever saw what looked like legitimate login attempts with valid username and password combos.
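    One mitigation sites can deploy against exactly this is checking submitted passwords against known-breached sets. Here’s a sketch of the k-anonymity range-query idea popularized by Have I Been Pwned, simulated against a tiny local breach corpus rather than the real API (the corpus and function names are invented for illustration):

```python
import hashlib

# Toy breach corpus; a real deployment would query a service such as the
# Have I Been Pwned range API instead of holding a local set.
BREACHED = {"password123", "letmein", "hunter2"}
BREACHED_HASHES = {hashlib.sha1(p.encode()).hexdigest().upper() for p in BREACHED}

def is_breached(password: str) -> bool:
    """k-anonymity style check: reveal only the first 5 hex chars of the
    SHA-1 hash, receive every suffix in that bucket, compare locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # The "server" side: return all suffixes whose hash starts with the
    # prefix. The caller never discloses the full hash or the password.
    bucket = {h[5:] for h in BREACHED_HASHES if h.startswith(prefix)}
    return suffix in bucket

print(is_breached("password123"))  # True
print(is_breached("correct horse battery staple"))  # False
```

    Rejecting or flagging breached passwords at login or signup would have blunted this attack without requiring users to change their habits.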

    Arguably 23andMe should not have built out their internal data-sharing service quite so broadly, but presumably many users are looking to find long-lost relatives, so I understand the rationale for it.

    Thus continues the long, sorrowful swan song of the password.


  • The website makes it sound like all of the code being bespoke and “based on standards” is some kind of huge advantage, but all I see is a Herculean undertaking with too few engineers and too many standards.

    W3C lists 1138 separate standards currently, so if each of their three engineers implements one discrete standard every day, with no breaks/weekends/holidays, then having an alpha available that adheres to all 2024 web standards should be possible by 2026?
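    The back-of-the-envelope math behind that estimate, using the 1138 figure above (the exact count shifts as W3C publishes new recommendations):

```python
standards = 1138          # W3C standards count cited above
engineers = 3
per_engineer_per_day = 1  # heroic assumption: one standard per engineer per day

days = standards / (engineers * per_engineer_per_day)
print(round(days))  # 379 -- just over a year of nonstop, flawless work
```

    And that’s before accounting for the fact that many of those standards are hundreds of pages long and interdependent.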

    This is obviously also without testing, but these guys are serious, senior engineers, so their code will be perfect on the first try, right?

    Love the passion, though; can’t wait to see how this project plays out.


  • Since we’re telling people to Google things, try “anecdotal fallacy” and let us know if it helps you to understand the source of the downvotes.

    The OP is about survey data that directly contradicts your position. It’s fantastic that you’ve found a role with a work/life balance that works so well for you, but it simply doesn’t match the experience of many commenting in this thread or of those who were surveyed.

    Be as obstinate as you like, it won’t change the lived experiences of others in the industry.


  • Is there any chance you’re at a kbbq or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.

    Jokes aside, I see the comparison you’re making and it’s not a bad one. I’d counter by giving the example of a menu - when you get to a restaurant you’re given a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let’s say that a meal on the menu is like the starting point of the workflow I described.

    Based on that you have an idea of what the output will be when you order - but let’s say you don’t like mushrooms and you prefer your sauce on the side. When you make your order you provide those modifications - this is like inpainting.

    Certainly you’re not a ‘chef’, but if the dish you design is both bespoke and previously unimaginable, I’d argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.

    Not exactly the same but I don’t think it’s entirely different.


  • Not OP but familiar enough with open source diffusion image generators to be able to chime in.

    Now I’d argue that being an artist comes down to being able to envision something in your mind’s eye and then reproduce it in the real world using some medium, whether it’s a graphite pencil, oil paint, a block of marble, Wacom tablet on a pc, or even through a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I’m thinking about it.

    The workflow for an AI-generated image can take a few steps before the result feels like it sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you’re after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a basis you can work with. Once you have the basic image, you can then use inpainting tools to mask specific areas of the image and change specific details, colors, etc. This again can take many, many generations before you land on something that sufficiently matches your vision.
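    The inpainting step above boils down to masked regeneration: only the pixels under the mask get replaced, and everything else is carried over from the base image. A toy illustration, with plain Python lists standing in for image tensors (real inpainting pipelines like Stable Diffusion’s operate on latents and blend at the model level, not raw pixels):

```python
# Base "image" and a mask marking the region to regenerate (1 = repaint).
base = [
    [10, 10, 10, 10],
    [10, 20, 20, 10],
    [10, 20, 20, 10],
    [10, 10, 10, 10],
]
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def inpaint(base, mask, repaint):
    """Keep unmasked pixels; replace masked ones with generated values.

    `repaint(r, c)` is a stand-in for the diffusion model's output at
    that position.
    """
    return [
        [repaint(r, c) if mask[r][c] else base[r][c]
         for c in range(len(base[0]))]
        for r in range(len(base))
    ]

result = inpaint(base, mask, repaint=lambda r, c: 99)
print(result[1][1], result[0][0])  # 99 10
```

    The iteration the parent comment describes is re-running `repaint` over and over, keeping the mask fixed, until the regenerated region matches what you had in mind.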

    This is all also after you go through the process of reviewing and selecting one of the hundreds of models that have been trained specifically for different types of output. Want to generate anime-style art? There’s a model for that. Want something great at landscapes? There’s a different one for that. Sure, you can use an all-purpose model for everything, but some models simply don’t have the training to align with your vision, so you either live with ‘close enough’ or you start downloading new options, comparing them against your existing workflow, etc.

    There’s certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly represent a transparent veil in graphite, but as with other formats I have a hard time suggesting that when someone represents their vision in the real world that it’s automatically “not art”.


  • It sounds like someone got ahold of a 6-year-old copy of Google’s risk register. Based on my reading of the article, Google has a robust process for identifying, prioritizing, and resolving risks that are flagged internally. This is not only necessary for an organization their size, but is also indicative of a risk culture that incentivizes self-reporting.

    In contrast, I’d point to an organization like Boeing, which has recently been shown to have provided incentives to the opposite effect - prioritizing throughput over safety.

    If the author had found a number of issues that were identified 6+ years ago and were still shown to be persistent within the environment, that might be some cause for alarm. But, per the reporting, it seems that when a bug, misconfiguration, or other type of risk is identified internally, Google takes steps to resolve the issue, and does so at a pace commensurate with the level of risk that the issue creates for the business.
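    The “pace commensurate with the level of risk” idea is usually operationalized as a likelihood-times-impact score mapped to a remediation deadline. A sketch of that triage pattern - the scores and SLA tiers here are invented for illustration, not Google’s actual process:

```python
# Classic risk-matrix triage: score = likelihood x impact; score drives SLA.
# Tiers are illustrative, not any real organization's policy.
SLA_DAYS = [(20, 7), (12, 30), (6, 90), (0, 365)]  # (min score, days to fix)

def remediation_sla(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 scale; returns days allowed to remediate."""
    score = likelihood * impact
    for threshold, days in SLA_DAYS:
        if score >= threshold:
            return days
    raise ValueError("unreachable: last threshold is 0")

print(remediation_sla(5, 5))  # 7  -- near-certain and severe: fix in a week
print(remediation_sla(1, 2))  # 365 -- low likelihood, low impact
```

    Under a scheme like this, a 6-year-old entry for a low-scoring risk sitting in the register is the system working as intended, not a scandal.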

    Bottom line, while I have no doubt that the author of this article was well-intentioned, their lack of experience in information security / risk management seems obvious, and ultimately this article poses a number of questions that are shown to have innocuous answers.