• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • I don't know about anyone else, but it's a bit long. Up to question 10 I took it seriously and actually looked for AI-generation artifacts (and got all of them up to 10 correct), then I just sort of winged it, guessed, and got about 50% of those right. OP, if you are going to use this data anywhere, I would first recommend getting all of your sources together, as some of those did not have a good source. But also watch out for people doing what I did: getting tired of the task and just wanting to see how well they did on the part they tried. I got about 15/20 overall.

    For anyone wanting to get good at spotting the tells, focus on discontinuities across edges: the number or intensity of wrinkles across the edge of eyeglasses, or the positioning of a railing behind a subject (especially if a corner is hidden from view; you can imagine where it is, but the image generator cannot). Another tell is a noisy mess where you expect noise that is organized: cross-hatching trips it up, especially at boundaries where two hatches meet, where two trees or other organic-looking things meet, or anywhere lines have a very specific way of resolving when they intersect. Finally, look for real-life objects that are slightly out of proportion; these models are trained on drawn images, photos, and everything else, and thus cross those influences far more than a human artist would. The eyes on the Lego figures gave that one away, though it also exhibits the edge-discontinuity problem with the woman's scarf.





  • The real AI, now renamed AGI, is still very far

    The idea and the name of AGI are not new, and "AI" has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times, though: despite learning so much over the years, we still have no idea how hard AGI is going to be. As of right now, the only correct answer to "how far away is AGI" is "I don't know."


  • Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still 10-20 years out. Now my home computer can run a model that can talk like a human.

    Being able to talk like a human used to be what the layperson would consider AI; now it's not even AI, it's just crunching numbers. And this has been happening throughout the entire history of the field. You aren't going to change this person's mind: this bullshit of discounting the advancements in AI has been here from the start, and it's so ubiquitous that it has a name.

    https://en.wikipedia.org/wiki/AI_effect





  • garyyo@lemmy.world to Memes@lemmy.ml · Revelations · 58 points · 1 year ago

    Given the type of people we are targeting here, I think helium blow-up dolls are a bit of a waste, especially considering the scale we would need to operate at to actually make it somewhat believable. Better would be to use hydrogen: it's much cheaper than helium, has better lift, and is not a limited resource. Along with that, a custom order of human-shaped and roughly human-colored balloons (with painted-on clothing patterns) would work better, and would likely be a lot cheaper at larger scales, since blow-up dolls are made of tougher material than your average balloon. This would also allow for the pursuit of more sustainable materials, given that we are just sort of releasing this stuff into the sky.

    There is also the matter of making it realistic. If we are limiting this to maybe one city, then it's best to build some devices that automatically release the balloons on timed schedules. Load these up with a handful of people-balloons each and let them release with increasing frequency throughout the day. That should be a bit more convincing and gets a bigger effect. As for cleanup: we already filled these guys with hydrogen, so why not just light them up? It might make for a good effect and leave less waste to be examined, making it harder to prove that this is not a rapture event.




  • You should read a bit more on how LLMs work, as it really helps to know the limitations of the tech. But yeah, it's good when it's good, but a lot of the time it is inconsistent. It is also confident but sometimes just confidently wrong, something that people have taken to calling "hallucinations". Overall it is a great tool if you can easily check its output and are just using it to speed up your own code writing, but it is pretty bad at actually generating fully complete code.




  • We don’t understand it because no one designed it. We designed how to train a NN, and we designed some parts of the structure, but not the individual parts inside. The largest LLMs have upwards of 70 billion parameters, each an individual number that training can tweak. There are just too many of them to understand what any individual one does, and since we just let an optimization algorithm do its optimizing, we can’t really even know what groups of them do.

    We can get around this by studying it like we do the brain. Instead of looking at what an individual part does, group parts together and figure out how the group influences things (AI explainability), or even get a different NN to look at it and generate an explanation (post hoc rationale generation). But that is not really the same as understanding what it is actually doing under the hood. What it is doing under the hood is more or less fundamentally unknowable: there is just too much information, and it is not organized well enough for us to understand. Maybe one day we will be able to abstract what is going on in there and organize it in an understandable manner, but not yet.
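    To get a sense of why that many parameters defy individual inspection, here is a rough sketch of how transformer parameter counts are typically tallied. All dimensions below are invented for illustration, not taken from any real model:

```python
# Rough parameter count for a decoder-only transformer.
# All dimensions here are illustrative, not any specific model's.

def transformer_params(n_layers, d_model, d_ff, vocab_size):
    """Approximate learned-parameter count (ignores biases and norms)."""
    attn = 4 * d_model * d_model       # Q, K, V, and output projections
    ffn = 2 * d_model * d_ff           # up- and down-projection matrices
    per_layer = attn + ffn
    embeddings = vocab_size * d_model  # token embedding table
    return n_layers * per_layer + embeddings

# A hypothetical large model: 80 layers, width 8192, 4x FFN, 32k vocab.
total = transformer_params(80, 8192, 4 * 8192, 32_000)
print(f"{total:,} parameters")  # roughly 64.7 billion individual numbers
```

    Every one of those numbers was set by the optimizer, not by a person, which is why inspecting them one at a time tells you almost nothing.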


  • One thing to note is that making an industry more efficient (like translating, which GPT is really good at: much better than Google Translate, though not necessarily better than existing specialized tools) comes with a decrease in the number of jobs. Tech doesn’t have to eliminate the human portion, but if it makes even one human twice as efficient at their job, that’s half the humans you need doing that job for the same amount of work output.
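    The arithmetic behind that claim can be made concrete with a toy model (all the numbers are made up for illustration):

```python
# Toy model: with fixed output demand, a productivity multiplier
# divides the headcount needed. All numbers are invented.

def workers_needed(total_output, output_per_worker, efficiency_multiplier):
    return total_output / (output_per_worker * efficiency_multiplier)

baseline = workers_needed(1000, 10, 1.0)   # 100.0 workers before the tool
with_tool = workers_needed(1000, 10, 2.0)  # 50.0 workers once each is 2x as efficient
print(baseline, with_tool)  # 100.0 50.0
```

    The simplification, of course, is that demand stays fixed; in practice cheaper output can also grow the market.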

    That being said, this is not a great infographic for this topic.