Hi! I am Creesch, also creesch on other platforms :)
For real, it almost felt like an LLM written article the way it basically said nothing. Also, the way it puts everything in bullet points is just jarring to read.
True, though that isn’t all that different from people giving knee-jerk responses on the internet…
I am not claiming they are perfect, but for the steps I described a human aware of the limitations is perfectly able to validate the outcome, while still having saved a bunch of time and effort on the initial search pass.
All I am saying is that it is fine to be critical of LLM and AI claims in general as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum. Which is no longer being critical but just being a reverse fanboy/girl/person.
I don’t know how to say this in a less direct way. If this is your take then you probably should look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:

- Turn a question into one or more sensible search queries.
- Summarize longer pieces of text.
- Extract the relevant bits of information from a pile of text, like search results.
These are all the building blocks for searching on the internet. If you are talking about local documents and such, retrieval augmented generation (RAG) can be pretty damn useful.
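As a rough illustration of the retrieval half of that, here is a minimal sketch in Python. It uses TF-IDF similarity as a stand-in for the embedding models a real RAG setup would use, and the documents and question are made up for the example:

```python
# Minimal sketch of the retrieval step behind RAG: rank local documents
# against a question, then hand the best matches to an LLM as context.
# TF-IDF is a stand-in for real embeddings; the documents and question
# are made-up example data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Lemmy is a federated link aggregator, similar to reddit.",
    "Beehaw is a Lemmy instance with a strong focus on moderation.",
    "RSS feeds let you follow websites without visiting them.",
]
question = "What is Beehaw?"

# Put the documents and the question in the same vector space.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
question_vector = vectorizer.transform([question])

# Pick the most similar document. A real setup would take the top k
# and paste them into the LLM prompt as context for the answer.
scores = cosine_similarity(question_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Best match (score {scores[best]:.2f}): {documents[best]}")
```

The point is that the retrieval step is plain, checkable code; the LLM only gets involved at the end, and a human can always compare its answer against the retrieved documents.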
You are glossing over a lot of infrastructure and development, though when boiled down to the basics you are right. So it is basically a question of getting enough users to have that app installed. Which is not impossible, given that we do have initiatives like OpenStreetMap.
At least for the instance this was posted on: the February 2024 Beehaw Financial Update
Long term wearing of VR headsets might indeed not be all that good. Though, the article is light on actual information and is mostly speculation. Which, for the Apple Vision Pro, can only be the case as it hasn’t been out long enough to conduct anything more than a short term experiment. So there is very little in the way of long term data points.
As far as the experiment they did, there was some information provided (although not much). From what was provided this bit did stand out to me.
The team wore Vision Pros and Quests around college campuses for a couple of weeks, trying to do all the things they would have done without them (with a minder nearby in case they tripped or walked into a wall).
I wonder why the Meta Oculus Quests were not included in the title. If it is the Meta Quest 3, it is fairly capable as far as passthrough goes. But not nearly as good as I understand the Apple Vision Pro’s passthrough to be. I am not saying the Apple Vision Pro is perfect; in fact, it isn’t perfect if the reviews I have seen are any indicator. It is still very good, but there is still distortion around the edges of vision, etc.
But given the price difference between the two, I am wondering if the majority of the participants actually used Quests, as then I’d say that the next bit is basically a given:
They experienced “simulator sickness” — nausea, headaches, dizziness. That was weird, given how experienced they all were with headsets of all kinds.
VR nausea is a known thing even experienced people will get. Actually walking around with these devices, with the distorted views you get, is bound to trigger it. Certainly with the distortion in passthrough I have seen in Quest 3 videos. I’d assume there were no Quest 2s in play, as the passthrough there is just grainy black and white video. :D
Even Apple, with all their fancy promo videos, mostly shows people using the Vision Pro sitting down or indoors walking short distances.
So yeah, certainly with the current state of technology I am not surprised there are all sorts of weird side effects and distorted views of reality.
What I’d be more interested in, but what is not really possible to test yet, is what the effects will be when these devices become even better. To the point where there is barely a perceivable difference in having them on or off. That would be, I feel, the point where some speculated downsides from the article might actually come into play.
Would you like me to quote every single one of your lines, line by line, and respond to them?
No, that’s not really what I’m asking for. I’m also not looking for responses that isolate a single sentence from my longer messages and ignore the context. I’m not sure how to make my point any clearer than in my first reply to you, where I started with two bullet points. You seemed to focus on the second, but my main point was about the first. If we do want to talk about standard behavior in human conversation, generally speaking, people do acknowledge that they have heard/read something someone said even if they don’t respond to it in detail.
Again, I’ve been agreeing that AI is causing significant problems. But in the case of this specific tweet, the real issue is with a pay to publish journal where the peer review process is failing, not AI. This key point has mostly been ignored. Even if that was not the case, if you want to have any chance of combating the emergence of AI, I think it is pretty reasonable to question if the basic processes in place are even functioning in the first place. My thesis (again, if this wasn’t a pay to publish journal) would be that this is likely not the case, as in that entire process clearly nobody looked closely at these images. And just to be extra clear, I am not saying that AI never will be an issue, etc. But if reviewing already isn’t happening at a basic level, how are you ever hoping to combat AI in the first place?
When did anyone say
But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse”
The context of this tweet, saying “It’s finally happened. A peer-reviewed journal article with what appear to be nonsensical AI-generated images. This is dangerous.”, does imply that. I’ve been responding with this in mind, which should be clear. It is this sort of thing I mean when I say “selective reading”: you seemingly take it as me saying that you personally said exactly that. Which is a take, but not one I’d say is reasonable if you take the whole context into account.
And in that context, I’ve said:
that doesn’t mean all bullshit out there is caused by AI
Which I stand by. In this particular instance, in this particular context, AI isn’t the issue and the framing is somewhat clickbait. Which makes most of what you argued about valid concerns. YouTube struggling, SEO + AI blog spam, etc. are all very valid and concerning examples of AI causing havoc. But in this context of me calling a particular tweet clickbait they are also very much less relevant. If you just wanted to discuss the impact of AI in general and step away from the context of this tweet, then you should have said so.
Now, about misrepresenting arguments:
If you are reaffirming somebody else’s comment, you are generally standing behind most if not all of what they said. But nobody here is saying or doing the things you are claiming. You are tilting at windmills.
Have you looked back at your own previous comments when you wrote that? Because while having this, slightly bizarre, conversation I have gone back to mine a few times. Just to check if I actually did mess up somewhere or said things differently than I thought I did. The reason I am asking is that I have been thrown a few of these remarks from you where I could have responded with the above quote myself. Things like “It’s passing the buck and saying that AI in no way, shape, or form, bears any responsibility for the problem.”
The fact that you specifically respond to this one highly specific thing, while I clearly have written more, is exactly what I mean.
shrugs
I feel like this is the third time people are selectively reading into what I have said.
I specifically acknowledge that AI is already causing all sorts of issues. I am also saying that there is also another issue at play. One that might be exacerbated by the use of AI but at its root isn’t caused by AI.
In fact, in this very thread people have pointed out that *in this case* the journal in question is simply the issue. https://beehaw.org/comment/2416937
In fact, the only reason people likely noticed at all is, ironically, the fact that AI was being used.
And again I fully agree, AI is causing massive issues already and disturbing a lot of things in destructive ways. But that doesn’t mean all bullshit out there is caused by AI, even if AI is tangibly involved.
If that still, in your view, somehow makes me sound like a defensive AI evangelist then I don’t know what to tell you…
I said clickbait about the AI specific thing. Which I do stand by. To be more direct, if peer reviewers don’t review and editors don’t edit you can have all the theoretical safeguards in place, but those will do jack shit. Procedures are meaningless if they are not being followed properly.
Attributions can be faked, just like these images are now already being faked. If the peer review process is already under tremendous pressure to keep up for various reasons then adding more things to it might actually just make things worse.
I feel like two different problems are conflated into one though:

1. The peer review process at this journal clearly not functioning.
2. AI making it much cheaper to produce convincing fake content.
Point two can contribute to point 1, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as my understanding of how peer-review processes are supposed to work goes, it is something along the lines of:

1. Authors submit their paper to the journal.
2. An editor does a first check and sends it out to peer reviewers.
3. The peer reviewers **actually read the paper** and **look closely at things like the figures and data**, then send their feedback to the editor.
4. The editor weighs that feedback and decides to accept, reject, or ask for revisions.
If at point 3 people don’t do the things I highlighted in bold, then to me it seems a bit silly to make this about AI. If at point 4 the editor ignores most feedback from the peer reviewers, then it again has very little to do with AI and everything to do with a base process being broken.
To summarize, yes AI is going to fuck up a lot of information, it already has. But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse.
Edit:
To be clear, I am not even saying that peer reviewers or editors should “just do their job already”. But fake papers have been increasingly an issue for well over a decade as far as I am aware. The way the current peer review process works simply doesn’t seem to scale to where we are today. And yes, AI is not going to help with that, but it is still building upon something that already was broken before AI was used to abuse it.
Oh huh, you are right. I threw that exact prompt into DALL-E and indeed got legible letters.
I totally see why you are worried about all the aspects AI introduces, especially regarding bias and the authenticity of generated content. My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin. It’s not about AI per se. It is about ensuring the integrity of scholarly work. Because realistically speaking, how much of the paper itself is actually good or valid?

Even more interesting, and this would bring AI back into the picture: is the entire paper even written by a human, or is the entire thing fake? Or maybe that is also not interesting at all, as there are already tons of papers published with other fake data in them. People that actually don’t give a shit about the academic process and just care about getting their names published somewhere have likely already employed other methods as well. I wouldn’t be surprised if there is a paper out there with equally bogus images created by an actual human for pennies on Fiverr.
The crux of the matter is the robustness of the review process, which should safeguard against any form of dubious content, AI-generated or otherwise. Which is what I also said in my initial reply: I am most certainly not waving hands and saying that review is enough. I am saying that it is much more likely the review process has already failed miserably, and most likely has been failing for a while.
Which, again to me, seems like the bigger issue.
This feels like clickbait to me, as the fundamental problem clearly isn’t AI. At least to me it isn’t. The headline would have worked just as well without AI in it. The fact that the images are AI generated isn’t even that relevant. What is worrying is that the peer review process, at least for this journal, clearly is faulty, as no actual review of the material took place.
If we do want to talk about AI: I am impressed how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.
They’re for different needs.
Yes… but also extremely no. Superficially you are right, but a lot of the reasons why many new distros are created come down to human nature. This covers everything from infighting over inane issues to more pragmatic reasons. A lot of them, probably even a majority, don’t provide enough actual differentiators to honestly claim that it is because of different needs. In the end it all boils down to the fact that people can just create a new distro when they feel like it.
Which is a strength in one way, but not with regard to fragmentation.
It still does? That is an entirely different page and still shows the newest videos of channels you are subscribed to. At least, for me it does.
I am disappointed in that I have not been able to get a single mathematical equation produced (like famous ones), but I know they can?
Well, my understanding is that they actually can’t. LLMs do “language” mostly based on what is called “next word prediction”: they look at the words so far and predict what the next most logical word would be (somewhat simplified). So numbers to them are not numbers but words, which is why they are fairly bad at them.
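To make that concrete, here is a toy sketch of the idea. It is nothing like a real transformer (those predict over learned probability distributions, not raw counts), but it shows why such a model “knows” that “two plus two is four” only because that phrase was frequent in its training text, not because it calculated anything. The training text and names are made up for illustration:

```python
# Toy next-word predictor: count which word tends to follow which in some
# training text, then repeatedly pick the most likely continuation.
# Nothing like a real transformer, but the basic loop is the same idea.
from collections import Counter, defaultdict

training_text = "two plus two is four . two plus three is five . two plus two is four ."
words = training_text.split()

# Count bigram frequencies: for each word, what follows it and how often?
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # The model never "calculates": it only looks up what usually came next.
    return followers[word].most_common(1)[0][0]

sentence = ["two", "plus", "two", "is"]
sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # -> "two plus two is four", by frequency, not math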
Opera has Aria, which is like the cleanest version of ChatGPT
Pass, not sure what stake the Chinese owners have these days, but Opera is a bit too… feature rich in everything.
I do like working with just chat.openai.com for simple stuff. It is great at helping me debug things in areas where I don’t quite have all the knowledge I’d like. For example, I had to work on a shell script earlier in bash. Something I don’t do often, and as an added bonus it needed to work on both macOS machines and the bash version shipped with “git bash” on Windows. The macOS versions of the GNU utils already function slightly differently at times, but git bash on Windows is entirely broken in some areas. Where yesterday I spent an hour trying to find something relevant based on my input and the error I got through Google, ChatGPT just managed to point out the pain point right away.
And that is where I feel ChatGPT (in this case anyway) does a great job: troubleshooting issues about things that are not necessarily bleeding edge. I just presented it with a clear problem and a bit of context and asked why that could be the case. It also got it wrong a few times, but that is fine, it did save me a bunch of time in the end.
Bing and Google Bard keep disappointing me. Bing for some reason only picks up on half of what I ask. Which is extremely odd, as it supposedly is ChatGPT based and ChatGPT gives pretty good answers to the same queries. The only problem with the latter is that a lot of it is of course outdated.
Bard might just be broken for me. I keep getting “I’m a text-based AI, and that is outside of my capabilities.” or similar responses.
Yeah, you raise some valid points about the future of reddit itself and communities being forced to move. A few things I specifically still want to reply to:
I guess I also don’t get the concern about picking “the right lemmy instance” - at worst, it’s like picking an e-mail server, or grocery store. Try a random one, find out what doesn’t work for you (if anything) and then use that knowledge to evaluate the next one.
Well yeah, but that is easy to say in hindsight. If all you have heard is “Lemmy” and you start looking things up, it can become a bit overwhelming and difficult to figure out. Also, ironically, because a lot of people are trying to put information out there, but not everyone is good at actually creating easy to follow resources. From a user perspective, you are entirely right. From a community perspective it is slightly more complex: you either need to find the money and people with the technical know-how to host your own instance, or find a reliable instance that allows community creation.
I tend to quote and comment on the part of a comment I’m replying to that I have something to say about it.
On reddit I, personally, also wouldn’t have assumed that to be the intent, often because that is not what is happening. What I often do when I just want to reply to something specific is state it. Something along the lines of “I generally agree with your post/comment, but this part specifically I have a slightly different view of”, and then follow with the quote.
this is a rant (so don’t take it that seriously)
Heh, some people want their rants to be taken very seriously :) So again, just add it as context. Not just state that it is a rant, but also that, because of that, it doesn’t have to be taken that seriously.
What do you mean by “it”? The chatGPT interface? Could be, but then you are also missing the point I am making.
After all, ChatGPT is just one of the possible implementations of LLMs and indeed not perfect in how they implemented some things like search. In fact, I do think they shot themselves in the foot by implementing search through Bing and implementing it poorly. It basically is nothing more than a proof of concept tech demo.
That doesn’t mean that LLMs are useless for tasks like searching; it just means that you need to properly implement the functionality to make it possible. It certainly is possible to implement search functionality around LLMs that is both capable and can be reviewed by a human user to make sure it is not fucking up.
Let me demonstrate. I am doing some steps manually that you would normally automate with conventional code:
I started out by asking ChatGPT a simple question.
It then responded with:
The following step I did manually, but it is something you would normally have automated: I put the suggested query into Google, quickly grabbed the first 5 links, and then put the following into ChatGPT.
It then proceeded to give me the following answer:
Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course a lot of it can be done better, as I didn’t want to spend too much time on it.
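To sketch what that automated version could look like, roughly something like the following. `ask_llm` and `web_search` are hypothetical placeholders for whatever LLM and search APIs you would actually wire in; the point is only the shape of the pipeline, not a definitive implementation:

```python
# Sketch of automating the manual steps above: generate a search query,
# fetch the top results, and have the model answer from them with sources.
# `ask_llm` and `web_search` are hypothetical stand-ins for real clients.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g. a chat completion)."""
    raise NotImplementedError("plug in your LLM client here")

def web_search(query: str, limit: int = 5) -> list[str]:
    """Placeholder for a search API call returning result page texts."""
    raise NotImplementedError("plug in your search client here")

def answer_with_sources(question: str) -> str:
    # Step 1: let the model turn the question into a search query.
    query = ask_llm(f"Write a short web search query for: {question}")

    # Step 2: run the query and grab the first few results.
    pages = web_search(query, limit=5)

    # Step 3: hand the results back so the model answers from them,
    # which lets a human check the answer against the same sources.
    context = "\n\n".join(pages)
    return ask_llm(
        f"Using only these search results, answer the question "
        f"and cite which result each claim comes from.\n\n"
        f"Results:\n{context}\n\nQuestion: {question}"
    )
```

The important bit is step 3: because the answer is tied to the retrieved results, a human can review it against the same sources, which is exactly the validation step I described earlier.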