I want to share some thoughts I had recently about YouTube spam comments. We all know these early bots in the YouTube comment section, with their “misleading” profile pictures and obvious bot-like comments. Those comments are often either random, off-topic remarks or copied from other users.
OK, why am I telling you this? Well, I think these bots are there to be recognized as bots. Their job is to be seen as bots and then be deleted and ignored. That way everyone feels safe, thinking all the bots have been deleted. But in reality there are more sophisticated bots among us. So the easy bots’ job is to get deleted and mislead us into thinking none are left, because the obvious ones are gone.
What do you think? Sounds plausible, doesn’t it? Or am I just paranoid? :D
The most deceitful bots even try to convince you there are more bots than are actually shown on YouTube.
They even follow you to other sites like Lemmy and make posts.
Obvious scams are obvious on purpose, but not for the reason you’re suggesting. They don’t want to waste actual manpower on anyone smart enough to recognize a scam. So they make the scam obvious, and they know that anyone that does respond to it is in that small percentage of clueless or gullible people that the scam might actually succeed on. They’re not trying to play 4D chess, they’re trying to find the people that can only play 1D checkers.
Oh, I finally get this argument! I’ve read it a couple of times before (including in the comment section of this very topic), but it didn’t make sense. Now it clicked. I can definitely see why they keep it simple and straightforward. And yes, it makes more sense than my theory.
Take your meds.
They could be having that effect. Scams that look obvious are meant to attract the people who fall for obvious scams, such as people with dementia. They are designed to be transparent to most people because the scammers don’t want anybody clicking who has the faculties to know better than to fall for the rest of the scam.
Seems like that would be the engineered, high-effort version that I don’t think spammers would invest in.
Is there a need to manipulate moderators? I don’t think so. Unless you have a specific target and don’t mind the effort.
Otherwise, I suspect simple spam works well enough, with lower effort.
So, just for show? It sounds possible but implausible IMO; I don’t think YouTube cares about that cesspool of its own comments, not even enough to set a smoke screen up.
Maybe some of the obviousness is a sort of camouflage, in that if it looks like a phishing scheme, people at YouTube won’t look any deeper. I think the actual goal of the bots is to manipulate the algorithm. Most of the time, the obvious bots just get ignored, especially on videos from bigger creators, so there’s no reason to put effort into making them believable.
Like, maybe they comment on video A to show “engagement” with that content, then they go and comment on video B. That fools the algorithm into treating people who engage with video A as the same kind of audience who would engage with video B, thus getting the algorithm to recommend video B more often to viewers of video A. For something like that you wouldn’t need the bots to look real to other commenters, and having them seem like innocuous phishing-scam bots might reduce the scrutiny on their activity.
I could see a lot of different reasons to do that. Could be as simple as some shady “Viral marketing consultancies” trying to boost a client’s channel in the algorithm. Could also be something more comprehensive and nefarious, like trying to manipulate social discourse by steering whole demographics towards certain topics or even away from specific topics. I do wonder how much the algorithm could be nudged by an organized bot comment spam ring.
I don’t think you sound paranoid at all, at least not compared to me. Bots are everywhere on social sites, and there is a well-documented history of different groups using various tactics and strategies to hide the bots or distract from what the bots are doing.
this possibly applies to phone calls, text messages, email, comments on forums and sites like youtube, and many other things.
check: does user respond? if yes, user will engage. add to will engage list.
check: how does user respond? delete or reply? if reply, add to repeat text/voice call list. if delete add to spam defender list.
will engage list: continue to send. engagement is attention. they are acknowledging, so advertisers may be able to attract their attention in some way.
text/voice list: same as engage list but also opens lines of communication. chance to upsell. chance to phish with support scam.
spam defender list: continue using default spam tactics. add higher level phishing techniques. consider adding to spearphishing list.
spearphishing list: has spam experience and can use computer/phone. possible tech worker. gather more information. attempt to infiltrate. cross reference username with leak db’s. do they reuse their passwords?
all of the above: collect ai training data.
i don’t know how true any of this is, it’s simply how i imagine some of it works. i might be paranoid. how you react is part of how you get classified into a list or group. a rough sketch of that sorting logic is below.
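if you imagine that classification as code, it could look something like this minimal sketch. to be clear, every name, label, and heuristic here is made up for illustration; it’s just my mental model, not any real spam operation’s code:

    # hypothetical sketch of the list-sorting logic described above
    from dataclasses import dataclass, field

    @dataclass
    class Target:
        user_id: str
        lists: set = field(default_factory=set)

    def looks_like_tech_worker(target: Target) -> bool:
        # placeholder heuristic; i imagine a real operation would instead
        # cross-reference usernames against leaked databases
        return False

    def classify(target: Target, responded: bool, action: str) -> None:
        # action is one of "reply", "delete", "ignore" (made-up labels)
        if responded:
            # any response at all proves the account is live and paying attention
            target.lists.add("will_engage")
        if action == "reply":
            # a reply opens a line of communication: upsell, support scams
            target.lists.add("text_voice_call")
        elif action == "delete":
            # deleting/reporting suggests spam awareness; escalate tactics
            target.lists.add("spam_defender")
            if looks_like_tech_worker(target):
                # possible tech worker: gather more info, attempt to infiltrate
                target.lists.add("spearphishing")
        # everyone's reactions feed the training data either way
        target.lists.add("ai_training_data")

    t = Target("someuser")
    classify(t, responded=True, action="delete")
    print(sorted(t.lists))  # ['ai_training_data', 'spam_defender', 'will_engage']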
I wonder what list that ear-piercing high Ab, played with the trumpet 3 inches from the phone when I was having an exceptionally bad spam day, put me on.
Mailing lists for Spotify and Pro Tools, plus you get signed up for a “free”-with-an-asterisk lifetime subscription to Sirius radio that can’t be cancelled.
You know, the truck I had at the time somehow never had its Sirius radio shut off. Although, I never got billed for it either…
The truck might have had one of their ‘lifetime’ subscriptions.
Sirius sold lifetime subscriptions. Some people who purchased one were led to believe it was for the rest of their life, while Sirius worded it to mean the lifetime of the device. Their ‘lifetime’ service got cancelled on them after the merger with XM Radio, or they’d replace their vehicle with one that had a different, but still Sirius, radio and could not transfer the lifetime service.
There was a class action lawsuit filed. The lawsuit was settled in 2021 (subscriptions had been sold as far back as the early 2000s) and made ‘lifetime’ refer to the subscriber, not the life of the radio. People with inactive subscriptions could cancel and get $100, and people with an active subscription could pay $35 (instead of $75) to move it to another radio, each time they wanted to move it. Except that settlement was dismissed in 2022, so it’s no longer possible.
oh yea, there are definitely bots. some try to pretend to be actors from the shows in youtube videos, commenting themselves. for instance, on iSAIP i was commenting on how bad the show has become, and magically the poster had a pic of the actor on it to try to defend said video. you can tell by the broken english they use, which makes me think it’s a troll account from a certain country that is well known to use propaganda.
I don’t even look at YouTube comments half the time.
To me the comments are one of the most interesting things on YouTube, whether on gaming, Linux, or for example funny video content with lots of funny comments. I actually use the FreeTube client to watch videos anonymously, but go to Firefox and log in to YouTube specifically to comment and interact with other users.
If you are concerned about the possibility of an undeleted bot hiding among the other users, what I would suggest is this: get into less popular, less documented video topics and their comment sections.
Idk what you are searching for, but the more specific a video is to a genre or topic, the more it will throw off an AI chatbot. They’ll eventually say the wrong thing. What you do then is see whether they bother correcting themselves or keep using the same answer in their responses. You’ll know it’s a bot because, in a more niche topic, a real person would be more dedicated to saying the right things.
Linux is well documented, but how well documented is the Palemoon browser compared to Firefox, for example? The more specific you get, the easier it will be to tell if it’s an AI bot. If you talk about everyday topics, it gets harder to tell, because the AI is constantly being trained on user-generated content.