

The article says they suspect this was done by people who have an interest in hunting, since those people often complain that the eagles target birds like pheasants.


Sorry for the casual question, but what do you mean by cap at 60 Hz?
I just use Firefox on Ubuntu, which fifteen years ago seemed like enough.
Which also doesn’t seem that casual, but this shit is too much to keep up with. Today my engineer dad was complaining about search engines having too many ads, and when I asked what he used, he said besides Google on one computer, he uses Bing on the other.
No, this is looking at it wrong. You get to fuck the sexy hybrid human-dog abomination


I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to. (In the same way I’m continuously receiving input from my “camera” and “microphones” as long as I’m awake.)


I’m just a person interested in / reading about the subject so I could be mistaken about details, but:
When we train an LLM we’re trying to mimic the way neurons work. Training is the really resource intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.
When you and I have a “conversation” with ChatGPT, it’s always with that base model; it’s not actively learning from the conversation in the sense that new neural pathways are being created. What’s actually happening is that a prompt that looks like this is submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.
Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”
And so on. Each time, you’re only talking to that base LLM, but feeding it the full history of the conversation along with your new prompt.
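To make that concrete, here’s a minimal sketch of the stateless pattern I’m describing. The names (fake_model, chat_turn, the “Abe” speaker tags) are made up for illustration; a real client would call the provider’s API instead.

```python
# Sketch of stateless chat: the model "remembers" nothing between turns;
# the client resends the entire transcript with every new message.

SYSTEM_PROMPT = "{{provider crafted preliminary prompt}}"  # placeholder

def fake_model(prompt: str) -> str:
    # Stand-in for the real LLM call (hypothetical).
    return f"[reply to a prompt of {len(prompt)} chars]"

def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [f"Abe: {user_message}"]
    # The whole conversation so far is concatenated into one prompt.
    prompt = SYSTEM_PROMPT + "\n" + "\n".join(history)
    reply = fake_model(prompt)
    return history + [f"Agent: {reply}"]

history: list[str] = []
history = chat_turn(history, "Hello I'm Abe")
history = chat_turn(history, "Good to meet you computer friend!")
# After two turns, history holds 4 lines: 2 from the user, 2 from the agent.
```

The point is that the "memory" lives entirely in the transcript the client keeps and resends, not in the model's weights.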
You’re right to point out that now they’ve got the agents self-creating summaries of the conversation to allow them to “remember” more. But if we’re trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.
A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.
Again I’m not an expert, but I expect there’s a way to incorporate this type of learning in nearish real time, but besides the technical work of figuring it out, doing so wouldn’t be very cost effective compared to the way they’re doing it now.


Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.
The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
The second one seems more challenging; as I understand it, training an LLM is very resource intensive. Right now when it “remembers” a conversation, it’s just because we prime it by feeding every previous interaction before the most recent query when we hit submit.
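The first obstacle could be sketched roughly like this. Everything here is hypothetical (sense, respond, the tick rate); it just shows an agent loop that keeps accumulating input and reacting without waiting for a user prompt.

```python
import time

def sense() -> str:
    # Hypothetical stand-in for a camera/microphone frame.
    return "frame"

def respond(context: list[str]) -> str:
    # Hypothetical stand-in for the LLM evaluating all input so far.
    return f"thought about {len(context)} inputs"

context: list[str] = []
output = ""
for _ in range(3):        # a real agent would loop indefinitely
    context.append(sense())
    output = respond(context)
    time.sleep(0)         # e.g. once per second in a real loop
```

Note that this only solves the "always on" part; the context still just grows, so it hits the same limits as the resubmit-everything approach unless the second obstacle (folding experience back into the weights) is also solved.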


I think chairs and tables are insufficiently different - people would end up using one as a substitute for the other. I think a more interesting question would be: what if you were required to magically eliminate all perfectly level planes (tables, chairs, beds), or all slanted planes (ramps, screws, La-Z-Boys)?


I’m an atheist, but if you read about Jesus specifically you won’t find a lot of hate.
Thanks for sharing. Although I’m an enthusiastic open source user, I haven’t written any code of significance, so I’m not aware: has anyone made a license where use is restricted to individuals and democratically controlled organizations? I’m picturing that would allow for some degree of profit motive while encouraging things like worker co-ops and excluding venture capital controlled entities.


Because the only way Marvel movies know how to ratchet up the stakes is having more and more people die.


Thanks for sharing this link!


That’s what I was getting at; I was trying to understand the mechanism by which they were doing this.


When they say access to social media on smartphones does that mean restricting connectivity to certain sites on devices using mobile IP addresses?
I assume they have no mechanism to remove apps from individual devices.
I don’t care what Shrek thinks


Not really. I didn’t see anything unhygienic in terms of the food people were taking home with them.


When I worked at a grocery store the dairy stock room always had a weird tangy smell. Inevitably product would get spilled and then cleaned up, but it never seemed to be truly clean.
I don’t know OP’s intent, but there is a genre of porn where one person isn’t into it, but it’s not rape. It would be like them playing a video game or washing dishes and being indifferent/unresponsive to the other person having sex with them.
Sorry friend, but if someone is asking a question, telling them to read about it rather than provide the meat of the answer doesn’t seem too helpful.
You’re under no obligation to explain anything to anyone, but if you’re going to take the time to respond why not elaborate?
Didn’t Frank Lloyd Wright use the term “Usonian”?