I was told earlier today this was due to a transporter accident.
Damn… nice work on the research! I will read through these as I get time. I genuinely didn’t think there would be much for manual labor stuff. I’m particularly interested in the plumber analysis.
I think augmentation makes a lot of sense for jobs where a human body is needed and it will be interesting to see how/if trade skill requirements change.
I’ll edit this as I read…
Plumbing. The article makes the point that it isn’t all or nothing: as automation increases productivity, fewer workers are needed. Ok, sure, good point.
Robot plumber? A humanoid robot? Not very likely until enormous breakthroughs are made in machine vision (I can go into more detail…), battery power density, sensor density, etc. The places and situations vary far too greatly.
Rather than an Asimov-style robot, a more feasible yet productivity-enhancing solution is automating pipe cutting and other individual tasks. For example, you take your phone and measure the pipe as described in the link. Press a button, and by the time you walk out to your truck the pipe cutter has already cut the length you need, saving you several minutes. That savings probably means you can do more jobs per day. Cool.
Edit 2
Oil rig worker. Interesting and expected use of AI to improve various aspects of the drilling process. What I had in mind was more like the people that actually do the manual labor.
Autonomous drones, for example, can be used to perform inspections without exposing workers to dangerous situations. In doing so, they can be equipped with sensors that send images and data to operators in real time to enable quick decisions and effective actions for maintenance and repair.
Now that’s pretty cool and will probably reduce demand for those performing inspections (some of whom will have to be at the other end receiving and analyzing data from the robot, until such time as AI can do that too).
Autonomous robots, on the other hand, can perform maintenance tasks while making targeted repairs to machinery and equipment.
Again, the technologies required to make this happen aren’t there yet. Machine vision (MV) alone is way too far from being general purpose. You can devise an MV system that can, say, detect a Coke can and maybe a few other objects under controlled conditions.
But that’s the gotcha. Change the intensity of the lighting, or change its color temperature or hue, and the MV probably won’t work. It might also mistake a Diet Coke can or a similar-sized cylinder for a Pepsi can. If you want it to recognize any aluminum beverage can, that might be tough. Meanwhile any child can easily identify a can under any number of conditions.
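To show what I mean about lighting, here’s a toy sketch (all the numbers and the hue band are made up, and this is nothing like a real MV pipeline) of a fixed color-threshold “detector” that works under the light it was tuned for and fails when the light shifts warm:

```python
# Toy illustration of MV brittleness: a hard-coded hue threshold "trained"
# under one lighting condition breaks when the lighting changes.
# Hue is in degrees [0, 360); all values here are invented for illustration.

def detect_red_can(pixel_hue):
    """Classify a pixel as 'can' if its hue falls in a fixed red band
    chosen under neutral studio light."""
    return pixel_hue >= 350 or pixel_hue <= 10

# Under neutral light, the can's paint reads as hue ~5 degrees: detected.
print(detect_red_can(5))    # True

# Under warm sunset light, every hue shifts toward orange (~+25 degrees):
sunset_hue = (5 + 25) % 360
print(detect_red_can(sunset_hue))   # False: same can, different lighting
```

Real systems are fancier than a single threshold, of course, but the failure mode (statistics tuned to one set of conditions) is the same in spirit.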
Now imagine a diesel engine generator, let’s say. Just getting a robot to change the oil would be nice. But it has to either be limited to a specific model of engine or be able to recognize where the oil drain plug and fill spot are on the various engines it might encounter.
What if the engine is a different color? Or dirty instead of clean? Or it’s night, or noon (harsh shadows), overcast (soft shadows), or sunset (everything is tinted yellow-orange)? I suppose it could be trained for a specific rig and a specific time of day, but that means setup time costs a lot. It might be smarter to build some automated devices onto the engine, like a valve on the oil pan and a device to pump new oil in from a vat or standard container or whatever. That would be much easier. Maybe they already do this, idk.
Anyway… progress is being made in MV and we will make far more. That still leaves the question of an autonomous robot of some kind able to remove and reinstall a drain plug. It’s easy for us but you’d be surprised at how hard that would be for a robot.
Well, we sure as shit better not get “involved” on the ground in Iran.
Uh not instantly. Not storewide in a moment. Make it networked and you can change prices per customer as they walk each aisle.
“that guy looks loaded, pop the prices, quick”
Streamline your Price Fixing shenanigans with our Tenant Screwer 3000™
Well, it’s only unacceptable to rich people who have the power to avoid consequences.
It can be frustrating without being the most frustrating thing. Mild annoyance repeated many times becomes pretty annoying.
I don’t disagree with most of what you said. I think so far the following jobs are safe from direct AI replacement, because it is much harder to replace manual laborers.
What companies won’t realize until too late is that paying customers need jobs to pay for things. If AI causes unemployment to rise to some ungodly high, paying customers will become rare and companies will collapse in droves.
Appreciate the detailed response!
Indeed, intelligence is …a difficult thing to define. It’s also a fascinating area to ponder. The reason I asked was to get an idea of where your head is at with the claims you made.
Now, I admit I haven’t done a lot with gpt-4 but your comments make me think it is worth the time to do so.
So you indicate gpt-4 can reason. My understanding is gpt-4 is an LLM, basically a large scale Markov chain, trained to respond with appropriate output based on input (questions).
On the one hand, my initial reaction is: no, it doesn’t reason it just mimics or simulates human reasoning that came before it in text form.
On the other hand, if a program could perfectly simulate whatever processes are involved in reasoning by a human to the point that they’re indistinguishable, is it not, in effect, reasoning? (I suppose this amounts to a sort of Turing Test but for reasoning exercises).
I don’t know how gpt4 LLMs work yet. I imagine, being a Markov Model (specifically a Markov Chain), if the model is trained on human language then the underlying semantics are sort of implicitly captured in the statistical model. Like, simplistically, if many sentences reflect human knowledge that cars are vehicles and not animals then it’s statistically unlikely for anyone to write about attributes and actions of animals when talking about cars. I assume the LLM is of such a scale that it permits this apparently emergent behavior.
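To make that “implicitly captured in the statistics” idea concrete, here’s a toy bigram Markov chain. The corpus is invented and this is nowhere near GPT-4’s scale or architecture, but it shows how plain co-occurrence counts end up encoding that cars get vehicle words and dogs get animal words:

```python
import random
from collections import defaultdict

# A tiny invented corpus: car sentences and dog sentences never mix vocab.
corpus = ("the car drives on the road . "
          "the car needs fuel . "
          "the dog eats food . "
          "the dog runs in the park .").split()

# Bigram transition table: for each word, the words observed right after it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# The statistics alone "know" cars don't eat or run:
print(sorted(set(transitions["car"])))   # ['drives', 'needs']

# Generate a short continuation by sampling from the observed transitions.
random.seed(0)
word, out = "car", ["car"]
for _ in range(4):
    word = random.choice(transitions[word])
    out.append(word)
print(" ".join(out))
```

No rule about cars or animals is programmed in; the constraint emerges from what co-occurs in the training text. Presumably an LLM does something vastly richer, conditioning on long contexts instead of one previous word, but this is the simplistic version of the intuition.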
I am skeptical about judgement calls. I would think some sensory input would be required. I guess we have to outline various types of judgement calls to really dig into this.
I am willing to accept that gpt-4 simulates the portions of the brain that deal with semantics and syntax, both the receiving and transmitting abilities. And, maybe to some degree, knowledge and understanding.
I think “very similar to a complete brain” is an overstatement as the brain also does some amazing things with vision, hearing, proprioception, touch, among other things. Human brains can analyze situations and take initiative, analyze things and understand how they work and apply that to their repair, improvement, duplication, etc. We can understand and solve problems, and so on. In other words I don’t think you’re giving the brain anywhere near enough credit. We aren’t just Q&A machines.
We also have to be careful of the human tendency to anthropomorphize.
I’m curious to look into vector databases and their applications here. Addition of what amounts to memory, or like extended context, sounds extremely interesting.
Interesting to ponder what the world would be like with AGI taking over the jobs of most knowledge workers, artists, and so on. (I wonder if someone could create a CEO replacement…)
What does it mean for a capitalist society with masses of people permanently unemployed? How does the economy work when nobody can afford to buy anything because they’re unemployed? Does this create widespread poverty and collapse or a post-scarcity economy in some sectors?
Until robots mechanically evolve to Asimov’s vision, at least, manual labor is safe. Truly being able to replace a human body with a robot is still a ways off due to lack of progress on several fronts.
And it saves us the indignity of being treated like criminals in our neighborhood grocery store. Curbside is fantastic in most cases.
How do you define “intelligence” in this context?
Do you think gpt4 is self aware?
Do you believe this LLM tech has the ability to make judgement calls, say? Or understand meaning?
What has been your experience with the accuracy / correctness of the answers it has provided? Does it match claims that mistakes or “hallucinations” occur often?
Hydrazine (from Wikipedia)
Hydrazine is an inorganic compound with the chemical formula N2H4. It is a simple pnictogen hydride, and is a colourless flammable liquid with an ammonia-like odour. Hydrazine is highly toxic unless handled in solution as, for example, hydrazine hydrate (N2H4·xH2O).
Hydrazine is mainly used as a foaming agent in preparing polymer foams, but applications also include its uses as a precursor to polymerization catalysts, pharmaceuticals, and agrochemicals, as well as a long-term storable propellant for in-space spacecraft propulsion. Additionally, hydrazine is used in various rocket fuels and to prepare the gas precursors used in air bags. Hydrazine is used within both nuclear and conventional electrical power plant steam cycles as an oxygen scavenger to control concentrations of dissolved oxygen in an effort to reduce corrosion.
Pff sure. How hard can it be? Few resistor thingies and some capaci-whatsists, and Arduino, done.
It’s the decade of the Linux desktop over here.
In effect, it will be for some people fed up with all this bullshit.
The real obstacles are anti-intellectualism and extremist right wing propaganda.
Naw. It’s predatory bullshit. People deserve to live even if they’re gullible or whatever else. Not everybody gets to grow up with good parenting, schooling, DNA, etc.
I like the cut of his jib.
Somebody let me know when the Artist Formerly Known as Twitter faces any consequences.