When I search this topic online, I always find either wrong information or advertising lies. So what can LLMs actually do very well, as in being actually useful and not just outputting a nonsensical word salad that sounds coherent?
Results
So basically from what I’ve read, most people use it for natural language processing problems.
Example: turn this infodump into a bullet point list, or turn this bullet point list into a coherent text, help me with rephrasing this text, word association, etc.
Other people use it for simple questions that it can answer with a database of verified sources.
Also, a few people use it as a struggle duck, basically helping alleviate writer's block.
Thanks guys.
Brainstorming. ChatGPT and co. are slightly better rubber ducks, which helps me sort my thoughts and evaluate ideas.
Also when researching a new topic I barely know anything about, it helps to get useful pointers and keywords for further research and reading. It’s like an interactive Wikipedia in that regard.
Not exactly sure this is the “right way” to use them, but I use one as an autocomplete helper in my IDE. I don’t ask it to code anything; I just use it as autocomplete.
The majority of the time, it works well, especially in common languages like Python.
Just rewrote my corporate IT policies. I fed it all the old policies and a huge essay of criteria, styles, business goals, etc., then created a bunch of new policies. I have ChatGPT interview me about the new policies; I don’t trust what it outputs until I review it in detail, and I ask it things like:
- What do other similarly themed policies have that mine don’t?
- How is the policy going to be hard to enforce?
- What are my obligations annually, quarterly, and so on?
- What forms should I have in place to capture information (e.g., consultant onboarding)?
I can do it all myself, but it would be slower and more likely to have consistency issues and grammatical errors.
Taking a natural language question and providing a foothold on a subject by giving you the vocabulary so that you can research a topic on your own.
“What is it called when xyz?”
I use it pretty sparingly, and it’s stuff that’s simple but, if I were to google it, I’d get a whole 10-page essay filled with ads.
For example, here are some of my recent searches, spanning roughly the last two months.
- XXL and 2X being the same thing.
- Rainbow Minecraft MOTD server text.
- Script to find all the schematic files in my ~30 nested folders and copy them to a new folder
- how to get cat hair out of clothes
- some legal supreme court thing from the 1800s
- creative commons CC-BY-SA explanation (their website didn’t explain what the abbreviation of “BY” was)
- how to unlock my grandad’s truck with the code
- “can ghosts be black” (??? I think I was in a silly argument on discord)
- how to read blood pressure
- scene from movie where I forgot the movie
- how to draw among us in desmos calculator as a joke
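The schematic-file search in the list above is a good example of a small one-off script an LLM can usually get right. Here's a minimal sketch of what such a script might look like; the `.schematic` extension and the source/destination paths are assumptions for illustration, not what the commenter actually used.

```python
import shutil
from pathlib import Path


def collect_schematics(source: Path, dest: Path, pattern: str = "*.schematic") -> int:
    """Recursively find files matching `pattern` under `source` and copy
    them into `dest` (flattened into one folder). Returns the count copied."""
    dest.mkdir(parents=True, exist_ok=True)
    copied = 0
    for path in source.rglob(pattern):
        if path.is_file():
            shutil.copy2(path, dest / path.name)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Note this flattens everything into one folder, so files with duplicate names in different subfolders would overwrite each other; a real version might prefix the copied name with its parent folder.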
Very basic and non-creative source code operations, e.g., “convert this representation of data to that representation of data based on the template.”
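A hypothetical example of the kind of mechanical, template-driven conversion described above: reshaping a list of records into lines following a template string. The field names and template here are made up for illustration.

```python
def render_records(records: list[dict], template: str) -> list[str]:
    """Apply a str.format-style template to each record in turn."""
    return [template.format(**record) for record in records]


# Example: turn structured records into display lines.
rows = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
lines = render_records(rows, "{id}: {name}")
# lines == ["1: Ada", "2: Bob"]
```

This is exactly the low-stakes transformation where a wrong answer is obvious at a glance, which is part of why it's a safe task to delegate.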
I find it’s really good for asking extremely specific code questions
I find they’re pretty good at some coding tasks. For example, it’s very easy to make a reasonable UI given a sample JSON payload you might get from an endpoint. They’re good at stuff like crafting fairly complex SQL queries or making shell scripts. As long as the task is reasonably focused, they tend to get it right a lot of the time. I find they’re also useful for discovering language features when working with languages I’m not as familiar with.

I also find LLMs are great at translation and transcribing images. They’re useful for summaries and finding information within documents, including codebases. I’ve found they make it a lot easier to search through papers where you might want to find relationships between concepts or definitions for things. They’re also good at subtitle generation as well as text-to-speech tasks.

Another task I find they’re great at is proofreading and providing suggestions for phrasing. They can also make a good sounding board. If there’s a topic you understand and you just want to bounce ideas off something, it’s great to be able to talk through that with an LLM. Often the output it produces can stimulate a new idea in my head. I also use an LLM as a tutor when I practice Chinese; they’re great for free-form conversational practice when learning a new language.

These are just a few areas I use LLMs in on a nearly daily basis now.
I use LLMs to generate unit tests, among other things that are pretty much already described here. It helps me discover edge cases I haven’t considered before, regardless of whether the generated unit tests themselves pass.
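To illustrate the edge-case discovery described above: given a small function under test (a made-up one here), an LLM asked for unit tests will often propose boundary cases a human skips on the first pass, such as empty input, a single element, duplicates, and negatives.

```python
def running_max(values: list[int]) -> list[int]:
    """Return the running maximum of a list (hypothetical function under test)."""
    result = []
    current = None
    for v in values:
        current = v if current is None else max(current, v)
        result.append(current)
    return result


# Edge cases of the kind an LLM typically suggests:
assert running_max([]) == []                        # empty input
assert running_max([5]) == [5]                      # single element
assert running_max([3, 3, 3]) == [3, 3, 3]          # duplicates
assert running_max([-2, -5, -1]) == [-2, -2, -1]    # all negative
```

Even if a generated test is wrong, reading it forces you to decide what the correct behavior for that case should be, which is where the value is.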
Oh yeah, that’s a good use case as well; it’s the kind of low-risk, tedious task these things excel at.
Overcoming writer’s block, or whatever you want to call it.
Like writing an obit or thank-you message that doesn’t sound stupid. I just need a sentence down to work from, even if it doesn’t make it to the final draft.
Or I needed to come up with activities to teach 4th graders about aerodynamics for a STEM outreach thing. None of the LLM’s output was usable as it was spit out, but it was enough for me to kickstart real ideas.
This sort of applies to dev work too, especially if you have ADHD. I overcome blockage by rubber ducking, but sometimes my ADHD gets strong enough that I can’t, for the life of me, sit down to write some trivial code that might as well be a typing exercise. I simply get Cursor to generate the stuff, proofread it, and now that it’s suddenly a bug-smashing session instead of typing out some class or component or whatever, I overcome my blockage and can even flow. Speaking as someone who often gets blocked for weeks to months at a time, LLMs have saved me from crashing into deadlines more than a few times.
This is a great use. I use it for a similar purpose; it’s great for brainstorming ideas. Even if its ideas are bullshit because it made them up, they can spark an idea in me that’s not.
Yes, it’s like the rubber-ducking technique, with a rubber duck that actually responds.
Sometimes even just trying to articulate a question is a good first step toward finding the solution. An LLM can help with this process.
That’s about where I land. I’ve used it the other way, too, to help tighten up a good short story I’d written where my tone and tense were all over the place.
I’ve used LLMs to write automated tests for my code, too. They’re not hard to write, just super tedious.
Same. It gets me started on things, even if I use very little or even none of its actual output.
They work well when being correct doesn’t matter
Well, yes, but then what’s the point? It would be like having Wikipedia filtered through Alex Jones.
There are plenty of use cases that don’t involve it needing to recite accurate facts.
I used it to help write copy for my website, to write proposals, and to help with rephrasing when I can’t think of the most diplomatic way to say a thing.
For instance: commenting on Reddit
- JIRA queries, rules, automations, etc.
- Suggestions for how to make my rage-fueled communications sound more reasonable and professional
- Meeting summaries. Not having to take notes is HUGE.
Meeting notes are the ideal use case for AI, in the sense that everyone thinks someone needs to write them but almost nobody ever goes back and actually reads them.
But when I got curious and read the AI-generated ones (the ones from Zoom, at least)… According to the AI, I had agreed to an action that hadn’t even been discussed in the meeting, and we apparently spent half the meeting discussing weather conditions in various locations. (AI seems to have a hard time telling the difference between initial greetings or jokes and the actual discussion, but in this one it became weirdly fixated on those initial 5 minutes.)
This is one area where, at least for me, CoPilot is very good. In most other areas, CoPilot is not very good.
I am not using it for this purpose, but churning out large amounts of text that doesn’t need to be accurate is proving to be a good fit for:
- scammers, who can now write more personalized emails and also hold conversations
- personality tests
- horoscopes or predictions (there are several examples, even in serious outlets, of “AI predicts how the world will end” or similar)
Due to how good LLMs are at predicting an expected pattern of response, they are a spectacularly bad idea (but are obviously used anyway) as:
- a substitute for therapy
- virtual friends/girlfriends/boyfriends
The reason they are such a bad idea for these use cases is that fragile people with self-destructive patterns do NOT need those patterns to be predicted and validated by an LLM.
Have they given you anything creative that was good? I also used it to make a meal plan and a work schedule as an Excel doc; it just needed a few edits.
Would you say you are good at creating a meal plan or a work schedule by yourself, with no AI? I suspect that if you know what a good meal plan looks like to you and you are able to visualize the end result you want, then genAI can speed up the process for you.
I am not good at creative tasks. My attempts to use genAI to create an image for a PowerPoint were not great. I wonder if the two things are related, and I’m not getting good results because I don’t have a clear mental picture of what the end result should be, so my descriptions of it are bad.
In my case, I wanted an office worker who was juggling a specific set of objects that were related to my deck. After a couple of attempts at refining my prompt, Dall-E produced a good result, except that it had decided that the office worker had to have a clown face, with the make-up and the red nose.
From there it went downhill. I tried “yes, like this, but remove the clown makeup” or “please lose the clown face” or “for the love of Cthulhu, I beg you, no more clowns” but nothing worked.
I once asked ChatGPT how it (AI) works. It gave me the tools needed to get the right results. There were free books on prompt engineering online, but after reading them I decided it was easier to have AI teach me to use AI better. That’s for the LLMs. Image generation, on the other hand, takes persistence and priority. If the prompt is too complicated, it will do its own thing; if it is too simple, it will do its own thing. After a lot of practice getting to know how it outputs images, you will find the right, or close, results. Emphasis on close. Leonardo.ai is my favorite.
Edit: if you don’t believe you are creative enough, probe the LLM for ideas. Ask it to make the prompt. They are finicky.
As a developer, I use LLMs as sort of a search engine; I ask things like how to use a certain function or how to fix a build error. I try to avoid asking for code, because often the generated code doesn’t work or uses made-up or deprecated functions.
As a teacher, I use it to generate data for exercises; they’re especially useful for populating databases and generating text files in a certain format that need to be parsed. I’ve tried asking for ideas for new exercises, but they always suck.
kill time
I was surprised how effective it was at producing a checklist of things I should do to get a car that hasn’t been running for 30 years back on the road, then giving instructions for each step and things I should keep in mind.
Outside of that, it’s become a Google replacement for software development questions.
You do kinda have to know about the things you ask it about so you can spot when it’s bullshitting you.