Funny, because sometimes it's not the patience of the other side that's the problem, but my own. With LLMs in particular, I find my patience challenged. With humans, it's often somewhat possible to gauge their level of comprehension and adjust accordingly, but with the savant-idiot-like qualities that most LLMs exhibit, it's really difficult to strike a balance, or even to tell at which point they're irrecoverably lost.
BrenBarn 1 day ago [-]
When a person can't do something because it exhausts their patience, we usually describe the task not as difficult but as tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is at the low end of ability, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something you're too impatient to do yourself.
perrygeo 10 hours ago [-]
I agree with this more every day.
Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.
Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.
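To make the mechanical case concrete, a minimal sketch in Python (the keys and fields here are made up for illustration):

    # Convert an id-keyed dictionary into a flat list of records.
    users_by_id = {
        1: {"name": "Ada", "email": "ada@example.com"},
        2: {"name": "Grace", "email": "grace@example.com"},
    }
    records = [{"id": user_id, **fields} for user_id, fields in users_by_id.items()]
    # [{'id': 1, 'name': 'Ada', ...}, {'id': 2, 'name': 'Grace', ...}]

An LLM will reliably churn out this kind of transformation; whether your API should expose the dict or the list in the first place is the part that deserves human attention.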
TeMPOraL 4 hours ago [-]
> Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.
OTOH, unless I've been immersed in the larger problem space of streaming vs. batching and caching, and generally thinking on this level, there's a good chance LLMs will "think" of more critical edge cases and caveats than I will. I use scare quotes here not because of the "are LLMs really thinking?" question, but because this isn't really a matter of thinking - it's a matter of having all the relevant associations loaded in your mental cache. SOTA LLMs always have them.
Of course I'll get better results if I dive in fully myself and "do it right". But there's only so much time working adults have to "do it right"; one has to be selective about focusing attention. For everything else, quick consideration + iteration is the way to go, and if I'm going to do something quick, it turns out I can do it much better with a good LLM than without, because the LLM will have all the details cached that I don't have time to think up.
(Random example from just now: I asked o3 to help me with power-cycling and preserving the longevity of a screen in an IoT side project; it gave me good tips, and then mentioned I should also take into account the on-board SD card and protect it from power interruptions and wear. I hadn't even remotely considered that, but it was a spot-on observation.)
This actually worries me a bit, too. Until now, I relied on my experience-honed intuition for figuring out non-obvious and second-order consequences of quick decisions. But if I start to rely on LLMs for this, what will it do to my intuition?
(Also, I talked about time, but it's also patience - and for those of us with executive-functioning issues, that's often the difference between attempting a task and not even bothering with it.)
skydhash 8 hours ago [-]
> easy, mechanical, boring af, and something we should almost obviously outsource to machines
That’s when you learn vim or emacs. Instead of editing character-wise, you move to bigger structures. Every editing task becomes a short list of commands and, with the power of macros, becomes repeatable. Then, if you do it often, you can (easily) add a custom command for it.
andyferris 8 hours ago [-]
Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!
Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.
stuaxo 4 hours ago [-]
AI selling itself at the high end is much like car companies showing off shiny sports cars.
Centigonal 11 hours ago [-]
>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.
I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.
I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between e.g. OpenAI's and Claude (via Bedrock)'s batch APIs.
This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.
TeMPOraL 4 hours ago [-]
I don't know how "normies" use this, but ChatGPT has been steadily improving. Myself, I've been using it much more in the recent month than ever before (i.e. official webapp, as opposed to API or other providers), simply because o3 is just that good. The integration of search and thinking is something beautifully effective. It's definitely the smartest model around for any problem-solving queries, whether it's figuring out the pinout of some old electronic component you bought in Shenzhen a decade ago, or figuring out which product to buy to solve a problem and how it compares with alternatives.
I do agree that ChatGPT may just be good enough that casual users don't bother exploring alternatives (I'm tired of the constant churn of AI releases too - on that note, there should be a worldwide ban on multiple AI companies releasing similar tools at the same time; I don't have time to look into all of them at once!) - but they're definitely not getting a suboptimal deal here. At least not the ones on the paid plan who are aware of the model switcher in the UI.
EDIT: Also, setting gpt-4o as the default model gives ChatGPT another stickiness point: its (AFAIK still) unique image generator qualitatively outclasses anything that came before.
wobfan 6 hours ago [-]
> ChatGPT is good enough for the use cases of most of its users
I think that's the point the author made. If the big majority of users wants this, but software developers want that, they'll obviously focus on the former. It's what recent history confirms, and it's the logical move from a capitalist standpoint.
To break it down: developers want intelligence and quality, users want patience and validation. ChatGPT is good at the latter and okay (compared to competitors) at the former.
ChrisMarshallNY 11 hours ago [-]
It takes practice, skill, and self-actualization to become a really good listener. I know I’m not there yet, and I’ve been at it a long time. I suspect most folks aren’t so good at it.
It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.
I think there was a post, here, a few days ago, about people being “lost” to LLMs.
ggm 8 hours ago [-]
"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More specifically it's also not "too cheap to meter" because its patently both metered, and not too cheap.
And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.
stuaxo 4 hours ago [-]
This idea nails it.
I can ask the LLM infinite "stupid questions".
For all the things I know a little about, it can push me toward the average level of understanding in that field.
I can do lots of little prototypes and find the gaps, then think and come back or ask more; in turn, I learn.
ktallett 3 hours ago [-]
But do you ever get good enough to make a contribution that's worthwhile? And is your knowledge flawed because the data used to train the model is flawed?
Whilst I do see your point, and I do see the value for prototyping, I don't quite agree that you can learn very much from it. Not more than the many basic "intro to..." articles can teach.
kepano 9 hours ago [-]
I had a similar thought a while ago[1]:
> the most salient quality of language models is their ability to be infinitely patient
> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways
Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of the sunk cost fallacy — it can be hard to let go of the work we invested our own time into.
[1] https://x.com/kepano/status/1842274557559816194
Agreed. I find it rather funny that LLMs can refresh their context, while humans carry their context day through day, so it is sometimes very hard to explain things to them in an alternative wording.
Animats 8 hours ago [-]
The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.
stuaxo 4 hours ago [-]
Silly because search engines work really well for this use case.
bee_rider 7 hours ago [-]
Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…
I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?
TeMPOraL 4 hours ago [-]
> I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?
Medium-term, that may be the problem. The social aspect of having another person "see you" is important in therapy. But in the immediate term, LLMs are a huge positive in this space. Professional therapy is stupidly expensive in terms of time and money, which makes it unavailable to the majority of people, even the rather well-off.
And then there's availability, which, beyond what the article discussed, matters also because many people have problems that don't fit well with the typical 1-hour sessions 2-4 times a month. LLMs let one have a 2+ hour therapy session every day, at random hours, for as long as it takes to unburden oneself completely; something that's neither available nor affordable for most people.
th0ma5 11 hours ago [-]
That's the ultimate goal of these models, though: to exhaust you of any sass. I'd imagine they will eventually approach full hallucination for any sufficiently long context.
timewizard 10 hours ago [-]
> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?
Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.
> Most good personal advice does not require substantial intelligence.
Is that what therapy is to this author? "Good advice given unintelligently?"
> They’re platitudes because they’re true!
And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"
> However, they are fundamentally a good fit for doing it because they are
...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble amounts to, and not at all what therapy is meant to be.
em-bee 2 hours ago [-]
for me, discussing problems requires human empathy from the listener. AI can't provide that. talking to an AI about personal problems is no better than talking to myself.
patience for answering technical/knowledge questions that i don't want to bother a human being with may be nice, but i get the same patience from a search engine. and the patience an AI provides is offset by the patience i need to get the right answers out of it.
i have endless patience when talking to a human being because i have empathy for them. but i don't have empathy for a machine, and therefore i have no patience at all for the potential mistakes and hallucinations that an AI might produce.
AI for therapy is even worse. the thought that i could receive bad/hallucinated advice from an AI outright scares me.