I regularly use the export-data button on ChatGPT, then parse the JSON export into an HTML file per conversation, with a DB to search for the file(s) when needed.
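For anyone wanting to replicate this setup, here is a minimal sketch in Python. The structure of `conversations.json` (a list of conversations, each with a `title` and a `mapping` of message nodes) is an assumption based on recent exports; OpenAI doesn't document the format, and it may change:

```python
import html
import json
import pathlib

def export_conversations(export_path="conversations.json", out_dir="chats"):
    """Turn a ChatGPT data export into one HTML file per conversation.

    Assumes the layout seen in recent exports: a JSON list of
    conversations, each with a "title" and a "mapping" of message
    nodes. The format is undocumented and may change.
    """
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conversations = json.loads(
        pathlib.Path(export_path).read_text(encoding="utf-8"))
    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"untitled-{i}"
        rows = []
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue  # root/system nodes carry no message
            parts = (msg.get("content") or {}).get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if not text:
                continue
            role = (msg.get("author") or {}).get("role", "?")
            rows.append(f"<p><b>{html.escape(role)}:</b> {html.escape(text)}</p>")
        doc = f"<h1>{html.escape(title)}</h1>\n" + "\n".join(rows)
        (out / f"{i:04d}.html").write_text(doc, encoding="utf-8")
```

From there, any local search tool (grep, a SQLite FTS table, etc.) can serve as the "DB" part.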
askafriend 2 days ago [-]
Personally I'll wait for OpenAI to add this feature directly. I'm sure they're working on it.
I don't want this solution delivered in the form of an extension (one practical reason is I use ChatGPT from mobile a lot of the time). I have 0 extensions installed in general.
Larrikin 2 days ago [-]
Why do you subject yourself to web ads?
winternewt 13 hours ago [-]
How else do you propose that the web sites you visit should fund their work and infrastructure?
az09mugen 16 hours ago [-]
With Brave on mobile you have no ads, and no extension needed.
d4rkp4ttern 10 hours ago [-]
Another feature that I find shockingly absent from most web-based chat providers is autocomplete, i.e., Copilot-like suggestions to complete what you're typing. Typing long text into chat boxes quickly becomes tedious, and context-based autocomplete helps a lot; you can experience this in AI IDEs like Zed or Cursor. In fact, I often resort to using those just for this feature.
amelius 2 days ago [-]
I wish I could search through my chats more easily.
But I don't want to download extensions; they're too much of a security risk.
I learned today that o1 is able to search through all chats and can find and verify whether the findings are relevant to the actual context. I found that very useful, as I have a lot of very long chats about a single project.
ChatGPT lists the findings with date and context, and searches further back if asked to (in my case, back to summer 2024).
graeme 2 days ago [-]
Wait, how do you do that?
rallyforthesun 2 days ago [-]
I just asked it to recall from our last chats whether we had discussed this particular bug before.
kccqzy 1 day ago [-]
Why not just copy paste these chats and save them into a local file?
I mean I've been copying chats with my friends and saving them locally since the time online chats were called Instant Messaging.
jaredsohn 1 day ago [-]
OpenAI has already built some of this but maybe it requires a paid account.
Since then, for whatever reason, it's not available for my account (I'm on a Plus plan).
s-sameer 1 day ago [-]
Thanks for sharing that... I don't use the Pro plan, so I built this extension to let everyone bookmark their important chats for free
s-sameer 1 day ago [-]
all chats are stored locally, so there are no privacy concerns either
ramoz 2 days ago [-]
I’ve always wanted better search and chat organization.
But I’m at a place where I can’t determine if the ephemeral UX of chatting with AI (ChatGPT, Claude) isn’t actually better. Most chats I want to save these days are things like code snippets that I’m not ready to integrate yet.
rubymamis 2 days ago [-]
You could join my native, cross-platform client waitlist[1] if you're looking to use your OpenAI API key. Work-in-progress but it's coming along pretty fast.
That is a perfect use case for an extension like this. It makes it easier to jump back into a previous conversation, which is primarily what I use it for as well.
behnamoh 2 days ago [-]
The fact that you even need something like this shows how far we are from truly useful language models. Ideally they would hold the context of all your messages in mind; so far we've had to manage that context for them manually.
themanmaran 2 days ago [-]
To be fair, this is less a language-model problem and more one in the application layer around them.
Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
I think AI chat is just relatively new as a UI pattern, so it takes time to build conventions around it.
Ex: in 2023 I told GPT to answer all questions like a pirate. I never told it to stop doing that, so if we're loading every historical chat in memory, should it still be answering as a pirate?
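For what it's worth, the search/retrieval idea is easy to prototype over locally saved chats. A naive keyword-scoring sketch, assuming a hypothetical layout of one HTML (or text) file per exported conversation:

```python
import pathlib
import re
from collections import Counter

def search_chats(query, chat_dir="chats", top_k=5):
    """Rank saved chat files by how often the query terms appear in them.

    Naive bag-of-words scoring; a real tool would want stemming,
    TF-IDF weighting, or embeddings, but this is enough to triage
    a few hundred exported conversations.
    """
    terms = set(re.findall(r"\w+", query.lower()))
    ranked = []
    for path in sorted(pathlib.Path(chat_dir).glob("*.html")):
        words = Counter(
            re.findall(r"\w+", path.read_text(encoding="utf-8").lower()))
        score = sum(words[t] for t in terms)
        if score:
            ranked.append((score, path.name))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [name for _, name in ranked[:top_k]]
```

The same ranking could feed a retrieval step before each model request, instead of stuffing the whole history into context.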
Vampiero 2 days ago [-]
> Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
Nope, with an infinite context window the LLM would take forever to give you an answer. Therefore it would be useless.
We don't really have such a thing as a context window; it's an artifact of LLM architecture. We're building a ton of technology around it, but who's to say it's the right approach?
Maybe the best AIs will only use a very tiny LLM for actual language processing while delegating storage and compression of memories to something that's actually built for that.
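Napkin math behind the "take forever" point: vanilla self-attention does work roughly quadratic in context length (ignoring KV caching and sub-quadratic attention variants), so the cost of "just keep everything in context" blows up fast. A toy illustration:

```python
def relative_attention_cost(n_tokens, baseline=8_192):
    """How much more self-attention work a context of n_tokens needs
    versus a baseline window, under the crude quadratic-scaling
    assumption (real inference costs also have large linear terms)."""
    return (n_tokens / baseline) ** 2
```

Under this crude model, a 1M-token context costs on the order of 16,000x the attention work of an 8k window, which is why retrieval over history tends to win.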
layer8 1 day ago [-]
You need something like this if you want to use them as a reminder. Even if LLMs could remind you of past chats, they wouldn't know which chats you want to be reminded of. It's like marking chats as favorites: you actually have to mark them yourself for anyone to know which chats are your favorites.
consumer451 7 hours ago [-]
Just a reminder that LibreChat exists. It's FOSS and lets you bring your own API keys for all the LLM providers. The UI is excellent.
You can run it locally, or as I do on a $5/month Linode server. I don't want to pay ~$20/month to each LLM provider, so I put $5 to $10 on my Anthropic and OpenAI API accounts every couple of months, and that lasts me plenty long.
You get to save all your chats, change models mid-chat, view code artifacts, create presets, and much more.
If you don't know how to set up something like this, ask ChatGPT or Claude. They will walk you through it, and you will learn a useful skill. It's shockingly easy.
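For the record, the Docker route is roughly the following. This is a sketch based on the LibreChat README at the time of writing (the `.env.example` file and the default port 3080 are assumptions from those docs); check the repo for current steps:

```shell
# Fetch LibreChat and run it with Docker Compose.
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
cp .env.example .env   # put your OPENAI_API_KEY / ANTHROPIC_API_KEY here
docker compose up -d   # UI should come up on http://localhost:3080
```

On a small VPS like the $5 Linode mentioned above, you'd typically put a reverse proxy with TLS in front of it rather than exposing the port directly.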
Honestly, I'm pretty sick of ChatGPT these days. It completely ignores custom instructions, loses context insanely quickly, has bugs where it puts code in the chat instead of updating the canvas, the Projects feature is half-baked and a terrible experience, and GPTs are just really limited and also half-baked. I started using Le Chat (Mistral) again and, honestly, the conversations there are much more fun. Tons of issues with that one as well, but I am happier using it, haha. I ended up using a desktop app that lets me control the system prompt against Mistral's API and couldn't be happier.
memhole 23 hours ago [-]
I’ve been wondering if OpenAI’s updates ruin other applications or utility? Kinda makes me skeptical of global models. Obviously OpenAI is optimizing and changing how they train their models. Makes sense to me you’d lose some of the quirks. I think this is actually the promise of open weight models. Personally, I’ve never used ChatGPT, Claude, whatever. I’ve found a decent amount of utility in just the open weight models.
Kerbonut 18 hours ago [-]
I am right there with you on the open weight models. Only reason I started back on ChatGPT is because I'm rebuilding my servers and had to take out my 3090Ti. That thing is huge and I needed the room for hard drives until my 3.5" to 5.25" adapters come in.
brettgriffin 2 days ago [-]
There are two technologies I use every day that demonstrate a company is capable of solving an incredibly hard problem, X, while completely dropping the ball on the presumably easier part of UX, Y. ChatGPT is one of those; driving my Tesla is the other. I'm not sure how or why it happens, but I think about it daily.
tmpz22 2 days ago [-]
Ineffective dogfooding. PMs might use it every day, but they only use a subset of functionality. Some engineers may intentionally never use it when they get home because they're so sick of looking at it. Some engineers are doing crazy, esoteric things, but it doesn't propagate because their heads are down within the org. Most people are showcasing exclusively happy paths to leadership, sorry, I meant management. Executives only use it for emails, demos, and again a limited subset of happy paths.
Just burnout, siloing, and a lack of creativity. We can't solve these problems in the industry because we are greedy short-term thinkers who believe we're long-term innovators. To say nothing of believing we are smarter and more entitled than we are.
kraftman 2 days ago [-]
But each chat has a unique link that you can just bookmark right?
graeme 2 days ago [-]
Is it possible to bookmark a chat on mobile? Haven't found a way to do so on iOS.
Claude has a way to star important conversations. Don't think ChatGPT has that.
My only solution so far has been aggressively deleting conversations once I find an answer and know I don't need it for reference.
fragmede 2 days ago [-]
In the ChatGPT iOS app, I can long-press on the chat in the left sidebar, and one of the options is "share chat".
graeme 2 days ago [-]
Ah. That one actually makes a public link, and doesn't work if there are images, or under certain other circumstances.
On desktop you can directly copy the url for reference and open it later
brettgriffin 2 days ago [-]
Of course. But I use it dozens of times a day across dozens of projects. Many of the concepts are linked together. Intelligently indexing, linking, and referencing them seems like a pretty obvious feature. I doubt I'm in the minority in expecting this.
s-sameer 2 days ago [-]
It basically offers a much better user experience than manually bookmarking each link.
perchard 2 days ago [-]
perhaps Y is harder to solve than you are assuming
llamaimperative 2 days ago [-]
"Harder [for the organization in question] to solve" is definitely right
Not really an excuse though, since a product company's mandate is to create a product that doesn't leave its customers baffled about apparently missing functionality.
s-sameer 2 days ago [-]
lol ikr, it's crazy this doesn't already exist
maxbaines 2 days ago [-]
I built my own client (llmpad.com) originally to solve this problem, as well as to use other LLMs and features. A little surprised others have not done this too?
You can try llmpad soon, feel free to message me.
frankacter 16 hours ago [-]
Is there something like this, but for managing ChatGPT/AI Prompts?
deknos 2 days ago [-]
is there something like that for firefox?
s-sameer 23 hours ago [-]
does anyone even use light mode on ChatGPT?
jrs235 2 days ago [-]
Is it just me or is ChatGPT down?
s-sameer 1 day ago [-]
I think it was down yesterday
https://mashable.com/article/chatgpt-chat-history-search-int...
It also has Projects (the docs say it's for Plus, Team, and Pro users). https://help.openai.com/en/articles/10169521-using-projects-...
Any outputs they generate that one finds useful need to be retained outside their walled-garden.
[1] https://www.get-vox.com/
https://librechat.ai
https://github.com/danny-avila/LibreChat