
Repost: Two things that kept breaking for me: LLM costs and prompt leaks (from Reddit.com)
Been hacking on something recently and wanted to get a reality check.

While working with LLM APIs, I noticed two things pretty quickly: costs can get unpredictable depending on the model, and people paste way more sensitive stuff into prompts than you'd expect.

The second one was kind of surprising. Stuff like API keys, emails, logs… just regular debugging-type usage. And it all gets sent straight out to whatever model you're using.

I didn't have anything in place either, so I added a thin layer in front to:

- catch obvious sensitive data
- route to cheaper/faster models when possible

It's pretty simple, but it actually helped more than I expected. Not sure if others are seeing this too or if I'm just over-indexing on my own use case.
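To make the idea concrete, here's a minimal sketch of what a layer like that could look like. This is not the OP's actual code; the patterns, the 500-character routing threshold, and the model names are all made-up placeholders for illustration.

```python
import re

# Hypothetical patterns for "obvious" sensitive data.
# A real deployment would need far broader coverage (tokens, IPs, PII, etc.).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9]{16,}"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious sensitive substrings with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

def route(prompt: str,
          cheap_model: str = "small-model",
          strong_model: str = "large-model") -> str:
    """Naive cost routing: short prompts go to the cheaper/faster model.

    The length cutoff is an arbitrary stand-in for a real heuristic
    (task type, expected output length, user tier, ...).
    """
    return cheap_model if len(prompt) < 500 else strong_model

def prepare(prompt: str) -> tuple[str, str]:
    """Scrub first, then pick a model; returns (clean_prompt, model_name)."""
    clean = scrub(prompt)
    return clean, route(clean)
```

The two concerns are deliberately separate functions, so either piece (redaction or routing) can be swapped out or hardened independently of the other.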
