Friday, April 3, 2026

Two things that kept breaking for me: LLM costs and prompt leaks


Been hacking on something recently and wanted to get a reality check.

While working with LLM APIs, I noticed two things pretty quickly: costs can get unpredictable depending on the model, and people paste way more sensitive stuff into prompts than you'd expect.

The second one was kind of surprising. Stuff like API keys, emails, logs… just regular debugging-type usage. And it all just gets sent straight out to whatever model you're using.

I didn't have anything in place either, so I added a thin layer in front to:

- catch obvious sensitive data
- route to cheaper/faster models when possible

It's pretty simple, but it actually helped more than I expected. Not sure if others are seeing this too or if I'm just over-indexing on my own use case.
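For anyone curious what such a "thin layer" might look like, here is a minimal sketch. The patterns, thresholds, and model names are all illustrative assumptions, not the author's actual implementation: a few regexes catch obvious secrets before the prompt leaves the process, and a naive router sends short prompts to a cheaper model.

```python
import re

# Hypothetical patterns for obvious sensitive data; a real setup would use
# a maintained detector, but this shows the shape of the pre-flight check.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders; return cleaned prompt and what was caught."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

def route(prompt: str, cheap: str = "small-model", big: str = "large-model") -> str:
    """Naive cost router: short prompts go to the cheaper model.
    Model names and the length threshold are placeholders."""
    return cheap if len(prompt) < 500 else big

clean, hits = redact(
    "debug this, my key is sk-abc123def456ghi789jkl0, mail me at dev@example.com"
)
model = route(clean)
```

Both checks run before the API call, so nothing sensitive leaves even if the downstream model changes.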









Powered by Blogger.