#49: Things I’m Thinking About in AI
Nostalgia, RAG, and GPT-5
Mixing it up from the usual programming. Instead of focusing on a central theme, I’ll jump from topic to topic. We’ll cover three AI happenings in a blurb format. Let’s dive in.
Nostalgia Deepens
Do you ever wake up and the first thing to cross your mind is Newton’s third law of motion? For every action, there is an equal and opposite reaction. You know, that one?
Better yet, what’s your favorite law of motion? Share your favorite law and why in the comments… kidding! I doubt anyone wakes up with this thought. But feel free to comment as it helps with Substack engagement.
Anyways, new AI developments like OpenAI’s ChatGPT Agent are increasingly focused on removing humans from the loop rather than assisting them. Although these tools are not fully reliable yet, AI is advancing so rapidly that it’s reasonable to expect more uses for replacement agents as their accuracy improves. This will likely lead to a broader existential crisis for many people as they navigate an AI-led world.
As a result, people will yearn for simpler times, if not turn away from AI altogether (cue Newton’s third law). On that note, I stumbled upon an X post from creator Zach Pogrob, where he documents a day in his life on a camcorder.
My iPhone has every video functionality I could possibly need, yet there’s something about that camcorder footage that just feels better (even though I know it isn’t). It’s because of the emotion it evokes. Camcorders were popular when life was simpler. Before algorithms were able to do our jobs.
If I’m working at a consumer brand that’s been around for decades, I’m relaunching product lines from years ago. The “archive collection” or the “90s capsule” are phrases you’ll start to hear more often. Second-hand clothing will continue to rise in popularity, partially because people want to be reminded of a simpler time when a vintage style was in vogue. And they’ll express this opinion through their purchase decisions, whether that be funky clothing or dated technology. Brands like Hollister, Kodak, and Nintendo are well-positioned to breathe life back into their faded images. Why innovate when people are nostalgic enough to buy what used to work? Out with the new and in with the old.
Not Your Kitchen Dish RAG
Ever receive an answer from ChatGPT (or another LLM chatbot) that you’re sure is wrong? Or at the very least, sounds fishy? That’s where Retrieval-Augmented Generation (RAG), an AI architecture, comes in. Instead of relying solely on what a large language model was trained on, RAG-enhanced systems retrieve relevant documents from external data sources in real time, then generate responses grounded in those documents. Those sources could be your company’s internal data, private databases, or niche public domains that help ensure accuracy the next time you ask a question. Oversimplifying it, RAG anchors LLM output to trusted sources so that ChatGPT answer has a higher likelihood of being correct.
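To make the retrieve-then-generate flow concrete, here’s a minimal sketch. The document store, the word-overlap scoring, and the example policy snippets are all toy stand-ins (real systems use vector embeddings and an actual LLM call), but the shape is the same: fetch the most relevant documents, then build a prompt grounded in them.

```python
import re

# Toy RAG flow: retrieve relevant documents, then ground the prompt in them.
# Scoring here is simple word overlap; production systems use embeddings.

def retrieve(question, documents, top_k=2):
    """Rank documents by how many words they share with the question."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(re.findall(r"[a-z]+", doc.lower()))),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, documents):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical brand documents standing in for internal data sources.
docs = [
    "Returns are accepted within 30 days of purchase.",
    "Free shipping applies to orders over $50.",
    "Gift cards are non-refundable.",
]
prompt = build_prompt("Are returns accepted?", docs)
print(prompt)
```

The prompt that comes out the other end carries the retrieved policy text along with the question, which is what gives the LLM something factual to lean on instead of its training data alone.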
Where I see RAG coming in most handy is in the application layer. You may run a consumer brand and need to ensure your AI customer service agent is up to speed on all your recent policy changes and inventory positions. If your customer service company is knowledgeable on the latest AI frameworks, they should allow you to add or subtract any kind of brand document you would like without needing to retrain their models. RAG retrieves these documents at inference time and feeds them to the LLM as part of forming its answer. Rather than sidestepping the LLM, RAG supplements it, incorporating the data you provided before your AI chatbot can deliver an erroneous response, ensuring that it adapts to the whims of your business.
But hey, why doesn’t my AI customer service agent just retrain its model? Well, that’s time-consuming and expensive. With RAG, you can add a layer of business information to reflect real-time change without a large upheaval to your full model. If you’re a brand operator and your customer service company isn’t willing to add in new information whenever you’d like, ask about RAG. Not all AI customer service companies can support a RAG-based system that lets you dynamically include new information. But it doesn’t hurt to ask as it could give your business an edge.
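The “update without retraining” point above can be sketched in a few lines: the business knowledge lives in a document store you can edit at any time, while the model itself never changes. Everything here is illustrative (the store, the shipping-policy text, and the word-overlap search are all assumptions for the example).

```python
import re

class DocumentStore:
    """Toy store for brand documents; the model is never retrained,
    only this store changes when the business changes."""

    def __init__(self):
        self.docs = []

    def add(self, doc):
        # e.g. the text of a newly published policy
        self.docs.append(doc)

    def remove(self, doc):
        # retire an outdated policy
        self.docs.remove(doc)

    def search(self, query, top_k=1):
        q = set(re.findall(r"[a-z]+", query.lower()))
        return sorted(
            self.docs,
            key=lambda d: len(q & set(re.findall(r"[a-z]+", d.lower()))),
            reverse=True,
        )[:top_k]

store = DocumentStore()
store.add("Standard shipping takes 5-7 business days.")

# The policy changes: swap the document, not the model.
store.remove("Standard shipping takes 5-7 business days.")
store.add("Standard shipping now takes 2-3 business days.")

print(store.search("How long does standard shipping take?"))
```

The retrieval step now surfaces the updated policy immediately, which is the whole appeal versus waiting on an expensive retraining cycle.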
On another note, RAG comes in handy for other parts of the application layer. Say you’re developing AI for finance or AI for law (companies like Hebbia or Harvey). Relying on ChatGPT’s or Perplexity’s output can be a risky endeavor if you aren’t 100% sure the answer is right. Because your clients can’t afford to be only 80% of the way there. They need to get to 100%, otherwise the answer is wrong, and the technology doesn’t work.
Expanding more on this last point, I’ve found that asking complex questions to LLMs yields answers that are somewhat correct, but rarely 100% correct.
I recently built a personal CRM of second- and third-degree connections using Perplexity’s Comet. The goal was to gather a list of people who work in AI-adjacent fields, each reachable through someone in my network (a first-degree connection) who could bridge an introduction. I had Comet scan my email and LinkedIn to pull in second- and third-degree connections based on who I had emailed over the past year and who I’m connected to on LinkedIn. I’d grade the assignment at 60%. The framework was there, but there were a lot of silly naming mistakes and some of the tagging was clearly incorrect. Yet, it was a good start.
Why bring this up? It shows that the LLMs need help getting 100% right (or at a minimum, better than 60%). Where RAG can best come in is increasing that 60% to a confidence level that works for your use case.
OpenAI’s New Model Release
Yesterday (August 7th), OpenAI announced the launch of GPT-5. The new and improved model boasts stronger writing and coding capabilities (look out, Anthropic), more reliable performance in health-related fields, and the ability to incorporate a personality into the LLM’s responses. Oh, and a bunch more technical changes like larger context windows, multimodal capabilities, and reduced hallucination rates.
Beyond the technical improvements, the bigger takeaway is OpenAI’s push into vertical applications of its software. From becoming a more reliable domain for health topics to pushing further into software development, OpenAI looks ready to disrupt the status quo in trillion-dollar industries.
If you’re building an AI company, I’d recommend going niche as quickly as possible. If your service is too broad, with a large total addressable market, OpenAI will soon be knocking on your door.
That said, I doubt GPT-5 is going to provide answers to niche topics that are 100% right all the time. There are still plenty of opportunities to improve upon the LLM output, which is where RAG and domain-specific expertise come into play.

