#70: Hot Takes on AI (Part IV)
A contrarian perspective on the current state of AI
We’re back with another Hot Takes on AI edition. Time to explore the bear case for AI shopping, investing, and dependencies.
Hot Take #1:
Shopping on AI platforms like ChatGPT will lag in adoption because AI is optimized for awareness, not conversion.
When you read “Amazon.com”, what do you think? It’s likely that your first (and second) thought has to do with “shopping”.
People associate Amazon with buying stuff. You can buy truly anything on Amazon’s marketplace; they seem to sell everything under the sun. When you go to their website, you’re either considering a purchase or planning one. Granted, you may also go to Amazon to browse, read product reviews, or build a wish list for a later date. But all of those activities are shopping-adjacent.
But when you hear “ChatGPT”, what do you think? Your first thought probably isn’t “shopping”. I believe that will be a problem for OpenAI as it rolls out its Instant Checkout feature, expected in the coming months based on news releases. For those looking for a refresher on Instant Checkout, take a peek at #61: The Case Against Instant Checkout.
Back in October, I argued that the vast majority of Instant Checkout transaction volume will come from the existing e-commerce players (and is thus unlikely to materially increase consumer spending). This time, my argument is that when people go to ChatGPT, they plan to do plenty of other things besides shopping.
According to research conducted by Profound, roughly 6% of ChatGPT prompt volume is transactional in nature. Now, definitions of “transactional intent” vary (and the statistic is relative to the data set Profound analyzed), but the point ties in nicely with the ChatGPT use case report: people go to ChatGPT to discover information or to get help completing a task, not just for guidance on what to buy. More on the ChatGPT use case report later.
See, people go to Amazon to make a purchase. That singular focus allows Amazon.com to be optimized for conversion, a luxury ChatGPT does not have. It’s a product positioning issue, not a user experience issue; the wireframes for Instant Checkout look clean and shopper-friendly.
Hot Take #2:
In 2026, not all ships will rise with the “AI tide” as they did this year.
Note: None of the below is financial guidance.
In 2025, if you held a hefty allocation of Big Tech stocks in your portfolio, you did well. Like, really well.
I don’t see the AI tide lifting all ships in 2026. With so much run-up in valuations, steep P/E ratios, and circular financings in the Big Tech realm, I imagine there will be events we can’t foresee today that will knock some of 2025’s winners off their pedestals. Consider that we may see continued rate cuts (the next Federal Reserve chair will very likely be Trump-approved), which would leave the Federal Reserve with fewer monetary policy tools if inflation starts to run away.
I recommend investing in low-cost index funds. On a long enough time horizon, neither you nor I will beat the market. But if you asked me what I’m watching, here are two themes.
Power
Energy generation, storage, transmission, and supporting technology (e.g., cooling or grid modernization). The more we rely on AI, the more energy is needed to power the critical infrastructure. With hundreds of billions of dollars in announced Big Tech capital expenditures in 2025, capital will continue to be put to work to support the AI infrastructure build-out. Investing in the picks-and-shovels companies is, I believe, the safer play.
China-based Foundational Model Companies
This is a bit contradictory to my previous take about investing in picks-and-shovels companies, yet I believe it’s worth examining the current state of AI in China.
When DeepSeek was released back in January, NVIDIA’s stock was rocked on the premise that fewer semiconductor chips are needed to train large language models (LLMs). Although the market (and NVIDIA) soon recovered, plenty of LLMs have been popping up in China at a fraction of the cost of US-based models, with reasonably similar performance. Moonshot AI’s Kimi K2 and Alibaba’s Qwen are among the leaders in open-source LLMs, and Airbnb’s CEO has said the company relies heavily on Qwen for its AI agents.
Investing in China-based companies carries its own risks around foreign investor claims and geopolitical uncertainty. But current valuations (in both public and private markets) imply that the US will completely own the global AI story. I see that as unlikely given China’s proven ability to create low-cost, strong-performing models.
Hot Take #3:
Outsourcing low-stakes decision-making to AI may erode your high-stakes decision-making abilities.
Maybe it’s just me, but I’ve been increasingly aware of how frequently I ask AI (mainly ChatGPT or Gemini) for advice lately. Earlier this year, I stuck to only asking trivial questions, like what to order at a restaurant. However, as I got more confident in my prompt writing and model understanding, I began to treat it as a true copilot or advisor, asking anything and everything, from innocuous questions to more meaningful ones about life and work.
Although I’m generally satisfied with the output I receive from AI, I do recognize that I’m slowly developing a crutch. Instead of making decisions based on my own research or intuition, I’m outsourcing the process to technology. A very intelligent technology, but one that is only as wise as the data it’s trained on, and that data lacks the years of personal context that only I have.
For better or worse, my reliance on AI isn’t uncommon. OpenAI published a report back in September outlining how its users leverage ChatGPT. They broke uses out into three distinct categories: “asking”, “doing”, and “expressing”. It turns out that 49% of prompts fall into the “asking” category, in which users ask ChatGPT for some sort of advice.
When I ask AI to execute a task for me, my internal dialogue is usually, “I’m glad I can delegate this task, but I’m also glad that I have the knowledge and experience to actually complete the task if I need to.” That provides me with a sense of relief and resolves some guilt about relying on AI to do things for me that I really could do myself.
Have I become more efficient and effective thanks to AI? Absolutely. But in 2026, I’m going to make a conscious effort to preserve productive friction in my decision-making process. When it comes to bigger, more complex decisions, I want to make sure I get the necessary reps on matters that are less consequential. Low-stakes reps strengthen high-stakes judgment.

