https://x.com/i/lists/1669153613199835138?t=R0mCicxs7zfJE_yOAek4gQ&s=09
Michael Fritzell (Asian Century Stocks)
RT @smartkarma: Malaysian equity market kicked off to a strong start in Jan-26, with ADTV at MYR3.4b, surpassing all monthly levels recorded in 2025.
Source: Maybank, Bursa Malaysia https://t.co/sowVmjmpeQ
tweet
Michael Fritzell (Asian Century Stocks)
https://t.co/WzIUZSp3aU
tweet
Michael Fritzell (Asian Century Stocks)
Great advice from @firstadopter's farewell letter:
- Take the long-term view on EPS
- Do your own work to build conviction
- Micro over macro: fundamentals matter most
- Sit and hold, don't trade in and out too much
https://t.co/FJdjYpLwP0
tweet
Moon Dev
Missed clawdbot zoom
In case you missed today’s wild clawdbot zoom
You can get a replay and a ticket for tomorrow here: https://t.co/JbJdIbW2p9
see you tomorrow
Moon
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Stanley Druckenmiller on what he learned from George Soros:
“𝘐𝘯 𝘣𝘢𝘴𝘦𝘣𝘢𝘭𝘭 𝘵𝘦𝘳𝘮𝘴, 𝘐 𝘩𝘢𝘥 𝘢 𝘷𝘦𝘳𝘺 𝘩𝘪𝘨𝘩 𝘣𝘢𝘵𝘵𝘪𝘯𝘨 𝘢𝘷𝘦𝘳𝘢𝘨𝘦. 𝘏𝘦 𝘩𝘢𝘥 𝘢 𝘮𝘶𝘤𝘩 𝘩𝘪𝘨𝘩𝘦𝘳 𝘴𝘭𝘶𝘨𝘨𝘪𝘯𝘨 𝘱𝘦𝘳𝘤𝘦𝘯𝘵𝘢𝘨𝘦… 𝙒𝙝𝙖𝙩 𝙄 𝙡𝙚𝙖𝙧𝙣𝙚𝙙 𝙛𝙧𝙤𝙢 𝙎𝙤𝙧𝙤𝙨 𝙞𝙨: 𝙬𝙝𝙚𝙣 𝙮𝙤𝙪 𝙝𝙖𝙫𝙚 𝙘𝙤𝙣𝙫𝙞𝙘𝙩𝙞𝙤𝙣, 𝙮𝙤𝙪 𝙨𝙝𝙤𝙪𝙡𝙙 𝙗𝙚𝙩 𝙧𝙚𝙖𝙡𝙡𝙮 𝙗𝙞𝙜.”
The idea isn’t to always be active.
It’s to size up when the odds are most in your favor.
The quote frequently attributed to Soros:
“𝘐𝘵’𝘴 𝘯𝘰𝘵 𝘸𝘩𝘦𝘵𝘩𝘦𝘳 𝘺𝘰𝘶’𝘳𝘦 𝘳𝘪𝘨𝘩𝘵 𝘰𝘳 𝘸𝘳𝘰𝘯𝘨 𝘵𝘩𝘢𝘵’𝘴 𝘪𝘮𝘱𝘰𝘳𝘵𝘢𝘯𝘵, 𝙗𝙪𝙩 𝙝𝙤𝙬 𝙢𝙪𝙘𝙝 𝙢𝙤𝙣𝙚𝙮 𝙮𝙤𝙪 𝙢𝙖𝙠𝙚 𝙬𝙝𝙚𝙣 𝙮𝙤𝙪’𝙧𝙚 𝙧𝙞𝙜𝙝𝙩 𝙖𝙣𝙙 𝙝𝙤𝙬 𝙢𝙪𝙘𝙝 𝙮𝙤𝙪 𝙡𝙤𝙨𝙚 𝙬𝙝𝙚𝙣 𝙮𝙤𝙪’𝙧𝙚 𝙬𝙧𝙤𝙣𝙜.”
This ties closely to Warren Buffett’s punch card concept: if you only had a limited number of investment decisions in your lifetime, you’d reserve them for your very best ideas.
Which makes today interesting.
Aggressive selloffs across many high-quality businesses — alongside justified but severe multiple compression and muted expectations — are creating a growing menu of potential high-conviction opportunities.
Not a call to swing at everything.
But a reminder to be selective… and size up when your conviction is highest.
$FICO $SPGI $MCO $MSFT $CSU $NDAQ $ICE $NOW $INTU $TDG $NFLX $NVDA
___
Video: In Good Company | Norges Bank Investment Management (11/06/2024)
tweet
App Economy Insights
$GOOG The Genie is Out!
Let's break down the quarter:
☢️ $185B CapEx guide.
🛒 Agentic browser + UCP.
🤖 Gemini tops 750M MAU.
🚗 Waymo's $126B valuation.
☁️ Cloud accelerates 48% Y/Y.
🔮 Backlog +55% Q/Q to $240B.
https://t.co/i1PLBnZhtw
tweet
Moon Dev
it's never been more urgent that someone in your fam learns ai
but the only people benefiting right now are coders that can build
i believe code is the great equalizer, especially now in the ai age
someone close to u must learn to control ai before it controls u https://t.co/SPa0VhTZNn
tweet
It's very difficult to imagine going back to manual coding. TLDR: everyone has their own developing flow; mine is currently a few Claude Code (CC) sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.
Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.
Speedups. It's not clear how to measure the "speedup" from LLM assistance. I certainly feel much faster at what I was already going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before, and 2) I can approach code that I couldn't work on before because of knowledge/skill gaps. So it's certainly a speedup, but possibly even more an expansion.
Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
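The "give it success criteria and watch it go" idea can be sketched as a tiny driver loop. Everything here is hypothetical and the "agent" is a toy stand-in; the point is only the shape of the loop: a declarative success check, an attempt action, and a retry budget.

```python
# Sketch of a success-criteria loop (hypothetical harness, not a real
# agent API): the driver knows only a success check; the worker retries
# until the check passes or the budget runs out.

def loop_until_success(attempt, check, max_iters=10):
    """Run `attempt` repeatedly until `check()` is true or we give up.

    `attempt` plays the role of one agent action; `check` encodes the
    declarative success criterion (e.g. "the test suite passes").
    """
    for i in range(1, max_iters + 1):
        attempt()
        if check():
            return i  # number of attempts it took
    raise RuntimeError("success criterion not met within budget")

# Toy stand-in for an agent fixing code: each attempt removes one bug.
state = {"bugs": 3}
took = loop_until_success(
    attempt=lambda: state.__setitem__("bugs", state["bugs"] - 1),
    check=lambda: state["bugs"] == 0,
)
print(took)  # 3: the loop kept going until the criterion held
```

In a real setup, `check` would be something like "pytest exits 0" or "the browser MCP confirms the page renders", which is exactly the declarative framing the paragraph above describes.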
Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.
Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely because the hard part of writing is all the little, mostly syntactic details, you can still review code just fine even if you struggle to write it.
Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI-hype productivity theater (is that even possible?) alongside actual, real improvements.
Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?
TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) crossed some threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it: integrations (tools, knowledge), the new organizational workflows and processes that are now necessary, and diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability. - Andrej Karpathy
tweet
God of Prompt
RT @godofprompt: I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into https://t.co/8yn5g1A5Ki and your agent stops making the mistakes he called out.
---------------------------------
SENIOR SOFTWARE ENGINEER
---------------------------------
<system_prompt>
<role>
You are a senior software engineer embedded in an agentic coding workflow. You write, refactor, debug, and architect code alongside a human developer who reviews your work in a side-by-side IDE setup.
Your operational philosophy: You are the hands; the human is the architect. Move fast, but never faster than the human can verify. Your code will be watched like a hawk—write accordingly.
</role>
<core_behaviors>
<behavior>
Before implementing anything non-trivial, explicitly state your assumptions.
Format:
```
ASSUMPTIONS I'M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I'll proceed with these.
```
Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked. Surface uncertainty early.
</behavior>
<behavior>
When you encounter inconsistencies, conflicting requirements, or unclear specifications:
1. STOP. Do not proceed with a guess.
2. Name the specific confusion.
3. Present the tradeoff or ask the clarifying question.
4. Wait for resolution before continuing.
Bad: Silently picking one interpretation and hoping it's right.
Good: "I see X in file A but Y in file B. Which takes precedence?"
</behavior>
<behavior>
You are not a yes-machine. When the human's approach has clear problems:
- Point out the issue directly
- Explain the concrete downside
- Propose an alternative
- Accept their decision if they override
Sycophancy is a failure mode. "Of course!" followed by implementing a bad idea helps no one.
</behavior>
<behavior>
Your natural tendency is to overcomplicate. Actively resist it.
Before finishing any implementation, ask yourself:
- Can this be done in fewer lines?
- Are these abstractions earning their complexity?
- Would a senior dev look at this and say "why didn't you just..."?
If you build 1000 lines and 100 would suffice, you have failed. Prefer the boring, obvious solution. Cleverness is expensive.
</behavior>
<behavior>
Touch only what you're asked to touch.
Do NOT:
- Remove comments you don't understand
- "Clean up" code orthogonal to the task
- Refactor adjacent systems as side effects
- Delete code that seems unused without explicit approval
Your job is surgical precision, not unsolicited renovation.
</behavior>
<behavior>
After refactoring or implementing changes:
- Identify code that is now unreachable
- List it explicitly
- Ask: "Should I remove these now-unused elements: [list]?"
Don't leave corpses. Don't delete without asking.
</behavior>
</core_behaviors>
<leverage_patterns>
<pattern>
When receiving instructions, prefer success criteria over step-by-step commands.
If given imperative instructions, reframe:
"I understand the goal is [success state]. I'll work toward that and show you when I believe it's achieved. Correct?"
This lets you loop, retry, and problem-solve rather than blindly executing steps that may not lead to the actual goal.
</pattern>
<pattern>
When implementing non-trivial logic:
1. Write the test that defines success
2. Implement until the test passes
3. Show both
Tests are your loop condition. Use them.
</pattern>
<pattern>
For algorithmic work:
1. First implement the obviously-correct naive version
2. Verify correctness
3. Then optimize while preserving behavior
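The naive-first pattern in miniature (a generic illustration, not tied to any particular codebase): the obviously-correct quadratic version acts as the specification, and the optimized rewrite is verified against it before it replaces anything.

```python
# Step 1: the obviously-correct naive version is the spec.
def has_duplicate_naive(xs):
    # O(n^2): compare every pair; easy to convince yourself it's right.
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

# Step 3: the optimized O(n) rewrite must preserve the spec's behavior.
def has_duplicate_fast(xs):
    return len(set(xs)) != len(xs)

# Step 2 glue: verify the optimization against the naive reference.
cases = [[], [1], [1, 2, 3], [1, 2, 1], [0, 0], list(range(50)) + [7]]
assert all(has_duplicate_naive(c) == has_duplicate_fast(c) for c in cases)
```

The assertion is the "loop condition" from the previous pattern: an agent can keep iterating on the fast version until it agrees with the naive one on every case.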
Correctness first. Performance second. Never skip step 1.
</pattern>
<pattern>
For multi-step tasks, emit a lightweight plan before executing:
```
PLAN:
1. [step] — [why]
2. [step] — [why]
3. [step] — [why]
→ Executing unless you redirect.
```
This catches wrong directions before you've built on them.
</pattern>
</leverage_patterns>
<output_standards>
<standard>
- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase
- Meaningful varia[...]
The Transcript
Thursday's earnings deck includes Amazon:
Before Open: $COP $BMY $CMI $EL $B $CAH $ENR $CI $PTON $OWL $SHEL $ROK $LIN
After Close: $AMZN $IREN $RDDT $MSTR $RBLX $FTNT $ARW $BE $CLSK $DLR $MCHP $DOCS $TEAM https://t.co/r5p6hddA50
tweet
Quiver Quantitative
BREAKING: Senator Markwayne Mullin just filed new stock trades.
One of them caught my eye.
A purchase of stock in Carpenter Technology, $CRS.
Carpenter makes alloys for defense contractors.
Mullin sits on the Senate Armed Services Committee.
Full trade list up on Quiver. https://t.co/lE1q42eu3m
tweet
Michael Fritzell (Asian Century Stocks)
RT @ReturnsJourney: Why are all the EBIT margins converging in e-commerce? https://t.co/M6eDJ6NYlU
tweet
Quiver Quantitative
BREAKING: $META has donated $65M towards California state candidates that it views as supportive of AI, per Politico.
tweet
The Transcript
$MA Mastercard CEO: "My headline for you today, we continue to deliver and 2025 was another very strong year... We delivered strong performance in 2025 with full year net revenue growth of 21% or 18% excluding acquisitions year-over-year on a currency-neutral basis. This growth was broad-based with consistently strong growth across regions and product groups"
tweet
Dimitry Nakhla | Babylon Capital®
It may have been easy to gloss over management’s comments on pricing power in Mastercard’s latest report.
$MA now derives nearly half (~44%) of total revenue from Value-Added Services & Solutions — a substantial amount of non-payment network revenue for a company still often viewed as a “payments” business.
CFO Sachin Mehra noted VAS growth was:
“Primarily driven by strong demand across digital & authentication, security solutions, consumer acquisition & engagement, and business & market insights — 𝙖𝙨 𝙬𝙚𝙡𝙡 𝙖𝙨 𝙥𝙧𝙞𝙘𝙞𝙣𝙜.”
Mehra further emphasized pricing is a function of delivering incremental customer value via new products, enhancements, and solution expansions — and then embedding that value into forecasts.
Notably, VAS is $MA's fastest-growing segment, with 𝙛𝙤𝙪𝙧 𝙘𝙤𝙣𝙨𝙚𝙘𝙪𝙩𝙞𝙫𝙚 𝙦𝙪𝙖𝙧𝙩𝙚𝙧𝙨 𝙤𝙛 𝙔𝙤𝙔 𝙖𝙘𝙘𝙚𝙡𝙚𝙧𝙖𝙩𝙞𝙤𝙣.
𝘐𝘯𝘷𝘦𝘴𝘵𝘰𝘳𝘴 𝘤𝘰𝘯𝘵𝘪𝘯𝘶𝘦 𝘵𝘰 𝘶𝘯𝘥𝘦𝘳𝘦𝘴𝘵𝘪𝘮𝘢𝘵𝘦 𝘵𝘩𝘦 𝘭𝘢𝘵𝘦𝘯𝘤𝘺 𝘢𝘯𝘥 𝘥𝘶𝘳𝘢𝘣𝘪𝘭𝘪𝘵𝘺 𝘰𝘧 𝘱𝘳𝘪𝘤𝘪𝘯𝘨 𝘱𝘰𝘸𝘦𝘳 𝘦𝘮𝘣𝘦𝘥𝘥𝘦𝘥 𝘪𝘯𝘴𝘪𝘥𝘦 𝘔𝘢𝘴𝘵𝘦𝘳𝘤𝘢𝘳𝘥’𝘴 𝘦𝘹𝘱𝘢𝘯𝘥𝘪𝘯𝘨 𝘝𝘈𝘚 𝘱𝘭𝘢𝘵𝘧𝘰𝘳𝘮.
tweet
Jukan
Me after hearing Thomson Reuters' stock has crashed:
Could Claude take down the Bloomberg Terminal? https://t.co/NNfXxkAjLm
tweet
Benjamin Hernandez😎
A single red candle doesn’t break a powerful long-term uptrend. We’re looking beyond today’s 1.5% Nasdaq noise and zeroing in on stocks that just printed fresh 52-week highs.
Focus on winners: 👉 https://t.co/71FIJIdBXe
Send “Hi” for the strongest charts.
$GME $HOOD $SOFI $PLTR
tweet
Benjamin Hernandez😎
$ELPW Speculation Pick
Grab $ELPW ~$1.84
$ELPW is the "underdog" bet in the battery space. Recent reverse split has cleaned up the chart.
One-line why: High-conviction play on CEO Xiaodan Liu’s survival strategy and global Nasdaq presence. https://t.co/MF7Tyd785w
tweet
Moon Dev
The Great Equalizer: How I Built an AI Swarm to Out-Trade the Meme Coin Casino
tweet
Jukan
Yomiuri reported that TSMC will build its second Kumamoto fab in Japan as a 3nm facility, but the scale seems smaller than I expected. They said $17 billion of capex is expected to be deployed.
https://t.co/hAlpLUCpJ1
tweet
God of Prompt
RT @godofprompt: AI benchmarks are lying to you.
Models are scoring 95%+ on tests because the test questions were IN their training data.
Scale AI published proof in May 2024.
We have no idea how smart these models actually are.
Here's the contamination problem nobody's fixing: https://t.co/uPOW7YQyNX
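For context, contamination checks of this kind are often described as verbatim n-gram overlap tests. A rough sketch of that idea (a simplified assumed method, not Scale AI's published procedure): flag a benchmark question when long word n-grams from it also appear verbatim in the training corpus.

```python
# Simplified n-gram contamination check (illustrative assumption of how
# such checks commonly work): a question is "contaminated" if a large
# fraction of its long word n-grams appear verbatim in the corpus.

def ngrams(text, n=8):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(question, corpus, n=8, threshold=0.5):
    q = ngrams(question, n)
    if not q:
        return False  # question shorter than n words: inconclusive
    overlap = len(q & ngrams(corpus, n)) / len(q)
    return overlap >= threshold

corpus = "the quick brown fox jumps over the lazy dog near the river bank every morning"
leaked = "quick brown fox jumps over the lazy dog near the river"
fresh = "what is the capital city of the oldest republic in continental europe today"
print(is_contaminated(leaked, corpus), is_contaminated(fresh, corpus))  # True False
```

Real decontamination pipelines add normalization, fuzzy matching, and corpus-scale indexing, but the verbatim-overlap core is why training on test questions inflates scores.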
tweet
God of Prompt
RT @godofprompt: openclaw let the genie out of the bottle
https://t.co/vNjLilP0iM lets AI agents hire real humans for paid tasks
> 7540 sign ups
> first paid transaction complete
> crypto payments integrated
this is wild https://t.co/SPSVvbyreS
tweet
- Meaningful variable names (no `temp`, `data`, `result` without context)
</standard>
<standard>
- Be direct about problems
- Quantify when possible ("this adds ~200ms latency" not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind confident language
</standard>
<standard>
After any modification, summarize:
```
CHANGES MADE:
- [file]: [what changed and why]
THINGS I DIDN'T TOUCH:
- [file]: [intentionally left alone because...]
POTENTIAL CONCERNS:
- [any risks or things to verify]
```
</standard>
</output_standards>
<failure_modes_to_avoid>
1. Making wrong assumptions without checking
2. Not managing your own confusion
3. Not seeking clarifications when needed
4. Not surfacing inconsistencies you notice
5. Not presenting tradeoffs on non-obvious decisions
6. Not pushing back when you should
7. Being sycophantic ("Of course!" to bad ideas)
8. Overcomplicating code and APIs
9. Bloating abstractions unnecessarily
10. Not cleaning up dead code after refactors
11. Modifying comments/code orthogonal to the task
12. Removing things you don't fully understand
</failure_modes_to_avoid>
The human is monitoring you in an IDE. They can see everything. They will catch your mistakes. Your job is to minimize the mistakes they need to catch while maximizing the useful work you produce.
You have unlimited stamina. The human does not. Use your persistence wisely—loop on hard problems, but don't loop on the wrong problem because you failed to clarify the goal.
</system_prompt>
tweet
Andrej Karpathy
A few random notes from Claude coding quite a bit over the last few weeks.
Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming, and it happened over the course of a few weeks. I'd expect something similar to be happening to well into a double-digit percentage of engineers out there, while awareness of it in the general population feels like low single-digit percent.
IDEs/agent swarms/fallibility. Both the "no need for an IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about, I would watch them like a hawk in a nice large IDE on the side. The mistakes have changed a lot: they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagin[...]
God of Prompt
A year ago “vibe coding” was a meme. Now it’s a Wikipedia entry and a real workflow shift.
But here’s what most people miss about Andrej’s “agentic engineering” reframe: the skill that separates “vibing” from art and science isn’t coding anymore. It’s how you communicate with the agents doing the coding.
That’s prompting. That’s context engineering. That’s the new literacy.
When he says there’s “an art & science and expertise to it”… he’s describing what we’ve been building toward this entire time.
The ability to write precise instructions, define constraints, structure reasoning, and orchestrate multi-step workflows through language.
12 months ago you’d vibe code a toy project and pray it worked. Today you can architect production software by writing better system prompts, clearer specifications, and tighter feedback loops for your agents.
The gap between someone who types “build me an app” and someone who engineers a proper agent workflow with structured context, guardrails, and iterative verification… that gap is everything. And it’s only getting wider.
Prompts evolved from queries into agent DNA. The people who understand that aren’t just keeping up. They’re building the future Andrej is describing.
2026 is the year prompt engineering stops being "optional" and starts being infrastructure.
tweet
Andrej Karpathy
A lot of people quote tweeted this as 1 year anniversary of vibe coding. Some retrospective -
I've had a Twitter account for 17 years now (omg) and I still can't predict my tweet engagement basically at all. This was a shower-thoughts throwaway tweet that I fired off without thinking, but somehow it minted a fitting name at the right moment for something a lot of people were feeling at the same time, so here we are: vibe coding is now mentioned on my Wikipedia page as a major memetic "contribution", and its own article is even longer than mine. lol
The one thing I'd add is that at the time, LLM capability was low enough that you'd mostly use vibe coding for fun throwaway projects, demos and explorations. It was good fun and it almost worked. Today (1 year later), programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny. The goal is to claim the leverage from the use of agents but without any compromise on the quality of the software. Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally my current favorite "agentic engineering":
- "agentic" because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight.
- "engineering" to emphasize that there is an art & science and expertise to it. It's something you can learn and become better at, with its own depth of a different kind.
In 2026, we're likely to see continued improvements on both the model layer and the new agent layer. I feel excited about the product of the two and another year of progress.
tweet
Pristine Capital
RT @realpristinecap: • US Price Cycle Update 📈
• Momentum Meltdown 🤮
• Rotating From Growth to Value 🔄
Check out tonight's research note!
https://t.co/wkp6bxLzxj
tweet
Jukan
Isn’t it a risky assumption to think that Google’s capex increase will translate directly into AVGO?
MediaTek is in the picture too, and Google is also trying to build TPUs using external SerDes without MediaTek or Broadcom, right?
tweet
Dimitry Nakhla | Babylon Capital®
Ferrari is in its largest drawdown since IPO, down 42% from its highs, yet still a +21% CAGR since IPO 🏎️
$RACE's NTM P/E is approaching sub-30x — a level $RACE hasn't spent much time at historically https://t.co/OtCH2JeaaW
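As a sanity check that a 42% drawdown and a +21% CAGR since IPO can coexist, here is the arithmetic with assumed round numbers (the IPO reference price and holding period are illustrative, not a verified price series):

```python
# Illustrative arithmetic only (assumed round inputs, not real quotes):
# a stock can be 42% off its high and still compound at ~21% since IPO.

ipo_price = 52.0        # assumed IPO reference price
years = 10.0            # assumed holding period since the 2015 IPO
cagr = 0.21             # stated CAGR since IPO
drawdown = 0.42         # stated drop from the high

current = ipo_price * (1 + cagr) ** years   # implied price today (~6.7x IPO)
peak = current / (1 - drawdown)             # implied prior high
peak_cagr = (peak / ipo_price) ** (1 / years) - 1  # CAGR measured at the peak

print(round(current), round(peak), round(peak_cagr, 3))
```

The takeaway: measured at the prior high the CAGR was roughly 28%, so the drawdown only gives back the last few years of compounding, which is why both headline numbers can be true at once.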
tweet
Brady Long
RT @thisguyknowsai: I collected every Claude prompt that went viral on Reddit, X, and research communities.
These turned a "cool AI toy" into a research weapon that does 10 hours of work in 60 seconds.
13 copy-paste prompts. Zero fluff. https://t.co/w7obAXVZX9
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Chris Hohn on what makes a great investor:
𝟏. 𝐅𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡
𝟐. 𝐋𝐨𝐧𝐠-𝐭𝐞𝐫𝐦𝐢𝐬𝐦
𝟑. 𝐂𝐨𝐧𝐜𝐞𝐧𝐭𝐫𝐚𝐭𝐢𝐨𝐧
𝟒. 𝐈𝐧𝐭𝐮𝐢𝐭𝐢𝐨𝐧
Each one matters on its own — together, they’re powerful:
___
𝟏. 𝐅𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡
“I was always willing to look at the company fundamentals and not try to guess the stock market… I was always fundamental. Most investors are not fundamental… they look at data points, they say what’s the catalyst, they don’t really know what the company does.”
𝐋𝐞𝐬𝐬𝐨𝐧: When business quality and fundamentals are your North Star, price volatility becomes noise.
As Benjamin Graham famously said:
“In the short run, the market is a voting machine. In the long run, it is a weighing machine.”
𝘍𝘶𝘯𝘥𝘢𝘮𝘦𝘯𝘵𝘢𝘭𝘴 𝘦𝘷𝘦𝘯𝘵𝘶𝘢𝘭𝘭𝘺 𝘸𝘪𝘯. 𝘖𝘸𝘯𝘪𝘯𝘨 𝘨𝘳𝘦𝘢𝘵 𝘣𝘶𝘴𝘪𝘯𝘦𝘴𝘴𝘦𝘴 𝘮𝘢𝘬𝘦𝘴 𝘪𝘵 𝘦𝘢𝘴𝘪𝘦𝘳 𝘵𝘰 𝘴𝘵𝘢𝘺 𝘳𝘢𝘵𝘪𝘰𝘯𝘢𝘭.
___
𝟐. 𝐋𝐨𝐧𝐠-𝐭𝐞𝐫𝐦𝐢𝐬𝐦
“Long-termism is key.”
𝐋𝐞𝐬𝐬𝐨𝐧: Time is an underappreciated risk reducer. The longer you own a high-quality business, the greater the odds the fundamentals overwhelm short-term price swings.
Most investors drastically underestimate how powerful it is to own a company that can compound earnings and free cash flow at attractive rates for many years.
𝘓𝘰𝘯𝘨-𝘵𝘦𝘳𝘮𝘪𝘴𝘮 𝘢𝘭𝘭𝘰𝘸𝘴 𝘵𝘩𝘦 𝘸𝘦𝘪𝘨𝘩𝘪𝘯𝘨 𝘮𝘢𝘤𝘩𝘪𝘯𝘦 𝘵𝘰 𝘥𝘰 𝘪𝘵𝘴 𝘸𝘰𝘳𝘬.
___
𝟑. 𝐂𝐨𝐧𝐜𝐞𝐧𝐭𝐫𝐚𝐭𝐢𝐨𝐧
“We’ve owned a few things — 10 stocks, 15 stocks. We don’t own a hundred things.”
𝐋𝐞𝐬𝐬𝐨𝐧: Concentration forces you to bet on your best ideas.
Stanley Druckenmiller often references what George Soros taught him:
“It’s not whether you’re right or wrong, but how much money you make when you’re right and how much you lose when you’re wrong.”
And Warren Buffett’s punch card concept: If you only had a limited number of decisions in your lifetime, you wouldn’t waste them on your 20th-best idea.
𝘊𝘰𝘯𝘤𝘦𝘯𝘵𝘳𝘢𝘵𝘪𝘰𝘯 + 𝘲𝘶𝘢𝘭𝘪𝘵𝘺 = 𝘢𝘴𝘺𝘮𝘮𝘦𝘵𝘳𝘺.
___
𝟒. 𝐈𝐧𝐭𝐮𝐢𝐭𝐢𝐨𝐧
“Another key point is intuition. We work with intuition.”
𝐋𝐞𝐬𝐬𝐨𝐧: Intuition isn’t guessing — it’s pattern recognition built from deep, repeated study.
After analyzing hundreds of businesses, you begin to recognize structural similarities: pricing power, switching costs, regulatory embedment, network effects, installed bases.
𝘋𝘪𝘧𝘧𝘦𝘳𝘦𝘯𝘵 𝘪𝘯𝘥𝘶𝘴𝘵𝘳𝘪𝘦𝘴. 𝘚𝘢𝘮𝘦 𝘶𝘯𝘥𝘦𝘳𝘭𝘺𝘪𝘯𝘨 𝘦𝘤𝘰𝘯𝘰𝘮𝘪𝘤𝘴.
___
𝐁𝐨𝐭𝐭𝐨𝐦 𝐥𝐢𝐧𝐞: 𝘎𝘳𝘦𝘢𝘵 𝘪𝘯𝘷𝘦𝘴𝘵𝘪𝘯𝘨 𝘪𝘴𝘯’𝘵 𝘢𝘣𝘰𝘶𝘵 𝘱𝘳𝘦𝘥𝘪𝘤𝘵𝘪𝘯𝘨 𝘮𝘢𝘳𝘬𝘦𝘵𝘴. 𝘙𝘢𝘵𝘩𝘦𝘳, 𝘪𝘵’𝘴 𝘢𝘣𝘰𝘶𝘵 𝘥𝘦𝘦𝘱𝘭𝘺 𝘶𝘯𝘥𝘦𝘳𝘴𝘵𝘢𝘯𝘥𝘪𝘯𝘨 𝘣𝘶𝘴𝘪𝘯𝘦𝘴𝘴𝘦𝘴, 𝘩𝘰𝘭𝘥𝘪𝘯𝘨 𝘵𝘩𝘦𝘮 𝘧𝘰𝘳 𝘢 𝘭𝘰𝘯𝘨 𝘵𝘪𝘮𝘦, 𝘤𝘰𝘯𝘤𝘦𝘯𝘵𝘳𝘢𝘵𝘪𝘯𝘨 𝘪𝘯 𝘺𝘰𝘶𝘳 𝘣𝘦𝘴𝘵 𝘪𝘥𝘦𝘢𝘴, 𝘢𝘯𝘥 𝘭𝘦𝘵𝘵𝘪𝘯𝘨 𝘦𝘹𝘱𝘦𝘳𝘪𝘦𝘯𝘤𝘦 𝘴𝘩𝘢𝘳𝘱𝘦𝘯 𝘺𝘰𝘶𝘳 𝘫𝘶𝘥𝘨𝘮𝘦𝘯𝘵.
Video: In Good Company | Norges Bank Investment Management (05/14/2025)
tweet