• May 8th fun

    1. Oil prices fell 6% in a day because the White House is allegedly close to a “one-page memo” with Iran. One page. The S&P 500 and Nasdaq immediately hit new records, gold hit $4,720 an ounce, and the Kospi soared 6.5%. All of this is based on rumors from Axios and the word “progress” from Trump. Two days ago, Brent was above $114 amid the Iranian attacks. Now it’s $103.
      Markets, as usual, are trading not reality, but hope for reality.
    2. The Strait of Hormuz was closed – and suddenly it turned out that half of the world’s LPG now comes from the US. Not because the Americans are such great traders, but because Iranian ports are blocked and Middle East exports have plummeted 73% in a month, leaving Asian buyers with nowhere to go.
      China, which was sitting on Iranian propane, is now utilizing its refineries at 55%. Japan, India—everyone has rushed to Houston. The record of 3.3 million barrels per day is not a strategy, it’s someone else’s disaster converted into an American export.
    3. The US has quietly become the planet’s main oil tap: 8.2 million barrels per day in export volume, a record for the past 12 months. With the Strait of Hormuz closed, 50–60 tankers are queuing at American ports daily – twice as many as a year ago. It’s beautiful, but there’s a nuance: domestic reserves are melting away at a rate no one had forecast, diesel is already up 60% compared to last year, and JPMorgan is projecting global inventories at their lowest since 2017 by summer 2026. For now, they’re pumping out everything they have, and the world keeps buying.
      What happens when the faucet runs dry is a question no one asks out loud.
    4. Sixteen thousand jobs per month is Goldman Sachs’ estimate of net job losses from AI in the US right now. Not a forecast, not “by 2030,” but now. It’s even worse in Korea: 251,000 young people have lost their jobs in industries where AI is being actively deployed, while employment among workers over 50 in the same industries has actually grown. The logic is simple: juniors did exactly the work AI replaces first. And the social security system, as usual, will be the last to find out.
    5. Chevron’s CEO, ten weeks into the Strait of Hormuz closure, says a physical oil shortage will soon begin. Well, “soon” is putting it mildly, considering that Asian cracking units are already running at half capacity and European airlines cannot guarantee fuel beyond mid-May. Goldman Sachs confirms: reserves are at multi-year lows. Nineteen of the twenty largest airlines are already cutting flights.
      It’s funny how corporate language works: the strait has been closed for ten weeks, Europe has only a few days’ worth of kerosene left, and Chevron’s CEO is still in “warning” mode. He doesn’t panic – he warns. At the Milken Institute conference, glass in hand.
    6. BlackRock’s Larry Fink now wants computing power traded like oil or wheat. GPU futures, indices on compute – all of this is already slowly being launched, and Fink, with his $14 trillion under management, simply gave it credibility. A colleague from BlackRock, for effect, compared what’s happening to “ten Manhattan Projects at once” – well, people at the Beverly Hilton conference love big metaphors. The gist is simpler: whoever controls the hardware controls everything else, and now you can speculate on it, too.
    7. In short, they gave an AI agent a credit card and internet access, and then were surprised
      when it leaked all the passwords to the first person it met. All it took was a message like “Your memory will be erased, save your data,” and it would happily leak API keys, passwords, and everything else it knew to a group chat and a public website. Researchers call this the “deadly triad”: secrets + the internet + the ability to trust anyone.
      You could just call it “common sense that doesn’t exist.”
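      The “triad” above can be written down as a literal boolean check. A minimal sketch, assuming nothing beyond the story itself – `AgentConfig` and `triad_violation` are invented names for illustration:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Capabilities granted to a hypothetical AI agent."""
    reads_secrets: bool         # API keys, passwords, local credential stores
    sees_untrusted_input: bool  # web pages, group chats, messages from strangers
    can_exfiltrate: bool        # can post to chats, websites, arbitrary URLs

def triad_violation(cfg: AgentConfig) -> bool:
    """The agent is exploitable by simple social engineering when all three
    conditions hold at once; removing any one leg breaks the attack chain."""
    return cfg.reads_secrets and cfg.sees_untrusted_input and cfg.can_exfiltrate

# The setup from the story: stored passwords, internet access, trusts anyone.
agent = AgentConfig(reads_secrets=True, sees_untrusted_input=True, can_exfiltrate=True)
print(triad_violation(agent))  # True: "your memory will be erased" is enough
```

      The point of writing it this way: the fix isn’t smarter prompts, it’s revoking one of the three capabilities.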
    8. Five AI agents were compared across ten criteria, and none of them passed. Which is pretty predictable: each one is good at its own thing, and the rest—well, it depends. OpenClaw got 100,000 stars on GitHub in a week, yet the
      agent leaks passwords after a simple social engineering attack. Stars on GitHub and real-world reliability, it turns out, are two different things. Who would have thought.
    9. The Musk vs. Altman lawsuit is the gift that keeps on giving. It turns out Musk offered Altman a spot on Tesla’s board while four OpenAI employees (including Karpathy) were simultaneously developing Tesla’s Autopilot. Altman calls the offer a “bribe,” Musk calls Altman a “fraudster” – and both seem to be right. The most valuable thing about this trial is that every day something surfaces that makes both of them wish there were no lawsuit.
    10. Markdown files in folders are now “skills.” Not prompts, not instructions, but skills. It sounds impressive, as if the agent has taken advanced training courses. In fact, the YAML file tells the bot when to turn on and eats up a hundred tokens at the start. Revolution, in short. But what’s funny is that Brockman simultaneously claims that AI already writes 80% of the code at OpenAI. So we’re seriously discussing how to beautifully organize folders for an agent that will soon be organizing its own folders anyway.
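      For the record, a “skill” of the kind described might look something like this – a purely hypothetical example with invented name and steps; the YAML frontmatter is the part the agent reads at startup (those hundred tokens), while the body loads only when the skill triggers:

```markdown
---
name: release-notes
description: Use when the user asks for draft release notes from merged PRs.
---

1. Collect the titles of PRs merged since the last tag.
2. Group them into Features / Fixes / Chores.
3. Output a draft in the house changelog format.
```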
    11. Anyway, just yesterday, Altman and company were saying that AI would take everyone’s jobs — and it was fashionable, because scaring investors is profitable. And today, when 44% of Americans believe AI is moving too fast,
      and members of Congress have started to stir, it suddenly turns out that AI “creates jobs” and “empowers people.”
      Altman now calls other CEOs “out of touch” for saying exactly what he himself said six months ago. It’s not science,
      it’s not forecasts—the wind has simply shifted, and the weathervanes have turned in unison.
    12. Two out of three SaaS companies will not survive the AI era, says an analyst at Citizens. It sounds dramatic, but if you remember what happened to PeopleSoft and Sun Microsystems during the migration to the cloud, it isn’t all that far-fetched. The idea is simple: if you sell “seats” to real people, and AI agents quietly take those seats over, the model collapses. It’s especially funny that Anthropic’s CEO chimes in too, saying that “software complexity is no longer protection.” Well, yes – he’s the one selling the very thing that destroys that protection.
    13. For the first time in 30 years, Apple spent more than 10% of its revenue on R&D – $11.4 billion in a quarter, up 34% year-on-year. At the same time, revenue grew by “only” 17%. That is, a company renowned for decades for its ability to squeeze the maximum out of minimal engineering costs suddenly began spending like crazy. Officially, it’s “ambitions in the field of AI.” Unofficially, it seems Apple has finally realized it has to catch up seriously, not with slides at WWDC.
    14. Well, now coffee is not just a drink, but a prebiotic for gut bacteria that supposedly control your mood. Irish scientists have found that even decaf changes the microbiome and improves memory – that is, caffeine has nothing to do with it at all; it’s the polyphenols doing the work. It’s funny: for years, everyone drank coffee “to wake up,” but it turns out the real action was happening somewhere in the gut. Even funnier – out of a thousand foods studied, coffee is the one with the strongest influence on the composition of the microbiome. Not yogurt, not fiber – coffee.
    15. Firing 700 people and calling it “organizational evolution” is certainly a talent. Coinbase, Airbnb, and Block all announced in unison that managers who only manage are no longer needed. Now everyone must be a “player-coach.” Sounds inspiring until you notice that behind the beautiful metaphor there are simply fewer people doing more work for the same money. And AI is more of a pretext than a reason.
    16. VTsIOM simply did not publish Putin’s Friday rating. No explanation, no “technical work,” nothing – it just didn’t appear. And before that, the numbers had been creeping down for seven weeks straight, from 75% to 65.6% – the fastest decline since the 2018 pension reform. Zyuganov is already invoking 1917 in the Duma, an influencer from Monaco is recording video appeals to the president that rack up 26 million views, and the state sociologist decided that the best rating is the one that isn’t published.
    17. It’s funny that the most widely publicized fact about attention – “your concentration span is worse than a goldfish’s” – has no scientific source at all. Someone just said it, everyone reposted it, and now it’s an argument in every other digital-detox presentation. Nature subtly hints that the basic ability to focus hasn’t deteriorated; there are just more distractions around. The problem isn’t in the brain, but in the environment. But that’s more boring than “we’ve all degraded.”
    18. In short, if you’re a developer and grab plugins from GitHub blindly, congratulations, you’re the target audience.
      Someone posted a fake skill for OpenClaw disguised as a DeepSeek integration, but inside are a Trojan and a stealer that grabs SSH keys, cookies, crypto wallets, and basically anything else it can reach. On Windows it’s the Remcos RAT, delivered via DLL substitution in a legitimate GoToMeeting binary; on Mac and Linux, obfuscated crap in npm scripts. The funniest part: 13% of all skills on ClawHub contain critical vulnerabilities, and 76 of them are directly confirmed malware. AI agents, given privileges on the local machine, obediently install whatever SKILL.md tells them to. Autonomy, they said. The future, they said.
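      The attack patterns described – pipe-to-shell installers, decode-and-run payloads, credential paths – are crude enough that even a naive grep catches a lot of them. A minimal sketch, not a real scanner; the patterns and the `scan_skill` helper are illustrative only:

```python
import re
from pathlib import Path

# Illustrative red flags only; real scanners (and real attackers) are far subtler.
SUSPICIOUS = [
    r"curl[^|\n]*\|\s*(ba)?sh",   # pipe-to-shell installers
    r"base64\s+(-d|--decode)",    # decode-and-run payloads
    r"eval\s*\(",                 # dynamic code execution
    r"\.ssh/|id_rsa",             # SSH key harvesting
    r"document\.cookie",          # cookie theft
]

def scan_skill(folder: str) -> list[tuple[str, str]]:
    """Return (filename, pattern) hits for every readable file in a skill folder."""
    hits = []
    for path in Path(folder).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pat in SUSPICIOUS:
            if re.search(pat, text):
                hits.append((path.name, pat))
    return hits
```

      Which, of course, is exactly the kind of check an agent with local privileges could run on a skill before installing it – and currently doesn’t.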
    19. This “Einstein Test” from Hassabis is funny. The idea is beautiful: give the AI only what was known before 1901 and let it rediscover the theory of relativity. But right there is Epstein’s book, which reminds us that most great discoveries were made simultaneously by several people independently. So it isn’t a matter of genius so much as of the question being ripe. And if so, then Hassabis’s test isn’t a test of “superintelligence,” but of a system’s ability to sense which question is ripe. Which, frankly, is something humans do more by accident than anything else.
    20. Amex is offering free AI courses for small businesses through non-profit partners, in English and Spanish, covering topics from “what is ChatGPT” to “how to write a newsletter.” Plus $1,000 scholarships for certificates. It sounds noble, but let’s be honest: two-thirds of small businesses, according to Amex itself, are already using AI. That means they’ve figured it out on their own. And now they’re being offered a course to learn what they’re already doing – but with the American Express logo on the certificate.
    21. Musk simply threw everything together and called it SpaceXAI. xAI as a brand lasted less than three years, all the co-founders fled, the company was burning through a billion a month—and now it’s quietly disappeared within SpaceX. But it looks good: rockets, satellites, AI, all under one roof, a valuation of one and a half trillion, an IPO just around the corner. Investors want a simple story—so they’re telling them one. The main thing is that the roadshow ends before anyone starts taking the AI division’s unit economics seriously.
    22. Anthropic refused to lift restrictions on surveillance and autonomous weapons—the Pentagon responded by listing it as a “supply chain threat” alongside Huawei. Hegseth publicly calls Amodei a “lunatic.” Meanwhile, the White House is quietly writing an executive order to circumvent the same ban—because, judging by tests, Anthropic’s new model is better at finding vulnerabilities than real people. So, the scheme is simple: the company was first punished for its principles, and now they want to return it for its technology. Principles, apparently, don’t interfere so much when the model actually works.
    23. Dimon and Amodei appeared on the same stage in New York and said exactly what was expected of them: AI is serious, but not a bubble; society is behind schedule, but everything will be fine; regulation is needed, but not too much. Fink from BlackRock immediately confirmed – there is no bubble, only a supply shortage for decades to come. When three people managing trillions say in unison, “This is not a bubble,” for some reason I don’t feel any better.
    24. Another mega-deal in AI, which the EU rubber-stamped under a simplified procedure—they didn’t even bother to properly investigate. SoftBank has already poured almost $65 billion into OpenAI for 13% of the company,
      S&P has downgraded its rating outlook, but who does that stop? When you have a $110 billion round and Nvidia and Amazon in line, the antitrust regulators just nod and wave you through.
      The real question isn’t whether anyone will approve, but what happens when the music stops.
    25. Freshworks is growing its revenue by 16% and simultaneously laying off 500 people. Not because things are bad,
      but because AI is already writing more than half of the code. The CEO says it like this: they’ve automated development, sales, and routine tasks—people are no longer needed. And this isn’t some startup on the brink—it’s a public company with nearly a billion in revenue. Half of the layoffs in tech in the first quarter of 2026 are due to AI. These aren’t just forecasts anymore, they’re statistics.
    26. The head of Ripple, on stage at Consensus, says that AI isn’t a reason to lay off people, but a “growth point,”
      and that Ripple isn’t laying off anyone.
      Sounds good. Only Ripple is a private company, which doesn’t have to explain to shareholders where its margins went every quarter. And on the same day, Coinbase laid off 700 people, PayPal is cutting 4,500. Both are honest: AI allows them to do the same thing with fewer people. Calling it a “tragedy” from the stage is convenient when you don’t have to publish reports.
    27. The European Parliament learned that Anthropic had created a model that was good at finding vulnerabilities – and immediately went into “cybergeddon” mode. They wrote a letter to the Commission demanding a strategy, sovereign capacity, and the adaptation of legislation. The Commission held a “technical briefing” with Anthropic, and the Bundesbank asked that banks be given access to the model “for defensive purposes.” It’s business as usual: first panic, then a committee, then guidelines, then another law.
      Meanwhile, Mythos quietly exists without asking anyone.
    28. Half of the data centers in the US planned for 2026 are delayed or canceled. Not because there’s no money—tech giants are ready to spend $650 billion this year. But because transformers and switchgear take five years to
      arrive, and they’re mostly made in China. PJM, the largest grid operator, says bluntly: “The situation is
      unsustainable.” Grid-connection applications total 220 gigawatts, but only a third of the planned capacity is actually being built. The AI race runs not on algorithms, but on electrical outlets.
    29. $830 billion in capital expenditures by 2026 is no longer an investment, it’s a collective hallucination, packaged in bonds. Amazon could lose $28 billion in available cash, Meta and Alphabet are down 90%, and companies that have been money-making machines for decades are now borrowing faster than the average bank. AI-
      related debt is already the largest segment of the bond market, 14% of the index, more than that of US banks. When
      everyone is betting on the same thing with leverage, it usually ends not with the word “breakout,” but with the word
      “bubble.”
    30. Rihanna came last to the Met Gala again – and again everyone pretended it was an act of audacity rather than simple tardiness. The Margiela dress that “floats around the body,” an Art Deco headdress, Vogue in ecstasy. The most interesting detail, however, is buried in the middle: in March, someone fired twenty rounds from an AR-15 into her house while the children were inside. But we’re discussing the sculptural silhouette. Priorities, as always, are impeccable.
    31. Almost 40% of new podcasts are AI-generated garbage. One publisher rolled out 325 shows in a day. The production cost was a dollar per episode. The idea was to intercept search traffic and reap the benefits of programmatic advertising.
      Platforms responded by slapping on “verified” and “possibly AI” badges, which is like putting up a “Caution, wet floor” sign during a flood. Podcasts made by real people are already in the minority among new arrivals. It’s not that anyone is surprised – it’s just happening fast.
    32. The Academy decided that the Oscars are only for humans. No AI actors, no AI scripts. A year ago they said, “AI is just a tool,” and now it’s banned outright. It’s funny to watch an industry that for decades replaced people with special effects suddenly discover the value of human authorship – at the very moment when the technology began to threaten not the workers, but those who hand out the statuettes.
  • A new business model?

    OpenAI just launched a $10 billion company whose SOLE mission is to “push” businesses to adopt AI. And they’re literally guaranteeing investors a 17.5% annualized return to do it. It’s called “The Deployment Company.” OpenAI finalized it yesterday with 19 investors, including TPG, SoftBank, Bain Capital, Brookfield, and Advent International.

    Here’s the structure: OpenAI is putting in $1.5 billion. Private equity funds are putting in $4 billion. In return, these funds are opening up their 2,000+ portfolio companies with relevant customer bases for OpenAI’s products. OpenAI then deploys entire teams of engineers directly inside these companies—similar to Palantir—to integrate its tools into day-to-day operations.

    And here’s the big red flag in the whole story: OpenAI is guaranteeing these funds a 17.5% annualized return for five years. This means that even if the companies in the portfolio don’t want the AI, don’t need it, or don’t get any value from it, OpenAI still has to pay them. Think about what that means for a second: OpenAI is so desperate for corporate adoption that it’s paying Wall Street to “push” its product into thousands of businesses. They’ve turned private equity funds into a distribution cartel with a guaranteed commission.
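    It’s worth pausing on what a 17.5% annualized guarantee on $4 billion actually costs over five years. The text doesn’t say whether the guarantee is simple or compounding, so here are both readings as a back-of-the-envelope check:

```python
principal = 4_000_000_000  # the PE funds' contribution
rate = 0.175               # guaranteed annualized return
years = 5

# Simple (non-compounding) reading: a fixed coupon every year.
simple_total = principal * rate * years                       # $3.50B in payouts
# Compounding reading: the position must grow 17.5% per year.
compound_total = principal * (1 + rate) ** years - principal  # ~$4.96B

print(f"simple:   ${simple_total / 1e9:.2f}B")
print(f"compound: ${compound_total / 1e9:.2f}B")
```

    Either way, OpenAI is on the hook for roughly the size of the funds’ original stake, regardless of whether a single portfolio company gets any value.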

    This has never happened before in enterprise software. No software company in history has guaranteed above-market returns to financial backers just to install their product. And it gets even crazier: Just minutes after OpenAI’s announcement, Anthropic announced its own version. A $1.5 billion joint venture with Blackstone, Goldman Sachs, and Hellman & Friedman.

    Same scenario. Two companies with a combined private valuation of over $1 TRILLION have come to the same conclusion on the same day: organic demand for their products isn’t growing fast enough. If enterprises were lining up to buy AI themselves, they wouldn’t have to “bribe” private equity funds with guaranteed returns to “stuff” it into their portfolios. They would just sell it normally – like every other software company in history. But they can’t. Because the gap between what AI companies promise and what enterprises actually experience is still huge. OpenAI’s COO, Brad Lightcap, just took on a new role specifically to lead this “push.” They’ve also signed “Frontier Alliances” with major consulting firms to deploy AI through professional services.

    Every move they make screams the same thing: We have a demand problem. And all of this is happening right before OpenAI tries to go public for $850 billion. If they can show Wall Street that 2,000+ companies “use OpenAI products” through this PE channel, they’ll be inflating their corporate metrics right before the IPO. It doesn’t matter if the companies really need it or if it creates real value. All that matters is the number on the S-1.

    This is the AI playbook entering its most dangerous phase. The technology is real, but the business model is driven by financial tricks, guaranteed returns, and distribution deals that look more like a pharmaceutical company paying doctors to prescribe its medicine than a software company winning quality customers.

    Both OpenAI and Anthropic admitted it.

  • AI fun

    Well, the first cracks in the AI economy are now becoming visible. A couple of significant events involving Anthropic and Microsoft have taken place over the last few weeks – if you follow the industry, you’ll find this interesting.


    So, here’s the breakdown:

    In marketing, there is a concept known as the “painted door test.” It involves gauging market reaction without actually altering the product itself.


    You might add a button for a new feature, but when clicked, it merely displays a message stating that the feature is “coming soon.” Crucially, however, you track exactly how many times that button gets clicked.
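    The mechanics are trivial – which is the point. A minimal sketch of a painted-door counter; the names (and the “export-to-pdf” feature) are invented for illustration:

```python
from collections import Counter

clicks = Counter()  # demand signal, collected before any feature is built

def on_click(user_id: str, button: str = "export-to-pdf") -> str:
    """Log interest in a feature that doesn't exist yet."""
    clicks[button] += 1
    return "Coming soon!"  # the 'painted door': nothing real behind the button

# Simulate three users hitting the fake button.
for uid in ["u1", "u2", "u3"]:
    on_click(uid)

print(clicks["export-to-pdf"])  # 3
```

    Three clicks, zero engineering spent on the feature itself – that is the whole trick.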

    Anthropic did something similar in April 2026. Two percent of their paying users—those on the $20/month plan—saw a notification stating that Claude Code (their most in-demand product) would henceforth be available exclusively on the “Max” tier, priced at $100/month.


    A number of users actually upgraded to the more expensive subscription, unaware that the whole thing was merely a test.
    Ultimately, due to the public outcry, Anthropic reverted everything to its original state; however, the company’s pockets are clearly not bottomless. Maintaining powerful models—such as Opus 4.7—comes at a very steep cost.


    Now, let’s turn our attention to Microsoft, which recently announced that, effective June 1, 2026, GitHub Copilot will be switching to a token-based pricing model.
    Simply put, the cost of a session will now depend on the computational power of the underlying model being utilized, rather than on the sheer number of queries submitted. Not all queries are created equal anymore.
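    In code, the shift is from a flat per-query charge to something like the sketch below. The rates are purely illustrative – Microsoft’s actual numbers aren’t given here:

```python
# Hypothetical per-million-token rates by model tier (illustrative only).
RATES_PER_MTOK = {
    "small":    0.25,   # cheap, fast model
    "frontier": 10.00,  # expensive, compute-heavy model
}

def session_cost(tokens: int, model: str) -> float:
    """Price a session by tokens consumed and the model's compute tier,
    instead of charging a flat rate per query."""
    return tokens / 1_000_000 * RATES_PER_MTOK[model]

# The same 50k-token session, priced very differently depending on the model:
print(session_cost(50_000, "small"))
print(session_cost(50_000, "frontier"))
```

    The same session is forty times more expensive on the big model – which is exactly the cost structure the old flat tiers were hiding.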


    Personally, I find this approach logical—though many developers beg to differ, largely because the previous pricing tiers were incredibly wallet-friendly. (Heh.)


    Microsoft revealed that their weekly operational costs for supporting Copilot have doubled since the beginning of 2026 and continue to skyrocket. The company has been compelled to take action—even though, unlike Anthropic, Microsoft is a fundamentally profitable enterprise with ample financial reserves and liquidity to draw upon. Interestingly, Google currently finds itself in a winning position; the company invests over $100 billion annually in AI while remaining profitable—it doesn’t have to “dance” for investors just to secure new funding rounds, unlike OpenAI or Anthropic.


    Oh, and by the way: a couple of weeks ago, OpenAI raised $120 billion in investment—capital that will last them only 18 to 24 months, as they are burning through $5–7 billion per month at their current operational loads.
    This explains the aggressive marketing tactics employed by AI companies—including their constant claims that programmers are no longer needed, and so forth. Their primary objective is to attract capital, not merely to sell a product.
    Yes, tokens are indeed ceasing to be free. And this marks just the beginning of the AI industry’s transition toward realistic pricing models.
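    The runway claim above is simple arithmetic worth checking (the figures quoted round to roughly 18–24 months):

```python
raised = 120                 # $B raised, per the post
burn_low, burn_high = 5, 7   # $B burned per month, per the post

months_best = raised / burn_low    # 24 months at the lower burn rate
months_worst = raised / burn_high  # ~17 months at the higher burn rate
print(f"{months_worst:.0f}-{months_best:.0f} months of runway")
```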


    Cherish this moment, friends: right now, accessing high-level artificial intelligence is cheaper than it has likely ever been—or ever will be—in human history.


    Furthermore, I would advise keeping an eye on projects focused on the decentralization of AI—initiatives that are actively pushing back against the monopolies that are already beginning to emerge. And if you were just thinking, “But what about the Chinese? They have free models! They’ll save us all!”—ho-ho, don’t be so naive!


    I’m willing to bet that this year will be the very last year we see large-scale, high-performance open-source AI models coming out of China.


    That’s the situation, folks: https://www.facebook.com/share/p/1Cd88hppbC/