<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
<channel>
<title><![CDATA[ creston.blog ]]></title>
<description><![CDATA[ A software development blog focused on the next 5 billion years. ]]></description>
<link>https://creston.blog</link>
<image>
    <url>https://creston.blog/favicon.png</url>
    <title>creston.blog</title>
    <link>https://creston.blog</link>
</image>
<lastBuildDate>Fri, 13 Mar 2026 22:14:15 +0000</lastBuildDate>
<atom:link href="https://creston.blog" rel="self" type="application/rss+xml"/>
<ttl>60</ttl>

    <item>
        <title><![CDATA[ 2026 Predictions ]]></title>
        <description><![CDATA[ There&#39;s absolutely no way I can predict what will happen in 2026. All of these are a coin toss. No one can produce accurate probabilities of these events. I&#39;m writing about some predictions I have an interest in, but I am by no means an expert. ]]></description>
        <link>https://creston.blog/2026-predictions/</link>
        <guid isPermaLink="false">69554288785f590001a80b3a</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Wed, 31 Dec 2025 17:46:39 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/12/BA3A4116-1.jpg" medium="image"/>
        <content:encoded><![CDATA[ <p>There's absolutely no way I can predict what will happen in 2026. All of these are a coin toss. No one can produce accurate probabilities of these events. I'm writing about some predictions I have an interest in, but I am by no means an expert. Nevertheless, it's fun so here are my 2026 predictions:</p><h2 id="ai-sparkle-emoji">AI (sparkle emoji)</h2><p>2025 was an eventful year for AI (to be precise, the use of LLMs in applications called AI. I don't believe AI is a useful term outside of marketing purposes.) I expect 2026 to be similar. It's fairly predictable that models will get better. There's also discussion of an AI bubble bursting. I don't know how to define a bubble, and it might take years before we can point to the time it burst. What I do know is that we're currently in a data center construction boom. These data centers will take years to come online, and power generation is a larger bottleneck than compute. Knowing that the market can stay irrational longer than you can stay solvent, I think we won't see any major shifts in 2026 regarding AI spending and bubbles. As long as the money is flowing AI will boom, demand be damned.</p><h3 id="an-8b-parameter-model-scores-at-least-08-on-mmlu-pro-by-december-31">An 8B parameter model scores at least 0.8 on MMLU-Pro by December 31.</h3><p>Basically, I'm predicting there will be open source models that can run locally in 2026 that are as good as cloud models from 2025. Let's use <a href="https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro">this benchmark</a>. Currently GPT-5(high) scores 0.871. InternLM3 (open source) at 8B parameters scores 0.576. LLaMA 3.1 (2024) Instruct 8B basically doubled the score of LLaMA 2 7B (2023) in about a year. LLaMA-4 Scout (2025) is showing the same performance as LLaMA3.1-Instruct with 75% fewer parameters. The trend is towards smaller models with better benchmark performance. 
For this prediction, I will only consider the total number of model parameters (not "active" parameters). An optional side prediction: the model that achieves a score of 0.8 on MMLU-Pro with 8B parameters will come from China.</p><h3 id="usage-of-chatgpt-and-gemini-decline-after-launch-of-new-siri">Usage of ChatGPT and Gemini decline after launch of new Siri</h3><p>The current leaders in AI chat apps are OpenAI's ChatGPT and Google's Gemini. Apple is famously "behind" in the AI race. But Apple has a history of winning, even when it's late to the game. Lagging in AI hasn't hurt Apple's stock price in the slightest (up 8.93% in 2025.) It's long been expected that Apple will release their <a href="https://www.macrumors.com/2025/12/30/apple-ai-strategy-could-pay-off-in-2026/">revamped Siri</a> in 2026, and it's <a href="https://www.macrumors.com/2025/11/05/apple-siri-google-gemini-partnership/">been reported</a> that the new Siri will be powered by Google's Gemini. So no matter how this plays out, Google wins. Nevertheless I predict Apple's offering will be good enough that most iPhone users won't need a dedicated ChatGPT or Gemini app. In other words, AI chatbots will be <a href="https://www.urbandictionary.com/define.php?term=sherlocked">Sherlocked</a>. Official numbers may be hard to find, so to verify this result I'll rely on third-party reporting and some speculation. Gemini had 21M downloads last month according to <a href="https://app.sensortower.com/overview/6477489729?country=US">Sensor Tower</a>. And ChatGPT <a href="https://app.sensortower.com/overview/com.openai.chatgpt?country=US">had 65M</a>. 
A decline in these numbers doesn't necessarily mean Apple is gaining market share, but we might be able to speculate based on reporting about Apple's new Siri features.</p><h3 id="new-standards-for-agent-tooling">New standards for agent tooling</h3><p>With the formation of the new <a href="https://aaif.io">Agentic AI Foundation</a>, I anticipate many more standards being announced in 2026. The most obvious one I can think of is a standard for AI tool calling. Look at this <a href="https://github.com/jujumilk3/leaked-system-prompts/blob/main/openai-chatgpt5_20250808.md">leaked system prompt for ChatGPT 5</a> or this <a href="https://github.com/jujumilk3/leaked-system-prompts/blob/main/google-gemini-cli_20250626.md">Gemini CLI prompt</a>. They have to describe every tool available to the agent and how to use it, and each uses a different syntax. Presumably some non-LLM deterministic code understands how to interpret and run the commands in the output. This prediction passes if a standard is drafted that includes syntax for AI agent tool calling, or an approach emerges that replaces the need for it.</p><h2 id="politics">Politics</h2><p>I don't like to write or talk about politics much, because such discussions are rarely fruitful. Nevertheless, we live in a world where almost everything is politicized, so you can't avoid it. These are my predictions.</p><h3 id="many-of-trumps-tariffs-are-ruled-unconstitutional">Many of Trump's tariffs are ruled unconstitutional</h3><p>The Supreme Court heard arguments in <a href="https://en.wikipedia.org/wiki/Learning_Resources_v._Trump">Learning Resources v. Trump</a> in November. We're still waiting on a ruling. The case hinges on whether the International Emergency Economic Powers Act authorizes tariffs under Trump's emergency declarations. My prediction is that the Supreme Court will rule against Trump.
This would unravel many (but not all) of the imposed tariffs.</p><h3 id="democrats-win-in-the-house-midterms">Democrats win in the House midterms</h3><p>Every once in a while I like to peek at Nate Silver's <a href="https://www.natesilver.net/p/trump-approval-ratings-nate-silver-bulletin">Trump Approval Dashboard</a>. The accumulation of polls presents an interesting picture over time. The exact numbers aren't meaningful, but the trends seem pretty clear to me: Trump is losing popularity. The political pendulum swings—if people don't feel like the current party is improving their life, then they'll vote for the other one. There's precedent for this too. Trump lost the <a href="https://en.wikipedia.org/wiki/2018_United_States_elections">House in 2018</a> with record voter turnout. This time, the Republicans are trying new strategies like redistricting in Texas, but I think these efforts won't matter. Hispanic voters were critical to Trump's election, but <a href="https://www.pbs.org/newshour/politics/trumps-favorability-has-fallen-among-hispanics-since-january-a-new-ap-norc-poll-finds">now they've soured</a> on him.</p><h3 id="billionaires-get-richer">Billionaires get richer</h3><p>Billionaires earned a collective <a href="https://www.bloomberg.com/news/articles/2025-12-31/richest-billionaires-added-2-2-trillion-in-wealth-in-2025-led-by-musk-ellison">$2.2 trillion in 2025</a>. That's an absurd amount of money, but I don't see it stopping. I predict they'll earn a collective $3 trillion in 2026.</p><h2 id="other">Other</h2><p>Here are a few more predictions in other dimensions of life.</p><h3 id="avatar-fire-and-ash-reaches-top-10-highest-grossing-films-of-all-time">Avatar: Fire and Ash reaches top 10 highest grossing films of all time</h3><p>James Cameron will do it <a href="https://en.wikipedia.org/wiki/List_of_highest-grossing_films">again</a>. Avatar 1 is the highest grossing film of all time.
Avatar 2 is the 3rd highest grossing film of all time, and Titanic is the 4th. Betting against James Cameron just seems silly. As of writing, Avatar 3 has about $800 million in box office revenue. It just needs to double that to get into the top 10. I don't really know much about the economics of movies, but maybe that's possible. I'll award partial credit if it makes top 50 (~$1 billion.)</p><h3 id="bitcoin-drops-to-50000">Bitcoin drops to $50,000</h3><p>After hitting a peak around $125,000, BTC is now down to $87,000. The last time it was $50k was in 2024. There are no fundamentals driving the price of Bitcoin. People either take advantage of the dip to pump it up again, or it continues to sink. That's right, you heard it here first: bitcoin will either go up or down! But my bet is down. Bonus prediction: there will be <a href="https://www.wsj.com/finance/banking/walmart-amazon-stablecoin-07de2fdd?gaa_at=eafs&amp;gaa_n=AWEtsqf0-ps5Img9QTnUUBHZAeU0NUJJ7KnG01SHbRpR8dihgBJwwbeDLNlT&amp;gaa_ts=69555ac7&amp;gaa_sig=HfIFzamKlKocslC1sEFyCbZfGGpqi27m3eH-7z5lYC6rgRmg4c7ej6GX-uGNwFvfNGUBiliVFBttSWNwxzG8xw%3D%3D">no Walmart or Amazon stablecoin</a> in 2026 (who would use this?)</p><h3 id="5-of-steam-users-will-be-on-linux">5% of Steam users will be on Linux</h3><p>Linux usage <a href="https://www.gamingonlinux.com/steam-tracker/">grew from 2% to 3% in 2025</a>. Linux users have made up <a href="https://www.gamingonlinux.com/steam-tracker/">3.2% since November</a>. With the recently announced Steam Machine and additional hardware, switching to Linux will probably be even more appealing. Microsoft has done gamers no favors. Valve's investment in Linux protects them from dependence on Microsoft. Windows is getting worse.
I don't put much stake in <a href="https://www.reddit.com/r/windows/comments/1kmvfak/open_letter_to_microsoft_please_stop_the/">public outcry</a>—because rants on Reddit rarely show up in a company's top-line metrics—but I think it fits into a larger narrative around the decline of Windows and Xbox. Microsoft has been losing in the gaming sphere for some time. The PlayStation 5 way <a href="https://en.wikipedia.org/wiki/List_of_best-selling_game_consoles">outperformed</a> the Xbox Series X/S. Now they're putting <a href="https://www.playstation.com/en-us/games/halo-campaign-evolved/">Halo on PlayStation</a>, presumably because they can't make money on their own consoles anymore. They severely botched their <a href="https://en.wikipedia.org/wiki/Acquisition_of_Activision_Blizzard_by_Microsoft#Aftermath">acquisition of Activision Blizzard</a>, which looked like an attempt to rescue their gaming division, but it backfired. They have already fired most of the Activision employees, and the future of some franchises seems uncertain. Microsoft lost their lead in consoles, and now they're losing in PC gaming. <a href="https://www.cdprojekt.com/en/media/news/cyberpunk-2077-ultimate-edition-is-available-now-on-mac/">Even Apple</a> is getting AAA studios to port their games. Basically—Windows is giving gamers every reason to abandon ship and Valve is over here offering a life raft.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ AI Second ]]></title>
        <description><![CDATA[ Many companies these days are announcing &quot;AI first&quot; initiatives. Other than being good for stock prices, they believe AI is good enough for human work. I don&#39;t disagree. I use AI tools for work, and for personal reasons. I&#39;ve found them to be very ]]></description>
        <link>https://creston.blog/ai-second/</link>
        <guid isPermaLink="false">68362cf66348e500014c24f5</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Tue, 27 May 2025 21:34:08 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/05/5O2A7558.jpg" medium="image"/>
<content:encoded><![CDATA[ <p>Many companies these days are announcing "AI first" initiatives. Other than being good for stock prices, they believe AI is good enough for human work. I don't disagree. I use AI tools for work, and for personal reasons. I've found them to be very helpful. So then, do I think AI is "good" or "bad"? I believe–like everything–<em>it depends</em>.</p><h2 id="it-depends">It depends</h2><p>The best use cases I've found for AI are the things I'm bad at: drawing art for video games, writing physics simulations, using new programming languages, preparing taxes, etc. Because these things take so long for me to figure out on my own, the results from AI are a huge time saver. Not only that, the results are better than what I can do. I don't have time to practice drawing, so I'm happy that AI can draw me a tree, even if it's soulless and devoid of anything that could be perceived as art. It lets me focus on the piece I care about, which is writing code. Obviously hiring a human artist would be better, but also more expensive. Maybe I can do that once I have a game that makes money.</p><p>There's another category of use-cases where the stakes are very low, and there's not much skill involved: writing unit tests, taking meeting notes, preparing outlines, building static websites, etc. AI tools do these tasks perfectly fine. The results won't blow you away, but they'll save you a lot of time.</p><p>That said, I won't use AI for anything I'm good at, where quality is important to me. I'm almost never happy with the API design choices that AI agents produce. They get the job done, but aren't designed well for long term maintenance and growth. Furthermore, I've never found them good at large code refactors, in part because they start to hallucinate new functionality. Additionally, when I do a lot of writing, I don't trust AI to effectively convey what I want in text, but it's fine if it wants to check my grammar or whatever.
Usually, if I have the experience to visualize the end state I want, and how to get there, AI won't meet my standards. It might be capable of bits and pieces, but it won't one-shot an entire implementation of something I'm skilled enough to make myself.</p><p>One of the reasons I think AI can't do what I'm skilled at is <em>context</em>. In our brains there is a lot of context we take for granted. Perhaps the AI agent <em>can</em> do what I want, but only if I write enough context for it. LLMs are statistical models of the collective sum of all written human knowledge. They're specifically trained to produce the <em>expected</em> sequence of tokens. Meaning: they're not creative. Any creativity produced by an LLM must be derived from the (human-generated) prompt. But writing a highly detailed prompt might be just as much work as doing it myself.</p><h2 id="mediocrity">Mediocrity</h2><p>In short, AI lifts up the bottom. It gives us all tools to make us better at things we're bad at, but it doesn't make us better at what we're already good at. If you've never written code in your life, you now have access to a dozen (and counting) tools to build no-code or low-code apps. You'll never replace a real engineer with these, but now you can build <em>something</em>. If you've never been good at art, you can produce almost anything you want on demand. It might not be "art", but it can bring life to an empty page. If you've never been good at languages, you have an on-demand translator for hundreds of languages. It might not capture nuance or intention in your Booker Prize-winning novel or poem, but it can grow the audience of your video game or app.</p><p>We're at a point where the baseline skill for a lot of things is <em>good enough</em> thanks to AI. This means individually, we are capable of a lot more. But I don't think AI is yet an expert at anything. If you want the best software, you hire an expert. If you want the best art, you hire an expert.
If you want the best writing, you hire an expert. There is a corollary: in this future where AI makes us all mediocre artists, how do you become a master? Companies are hiring fewer software developers (in part) because of AI. How will we train the next generation of senior software developers if the new crop of candidates relies on AI to do mediocre work?</p><p>I'm not convinced the current models will ever be good enough to replace experts. Iterative improvements in benchmarks might make the LLMs "smarter", but they don't make them better decision makers. They don't have anything at stake, so they can't make long-term/short-term trade-offs like a human. They have no accountability, so they don't care if they mess up. They don't experience the human condition, so they can't make artistic choices. They only "know" what you tell them in the prompt, and no more.</p><p>One person can get a lot more done with AI than they can alone, but the results just won't be that good. AI can do the job of mediocre humans. No company wants to hire mediocre humans to begin with, but that's because mediocre humans are expensive! By comparison, AI is cheap. When a company says something like "AI first," I don't assume they mean they will replace humans with AI. I assume it means they want more mediocrity. Sometimes that's fine–plenty of businesses want quantity over quality. But for anyone that wants excellence, reach for humans first, and AI second.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ You should write a game ]]></title>
        <description><![CDATA[ Download Cursor/Aider/whatever, vibe code a game in Godot, Bevy, Unity, Unreal, etc. Seriously. It&#39;s fun. It&#39;s creative. It&#39;s better than engaging in proactive stakeholder alignment, leveraging strategic communication methodologies to drive consensus and mitigate potential friction points.


Humanity

A solo project will ]]></description>
        <link>https://creston.blog/you-should-write-a-game/</link>
        <guid isPermaLink="false">67e5f1d36e43ae00010a350b</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Fri, 28 Mar 2025 01:01:46 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/03/BA3A1236.jpg" medium="image"/>
        <content:encoded><![CDATA[ <p>Download Cursor/Aider/whatever, vibe code a game in Godot, Bevy, Unity, Unreal, etc. Seriously. It's fun. It's creative. It's better than engaging in proactive stakeholder alignment, leveraging strategic communication methodologies to drive consensus and mitigate potential friction points.</p><h2 id="humanity">Humanity</h2><p>A solo project will remind you that you are a human. You are creative. Whatever your company thinks you're worth, it's ten times that. It will remind you that you can create truly amazing things when you're not implementing a robust governance structure and fostering a culture of transparency to empower teams to proactively identify and course-correct risks before they escalate into critical path blockers.</p><p>What's that? An excuse? You don't have time? Well delivering a project on time is not merely a function of deadline adherence but a holistic exercise in resource optimization, expectation management, and adaptive execution.</p><p>Make a game. Do it yourself, maybe jam with a friend. But do it because it's fun, it's liberating, and you can feel like there's something human still left in you after a day spent ensuring project trajectory remains on schedule and aligned with broader business objectives, thereby maximizing value creation and reinforcing stakeholder trust.</p><h2 id="learning">Learning</h2><p>You can learn a lot too. Have you ever written something in an <a href="https://en.wikipedia.org/wiki/Entity_component_system">ECS</a> framework? Did you know Prime Video <a href="https://www.infoq.com/presentations/prime-video-rust/">implemented</a> an entire UI renderer in WebAssembly using an ECS framework for delivering a fast, consistent experience across devices with over-the-air updates? 
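</p><p>If you've never seen one, the core ECS idea is simple: entities are plain IDs, components are data keyed by entity, and systems are functions that run over every entity carrying the components they care about. Here's a deliberately naive sketch in plain Rust (the names are made up for illustration; real frameworks add fast storage and scheduling on top):</p>

```rust
use std::collections::HashMap;

// An entity is just an ID; all of its data lives in component storages.
type Entity = u32;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Position { x: f32, y: f32 }

#[derive(Debug, Clone, Copy)]
struct Velocity { dx: f32, dy: f32 }

// The "world" owns one storage per component type, keyed by entity ID.
#[derive(Default)]
struct World {
    next_id: Entity,
    positions: HashMap<Entity, Position>,
    velocities: HashMap<Entity, Velocity>,
}

impl World {
    fn spawn(&mut self) -> Entity {
        let id = self.next_id;
        self.next_id += 1;
        id
    }
}

// A "system" is a plain function over every entity with the right components.
fn movement_system(world: &mut World, dt: f32) {
    for (entity, vel) in &world.velocities {
        if let Some(pos) = world.positions.get_mut(entity) {
            pos.x += vel.dx * dt;
            pos.y += vel.dy * dt;
        }
    }
}

fn main() {
    let mut world = World::default();
    let ball = world.spawn();
    world.positions.insert(ball, Position { x: 0.0, y: 0.0 });
    world.velocities.insert(ball, Velocity { dx: 1.0, dy: 2.0 });
    movement_system(&mut world, 0.5);
    println!("{:?}", world.positions[&ball]); // Position { x: 0.5, y: 1.0 }
}
```

<p>The payoff is composition: gravity or collision is just another storage and another system, with no inheritance hierarchy in sight.</p><p>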
That's right: knowing how to do more than one thing can drive sustained value realization by remaining impact-oriented.</p><p>If you're like me, you probably work with HTTP APIs all day (or maybe gRPC or whatever.) But game networking is much different. Try adding netcode to your game. Make it multiplayer. Learn how to simulate the game on the server using UDP to transmit commands. Figure out how to scale it (hint: Kubernetes won't save you.) The challenges are new, fresh, and fun.</p><p>Why should you care about learning new things? Claude Shannon invented digital circuits because he knew analog circuits <em>and</em> <a href="https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits">boolean algebra</a> (a rare combination at the time.) Computers exist because of him. Today's AI exists because David Rumelhart knew <a href="https://www.nature.com/articles/323533a0"><em>differential calculus</em></a>. For context, backpropagation was unsolved for 17 years. Maybe the only thing stopping you from discovering a seismic shift in the fabric of technology is that you're too busy making real-time reporting dashboards.</p><h2 id="productivity">Productivity</h2><p>We're in a golden era where the cost of creativity has plummeted. What might have taken a team of 10 people can be done by one person. Sure, AI is <a href="https://www.nature.com/articles/d41586-024-00478-x">destroying the environment</a>, <a href="https://www.404media.co/brainrot-ai-on-instagram-is-monetizing-the-most-fucked-up-things-you-can-imagine-and-lots-you-cant/">poisoning our brains</a>, and <a href="https://www.reuters.com/technology/artificial-intelligence/adobe-adds-ai-tools-its-stock-photography-business-2024-11-12/">killing jobs</a>, but it's also created the opportunity for smaller teams to compete with larger incumbents.
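</p><p>On the netcode point above: the whole model is different from request/response. Here's a minimal sketch of a server-authoritative command loop over <code>std::net::UdpSocket</code> (the protocol strings are invented, and both sockets live in one process purely for illustration):</p>

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // "Server": owns the authoritative simulation and listens for commands.
    let server = UdpSocket::bind("127.0.0.1:0")?; // port 0 = pick any free port
    let server_addr = server.local_addr()?;

    // "Client": fires command packets and hopes they arrive. UDP guarantees
    // nothing, which is why real netcode layers sequence numbers, acks, and
    // client-side prediction on top of it.
    let client = UdpSocket::bind("127.0.0.1:0")?;
    client.send_to(b"MOVE 1 0", server_addr)?;

    // Server receives a command; a real game would apply it on the next tick.
    let mut buf = [0u8; 64];
    let (len, from) = server.recv_from(&mut buf)?;
    println!("server got: {}", String::from_utf8_lossy(&buf[..len]));

    // Server replies with an authoritative state snapshot.
    server.send_to(b"STATE x=1 y=0", from)?;
    let (len, _) = client.recv_from(&mut buf)?;
    println!("client got: {}", String::from_utf8_lossy(&buf[..len]));
    Ok(())
}
```

<p>Scaling that to thousands of players, with tick rates, interpolation, and cheat prevention, is where the fun starts.</p><p>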
And, if it's not abundantly clear, small teams spend less time instituting a governance architecture that balances strategic oversight with operational agility to preemptively address misalignment risks before they escalate into critical disruptions.</p><p>Here's an early video from my first game, which implements simple kinematics to move the ball and the board. The amazing thing is, AI implemented almost everything up to this point.</p><figure class="kg-card kg-video-card kg-width-regular kg-card-hascaption" data-kg-thumbnail="https://creston.blog/content/media/2025/03/bonky_thumb.jpg" data-kg-custom-thumbnail="">
            <div class="kg-video-container">
                <video src="https://storage.ghost.io/c/d9/94/d9946053-1215-46b7-aee3-db49a913c6f4/content/media/2025/03/bonky.mp4" poster="https://img.spacergif.org/v1/2540x1432/0a/spacer.png" width="2540" height="1432" loop="" autoplay="" muted="" playsinline="" preload="metadata" style="background: transparent url('https://storage.ghost.io/c/d9/94/d9946053-1215-46b7-aee3-db49a913c6f4/content/media/2025/03/bonky_thumb.jpg') 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon" aria-label="Play video">
                        <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"></path>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container kg-video-hide">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon" aria-label="Play video">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"></path>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide" aria-label="Pause video">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"></rect>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"></rect>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:11</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate" aria-label="Adjust playback speed">1×</button>
                        <button class="kg-video-unmute-icon" aria-label="Unmute">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"></path>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide" aria-label="Mute">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"></path>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            <figcaption><p><span style="white-space: pre-wrap;">A simple physics simulation from vibe coding.</span></p></figcaption>
        </figure><p>Okay, maybe games aren't your thing. Maybe what you really want is to make an iOS camera app that applies nostalgic film filters. Maybe you want to create a calendar app that isn't just a wrapper around Google, Apple, or Microsoft. Maybe you want to destroy your least favorite SaaS provider by undercutting them with a fraction of the capital. Those are fine too. Find your creative outlet and show the world what you can do.</p><p>Or maybe coding is what you do for work and you want to spend your free time with your family, going on long walks in nature, painting a landscape, and eating fancy ice cream. Honestly, I can't fault that.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Rust can replace Protobuf ]]></title>
        <description><![CDATA[ Rust has a lot to like. Protobuf has a lot to hate. Can we make Protobuf better by replacing it with Rust? What follows is part of a long tradition of replacing things with Rust [1][2][3][4][5][6].

Protobuf is a language that describes an interface. An ]]></description>
        <link>https://creston.blog/rust-can-replace-protobuf/</link>
        <guid isPermaLink="false">67b24cc5170ffb00019786ed</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Mon, 17 Feb 2025 15:00:05 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/02/0S2A6851-1.jpg" medium="image"/>
        <content:encoded><![CDATA[ <p>Rust has a lot to like. Protobuf has a lot to hate. Can we make Protobuf better by replacing it with Rust? What follows is part of a long tradition of replacing things with Rust [<a href="https://astral.sh">1</a>][<a href="https://rspack.dev">2</a>][<a href="https://github.com/BurntSushi/ripgrep">3</a>][<a href="https://github.com/RustPython/RustPython">4</a>][<a href="https://github.com/nushell/nushell">5</a>][<a href="https://en.wikipedia.org/wiki/Weathering_steel">6</a>].</p><p><a href="https://protobuf.dev">Protobuf</a> is a language that describes an interface. An <a href="https://en.wikipedia.org/wiki/Interface_description_language">interface description language</a> (IDL), if you will. It's also a serialization format that transports data across the wire. It's also a command line tool (<code>protoc</code>) that compiles the IDL into language-specific packages to read and write said serialization format. It generates code in many languages.</p><p>Rust is a language that can do everything, but it's very boilerplate-heavy. Not good for simple interface definitions. Blessedly, Rust supports macros, which reduce boilerplate. Macros can auto-generate virtually any Rust code at compile time. There are two kinds of macros that are important for replacing Protobuf.</p><ul><li>Macros like <a href="https://rustwasm.github.io/wasm-bindgen/examples/hello-world.html">wasm-bindgen</a> (JS) and <a href="https://pyo3.rs/v0.23.4/getting-started.html">pyo3</a> (Python) that generate glue code to bind to foreign function interfaces. With hardly any effort, we can expose our basic Rust types to JavaScript and Python.</li><li><a href="https://serde.rs">Serde</a>, a macro that describes how any Rust struct or enum should be serialized. 
It's format-agnostic, so you can plug-and-play any serialization format you want, such as <a href="https://github.com/serde-rs/json" rel="noreferrer">JSON</a> or <a href="https://crates.io/crates/rmp-serde">msgpack</a>.</li></ul><p>If it's not abundantly obvious what we're going to do, let me make it clear: we can write our interface using Rust structs, bind them to other languages, and serialize them with serde. Rust bindings in any language give us the portability of Protobuf with the expressiveness of Rust/serde types.</p><h2 id="protobuf-is-bad">Protobuf is bad</h2><p>Perhaps because it has to interface with so many languages, which do not all share the same features, some things are just needlessly verbose. Google publishes a standalone <a href="https://github.com/googleapis/proto-plus-python">Python library</a> (not baked into the generated Protobuf code) just to smooth over the painful ergonomics in Python. For example, every field has a default value, so if you don't check <code>HasField</code> everywhere you can't tell the difference between a field that was explicitly set to its default and one that was never set. (The pythonic approach would be to simply check <code>if not field</code>.)</p><p>For a language that's meant to describe data, Protobuf is extremely limited in its expressiveness. There are lots of <a href="https://reasonablypolymorphic.com/blog/protos-are-wrong/">complaints</a> about this online. To provide an example we can use, consider this interface meant to represent arithmetic expressions:</p><pre><code class="language-protobuf">syntax = "proto3";

package algebraic;

// An algebraic data type for arithmetic expressions
message Expr {
  oneof expr {
    int32 number = 1;
    BinaryOp binary_op = 2;
    UnaryOp unary_op = 3;
  }
}

// Represents a binary operation (e.g., addition, multiplication)
message BinaryOp {
  enum Operator {
    ADD = 0;
    SUBTRACT = 1;
    MULTIPLY = 2;
    DIVIDE = 3;
  }
  Operator op = 1;
  Expr left = 2;
  Expr right = 3;
}

// Represents a unary operation (e.g., negation)
message UnaryOp {
  enum Operator {
    NEGATE = 0;
  }
  Operator op = 1;
  Expr operand = 2;
}
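
// Even the tiny expression -(1 + 2) takes six nested messages to
// represent: an Expr holding a UnaryOp (NEGATE), whose operand is an
// Expr holding a BinaryOp (ADD), whose left and right are two more
// Exprs holding numbers.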
</code></pre><p>Probably, you won't "just use" the generated Protobuf types in your language. It's going to be tedious. So instead, you'll write your own types and convert them to/from Protobuf. This isn't wrong per se; it's similar to how you probably wouldn't "just use" JSON off the wire. But it <em>is</em> annoying. If we're going through the effort to define our types, we should get more for our troubles.</p><p>In Rust we could represent the above far more concisely:</p><pre><code class="language-rust">pub enum Expr {
    Number(i32),
    Op(Op),
}
// Box provides indirection so the recursive type has a known size.
pub struct UnaryOp(Box&lt;Expr&gt;);
pub struct BinaryOp(Box&lt;Expr&gt;, Box&lt;Expr&gt;);
pub enum Op {
    Add(BinaryOp),
    Subtract(BinaryOp),
    Multiply(BinaryOp),
    Divide(BinaryOp),
    Negate(UnaryOp),
}</code></pre><p>Not only is this more concise, it may map more neatly to our data model. Rust enum values mean we can represent all operations as a single type <code>Op</code> instead of two types: <code>BinaryOp</code> and <code>UnaryOp</code>. More flexible types like this make it easier to express what we mean in our APIs. </p><h2 id="serde-is-great">Serde is great</h2><p>The 10th most downloaded crate on crates.io is <a href="https://serde.rs">serde</a>. It's usually the first dependency I add to a project. It's so good I have a hard time using other languages that don't have something similar.</p><p>We can annotate a struct with serde derive macros, and convert it to any number of formats.</p><p>This example is from the serde docs. Observe that the <code>Point</code> struct has no knowledge of what formats it will serialize. Only that it <em>can be serialized</em>. We can delay the choice of format all the way until the moment we absolutely need it.</p><pre><code class="language-rust">use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    let serialized = serde_json::to_string(&amp;point).unwrap();
    println!("serialized = {}", serialized);

    let deserialized: Point = serde_json::from_str(&amp;serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
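
    // The choice of format stays open: any serde format crate slots in the
    // same way. Here we assume the rmp-serde crate (an extra dependency,
    // not part of the original example) for a compact binary encoding.
    let bytes = rmp_serde::to_vec(&amp;point).unwrap();
    println!("msgpack = {} bytes, json = {} bytes", bytes.len(), serialized.len());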
}</code></pre><p>This can be used, for example, to respond intelligently to different <code>Accept</code> headers. We can use JSON for a smooth development experience and switch to MsgPack when we want better performance in prod.</p><h2 id="ridl">Ridl</h2><p>I've published a <a href="https://github.com/crestonbunch/ridl" rel="noreferrer">demo</a> of a crate I call <code>ridl</code> (sounds like "riddle") which does roughly everything I said above. It's free for all to see and modify. I can't say for sure that people should be using this. What I can say, though, is that if you have a reason to invent another IDL, maybe start with Rust. If you think this idea could actually be something, then please open a pull request against ridl and start hacking. I'd love to see people's ideas.</p><p>At the time of writing, a ridl object looks something like this, with support for Python and JavaScript.</p><pre><code class="language-rust">#[cfg_attr(feature = "py", ridl::popo("hello"))]
#[cfg_attr(feature = "wasm", ridl::pojso)]
pub struct Hello {
    pub name: String,
}</code></pre><pre><code class="language-rust">pub struct Hello {
  pub name: String,
}</code></pre><p>That's it! A whole message in just a few lines of code.</p><p>For now, ridl serves as a proof-of-concept, and further development will depend on how much value people find in this idea. After all, maximizing value is what we're here to do.</p><p>To <a href="https://github.com/crestonbunch/ridl/blob/main/example/webserver/app/main.py">write</a> hello in Python:</p><pre><code class="language-python">from typing import Annotated
from rust_idl import hello

from fastapi import FastAPI, Response, Header

app = FastAPI()

@app.get("/hello/{name}")
def greet(name: str, accept: Annotated[str | None, Header()] = None):
    greeting = hello.Hello(name)

    if accept == "application/msgpack":
        response = Response(content=greeting.to_msgpack())
        response.headers["Content-Type"] = "application/msgpack"
    else:
        response = Response(content=greeting.to_json())
        response.headers["Content-Type"] = "application/json"

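    # Clients choose the wire format per request. For example (assuming a
    # local dev server on port 8000):
    #   curl -H "Accept: application/msgpack" http://localhost:8000/hello/world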
    return response</code></pre><p>To <a href="https://github.com/crestonbunch/ridl/blob/afd7753279b2534c39ff97fc1dbd2c9b364f1b9e/example/webclient/src/routes/%2Bpage.svelte#L13-L29">read</a> hello in JavaScript:</p><pre><code class="language-typescript">import init, { Hello } from "rust-idl";

await init(); // load wasm

const greet = async (name: string): Promise&lt;Hello&gt; =&gt; {
  const result = await fetch(`${WEBSERVER_URL}/hello/${name}`);
  const payload = new Uint8Array(await result.arrayBuffer());
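  // No Accept header is sent, so the server defaults to JSON and we parse
  // the raw bytes with from_json. Requesting "application/msgpack" would
  // pair with a from_msgpack constructor instead (hypothetical here, the
  // mirror of to_msgpack on the Python side).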
  return Hello.from_json(payload);
};</code></pre><h2 id="future">Future</h2><p>There are limitations. For example, pyo3 doesn't support enum values like the arithmetic example from earlier. Similarly, generic types present some challenges in language bindings. These may get better over time as the Rust ecosystem evolves, or they may be fundamental limitations that are never fully resolved.</p><p>Ridl may not be significantly more expressive than Protobuf without enum values and generics, but it can only get better. Protobuf is a highly constrained language, but Rust is a very expressive language. For example, ridl could, with the help of some more custom proc macros, rewrite the value enums into something pyo3 can bind.</p><p>There are some ergonomic improvements to be made. For example, having a single proc macro instead of one-per-language. Also the build steps have to be run separately for each language (<code>wasm-pack</code> for JS, <code>maturin</code> for Python, etc.) but even that could be improved. </p><p>The future API for ridl could be much better. Here's the simplest one you could imagine:</p><pre><code class="language-rust">#[ridl]
pub struct Hello {
  name: String
}

// build with `cargo build -F js -F py` or similar</code></pre><p>Serde lacks the backward-compatibility guarantees of the Protobuf serialization format, and serde can't produce that format (or anything like it). But we don't <em>have</em> to use serde: we could invent a new format. Or you could just, you know, not break backwards-compatibility.</p><p>There are other conceivable benefits to using Rust over Protobuf and its ilk. For example, you could extend your types with bits of logic. Arguably, that might stretch the idea of an IDL a bit too far. Taken to the extreme, you might as well write your whole app in Rust. But delivering value sometimes means being pragmatic. Just like this whole blog post is about being pragmatic. And rewriting everything in Rust is pragmatic.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Stop writing code ]]></title>
        <description><![CDATA[ The best code is code that doesn&#39;t need to be written. ]]></description>
        <link>https://creston.blog/stop-writing-code/</link>
        <guid isPermaLink="false">67abda0321857000015d2686</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Tue, 11 Feb 2025 23:22:12 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/02/BA3A8886-1.jpg" medium="image"/>
<content:encoded><![CDATA[ <p>If you spend more than 10% of your time as a software engineer writing code, you're doing it wrong. (I mean the time you spend doing your job, not the time you spend slacking your coworker that your PM unironically said "let's double click on that" in a meeting.)</p><p>The code you write isn't going to be around in 5 to 10 years. When your product is successful, old assumptions won't work anymore. Requirements evolve. You need to think about how your code will get replaced. Will it be easy or hard?</p><p>Don't leave behind a half-assed project, finished in one quarter and abandoned after you got promoted. If you can't commit to maintenance and support, you shouldn't write a single line of code. Use a popular open-source solution instead of writing your own. With zero effort on your part, there's documentation and a community that will keep working even when you stop. If there's no community solution, reflect on why that might be.</p><p>Stop writing code, and start <em>thinking about not writing code</em>. Only use 10% of your time writing code (or telling AI to write code) and the remaining 90% of your time figuring out what code not to write. Find off-the-shelf solutions. They age better. They come with better support. AI can auto-complete boto3 calls; it can't auto-complete your bespoke DynamoDB API wrapper written in Lua.</p><p>In the business world, they sometimes talk about core competencies. Very few companies have a core competency in writing databases. Or build pipelines. Or CI/CD automation. Or container orchestration. You would never write those things by hand. Just like you probably shouldn't be writing whatever you're about to write by hand.</p><p><em>If you're the best company in the world to solve that problem, then do it.</em> Otherwise, figure out how to avoid it.</p><blockquote><em>Another</em> Dockerfile? You sure?<br>A Helm chart?
Are you serious right now?<br><em>Authentication???</em> The door is right there.</blockquote> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ WASM will replace containers ]]></title>
        <description><![CDATA[ WebAssembly is a true write-once-run-anywhere experience. ]]></description>
        <link>https://creston.blog/wasm-will-replace-containers/</link>
        <guid isPermaLink="false">67aa89d749969d0008cab1a3</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Creston ]]></dc:creator>
        <pubDate>Mon, 10 Feb 2025 23:20:55 +0000</pubDate>
        <media:content url="https://creston.blog/content/images/2025/02/BA3A1573-1.jpg" medium="image"/>
        <content:encoded><![CDATA[ <p>In the year 2030, no one will remember Kubernetes.</p><h2 id="portability">Portability</h2><p>Containers solved a lot of important problems in software development. We had VMs before containers, but they were not as ergonomic to use. The experience of containers was, by comparison, a true delight. Fast(er) builds, near-instant startup, no virtualization, etc.</p><p>Now we're in an era where containers are annoying to work with. The promise of DevOps has been eroded by complicated tooling and tight coupling of program-container-linux. In my experience, developers want to write code and ship features to hit their quarterly goals. Learning how to use Docker is a distraction. No one has a goal to "improve Docker build times" unless you're part of the new PlatformOps (formerly DevOps (formerly Ops)) team.</p><p>My money is on WebAssembly (WASM) to replace containers. It already has in some places. WebAssembly is a true write-once-run-anywhere experience. (Anywhere that can spin up a V8 engine, which is a lot of places these days.) You can compile several languages into WebAssembly already. <a href="https://github.com/RustPython/RustPython">Languages</a> that can't be compiled will eventually have their own interpreters compiled to WebAssembly. The main thing holding back wider adoption is a lack of system interfaces. File access, networking, etc. But it's just a matter of time before these features get integrated.</p><p>A very obvious argument against WASM succeeding is the Java Virtual Machine (JVM). It's almost exactly the same promise: write once, run anywhere. After all, over 3 billion devices run Java. There are many languages that run on the JVM: Java, Kotlin, Scala, Clojure, Jython, etc. The biggest limitation is that JVM bytecode cannot run in a web browser (RIP <a href="https://en.wikipedia.org/wiki/Java_applet">Java Applets</a>). Web browsers are a crucial target for app development. 
Anyone who wants to share code between platforms will avoid the JVM. There is an interesting, slow-moving trend away from the JVM and towards statically compiled binaries. See: GraalVM, Kotlin Native, Scala Native, Jank.</p><h2 id="microservices">Microservices</h2><p>In a microservice architecture, you communicate with other services through HTTP or RPC calls, or using a message broker. This decoupling has important tradeoffs. The success of the model suggests most companies find the benefits outweigh the costs. Importantly, it creates strict boundaries between parts of your system. Conway's Law, etc. You don't <em>need</em> network boundaries to create good encapsulation, but it sure helps when you have a lot of developers.</p><p>Perhaps the biggest downside is the cost associated with having many small services communicating over a network. Not only do you pay for the bandwidth and resource overhead, but you have to engineer solutions to improve the reliability of each system in the face of network partitions.</p><p>With the advent of "serverless" platforms like AWS Lambda, you can take microservices to the extreme: just deploy single functions in the cloud. My favorite serverless model is Cloudflare Workers. If you have one Worker call another Worker, there is no actual network request. Instead, it just calls the Worker code in the same V8 runtime, meaning you don't pay the cost of a network roundtrip.</p><p>Importantly, Cloudflare Workers all run in V8 sandboxes. There are no containers. You have the option of writing your worker in JavaScript/TypeScript or compiled WASM. A container cannot call another container in the same process. But V8 can. In other words: by deploying WASM in V8 sandboxes, you get all of the developer benefits of microservices with all of the runtime benefits of monoliths. Cloudflare is not the only provider doing this.
Wasmer is trying to build a solution in this space as well.</p><h2 id="adoption">Adoption</h2><p>WebAssembly is still a young technology. But it's rapidly developing, and support seems to be growing. Maybe it doesn't work for you <em>yet</em>, but keep an eye on it. Try developing on Cloudflare to see what the future looks like. (TV infomercial voice): if you or a loved one are building Docker images, you may be entitled to a better experience.</p><p>If you primarily develop in languages like Python, Ruby, or PHP, be patient. I recommend adding a compiled language like Go or Rust to your <a href="https://charity.wtf/2018/12/02/software-sprawl-the-golden-path-and-scaling-teams-with-agency/">Golden Path</a> so you can be prepared when the WASM reckoning is upon us.</p> ]]></content:encoded>
    </item>

</channel>
</rss>