# Hacker News Top 30 — 2026-04-24

Generated on 2026-04-24 03:08 UTC

## 1. Why I Write (1946)

- **Source**: [https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/why-i-write/](https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/why-i-write/)
- **Site**: The Orwell Foundation
- **Author**: Eric Blair
- **Published**: 2011-06-03
- **HN activity**: 24 points · [2 comments](https://news.ycombinator.com/item?id=47884768)
- **Length**: 2.8K words (~13 min read)
- **Language**: en-GB

*This material remains under copyright in some jurisdictions, including the US, and is reproduced here with the kind permission of [the Orwell Estate](http://www.amheath.com/profile.php?a=198).*

From a very early age, perhaps the age of five or six, I knew that when I grew up I should be a writer. Between the ages of about seventeen and twenty-four I tried to abandon this idea, but I did so with the consciousness that I was outraging my true nature and that sooner or later I should have to settle down and write books.

I was the middle child of three, but there was a gap of five years on either side, and I barely saw my father before I was eight. For this and other reasons I was somewhat lonely, and I soon developed disagreeable mannerisms which made me unpopular throughout my schooldays. I had the lonely child’s habit of making up stories and holding conversations with imaginary persons, and I think from the very start my literary ambitions were mixed up with the feeling of being isolated and undervalued. I knew that I had a facility with words and a power of facing unpleasant facts, and I felt that this created a sort of private world in which I could get my own back for my failure in everyday life. Nevertheless the volume of serious – i.e. seriously intended – writing which I produced all through my childhood and boyhood would not amount to half a dozen pages. I wrote my first [poem](https://orwellfoundation.com/george-orwell/about-orwell/d-j-taylor-orwells-poetry/) at the age of four or five, my mother taking it down to dictation. I cannot remember anything about it except that it was about a tiger and the tiger had ‘chair-like teeth’ – a good enough phrase, but I fancy the poem was a plagiarism of Blake’s ‘Tiger, Tiger’. At eleven, when the war of 1914-18 broke out, I wrote a patriotic poem which was printed in the local newspaper, as was another, two years later, on the death of Kitchener. From time to time, when I was a bit older, I wrote bad and usually unfinished ‘nature poems’ in the Georgian style. I also, about twice, attempted a short story which was a ghastly failure. That was the total of the would-be serious work that I actually set down on paper during all those years.

However, throughout this time I did in a sense engage in literary activities. To begin with there was the made-to-order stuff which I produced quickly, easily and without much pleasure to myself. Apart from school work, I wrote *vers d’occasion*, semi-comic poems which I could turn out at what now seems to me astonishing speed – at fourteen I wrote a whole rhyming play, in imitation of Aristophanes, in about a week – and helped to edit school magazines, both printed and in manuscript. These magazines were the most pitiful burlesque stuff that you could imagine, and I took far less trouble with them than I now would with the cheapest journalism. But side by side with all this, for fifteen years or more, I was carrying out a literary exercise of a quite different kind: this was the making up of a continuous “story” about myself, a sort of diary existing only in the mind. I believe this is a common habit of children and adolescents. As a very small child I used to imagine that I was, say, Robin Hood, and picture myself as the hero of thrilling adventures, but quite soon my “story” ceased to be narcissistic in a crude way and became more and more a mere description of what I was doing and the things I saw. For minutes at a time this kind of thing would be running through my head: ‘He pushed the door open and entered the room. A yellow beam of sunlight, filtering through the muslin curtains, slanted on to the table, where a matchbox, half-open, lay beside the inkpot. With his right hand in his pocket he moved across to the window. Down in the street a tortoiseshell cat was chasing a dead leaf,’ etc., etc. This habit continued until I was about twenty-five, right through my non-literary years. Although I had to search, and did search, for the right words, I seemed to be making this descriptive effort almost against my will, under a kind of compulsion from outside. 
The ‘story’ must, I suppose, have reflected the styles of the various writers I admired at different ages, but so far as I remember it always had the same meticulous descriptive quality.

When I was about sixteen I suddenly discovered the joy of mere words, i.e. the sounds and associations of words. The lines from [*Paradise Lost*](http://en.wikipedia.org/wiki/Paradise_lost) –

> So hee with difficulty and labour hard  
> Moved on: with difficulty and labour hee,

which do not now seem to me so very wonderful, sent shivers down my backbone; and the spelling ‘hee’ for ‘he’ was an added pleasure. As for the need to describe things, I knew all about it already. So it is clear what kind of books I wanted to write, in so far as I could be said to want to write books at that time. I wanted to write enormous naturalistic novels with unhappy endings, full of detailed descriptions and arresting similes, and also full of purple passages in which words were used partly for the sake of their sound. And in fact my first completed novel, [*Burmese Days*](https://orwellfoundation.com/george-orwell/by-orwell/burmese-days/), which I wrote when I was thirty but projected much earlier, is rather that kind of book.

I give all this background information because I do not think one can assess a writer’s motives without knowing something of his early development. His subject-matter will be determined by the age he lives in – at least this is true in tumultuous, revolutionary ages like our own – but before he ever begins to write he will have acquired an emotional attitude from which he will never completely escape. It is his job, no doubt, to discipline his temperament and avoid getting stuck at some immature stage, or in some perverse mood: but if he escapes from his early influences altogether, he will have killed his impulse to write. Putting aside the need to earn a living, I think there are four great motives for writing, at any rate for writing prose. They exist in different degrees in every writer, and in any one writer the proportions will vary from time to time, according to the atmosphere in which he is living. They are:

(i) Sheer egoism. Desire to seem clever, to be talked about, to be remembered after death, to get your own back on grown-ups who snubbed you in childhood, etc., etc. It is humbug to pretend this is not a motive, and a strong one. Writers share this characteristic with scientists, artists, politicians, lawyers, soldiers, successful business men – in short, with the whole top crust of humanity. The great mass of human beings are not acutely selfish. After the age of about thirty they abandon individual ambition – in many cases, indeed, they almost abandon the sense of being individuals at all – and live chiefly for others, or are simply smothered under drudgery. But there is also the minority of gifted, willful people who are determined to live their own lives to the end, and writers belong in this class. Serious writers, I should say, are on the whole more vain and self-centered than journalists, though less interested in money.

(ii) Aesthetic enthusiasm. Perception of beauty in the external world, or, on the other hand, in words and their right arrangement. Pleasure in the impact of one sound on another, in the firmness of good prose or the rhythm of a good story. Desire to share an experience which one feels is valuable and ought not to be missed. The aesthetic motive is very feeble in a lot of writers, but even a pamphleteer or writer of textbooks will have pet words and phrases which appeal to him for non-utilitarian reasons; or he may feel strongly about typography, width of margins, etc. Above the level of a railway guide, no book is quite free from aesthetic considerations.

(iii) Historical impulse. Desire to see things as they are, to find out true facts and store them up for the use of posterity.

(iv) Political purpose – using the word ‘political’ in the widest possible sense. Desire to push the world in a certain direction, to alter other people’s idea of the kind of society that they should strive after. Once again, no book is genuinely free from political bias. The opinion that art should have nothing to do with politics is itself a political attitude.

It can be seen how these various impulses must war against one another, and how they must fluctuate from person to person and from time to time. By nature – taking your ‘nature’ to be the state you have attained when you are first adult – I am a person in whom the first three motives would outweigh the fourth. In a peaceful age I might have written ornate or merely descriptive books, and might have remained almost unaware of my political loyalties. As it is I have been forced into becoming a sort of pamphleteer. First I spent five years in an unsuitable profession (the Indian Imperial Police, in Burma), and then I underwent poverty and the sense of failure. This increased my natural hatred of authority and made me for the first time fully aware of the existence of the working classes, and the job in Burma had given me some understanding of the nature of imperialism: but these experiences were not enough to give me an accurate political orientation. Then came Hitler, the Spanish Civil War, etc. By the end of 1935 I had still failed to reach a firm decision. I remember [a little poem](https://orwellfoundation.com/george-orwell/by-orwell/poetry/a-happy-vicar-i-might-have-been/) that I wrote at that date, expressing my dilemma:

> A happy vicar I might have been  
> Two hundred years ago,  
> To preach upon eternal doom  
> And watch my walnuts grow
> 
> But born, alas, in an evil time,  
> I missed that pleasant haven,  
> For the hair has grown on my upper lip  
> And the clergy are all clean-shaven.
> 
> And later still the times were good,  
> We were so easy to please,  
> We rocked our troubled thoughts to sleep  
> On the bosoms of the trees.
> 
> All ignorant we dared to own  
> The joys we now dissemble;  
> The greenfinch on the apple bough  
> Could make my enemies tremble.
> 
> But girls’ bellies and apricots,  
> Roach in a shaded stream,  
> Horses, ducks in flight at dawn,  
> All these are a dream.
> 
> It is forbidden to dream again;  
> We maim our joys or hide them;  
> Horses are made of chromium steel  
> And little fat men shall ride them.
> 
> I am the worm who never turned,  
> The eunuch without a harem;  
> Between the priest and the commissar  
> I walk like Eugene Aram;
> 
> And the commissar is telling my fortune  
> While the radio plays,  
> But the priest has promised an Austin Seven,  
> For Duggie always pays.
> 
> I dreamt I dwelt in marble halls,  
> And woke to find it true;  
> I wasn’t born for an age like this;  
> Was Smith? Was Jones? Were you?

The Spanish war and other events in 1936-37 turned the scale and thereafter I knew where I stood. Every line of serious work that I have written since 1936 has been written, directly or indirectly, *against* totalitarianism and *for* democratic socialism, as I understand it. It seems to me nonsense, in a period like our own, to think that one can avoid writing of such subjects. Everyone writes of them in one guise or another. It is simply a question of which side one takes and what approach one follows. And the more one is conscious of one’s political bias, the more chance one has of acting politically without sacrificing one’s aesthetic and intellectual integrity.

What I have most wanted to do throughout the past ten years is to make political writing into an art. My starting point is always a feeling of partisanship, a sense of injustice. When I sit down to write a book, I do not say to myself, ‘I am going to produce a work of art’. I write it because there is some lie that I want to expose, some fact to which I want to draw attention, and my initial concern is to get a hearing. But I could not do the work of writing a book, or even a long magazine article, if it were not also an aesthetic experience. Anyone who cares to examine my work will see that even when it is downright propaganda it contains much that a full-time politician would consider irrelevant. I am not able, and do not want, completely to abandon the world view that I acquired in childhood. So long as I remain alive and well I shall continue to feel strongly about prose style, to love the surface of the earth, and to take a pleasure in solid objects and scraps of useless information. It is no use trying to suppress that side of myself. The job is to reconcile my ingrained likes and dislikes with the essentially public, non-individual activities that this age forces on all of us.

It is not easy. It raises problems of construction and of language, and it raises in a new way the problem of truthfulness. Let me give just one example of the cruder kind of difficulty that arises. My book about the Spanish civil war, [*Homage to Catalonia*](https://orwellfoundation.com/george-orwell/by-orwell/homage-to-catalonia/), is of course a frankly political book, but in the main it is written with a certain detachment and regard for form. I did try very hard in it to tell the whole truth without violating my literary instincts. But among other things it contains a long chapter, full of newspaper quotations and the like, defending the Trotskyists who were accused of plotting with Franco. Clearly such a chapter, which after a year or two would lose its interest for any ordinary reader, must ruin the book. A critic whom I respect read me a lecture about it. ‘Why did you put in all that stuff?’ he said. ‘You’ve turned what might have been a good book into journalism.’ What he said was true, but I could not have done otherwise. I happened to know, what very few people in England had been allowed to know, that innocent men were being falsely accused. If I had not been angry about that I should never have written the book.

In one form or another this problem comes up again. The problem of language is subtler and would take too long to discuss. I will only say that of late years I have tried to write less picturesquely and more exactly. In any case I find that by the time you have perfected any style of writing, you have always outgrown it. [*Animal Farm*](https://orwellfoundation.com/george-orwell/by-orwell/animal-farm/) was the first book in which I tried, with full consciousness of what I was doing, to fuse political purpose and artistic purpose into one whole. I have not written a novel for seven years, but I hope to write another fairly soon. It is bound to be a failure, every book is a failure, but I do know with some clarity what kind of book I want to write.

Looking back through the last page or two, I see that I have made it appear as though my motives in writing were wholly public-spirited. I don’t want to leave that as the final impression. All writers are vain, selfish, and lazy, and at the very bottom of their motives there lies a mystery. Writing a book is a horrible, exhausting struggle, like a long bout of some painful illness. One would never undertake such a thing if one were not driven on by some demon whom one can neither resist nor understand. For all one knows that demon is simply the same instinct that makes a baby squall for attention. And yet it is also true that one can write nothing readable unless one constantly struggles to efface one’s own personality. Good prose is like a windowpane. I cannot say with certainty which of my motives are the strongest, but I know which of them deserve to be followed. And looking back through my work, I see that it is invariably where I lacked a *political* purpose that I wrote lifeless books and was betrayed into purple passages, sentences without meaning, decorative adjectives and humbug generally.

*Gangrel*, No. 4, Summer 1946

---

## 2. GPT-5.5

- **Source**: [https://openai.com/index/introducing-gpt-5-5/](https://openai.com/index/introducing-gpt-5-5/)
- **Site**: OpenAI
- **Submitter**: rd (Hacker News)
- **Submitted**: 2026-04-23 18:01 UTC (Hacker News)
- **HN activity**: 1144 points · [790 comments](https://news.ycombinator.com/item?id=47879092)
- **Length**: 4.0K words (~18 min read)
- **Language**: en-US

We’re releasing GPT‑5.5, our smartest and most intuitive-to-use model yet, and the next step toward a new way of getting work done on a computer.

GPT‑5.5 understands what you’re trying to do faster and can carry more of the work itself. It excels at writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and moving across tools until a task is finished. Instead of carefully managing every step, you can give GPT‑5.5 a messy, multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going.
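That delegation loop, plan a step, call a tool, check the result, continue until done, is the standard agentic pattern the post is describing. Below is a minimal, hypothetical sketch of such a loop in Python; the message format, `call_model`, and the tool registry are invented for illustration and are not OpenAI's actual harness:

```python
def run_agent(task, call_model, tools, max_steps=20):
    """Drive a model through a plan -> act -> check loop until it finishes.

    `call_model` is any callable that maps a message history to the model's
    next action; `tools` maps tool names to plain Python callables.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)           # model decides the next step
        if action["type"] == "final":          # model judges the task complete
            return action["content"]
        tool = tools[action["tool"]]           # look up the requested tool
        result = tool(**action["args"])        # execute it
        # Feed the observation back so the model can check its own work.
        history.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted before the task finished")
```

The point of the sketch is the feedback edge: each tool result goes back into the history, which is what lets the model verify intermediate output and recover from ambiguity instead of following a fixed script.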

The gains are especially strong in agentic coding, computer use, knowledge work, and early scientific research—areas where progress depends on reasoning across context and taking action over time. GPT‑5.5 delivers this step up in intelligence without compromising on speed: larger, more capable models are often slower to serve, but GPT‑5.5 matches GPT‑5.4 per-token latency in real-world serving, while performing at a much higher level of intelligence. It also uses significantly fewer tokens to complete the same Codex tasks, making it more efficient as well as more capable.

We are releasing GPT‑5.5 with our strongest set of safeguards to date, designed to reduce misuse while preserving access for beneficial work. We evaluated this model across our full suite of safety and preparedness frameworks, worked with internal and external red-teamers, added targeted testing for advanced cybersecurity and biology capabilities, and collected feedback on real use cases from nearly 200 trusted early-access partners before release.

Today, GPT‑5.5 is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT and Codex, and GPT‑5.5 Pro is rolling out to Pro, Business, and Enterprise users in ChatGPT. API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale. We'll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon.

| Benchmark | GPT-5.5 | GPT-5.4 | GPT-5.5 Pro | GPT-5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| Terminal-Bench 2.0 | **82.7%** | 75.1% | - | - | 69.4% | 68.5% |
| Expert-SWE (Internal) | **73.1%** | 68.5% | - | - | - | - |
| GDPval (wins or ties) | **84.9%** | 83.0% | 82.3% | 82.0% | 80.3% | 67.3% |
| OSWorld-Verified | **78.7%** | 75.0% | - | - | 78.0% | - |
| Toolathlon | **55.6%** | 54.6% | - | - | - | 48.8% |
| BrowseComp | 84.4% | 82.7% | **90.1%** | 89.3% | 79.3% | 85.9% |
| FrontierMath Tier 1–3 | 51.7% | 47.6% | **52.4%** | 50.0% | 43.8% | 36.9% |
| FrontierMath Tier 4 | 35.4% | 27.1% | **39.6%** | 38.0% | 22.9% | 16.7% |
| CyberGym | **81.8%** | 79.0% | - | - | 73.1% | - |

OpenAI is building the global infrastructure for agentic AI, making it possible for people and businesses around the world to get work done with AI. Over the past year, we’ve seen AI dramatically accelerate software engineering. With GPT‑5.5 in Codex and ChatGPT, that same transformation is beginning to extend into scientific research and the broader work people do on computers.

Across these domains, GPT‑5.5 is not just more intelligent; it is more efficient in how it works through problems, often reaching higher-quality outputs with fewer tokens and fewer retries. On Artificial Analysis's Coding Index, GPT‑5.5 delivers state-of-the-art intelligence at half the cost of competitive frontier coding models.

GPT‑5.5 is our strongest agentic coding model to date. On **Terminal-Bench 2.0,** which tests complex command-line workflows requiring planning, iteration, and tool coordination, it achieves a state-of-the-art accuracy of 82.7%. On **SWE-Bench Pro**, which evaluates real-world GitHub issue resolution, it reaches 58.6%, solving more tasks end-to-end in a single pass than previous models. On **Expert-SWE**, our internal frontier eval for long-horizon coding tasks with a median estimated human completion time of 20 hours, GPT‑5.5 also outperforms GPT‑5.4.

Across all three evals, GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

The model’s coding strengths show up especially clearly in Codex where it can take on engineering work ranging from implementation and refactors to debugging, testing, and validation. Early testing suggests GPT‑5.5 is better at the behaviors real engineering work depends on, like holding context across large systems, reasoning through ambiguous failures, checking assumptions with tools, and carrying changes through the surrounding codebase.

The rendered trajectory uses NASA/JPL Horizons vector data for Orion, the Moon, and the Sun, with display scaling applied for readability.

**Prompt:** \[attached image] Implement this as a new app using webgl and vite using real data from the artemis II mission. Make sure to test the app thoroughly until it is fully functional and looks like the app in the picture. Pay close attention to the rendering of the planets and fly paths. I want to be able to interact with the 3D rendering. Ensure it has realistic orbital mechanics.

Beyond benchmarks, early testers said GPT‑5.5 shows a stronger ability to understand the shape of a system: why something is failing, where the fix needs to land, and what else in the codebase would be affected.

![alt](https://images.ctfassets.net/kftzwdyauwt9/5A8f5mO7aKrwLH5ClDV0si/e49a0a3c56f63d9998dd338ce16d0dd6/Blog1.png?w=3840&q=90&fm=webp)

Dan Shipper, Founder and CEO of Every, described GPT‑5.5 as “the first coding model I’ve used that has serious conceptual clarity.”

After launching an app, he spent days debugging a post-launch issue before bringing in one of his best engineers to rewrite part of the system. To test GPT‑5.5, he effectively rewound the clock: could the model look at the broken state and produce the same kind of rewrite the engineer eventually decided on? GPT‑5.4 could not. GPT‑5.5 could.

![alt](https://images.ctfassets.net/kftzwdyauwt9/1eFs7ss7lMxUlZlbCCd6mC/d62c48414621c37d251564b6880dccc0/Blog2.png?w=3840&q=90&fm=webp)

“It genuinely feels like I’m working with a higher intelligence, and there’s almost a sense of respect.”

Pietro Schirano, CEO of MagicPath, saw a similar step change when GPT‑5.5 merged a branch with hundreds of frontend and refactor changes into a main branch that had also changed substantially, resolving the work in one shot in about 20 minutes.

Senior engineers who tested the model said GPT‑5.5 was noticeably stronger than GPT‑5.4 and Claude Opus 4.7 at reasoning and autonomy, catching issues in advance and predicting testing and review needs without explicit prompting. In one case, an engineer asked it to re-architect a comment system in a collaborative markdown editor and returned to a 12-diff stack that was nearly complete. Others said they needed surprisingly little implementation correction and felt more confident in GPT‑5.5’s plans compared with GPT‑5.4.

One engineer at NVIDIA who had early access to the model went as far as to say: “Losing access to GPT‑5.5 feels like I’ve had a limb amputated.”

> “GPT-5.5 is noticeably smarter and more persistent than GPT-5.4, with stronger coding performance and more reliable tool use. It stays on task for significantly longer without stopping early, which matters most for the complex, long-running work our users delegate to Cursor.”

— Michael Truell, Co-founder & CEO at Cursor

The same strengths that make GPT‑5.5 great at coding also make it powerful for everyday work on a computer. Because the model is better at understanding intent, it can move more naturally through the full loop of knowledge work: finding information, understanding what matters, using tools, checking the output, and turning raw material into something useful.

In Codex, GPT‑5.5 is better than GPT‑5.4 at generating documents, spreadsheets, and slide presentations. Alpha testers said it outperformed past models on work like operational research, spreadsheet modeling, and turning messy business inputs into plans. When combined with Codex’s computer use skills, GPT‑5.5 brings us closer to the feeling that the model can actually use the computer with you: seeing what’s on screen, clicking, typing, navigating interfaces, and moving across tools with precision.

Teams at OpenAI are already using these strengths in real workflows. Today, more than 85% of the company uses Codex every week across functions including software engineering, finance, communications, marketing, data science, and product management. In Comms, the team used GPT‑5.5 in Codex to analyze six months of speaking request data, build a scoring and risk framework, and validate an automated Slack agent so low-risk requests could be handled automatically while higher-risk requests still route to human review. In Finance, the team used Codex to review 24,771 K-1 tax forms totaling 71,637 pages, using a workflow that excluded personal information and helped the team accelerate the task by two weeks compared to the prior year. On the Go-to-Market team, an employee automated generating weekly business reports, saving 5-10 hours a week.
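The Comms workflow described above follows a common triage pattern: score each incoming request against risk criteria, auto-handle the low-risk ones, and route everything else to human review. A hypothetical sketch of that routing logic, with invented field names, weights, and threshold:

```python
# Invented risk flags and weights, purely for illustration; the real
# framework built with GPT-5.5 in Codex is not described in this detail.
RISK_WEIGHTS = {"executive_speaker": 3, "media_present": 2, "external_event": 1}

def risk_score(request: dict) -> int:
    """Sum the weights of whichever risk flags the request carries."""
    return sum(w for flag, w in RISK_WEIGHTS.items() if request.get(flag))

def route(request: dict, threshold: int = 2) -> str:
    """Auto-handle low-risk requests; escalate the rest to a human."""
    return "auto" if risk_score(request) < threshold else "human_review"
```

The design choice worth noting is the asymmetry: a scoring mistake on a low-risk request costs little, while anything above the threshold still gets a human in the loop, which is what makes automating only part of the queue safe.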

In ChatGPT, **GPT‑5.5 Thinking** unlocks faster help for harder problems, with smarter and more concise answers to help you move through complex work more efficiently. It excels at professional work like coding, research, information synthesis and analysis, and document-heavy tasks, especially when using plugins.

In **GPT‑5.5 Pro**, early testers are seeing a significant step up in both the difficulty and quality of work ChatGPT can take on, with latency improvements that make it much more practical for demanding tasks. Compared to GPT‑5.4 Pro, testers found GPT‑5.5 Pro’s responses significantly more comprehensive, well-structured, accurate, relevant, and useful, with especially strong performance in business, legal, education, and data science.

GPT‑5.5 reaches state-of-the-art performance across multiple benchmarks that reflect this kind of work. On [GDPval⁠⁠](https://openai.com/index/gdpval/), which tests agents’ abilities to produce well-specified knowledge work across 44 occupations, GPT‑5.5 scores 84.9%. On **OSWorld-Verified**, which measures whether a model can operate real computer environments on its own, it reaches 78.7%. And on **Tau2-bench Telecom**, which tests complex customer-service workflows, it reaches 98.0% without prompt tuning. GPT‑5.5 also performs strongly across other knowledge work benchmarks: 60.0% on **FinanceAgent**, 88.5% on **internal investment-banking modeling tasks**, and 54.1% on **OfficeQA Pro**.

Tau2-bench Telecom was run without prompt tuning, with GPT‑4.1 as the user model. GPT‑5.5 understands the intent of the task better and is more token efficient than its predecessors.

> “GPT-5.5 delivers the sustained performance required for execution-heavy work. Built and served on NVIDIA GB200 NVL72 systems, the model enables our teams to ship end-to-end features from natural language prompts, cut debug time from days to hours, and turn weeks of experimentation into overnight progress in complex codebases. It’s more than faster coding—it’s a new way of working that helps people operate at a fundamentally different speed.”

— Justin Boitano, VP of Enterprise AI at NVIDIA

GPT‑5.5 also shows gains on scientific and technical research workflows, which require more than answering a hard question. Researchers need to explore an idea, gather evidence, test assumptions, interpret results, and decide what to try next. GPT‑5.5 is better at persisting across that loop than other models.

Notably, GPT‑5.5 shows a clear improvement over GPT‑5.4 on [**GeneBench**](https://cdn.openai.com/pdf/6dc7175d-d9e7-4b8d-96b8-48fe5798cd5b/oai_genebench_benchmark.pdf), a new eval focusing on multi-stage scientific data analysis in genetics and quantitative biology. These problems require models to reason about potentially ambiguous or error-ridden data with minimal supervisory guidance, address realistic obstacles such as hidden confounders or QC failures, and correctly implement and interpret modern statistical methods. The model’s performance is striking in light of the fact that tasks here often correspond to multi-day projects for scientific experts.

Similarly, on [BixBench](https://arxiv.org/abs/2503.00096), a benchmark designed around real-world bioinformatics and data analysis, GPT‑5.5 achieved leading performance among models with published scores. The model’s scientific capabilities are now strong enough to meaningfully accelerate progress at the frontiers of biomedical research as a bona fide co-scientist.

In another example, an internal version of GPT‑5.5 with a custom harness helped discover a [new proof](https://cdn.openai.com/pdf/6dc7175d-d9e7-4b8d-96b8-48fe5798cd5b/Ramsey.pdf) about Ramsey numbers, one of the central objects in combinatorics. Combinatorics studies how discrete objects fit together: graphs, networks, sets, and patterns. Ramsey numbers ask, roughly, how large a network has to be before some kind of order is guaranteed to appear. Results in this area are rare and often technically difficult. Here, GPT‑5.5 found a proof of a longstanding asymptotic fact about off-diagonal Ramsey numbers, later verified in Lean. The result is a concrete example of GPT‑5.5 contributing not just code or explanation, but a surprising and useful mathematical argument in a core research area.
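For readers unfamiliar with the object, the definition can be stated precisely. The post does not identify which asymptotic the proof concerns, so the statements below are standard textbook background rather than the linked result; the classical off-diagonal asymptotic for $R(3,t)$ is included only to give the flavour of the area.

```latex
% R(s,t): the smallest n such that every red/blue colouring of the
% edges of the complete graph K_n contains a red K_s or a blue K_t.
R(s,t) = \min\bigl\{\, n : \text{every 2-colouring of } E(K_n)
         \text{ contains a red } K_s \text{ or a blue } K_t \,\bigr\}

% A classical off-diagonal asymptotic (upper bound: Ajtai--Koml\'os--
% Szemer\'edi, 1980; matching lower bound: Kim, 1995):
R(3,t) = \Theta\!\left( \frac{t^{2}}{\log t} \right)
```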

Early testers used GPT‑5.5 Pro in ChatGPT less like a one-shot answer engine and more like a research partner: critiquing manuscripts over multiple passes, stress-testing technical arguments, proposing analyses, and working with code, notes, and PDF context. The common thread is that GPT‑5.5 is better at helping researchers move from question to experiment to output.

Derya Unutmaz, an immunology professor and researcher at the Jackson Laboratory for Genomic Medicine, used GPT‑5.5 Pro to analyze a gene-expression dataset with 62 samples and nearly 28,000 genes, producing a detailed research report that not only summarized the findings but also surfaced key questions and insights—work he said would have taken his team months.

Bartosz Naskręcki, assistant professor of mathematics at Adam Mickiewicz University in Poznań, Poland, used GPT‑5.5 in Codex to build an algebraic-geometry app from a single prompt in 11 minutes, visualizing the intersection of quadratic surfaces and converting the resulting curve into a Weierstrass model.

He later extended the app with more stable singularity visualization and exact coefficients that can be reused in further work. For him, the bigger shift is that Codex can now help implement custom mathematical visualization and computer-algebra workflows that previously required dedicated tools. Together, these examples show GPT‑5.5 turning expert intent into working research tools and analyses.

![""](https://images.ctfassets.net/kftzwdyauwt9/1WiRj8XUEoqruFEKNQJPRr/4063977e99833d7129b6a238b4d3d876/Bartosz_Visual.png?w=3840&q=90&fm=webp)

[Credit: Bartosz Naskręcki](https://bnaskrecki.faculty.wmi.amu.edu.pl/quadr/)

**Prompt** (reproduced verbatim):

```
# Algebraic geometry surface intersection

Make an app which draws two quadratic surfaces and colors in red the intersection curve. Use computational Riemann-Roch theorem to convert this into Weierstrass curve.

## Main window

Two tinted surfaces with a slightly transparent shading, high quality rendering intersect along a red colored algebraic curve

Rotation with mouses in both directions, full pinch mechanism for zoom, haptic press to show the little menu with sliders for changing the coefficients of each surface; detection via Z-buffor level

## Side right window

Short Weierstrass equation (over Q or quadratic field extension) computed on the go via effective Riemann-Roch theorem formulas

## Ambient mode where all the controls are hidden and the user can admire the beauty of the shapes

## Specs

App is running in the browser, light-weight implementation with full stack newest libraries, portable, deployable

## Docs

Git repo, journal, plan (Markdown files)
```

> “It’s incredibly energizing to use OpenAI’s new GPT-5.5 model in our harness, have it reason over massive biochemical datasets to predict human drug outcomes, and then see it deliver significant accuracy gains on our hardest drug discovery evals. If OpenAI keeps cooking like this, the foundations of drug discovery will change by the end of the year.”

— Brandon White, Co-Founder & CEO at Axiom Bio

Serving GPT‑5.5 at GPT‑5.4 latency required rethinking inference as an integrated system, not a set of isolated optimizations. GPT‑5.5 was co-designed for, trained with, and served on NVIDIA GB200 and GB300 NVL72 systems. Codex and GPT‑5.5 were instrumental in how we achieved our performance targets. Codex helped the team move faster from idea to benchmarkable implementation, sketching approaches, wiring experiments, and helping identify which optimizations were worth deeper investment. GPT‑5.5 helped find and implement key improvements in the stack itself. Put simply, the model helped improve the infrastructure that serves it.

One such improvement was load balancing and partitioning heuristics. Before GPT‑5.5, we split requests on an accelerator into a fixed number of chunks to balance work across computing cores, ensuring big and small requests could run on the same GPU. However, a pre-determined number of static chunks is not optimal for all traffic shapes. To better utilize GPUs, Codex analyzed weeks’ worth of production traffic patterns and wrote custom heuristic algorithms to optimally partition and balance work. The effort had an outsized impact, increasing token generation speeds by over 20%.
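The idea of replacing a fixed chunk count with traffic-aware partitioning can be sketched as follows. This is a hypothetical illustration in the spirit of the description above, not OpenAI's actual heuristics; the function names and target chunk size are invented:

```python
def chunk_request(num_tokens: int, target_chunk: int = 512) -> list[int]:
    """Split one request into roughly equal chunks near a target size,
    instead of a fixed number of chunks regardless of request length."""
    n_chunks = max(1, round(num_tokens / target_chunk))
    base, rem = divmod(num_tokens, n_chunks)
    return [base + (1 if i < rem else 0) for i in range(n_chunks)]

def balance(requests: list[int], num_cores: int,
            target_chunk: int = 512) -> list[list[int]]:
    """Greedy longest-first assignment of chunks to the least-loaded core,
    so big and small requests can share the same GPU without hot spots."""
    chunks = [c for r in requests for c in chunk_request(r, target_chunk)]
    loads = [0] * num_cores
    assignment: list[list[int]] = [[] for _ in range(num_cores)]
    for c in sorted(chunks, reverse=True):
        i = min(range(num_cores), key=loads.__getitem__)
        assignment[i].append(c)
        loads[i] += c
    return assignment
```

The point of the sketch is the shape of the problem: chunk counts adapt to request length, and the balancing step is what a heuristic tuned on real traffic would replace.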

Preparing the world for models that are very good at finding and patching security vulnerabilities is a team sport: it will require the entire ecosystem to work hard to build resilience, with democratized model access and iterative deployment for the [next era of cyber defense](https://openai.com/index/scaling-trusted-access-for-cyber-defense/).

Frontier models are becoming increasingly capable in cybersecurity. Those capabilities will become broadly distributed, and we believe the best path forward is to make sure they can be put to use to accelerate cyber defense and strengthen the ecosystem.

GPT‑5.5 is an incremental but important step towards AI that can help solve some of the world’s toughest challenges, such as cybersecurity. With GPT‑5.2 in December, we proactively deployed the necessary [cyber safeguards](https://openai.com/index/strengthening-cyber-resilience/) to limit potential cyber abuse of our models; now with GPT‑5.5, we’re deploying stricter classifiers for potential cyber risk, which some users may initially find overly strict as we tune them over time.

We identified cybersecurity as a category in our [Preparedness Framework](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf) years ago, and as our models have incrementally improved we have developed and calibrated mitigations iteratively, so that we can responsibly release models with meaningful cybersecurity capabilities.

- **We are deploying industry-leading safeguards for this level of cyber capability.** We first introduced cyber-specific safeguards with [GPT‑5.2](https://deploymentsafety.openai.com/gpt-5-2/deception) last year, which we have continued to test, refine, and build on in subsequent deployments. For GPT‑5.5, we designed tighter controls around higher-risk activity and sensitive cyber requests, and added protections against repeated misuse. Broad access is made possible through our investments in model safety, authenticated usage, and monitoring for impermissible use. We have been working with external experts for months to develop, test, and iterate on the robustness of these safeguards. With GPT‑5.5, we are ensuring developers can secure their code with ease, while putting stronger controls around the cyber workflows most likely to be abused by malicious actors.
- **We are expanding access to accelerate cyber defense at every level.** We are making our cyber-permissive models available through [Trusted Access for Cyber](https://openai.com/index/scaling-trusted-access-for-cyber-defense/), starting with Codex, which includes expanded access to the advanced cybersecurity capabilities of GPT‑5.5 with fewer restrictions for verified users who meet certain [trust signals](https://developers.openai.com/codex/concepts/cyber-safety) at launch. Organizations that are responsible for [defending critical infrastructure](https://openai.com/index/accelerating-cyber-defense-ecosystem/) can apply to access cyber-permissive models like GPT‑5.4‑Cyber, while meeting strict security requirements to use these models for securing their internal systems. This gives a wide range of verified defenders more capable tools for legitimate security work with less unnecessary friction, democratizing access to important defensive capabilities. Users can apply for trusted access at [chatgpt.com/cyber](http://chatgpt.com/cyber) to reduce unnecessary refusals while using GPT‑5.5 for verified defensive work.
- **We are working with government partners to help protect critical infrastructure for the public.** Together, we are exploring how advanced AI can support the defensive work of trusted officials responsible for systems people rely on, from the digital systems that secure important taxpayer data to the power grid and water supplies in local communities.

We are treating the biological/chemical and cybersecurity capabilities of GPT‑5.5 as High under our [Preparedness Framework](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf). While GPT‑5.5 didn’t reach the Critical cybersecurity capability level, our evaluations and testing showed that its cybersecurity capabilities are a step up from GPT‑5.4.

In addition, GPT‑5.5 went through our full safety and governance process prior to release, including preparedness evaluations, domain-specific testing, new targeted evaluations for advanced biology and cybersecurity capabilities, and robust testing with external experts. We share more details in the GPT‑5.5 [system card](https://deploymentsafety.openai.com/gpt-5-5).

This work reflects our broader AI resilience approach, which we believe is needed as model capabilities advance. We want powerful AI to be available to the people using it to defend systems, institutions, and the public. The viable path is trusted access, robust safeguards that scale with capability, and the operational capacity to detect and respond to serious misuse.

Today, GPT‑5.5 is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT and Codex, and GPT‑5.5 Pro is rolling out to Pro, Business, and Enterprise users in ChatGPT. We'll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon.

In ChatGPT, GPT‑5.5 Thinking is available to Plus, Pro, Business, and Enterprise users. GPT‑5.5 Pro, designed for even harder questions and higher-accuracy work, is available to Pro, Business, and Enterprise users.

In Codex, GPT‑5.5 is available for Plus, Pro, Business, Enterprise, Edu, and Go plans with a 400K context window. GPT‑5.5 is also available in Fast mode, generating tokens 1.5x faster for 2.5x the cost.

For API developers, gpt-5.5 will soon be available in the Responses and Chat Completions APIs at $5 per 1M input tokens and $30 per 1M output tokens, with a 1M context window. Batch and Flex pricing are available at half the standard API rate, while Priority processing is available at 2.5x the standard rate. We will also release gpt-5.5-pro in the API for even higher accuracy, priced at $30 per 1M input tokens and $180 per 1M output tokens. See the [pricing page⁠](https://openai.com/api/pricing/) for full details.
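As a quick sanity check on these prices, per-request cost works out as follows. The numbers and tier multipliers are the ones quoted above; the helper function itself is purely illustrative:

```python
# USD per 1M tokens (input, output), as quoted in the announcement.
PRICES = {"gpt-5.5": (5.00, 30.00), "gpt-5.5-pro": (30.00, 180.00)}
# Tier multipliers as stated: Batch/Flex at half rate, Priority at 2.5x.
MULTIPLIER = {"standard": 1.0, "batch": 0.5, "flex": 0.5, "priority": 2.5}

def cost_usd(model: str, input_tokens: int, output_tokens: int,
             tier: str = "standard") -> float:
    in_price, out_price = PRICES[model]
    return MULTIPLIER[tier] * (
        input_tokens * in_price + output_tokens * out_price
    ) / 1_000_000

# A 200K-input / 20K-output gpt-5.5 request:
# 0.2 * $5 + 0.02 * $30 = $1.60 standard, $0.80 on Batch/Flex.
```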

While GPT‑5.5 is priced higher than GPT‑5.4, it is both more intelligent and much more token efficient. In Codex, we have carefully tuned the experience so GPT‑5.5 delivers better results with fewer tokens than GPT‑5.4 for most users, while continuing to offer generous usage across subscription levels.

##### Coding

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| SWE-Bench Pro (Public) \* | 58.6% | 57.7% | – | – | 64.3% | 54.2% |
| Terminal-Bench 2.0 | 82.7% | 75.1% | – | – | 69.4% | 68.5% |
| Expert-SWE (Internal) | 73.1% | 68.5% | – | – | – | – |

##### Professional

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| GDPval (wins or ties) | 84.9% | 83.0% | 82.3% | 82.0% | 80.3% | 67.3% |
| FinanceAgent v1.1 | 60.0% | 56.0% | – | 61.5% | 64.4% | 59.7% |
| Investment Banking Modeling Tasks (Internal) | 88.5% | 87.3% | 88.6% | 83.6% | – | – |
| OfficeQA Pro | 54.1% | 53.2% | – | – | 43.6% | 18.1% |

##### Computer use and vision

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| OSWorld-Verified | 78.7% | 75.0% | – | – | 78.0% | – |
| MMMU Pro (no tools) | 81.2% | 81.2% | – | – | – | 80.5% |
| MMMU Pro (with tools) | 83.2% | 82.1% | – | – | – | – |

##### Tool use

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| BrowseComp | 84.4% | 82.7% | 90.1% | 89.3% | 79.3% | 85.9% |
| MCP Atlas \*\* | 75.3% | 70.6% | – | – | 79.1% | 78.2% |
| Toolathlon | 55.6% | 54.6% | – | – | – | 48.8% |
| Tau2-bench Telecom \*\*\* (original prompts) | 98.0% | 92.8% | – | – | – | – |

\** MCP Atlas: results from Scale AI after the latest April 2026 update.
\*\** Tau2-bench Telecom: results for 5.5 and 5.4 with original prompts, i.e. no prompt adjustment. This omits results from other labs that were evaluated with prompt adjustments.

##### Academic

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| GeneBench | 25.0% | 19.0% | 33.2% | 25.6% | – | – |
| FrontierMath Tier 1–3 | 51.7% | 47.6% | 52.4% | 50.0% | 43.8% | 36.9% |
| FrontierMath Tier 4 | 35.4% | 27.1% | 39.6% | 38.0% | 22.9% | 16.7% |
| BixBench | 80.5% | 74.0% | – | – | – | – |
| GPQA Diamond | 93.6% | 92.8% | – | 94.4% | 94.2% | 94.3% |
| Humanity's Last Exam (no tools) | 41.4% | 39.8% | 43.1% | 42.7% | 46.9% | 44.4% |
| Humanity's Last Exam (with tools) | 52.2% | 52.1% | 57.2% | 58.7% | 54.7% | 51.4% |

##### Cybersecurity

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| Capture-the-Flags challenge tasks (Internal) \*\*\*\* | 88.1% | 83.7% | – | – | – | – |
| CyberGym | 81.8% | 79.0% | – | – | 73.1% | – |

\*\*\** An expansion of the hardest CTFs used in system cards, with additional hard challenges.

##### Long context

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| Graphwalks BFS 256k f1 | 73.7% | 62.5% | – | – | 76.9% | – |
| Graphwalks BFS 1mil f1 | 45.4% | 9.4% | – | – | 41.2% (Opus 4.6) | – |
| Graphwalks parents 256k f1 | 90.1% | 82.8% | – | – | 93.6% | – |
| Graphwalks parents 1mil f1 | 58.5% | 44.4% | – | – | 72.0% (Opus 4.6) | – |
| OpenAI MRCR v2 8-needle 4K-8K | 98.1% | 97.3% | – | – | – | – |
| OpenAI MRCR v2 8-needle 8K-16K | 93.0% | 91.4% | – | – | – | – |
| OpenAI MRCR v2 8-needle 16K-32K | 96.5% | 97.2% | – | – | – | – |
| OpenAI MRCR v2 8-needle 32K-64K | 90.0% | 90.5% | – | – | – | – |
| OpenAI MRCR v2 8-needle 64K-128K | 83.1% | 86.0% | – | – | – | – |
| OpenAI MRCR v2 8-needle 128K-256K | 87.5% | 79.3% | – | – | 59.2% | – |
| OpenAI MRCR v2 8-needle 256K-512K | 81.5% | 57.5% | – | – | – | – |
| OpenAI MRCR v2 8-needle 512K-1M | 74.0% | 36.6% | – | – | 32.2% | – |

##### Abstract reasoning

| Eval | GPT-5.5 | GPT‑5.4 | GPT-5.5 Pro | GPT‑5.4 Pro | Claude Opus 4.7 | Gemini 3.1 Pro |
| --- | --- | --- | --- | --- | --- | --- |
| ARC-AGI-1 (Verified) | 95.0% | 93.7% | – | 94.5% | 93.5% | 98.0% |
| ARC-AGI-2 (Verified) | 85.0% | 73.3% | – | 83.3% | 75.8% | 77.1% |

GPT evals were run with reasoning effort set to xhigh and were conducted in a research environment, which may produce slightly different output from production ChatGPT in some cases.

---

## [HN-TITLE] 3. Bitwarden CLI compromised in ongoing Checkmarx supply chain campaign

- **Source**: [https://socket.dev/blog/bitwarden-cli-compromised](https://socket.dev/blog/bitwarden-cli-compromised)
- **Site**: socket.dev
- **Submitter**: tosh (Hacker News)
- **Submitted**: 2026-04-23 14:17 UTC (Hacker News)
- **HN activity**: 668 points · [332 comments](https://news.ycombinator.com/item?id=47876043)

> scrape failed: http 403

---

## [HN-TITLE] 4. Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases

- **Source**: [https://github.com/refactoringhq/tolaria](https://github.com/refactoringhq/tolaria)
- **Site**: GitHub
- **Submitter**: lucaronin (Hacker News)
- **Submitted**: 2026-04-23 22:01 UTC (Hacker News)
- **HN activity**: 119 points · [37 comments](https://news.ycombinator.com/item?id=47882697)
- **Length**: 569 words (~3 min read)
- **Language**: en

[![Latest stable](https://camo.githubusercontent.com/63b9d2627c5149087f485729c7e8dc6f5d1887f64ebcfd3fdf0bb3533db1a191/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f7265666163746f72696e6768712f746f6c617269613f646973706c61795f6e616d653d746167)](https://camo.githubusercontent.com/63b9d2627c5149087f485729c7e8dc6f5d1887f64ebcfd3fdf0bb3533db1a191/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f7265666163746f72696e6768712f746f6c617269613f646973706c61795f6e616d653d746167) [![CI](https://github.com/refactoringhq/tolaria/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/refactoringhq/tolaria/actions/workflows/ci.yml) [![Build](https://github.com/refactoringhq/tolaria/actions/workflows/release.yml/badge.svg?branch=main)](https://github.com/refactoringhq/tolaria/actions/workflows/release.yml) [![Codecov](https://camo.githubusercontent.com/8fd51d3ec4e16efc165908f94283072efa08033f89b241ca93bfe9b7f01831fa/68747470733a2f2f636f6465636f762e696f2f67682f7265666163746f72696e6768712f746f6c617269612f67726170682f62616467652e7376673f6272616e63683d6d61696e)](https://codecov.io/gh/refactoringhq/tolaria) [![CodeScene Hotspot Code Health](https://camo.githubusercontent.com/ab81929ff57bce9e380153db1f1020f0c2549ba6bdfbc41649b753990718a6b5/68747470733a2f2f636f64657363656e652e696f2f70726f6a656374732f37363836352f7374617475732d6261646765732f686f7473706f742d636f64652d6865616c7468)](https://codescene.io/projects/76865)

Tolaria is a desktop app for Mac for managing **markdown knowledge bases**. People use it for a variety of use cases:

- Operate second brains and personal knowledge
- Organize company docs as context for AI
- Store OpenClaw/assistants memory and procedures

Personally, I use it to **run my life** (hey 👋 [Luca here](http://x.com/lucaronin)). I have a massive workspace of 10,000+ notes, which are the result of my [Refactoring](https://refactoring.fm/) work + a ton of personal journaling and *second braining*.

[![1776506856823-CleanShot_2026-04-18_at_12 06 57_2x](https://private-user-images.githubusercontent.com/695274/580447132-8aeafb0a-b236-43c2-a083-ec111f903c38.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzcwMDA0MTAsIm5iZiI6MTc3NzAwMDExMCwicGF0aCI6Ii82OTUyNzQvNTgwNDQ3MTMyLThhZWFmYjBhLWIyMzYtNDNjMi1hMDgzLWVjMTExZjkwM2MzOC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjYwNDI0JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI2MDQyNFQwMzA4MzBaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT03OGE1ZDc0NDUwY2IyNWNhNWM1MjZhNjFiMDgwMTMwMzg3MGYwY2ZiMmM1OGFhN2ZhMDNlOWM5ZmMwODg5YzY0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZyZXNwb25zZS1jb250ZW50LXR5cGU9aW1hZ2UlMkZwbmcifQ.vJ-5OrrleI7TOnN5QtNVpkg5E5ZdfsL6TWtxCM9a3pU)](https://private-user-images.githubusercontent.com/695274/580447132-8aeafb0a-b236-43c2-a083-ec111f903c38.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzcwMDA0MTAsIm5iZiI6MTc3NzAwMDExMCwicGF0aCI6Ii82OTUyNzQvNTgwNDQ3MTMyLThhZWFmYjBhLWIyMzYtNDNjMi1hMDgzLWVjMTExZjkwM2MzOC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjYwNDI0JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI2MDQyNFQwMzA4MzBaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT03OGE1ZDc0NDUwY2IyNWNhNWM1MjZhNjFiMDgwMTMwMzg3MGYwY2ZiMmM1OGFhN2ZhMDNlOWM5ZmMwODg5YzY0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZyZXNwb25zZS1jb250ZW50LXR5cGU9aW1hZ2UlMkZwbmcifQ.vJ-5OrrleI7TOnN5QtNVpkg5E5ZdfsL6TWtxCM9a3pU)

## Walkthroughs

You can find some Loom walkthroughs below — they are short and to the point:

- [How I Organize My Own Tolaria Workspace](https://www.loom.com/share/bb3aaffa238b4be0bd62e4464bca2528)
- [My Inbox Workflow](https://www.loom.com/share/dffda263317b4fa8b47b59cdf9330571)
- [How I Save Web Resources to Tolaria](https://www.loom.com/share/8a3c1776f801402ebbf4d7b0f31e9882)

## Principles

- 📑 **Files-first** — Your notes are plain markdown files. They're portable, work with any editor, and require no export step. Your data belongs to you, not to any app.
- 🔌 **Git-first** — Every vault is a git repository. You get full version history, the ability to use any git remote, and zero dependency on Tolaria servers.
- 🛜 **Offline-first, zero lock-in** — No accounts, no subscriptions, no cloud dependencies. Your vault works completely offline and always will. If you stop using Tolaria, you lose nothing.
- 🔬 **Open source** — Tolaria is free and open source. I built this for [myself](https://x.com/lucaronin) and for sharing it with others.
- 📋 **Standards-based** — Notes are markdown files with YAML frontmatter. No proprietary formats, no locked-in data. Everything works with standard tools if you decide to move away from Tolaria.
- 🔍 **Types as lenses, not schemas** — Types in Tolaria are navigation aids, not enforcement mechanisms. There are no required fields and no validation; just helpful categories for finding notes.
- 🪄 **AI-first but not AI-only** — A vault of files works very well with AI agents, but you are free to use whatever you want. We support Claude Code and Codex CLI (for now), but you can edit the vault with any AI you want. We provide an AGENTS file for your agents to figure out.
- ⌨️ **Keyboard-first** — Tolaria is designed for power users who want to use the keyboard as much as possible. Much of the design of the Editor and the Command Palette is based on this.
- 💪 **Built from real use** — Tolaria was created to manage my personal vault of 10,000+ notes, and I use it every day. Every feature exists because it solved a real problem.
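Concretely, a note in this format might look like the following (the field names are illustrative; Tolaria does not enforce any particular schema):

```markdown
---
title: Weekly Review
type: journal
tags: [review, 2026]
---

# Weekly Review

Everything below the frontmatter is plain markdown,
editable with any editor or AI agent.
```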

## Getting started

Download the [latest release here](https://github.com/refactoringhq/tolaria/releases/latest/download/Tolaria.app.tar.gz).

When you open Tolaria for the first time, you get the chance to clone the [getting started vault](https://github.com/refactoringhq/tolaria-getting-started), which gives you a walkthrough of the whole app.

## Open source and local setup

Tolaria is open source and built with Tauri, React, and TypeScript. If you want to run or contribute to the app locally, here is [how to get started](https://github.com/refactoringhq/tolaria/blob/main/docs/GETTING-STARTED.md). You can also find the gist below 👇

### Prerequisites

- Node.js 20+
- pnpm 8+
- Rust stable
- macOS for development

### Quick start

```
pnpm install
pnpm dev
```

Open `http://localhost:5173` for the browser-based mock mode, or run the native desktop app with:

```
pnpm tauri dev
```

## Tech Docs

- 📐 [ARCHITECTURE.md](https://github.com/refactoringhq/tolaria/blob/main/docs/ARCHITECTURE.md) — System design, tech stack, data flow
- 🧩 [ABSTRACTIONS.md](https://github.com/refactoringhq/tolaria/blob/main/docs/ABSTRACTIONS.md) — Core abstractions and models
- 🚀 [GETTING-STARTED.md](https://github.com/refactoringhq/tolaria/blob/main/docs/GETTING-STARTED.md) — How to navigate the codebase
- 📚 [ADRs](https://github.com/refactoringhq/tolaria/blob/main/docs/adr) — Architecture Decision Records

## Security

If you believe you have found a security issue, please report it privately as described in [SECURITY.md](https://github.com/refactoringhq/tolaria/blob/main/SECURITY.md).

## License

Tolaria is licensed under AGPL-3.0-or-later. The Tolaria name and logo remain covered by the project’s trademark policy.

---

## [HN-TITLE] 5. Meta tells staff it will cut 10% of jobs

- **Source**: [https://www.bloomberg.com/news/articles/2026-04-23/meta-tells-staff-it-will-cut-10-of-jobs-in-push-for-efficiency](https://www.bloomberg.com/news/articles/2026-04-23/meta-tells-staff-it-will-cut-10-of-jobs-in-push-for-efficiency)
- **Site**: Bloomberg
- **Author**: Kurt Wagner
- **Published**: 2026-04-23
- **HN activity**: 418 points · [396 comments](https://news.ycombinator.com/item?id=47879986)
- **Length**: 190 words (~1 min read)
- **Language**: en

Meta Tells Staff It Plans to Cut 10% of Jobs in Efficiency Push

April 23, 2026 at 5:35 PM UTC

[Meta Platforms Inc.](https://www.bloomberg.com/quote/META:US) plans to cut 10% of workers, or roughly 8,000 employees, in an effort to boost efficiency and offset its heavy spending on artificial intelligence.

The company disclosed the move in a memo sent to employees Thursday, saying the layoffs will come on May 20. Meta also won’t hire workers for 6,000 open roles that it had intended to fill.

---

## [HN-TITLE] 6. MeshCore development team splits over trademark dispute and AI-generated code

- **Source**: [https://blog.meshcore.io/2026/04/23/the-split](https://blog.meshcore.io/2026/04/23/the-split)
- **Site**: blog.meshcore.io
- **Submitter**: wielebny (Hacker News)
- **Submitted**: 2026-04-23 16:55 UTC (Hacker News)
- **HN activity**: 164 points · [96 comments](https://news.ycombinator.com/item?id=47878117)
- **Length**: 799 words (~4 min read)
- **Language**: en-US

Since inception, the MeshCore development team has been working hard to build MeshCore.

We’ve released more than 85 versions of the MeshCore Companion, Repeater and Room Server firmwares with support for more than 75 hardware variants. All of this has been hand crafted, by humans.

We have always been wary of AI-generated code, but felt everyone is free to do what they want and experiment. But one of our own, Andy Kirby, decided to branch out and use Claude Code extensively, and has moved aggressively to take over all of the components of the MeshCore ecosystem: standalone devices, mobile app, web flasher and web config tools.

And, he’s kept that *small* detail a secret - that it’s all majority *vibe coded*.

We ran a poll recently, and asked in the MeshCore Discord about AI and trust, and these are the results:

![](https://blog.meshcore.io/assets/images/2026/04/23/trust-ai-gen-firmware.png)

![](https://blog.meshcore.io/assets/images/2026/04/23/have-right-to-know.png)

The team didn’t feel it was our place to protest, until we recently discovered that Andy applied for the MeshCore trademark (on the 29th of March, according to filings) and didn’t tell any of us. We have tried discussing this, and what his intentions are, but those conversations broke down and we now have no communication with Andy.

It’s been a stressful few months trying to sort this out, and it is a sad day to have to bring this out to the public. It’s been a slap in the face to the team that has worked so hard on this project, to have an insider team up with a robot and a lawyer.

## “Official” MeshCore

The use of the ‘official’ status is what is currently being contested. Andy is adamant that he *owns* the brand, and is using the word very heavily with his MeshOS line.

Meanwhile, in reality, the only ‘official’ MeshCore is the github repo. It’s the *source of truth* in terms of what is MeshCore, and Andy has *never* contributed to that.

Since the internal split, we launched the [meshcore.io](https://meshcore.io) site, as Andy controls the meshcore.co.uk site and original discord server. We’ve been left with little other recourse. And, since launching the site, Andy copied the look and feel (again, using Claude) even though we asked him not to.

## Project Growth

The MeshCore project has been on an incredible journey.

Having only started in January 2025, we have grown extremely fast!

As of this post, the official [MeshCore Map](https://map.meshcore.io) shows 38,000+ nodes around the world, and the official [MeshCore App](https://meshcore.io) has more than 100,000+ active users across Android and iOS.

It’s pretty epic how we’ve all built such an incredible community in such a short time!

As the project grows, so does our need for a dedicated space that provides you with official information from the *core team*.

In recent times, we’ve seen an explosion of growth in MeshCore web sites dedicated to specific countries and mesh communities.

To name a few, we’ve seen:

- MeshCore Portugal over at [https://meshcore.pt](https://meshcore.pt)
- MeshCore Switzerland over at [https://meshcore.ch](https://meshcore.ch)
- and the first successes with MeshCore UK over at [https://meshcore.co.uk](https://meshcore.co.uk)

Andy Kirby did do an amazing job helping to promote the MeshCore project on his personal YouTube, but only promotes his own products now.

## Where To From Here?

So, the core team is pushing ahead with the [meshcore.io](https://meshcore.io) website and the ongoing work of firmware feature development, bug fixes, managing PRs, developer discussions, etc.

We now release change logs, blog posts and technical documentation for all of our new firmware and app releases here.

- [https://meshcore.io](https://meshcore.io)
- [https://blog.meshcore.io](https://blog.meshcore.io)
- [https://docs.meshcore.io](https://docs.meshcore.io)

You’ll also find some familiar faces on our blog posts, such as:

- **Scott** our project founder, lead firmware engineer and developer of the Ripple firmware!
- **Recrof** our official MeshCore Map developer and Firmware Flasher guru. He has shared some insights into the early development of the MeshCore Map.
- **Liam Cottle** the official MeshCore App developer who will be posting useful guides for getting started with the MeshCore App.
- **FDLamotte** who has done epic work on the Python tooling for MeshCore, as well as the STM32 firmware variants.
- **Oltaco** (Che Aporeps) who has done amazing work on the new OTA Fix bootloader that makes firmware updates much more reliable.

## The Core Team

The MeshCore team, now consisting of **Scott**, **Liam**, **Recrof**, **FDLamotte** and **Oltaco**, remains committed to designing and developing high quality, *human-written* software.

## Our New Home

Please update your bookmarks!

This is where we will be hosting all official releases, technical documentation, and community discussions moving forward.

With the new website, we are also starting fresh with a new Discord server!

This is where you can interact directly with the MeshCore developers, get help with your projects, and contribute to the future of MeshCore.

- Official Website: [https://meshcore.io](https://meshcore.io)
- Latest Updates: [https://blog.meshcore.io](https://blog.meshcore.io)
- Technical Docs: [https://docs.meshcore.io](https://docs.meshcore.io)
- Official GitHub: [https://github.com/meshcore-dev/MeshCore](https://github.com/meshcore-dev/MeshCore)
- Reddit: [https://reddit.com/r/meshcore](https://reddit.com/r/meshcore)
- Facebook: [https://facebook.com/groups/meshcore](https://facebook.com/groups/meshcore)
- Discord: [https://meshcore.gg](https://meshcore.gg)

Thanks for being a part of this journey!

*The MeshCore Team*

---

## [HN-TITLE] 7. TorchTPU: Running PyTorch Natively on TPUs at Google Scale

- **Source**: [https://developers.googleblog.com/torchtpu-running-pytorch-natively-on-tpus-at-google-scale/](https://developers.googleblog.com/torchtpu-running-pytorch-natively-on-tpus-at-google-scale/)
- **Site**: developers.googleblog.com
- **Author**: Claudio Basile, Kat Ko, Ben Wilson, Lee Howes, Bill Jia, Joe Pamer, Michael Voznesensky, Robert Hundt
- **Published**: 2026-04-07
- **HN activity**: 71 points · [2 comments](https://news.ycombinator.com/item?id=47881786)
- **Length**: 1.5K words (~7 min read)
- **Language**: en

The challenges of building for modern AI infrastructure have fundamentally shifted. The modern frontier of machine learning now requires leveraging distributed systems, spanning thousands of accelerators. As models scale to run on clusters of O(100,000) chips, the software that powers these models must meet new demands for performance, hardware portability, and reliability.

At Google, our Tensor Processing Units (TPUs) are foundational to our supercomputing infrastructure. These custom ASICs power training and serving for both Google’s own AI platforms, like Gemini and Veo, and the massive workloads of our Cloud customers. The entire AI community should be able to easily access the full capabilities of TPUs, and because many of these potential users build models in PyTorch, an integration that allows PyTorch to work natively and efficiently on the TPU is crucial.

**Enter TorchTPU.** As an engineering team, our mandate was to build a stack that leads with usability, portability, and excellent performance. We wanted to enable developers to migrate existing PyTorch workloads with minimal code changes while giving them the APIs and the tools to extract every ounce of compute from our hardware. Here is a look under the hood at the engineering principles driving TorchTPU, the technical architecture we’ve built, and our roadmap for 2026.

## **Architecting for Usability, Portability, and Performance**

To understand TorchTPU, you first have to understand the hardware it targets.

A TPU system is not just a chip; it is [an integrated network](https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/ironwood-tpu-age-of-inference/). A host is attached to multiple chips, and each chip connects to the host and to other chips via our Inter-Chip Interconnect (ICI). This ICI links the chips into a highly efficient 2D or 3D Torus topology, allowing for massive scale-up without traditional networking bottlenecks. Within each chip, execution is divided between TensorCores and SparseCores. TensorCores are single-threaded units dedicated to dense matrix math, while SparseCores handle irregular memory access patterns like embeddings, gather/scatter operations, and offloading collectives.

These features make TPUs a powerful tool for machine learning, and our goal is to provide the specialized support needed to fully leverage these unique capabilities. This is where PyTorch comes in: the PyTorch toolchain already creates a consistent, widely-used interface over other device types.

Our core principle for usability is simple: **it should feel like PyTorch**. A developer should be able to take an existing PyTorch script, change their initialization to “tpu”, and run their training loop without modifying a single line of core logic.
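
A minimal sketch of what that migration could look like, assuming TorchTPU registers a `"tpu"` device via PrivateUse1 (the package is not publicly released, so this falls back to CPU and the rest of the loop is unchanged either way):

```python
import torch

# Hedged sketch of the "change the device string" migration the post
# describes. The "tpu" device only exists once TorchTPU registers its
# PrivateUse1 backend; constructing or using an unregistered device
# raises RuntimeError, which the fallback below catches.
def pick_device(preferred: str = "tpu") -> torch.device:
    try:
        dev = torch.device(preferred)
        torch.empty(0, device=dev)  # probe that the backend is actually usable
        return dev
    except RuntimeError:
        return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(8, 2).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# The core training-loop logic is identical regardless of device.
x = torch.randn(4, 8, device=device)
loss = model(x).sum()
loss.backward()
opt.step()
```

The point of the design is exactly this: everything after `pick_device` is an ordinary PyTorch script.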

Achieving this required an entirely new approach to how PyTorch interacts with the TPU compiler and runtime stack.

## **Engineering the TorchTPU Stack: The Technical Reality**

### **Eager First: Flexibility Without Compromise**

Moving from concept to a native PyTorch experience on TPU meant rethinking the execution stack. We established an "Eager First" philosophy. Instead of forcing developers into static graph compilation immediately, we implemented TorchTPU using PyTorch’s “PrivateUse1” interface. No subclasses, no wrappers; just ordinary, familiar PyTorch Tensors on a TPU. By integrating at this deep level, we are able to fully prioritize the eager execution experience developers expect from PyTorch.

We engineered three distinct eager modes to support the development lifecycle.

The first eager mode is Debug Eager, which dispatches one operation at a time and synchronizes with the CPU after every execution. It is inherently slow, but invaluable for tracking down shape mismatches, NaN values, and out-of-memory crashes.

The second is Strict Eager, which maintains single-op dispatch, but executes asynchronously, with the intent of mirroring the default PyTorch experience. This allows both the CPU and TPU to execute simultaneously, until a synchronization point is reached in the user’s script.

The breakthrough, however, is our Fused Eager mode. Using automated reflection on the stream of operations, TorchTPU fuses steps on the fly into larger, computationally dense chunks before handing them to the TPU. By maximizing TensorCore utilization and minimizing memory bandwidth overhead, Fused Eager consistently delivers a 50% to 100+% performance increase over Strict Eager, with no setup required by the user.
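
The shape of that idea can be sketched framework-free. The window size and names below are illustrative assumptions, not TorchTPU internals:

```python
# Illustrative sketch of the Fused Eager idea: instead of dispatching
# each op alone (Strict Eager), buffer the op stream and hand the
# device larger fused chunks. The fixed window stands in for whatever
# heuristics the real runtime uses.
FUSE_WINDOW = 3  # assumed max ops fused into one chunk

def dispatch(ops: list[str]) -> list[list[str]]:
    chunks: list[list[str]] = []
    buf: list[str] = []
    for op in ops:
        buf.append(op)
        if len(buf) == FUSE_WINDOW:
            chunks.append(buf)  # one computationally dense launch
            buf = []
    if buf:
        chunks.append(buf)      # flush the remainder at a sync point
    return chunks

chunks = dispatch(["matmul", "add", "relu", "matmul", "softmax"])
```

Five single-op launches become two fused launches; the user's program is unchanged.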

All three modes are backed by a shared Compilation Cache that can operate on a single host, or be configured as persistent across multi-host setups. This means that as TorchTPU learns your workload, you spend less time compiling, and more time running.

### **Static Compilation: Dynamo, XLA, and StableHLO**

For users who want to unlock peak performance on the TPU, TorchTPU integrates natively with the torch.compile interface for full-graph compilation. We start by capturing the FX graph using Torch Dynamo. However, rather than routing through Torch Inductor, we utilize XLA as our primary backend compiler.

This was a highly deliberate architectural decision. XLA is rigorously battle-tested for TPU topologies. More importantly, it natively understands how to optimize the critical overlap between dense computation and collective communications across the ICI. Our translation layer maps PyTorch's operators directly into [StableHLO](https://openxla.org/stablehlo), XLA’s primary Intermediate Representation (IR) for tensor math. This creates a direct connection from PyTorch into XLA’s core lowering path, allowing us to generate highly optimized TPU binaries while reusing the execution paths established by our eager modes.

For developers writing custom operators, we ensure extensibility doesn't break performance. TorchTPU natively supports custom kernels written in [Pallas](https://docs.jax.dev/en/latest/jax.experimental.pallas.tpu.html) and JAX. By decorating a JAX function with @torch\_tpu.pallas.custom\_jax\_kernel, engineers can write low-level hardware instructions that interface directly with our lowering path. Work is ongoing to also support [Helion](https://github.com/pytorch/helion) kernels.

### **Distributed Training and the MPMD Challenge**

To preserve the flexibility and usability of eager and compiled modes at scale, we focused heavily on PyTorch's distributed APIs. Today, TorchTPU supports Distributed Data Parallel (DDP), Fully Sharded Data Parallel v2 (FSDPv2), and PyTorch’s DTensor out of the box. We've validated that many third-party libraries that build on PyTorch's distributed APIs work unchanged on TorchTPU.

One major limitation of PyTorch/XLA (a predecessor to TorchTPU) was that it only supported pure SPMD code. In reality, PyTorch programs frequently diverge slightly across ranks: for instance, it is common for the “rank 0” process to do a little extra work for logging or analytics. This kind of input is a challenge for the TPU stack, which is heavily optimized for SPMD. XLA works best with a global view of the code running on the system, and working around that constraint puts the burden on the developer, who has to carefully remove impure behavior.
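
The divergence in question is often as small as this framework-free sketch (all names illustrative):

```python
# Toy illustration of the rank-0 divergence the post describes: every
# rank runs the same compute (the SPMD part), but rank 0 does a little
# extra side work (the MPMD part). A strict SPMD compiler wants every
# rank to trace an identical program, so this branch is exactly what an
# MPMD-aware stack has to tolerate.
log_lines: list[str] = []

def train_step(rank: int, batch: list[float]) -> float:
    loss = sum(batch) / len(batch)   # identical on every rank
    if rank == 0:                    # divergent, rank-0-only logging
        log_lines.append(f"step loss: {loss:.3f}")
    return loss

losses = [train_step(r, [1.0, 2.0, 3.0]) for r in range(4)]
```

The compute result is identical on all four simulated ranks, but only one rank produced the log line.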

TorchTPU is architected to carefully support divergent executions (MPMD), and will isolate communication primitives where necessary to preserve correctness, at minimal cost. This approach helps ensure that the experience of using PyTorch on the TPU is as natural as possible to existing PyTorch developers, while preserving XLA’s ability to overlap communication and computation with a global view of a distributed TPU deployment wherever possible.

### **TPU Hardware Awareness**

The TPU can achieve very high performance and efficiency, but optimal model design may differ slightly from other hardware. For example, we frequently see models hardcoding attention head dimensions to 64, while current-generation TPUs achieve peak matrix multiplication efficiency at dimensions of 128 or 256. Modifying the model to target 128 or 256 dimensions better utilizes the large, dense and efficient tensor cores on the TPU chip.
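
The refactor is usually just a trade of head count for head size at fixed model width. The numbers below are illustrative, not from the post:

```python
# Back-of-the-envelope sketch of the head-dimension point: keep the
# model width fixed and resize heads so matmul operands hit the
# TPU-preferred 128 (or 256) dimension.
d_model = 4096  # assumed model width

def heads_for(head_dim: int) -> int:
    assert d_model % head_dim == 0
    return d_model // head_dim

gpu_style = (heads_for(64), 64)    # 64 heads x 64-dim: common hardcoded choice
tpu_style = (heads_for(128), 128)  # 32 heads x 128-dim: fills the matrix units better
```

Both layouts keep the same total width, so parameter count is unaffected; only the matmul shapes change.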

Portability doesn't eliminate hardware realities, so TorchTPU facilitates a tiered workflow: establish correct execution first, then use our upcoming deep-dive guidelines to identify and refactor suboptimal architectures, or to inject custom kernels, for optimal hardware utilization.

## **The Road Ahead: 2026 and Beyond**

We have laid a rock-solid foundation across training and serving support today, and we are actively tackling several open challenges to make TorchTPU a frictionless backend in the PyTorch ecosystem.

A primary focus for our compiler team is reducing recompilations triggered by dynamic sequence lengths and batch sizes. By implementing advanced bounded dynamism within XLA, we aim to handle shape changes without incurring compilation overhead. This can be an important feature for certain workloads, such as iterative next-token prediction.

We are also building out a comprehensive library of precompiled TPU kernels for standard operations to drastically reduce the latency of the first execution iteration.

Looking through the rest of 2026, we are working on:

- The launch of our public GitHub repository, complete with extensive documentation and reproducible architectural tutorials.
- Integration with PyTorch’s Helion DSL to further expand our custom kernel capabilities.
- First-class support for dynamic shapes directly through torch.compile.
- Native multi-queue support to ease migration of heavily asynchronous codebases with decoupled memory and compute streams.
- Deep integrations with ecosystem pillars like vLLM and TorchTitan, alongside validated linear scaling up to full Pod-size infrastructure.

TorchTPU represents our dedicated engineering effort to provide a seamless, high-performance PyTorch experience on TPU hardware. We are breaking down obstacles and removing friction between the framework you love and the TPU supercomputing hardware required for the next generation of AI.

*To stay informed on the latest TorchTPU updates, please visit the* [*TPU Developer Hub*](https://cloud.google.com/products/tpu/tpu-developer)*.*

---

## [HN-TITLE] 8. I am building a cloud

- **Source**: [https://crawshaw.io/blog/building-a-cloud](https://crawshaw.io/blog/building-a-cloud)
- **Site**: crawshaw.io
- **Submitter**: bumbledraven (Hacker News)
- **Submitted**: 2026-04-23 04:44 UTC (Hacker News)
- **HN activity**: 1014 points · [498 comments](https://news.ycombinator.com/item?id=47872324)
- **Length**: 1.6K words (~7 min read)

## I am building a cloud

*2026-04-22*

Today is fundraising [announcement day](https://blog.exe.dev/series-a). As is the nature of writing for a larger audience, it is a formal, safe announcement. As it should be. Writing must necessarily become impersonal at scale. But I would like to write something personal about why I am doing this. What is the goal of building [exe.dev](https://exe.dev)? I am already the co-founder of [one startup](https://tailscale.com) that is doing very well, selling a product I love as much as when I first helped design and build it.

What could possess me to go through all the pain of starting another company? Some fellow founders have looked at me with incredulity and shock that I would throw myself back into the frying pan. (Worse yet, experience tells me that most of the pain is still in my future.) It has been a genuinely hard question to answer because I start searching for a “big” reason, a principle or a social need, a reason or motivation beyond challenge. But I believe the truth is far simpler, and to some, I am sure, almost equally incredible.

I like computers.

In some tech circles, that is an unusual statement. (“In this house, we curse computers!”) I get it, computers can be really frustrating. But I like computers. I always have. It is really fun getting computers to do things. Painful, sure, but the results are worth it. Small microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether racked in your basement or in a data center across the world. I like them all.

So it is no small thing for me when I admit: I do not like the cloud today.

I want to. Computers are great, whether it is a BSD installed directly on a PC or a Linux VM. I can enjoy Windows, BeOS, Novell NetWare, I even installed OS/2 Warp back in the day and had a great time with it. Linux is particularly powerful today and a source of endless potential. And for all the pages of products, the cloud is just Linux VMs. Better, they are API driven Linux VMs. I should be in heaven.

But every cloud product I try is wrong. Some are better than others, but I am constantly constrained by the choices cloud vendors make in ways that make it hard to get computers to do the things I want them to do.

These issues go beyond UX or bad API design. Some of the fundamental building blocks of today’s clouds are the wrong shape. VMs are the wrong shape because they are tied to CPU/memory resources. I want to buy some CPUs, memory, and disk, and then run VMs on it. A Linux VM is a process running in another Linux’s cgroup, I should be able to run as many as I like on the computer I have. The only way to do that easily on today’s clouds is to take isolation into my own hands, with gVisor or nested virtualization on a single cloud VM, paying the nesting performance penalty, and then I am left with the job of running and managing, at a minimum, a reverse proxy onto my VMs. All because the cloud abstraction is the wrong shape.

Clouds have tried to solve this with “PaaS” systems. Abstractions that are inherently less powerful than a computer, bespoke to a particular provider. Learn a new way to write software for each compute vendor, only to find halfway into your project that something that is easy on a normal computer is nearly impossible because of some obscure limit of the platform system buried so deep you cannot find it until you are deeply committed to a project. Time and again I have said “this is the one” only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.

Consider disk. Cloud providers want you to use remote block devices (or something even more limited and slow, like S3). When remote block devices were introduced they made sense, because computers used hard drives. Remote does not hurt sequential read/write performance, if the buffering implementation is good. Random seeks on a hard drive take 10ms, so 1ms RTT for the Ethernet connection to remote storage is a fine price to pay. It is a good product for hard drives and makes the cloud vendor’s life a lot easier because it removes an entire dimension from their standard instance types.

But then we all switched to SSD. Seek time went from 10 milliseconds to 20 microseconds. Heroic efforts have cut the network RTT a bit for really good remote block systems, but the IOPS overhead of remote systems went from 10% with hard drives to more than 10x with SSDs. It is a lot of work to configure an EC2 VM to have 200k IOPS, and you will pay $10k/month for the privilege. My MacBook has 500k IOPS. Why are we hobbling our cloud infrastructure with slow disk?
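
The arithmetic behind that shift is worth making explicit. Treating the network round trip as a fixed ~1 ms cost (an assumption for illustration, consistent with the figure the essay uses for hard drives):

```python
# Rough arithmetic behind the paragraph: a fixed ~1 ms network round
# trip is small next to a hard-drive seek but dwarfs an SSD access.
rtt = 1e-3        # assumed round trip to remote block storage, seconds
hdd_seek = 10e-3  # ~10 ms random seek on a hard drive
ssd_seek = 20e-6  # ~20 us random access on an SSD

hdd_overhead = rtt / hdd_seek  # ~0.1  -> about 10% slower than local HDD
ssd_overhead = rtt / ssd_seek  # ~50   -> tens of times slower than local SSD
```

The same network that added 10% to a hard drive's access time multiplies an SSD's access time many times over.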

Then there is networking. Hyperscalers have great networks. They charge you the earth for them and make it miserable to do deals with other vendors. The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center. At moderate volume the multiplier is even worse. Sure, if you spend $XXm/month with a cloud the prices get much better, but most of my projects want to spend $XX/month, without the little m. The fundamental technology here is fine, but this is where limits are placed on you to make sure whatever you build cannot be affordable.

Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.

You cannot solve the fundamental problems with cloud abstractions by building new abstractions on top. Making Kubernetes good is inherently impossible, a project in putting (admittedly high quality) lipstick on a pig.

We have been muddling along with these miserable clouds for 15 years now. We make do, in the way we do with all the unpleasant parts of our software stack, holding our nose whenever we have to deal with them and trying to minimize how often that happens.

This however, is the moment to fix it.

This is the moment because something has changed: we have agents now. (Indeed my co-founder Josh and I started tinkering because we wanted to use LLMs in programming. It turns out what needs building for LLMs are better traditional abstractions.) Agents, by making it easier to write code, mean there will be a lot more software. Economists would call this an instance of [Jevons paradox](https://en.wikipedia.org/wiki/Jevons_paradox). Each of us will write more programs, for fun and for work. We need private places to run them, easy sharing with friends and colleagues, minimal overhead.

With more total software in our lives the cloud, which was an annoying pain, becomes a much bigger pain. We need a lot more compute, and we need it to be easier to manage. Agents help to some degree. If you trust them with your credentials they will do a great job driving the AWS API for you (though occasionally they will delete your production DB). But agents struggle with the fundamental limits of the abstractions as much as we do. You need more tokens than you should and you get a worse result than you should. Every percent of context window the agent spends thinking about how to contort classic clouds into working is context window it is not using to solve your problem.

So we are going to fix it. What we have launched on exe.dev today addresses the VM resource isolation problem: instead of provisioning individual VMs, you get CPU and memory and run the VMs you want. We took care of a TLS proxy and an authentication proxy, because I do not actually want my fresh VMs dumped directly on the internet. Your disk is local NVMe with blocks replicated off machine asynchronously. We have regions around the world for your machines, because you want your machines close. Your machines are behind an anycast network to give all your global users a low latency entrypoint to your product (and so we can build some new exciting things soon).

There is a lot more to build here, from obvious things like static IPs to UX challenges like how to give you access to our automatic historical disk snapshots. Those will get built. And at the same time we are going right back to the beginning, racking computers in data centers, thinking through every layer of the software stack, exploring all the options for how we wire up networks.

So, I am building a cloud. One I actually want to use. I hope it is useful to you.

---

## [HN-TITLE] 9. An update on recent Claude Code quality reports

- **Source**: [https://www.anthropic.com/engineering/april-23-postmortem](https://www.anthropic.com/engineering/april-23-postmortem)
- **Site**: anthropic.com
- **Submitter**: mfiguiere (Hacker News)
- **Submitted**: 2026-04-23 17:48 UTC (Hacker News)
- **HN activity**: 591 points · [455 comments](https://news.ycombinator.com/item?id=47878905)
- **Length**: 1.7K words (~8 min read)
- **Language**: en

Over the past month, we’ve been looking into reports that Claude’s responses have worsened for some users. We’ve traced these reports to three separate changes that affected Claude Code, the Claude Agent SDK, and Claude Cowork. The API was not impacted.

All three issues have now been resolved as of April 20 (v2.1.116).

In this post, we explain what we found, what we fixed, and what we’ll do differently to ensure similar issues are much less likely to happen again.

We take reports about degradation very seriously. We never intentionally degrade our models, and we were able to immediately confirm that our API and inference layer were unaffected.

After investigation, we identified three different issues:

1. On March 4, we changed Claude Code's default reasoning effort from `high` to `medium` to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in `high` mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they'd prefer to default to higher intelligence and opt into lower effort for simple tasks. This impacted Sonnet 4.6 and Opus 4.6.
2. On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.
3. On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.

Because each change affected a different slice of traffic on a different schedule, the aggregate effect looked like broad, inconsistent degradation. While we began investigating reports in early March, they were challenging to distinguish from normal variation in user feedback at first, and neither our internal usage nor evals initially reproduced the issues identified.

This isn’t the experience users should expect from Claude Code. As of April 23, we’re resetting usage limits for all subscribers.

## A change to Claude Code's default reasoning effort

When we released Opus 4.6 in Claude Code in February, we set the default reasoning effort to `high`.

Soon after, we received user feedback that Claude Opus 4.6 in high effort mode would occasionally think for too long, causing the UI to appear frozen and leading to disproportionate latency and token usage for those users.

In general, the longer the model thinks, the better the output. Effort levels are how Claude Code lets users set that tradeoff—more thinking versus lower latency and fewer usage limit hits. As we calibrate effort levels for our models, we take this tradeoff into account in order to pick points along the test-time-compute curve that give people the best range of options. In the product layer, we then choose which point along this curve we set as our default, and that is the value we send to the Messages API as the effort parameter; we then make the other options available via `/effort`.

![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2Fde3bcf9733b61f57234d8c45e663b1bd48677ea1-3840x2160.png&w=3840&q=75)

In our internal evals and testing, medium effort achieved slightly lower intelligence with significantly less latency for the majority of tasks. It also didn’t suffer from the same issues with occasional very long tail latencies for thinking, and it helped maximize users’ usage limits. As a result, we rolled out a change making medium the default effort, and explained the rationale via in-product dialog.

![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F459b2a8a0baa88937eebcbe4566dde4d6cc7f185-3794x2260.png&w=3840&q=75)

Soon after rolling out, users began reporting that Claude Code felt less intelligent. We shipped a number of design iterations to make the current effort setting clearer in order to alert people they could change the default (notices on startup, an inline effort selector, and bringing back ultrathink), but most users retained the medium effort default.

After hearing feedback from more customers, we reversed this decision on April 7. All users now default to `xhigh` effort for Opus 4.7, and `high` effort for all other models.

## A caching optimization that dropped prior reasoning

When Claude reasons through a task, that reasoning is normally kept in the conversation history so that on every subsequent turn, Claude can see why it made the edits and tool calls it did.

On March 26, we shipped what was meant to be an efficiency improvement to this feature. We use prompt caching to make back-to-back API calls cheaper and faster for users. Claude writes the input tokens to the cache when it makes an API request, then after a period of inactivity the prompt is evicted from cache, making room for other prompts. Cache utilization is something we manage carefully (more on our [approach](https://x.com/trq212/status/2024574133011673516)).

The design should have been simple: if a session has been idle for more than an hour, we could reduce users’ cost of resuming that session by clearing old thinking sections. Since the request would be a cache miss anyway, we could prune unnecessary messages from the request to reduce the number of uncached tokens sent to the API. We’d then resume sending full reasoning history. To do this we used the `clear_thinking_20251015` API header along with `keep:1`.

The implementation had a bug. Instead of clearing thinking history once, it cleared it on every turn for the rest of the session. After a session crossed the idle threshold once, each request for the rest of that process told the API to keep only the most recent block of reasoning and discard everything before it. This compounded: if you sent a follow-up message while Claude was in the middle of a tool use, that started a new turn under the broken flag, so even the reasoning from the current turn was dropped. Claude would continue executing, but increasingly without memory of why it had chosen to do what it was doing. This surfaced as the forgetfulness, repetition, and odd tool choices people reported.
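
The failure mode reduces to a sticky flag. This is a hypothetical minimal reproduction, not Anthropic's code; all names and the turn counts are illustrative:

```python
# Hypothetical sketch of the bug described above: pruning old thinking
# blocks was meant to happen once, on the first request after an idle
# gap, but a flag that was never reset made it happen on every
# subsequent turn, clamping the visible reasoning history.
def run_turns(idle_before_turn_2: bool, buggy: bool) -> list[int]:
    thinking_blocks = 0          # reasoning blocks visible to the model
    visible_history = []
    prune_next = False
    for turn in range(1, 5):
        if turn == 2 and idle_before_turn_2:
            prune_next = True    # session crossed the idle threshold
        if prune_next:
            thinking_blocks = min(thinking_blocks, 1)  # keep only the latest block
            if not buggy:
                prune_next = False  # fixed behavior: prune exactly once
        visible_history.append(thinking_blocks)
        thinking_blocks += 1     # each turn adds one new thinking block
    return visible_history

fixed = run_turns(idle_before_turn_2=True, buggy=False)
broken = run_turns(idle_before_turn_2=True, buggy=True)
```

In the fixed case the history resumes growing after the one-time prune; in the broken case it stays clamped at a single block, which is why the model appeared to forget why it was doing things.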

Because this would continuously drop thinking blocks from subsequent requests, those requests also resulted in cache misses. We believe this is what drove the separate reports of usage limits draining faster than expected.

![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F332d9c487bb73c8078686068dcbe1b616720a8dd-3016x1198.png&w=3840&q=75)

Two unrelated factors made it challenging for us to reproduce the issue at first: an internal-only server-side experiment related to message queuing, and an orthogonal change in how we display thinking. Together they suppressed this bug in most CLI sessions, so we didn’t catch it even when testing external builds.

This bug was at the intersection of Claude Code’s context management, the Anthropic API, and extended thinking. The changes it introduced made it past multiple human and automated code reviews, as well as unit tests, end-to-end tests, automated verification, and dogfooding. Combined with this only happening in a corner case (stale sessions) and the difficulty of reproducing the issue, it took us over a week to discover and confirm the root cause.

As part of the investigation, we back-tested [Code Review](https://code.claude.com/docs/en/code-review) against the offending pull requests using Opus 4.7. When provided the code repositories necessary to gather complete context, Opus 4.7 found the bug, while Opus 4.6 didn't. To prevent this from happening again, we are now landing support for additional repositories as context for code reviews.

We fixed this bug on April 10 in v2.1.101.

## A system prompt change to reduce verbosity

Our latest model, Claude Opus 4.7, has a notable behavioral quirk relative to its predecessor: as we [wrote about](https://www.anthropic.com/news/claude-opus-4-7) at launch, it tends to be quite verbose. This makes it smarter on hard problems, but it also produces more output tokens.

A few weeks before we released Opus 4.7, we started tuning Claude Code in preparation. Each model behaves slightly differently, and we spend time before each release optimizing the harness and product for it.

We have a number of tools to reduce verbosity: model training, prompting, and improving thinking UX in the product. Ultimately we used all of these, but one addition to the system prompt caused an outsized effect on intelligence in Claude Code:

> *“Length limits: keep text between tool calls to ≤25 words. Keep final responses to ≤100 words unless the task requires more detail.”*

After multiple weeks of internal testing and no regressions in the set of evaluations we ran, we felt confident about the change and shipped it alongside Opus 4.7 on April 16.

As part of this investigation, we ran more ablations (removing lines from the system prompt to understand the impact of each line) using a broader set of evaluations. One of these evaluations showed a 3% drop for both Opus 4.6 and 4.7. We immediately reverted the prompt as part of the April 20 release.

## Going forward

We are going to do several things differently to avoid these issues: we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features); and we'll make improvements to our [Code Review](https://code.claude.com/docs/en/code-review) tool that we use internally, and ship this improved version to customers.

We’re also adding tighter controls on system prompt changes. We will run a broad suite of per-model evals for every system prompt change to Claude Code, continuing ablations to understand the impact of each line, and we have built new tooling to make prompt changes easier to review and audit. We've additionally added guidance to our CLAUDE.md to ensure model-specific changes are gated to the specific model they're targeting. For any change that could trade off against intelligence, we'll add soak periods, a broader eval suite, and gradual rollouts so we catch issues earlier.

We recently created @ClaudeDevs on X to give us the room to explain product decisions and the reasoning behind them in depth. We'll share the same updates in centralized threads on GitHub.

Finally, we’d like to thank our users: the people who used the `/feedback` command to share their issues with us (or who posted specific, reproducible examples online) are the ones who ultimately allowed us to identify and fix these problems. Today we are resetting usage limits for all subscribers.

We’re immensely grateful for your feedback and for your patience.

---

## [HN-TITLE] 10. My phone replaced a brass plug

- **Source**: [https://drobinin.com/posts/my-phone-replaced-a-brass-plug/](https://drobinin.com/posts/my-phone-replaced-a-brass-plug/)
- **Site**: Drobinin Limited
- **Author**: Vadim Drobinin
- **Published**: 2026-04-23
- **HN activity**: 90 points · [15 comments](https://news.ycombinator.com/item?id=47877715)
- **Length**: 2.4K words (~11 min read)
- **Language**: en

For months, I spent my Wednesday evenings in a tin tunnel just outside Edinburgh, wearing a ridiculous-looking (and equally uncomfortable) jacket. I'd lie on the floor and count breaths, then walk down the range, ducking under ceiling beams. The floor says DUCK in white paint every five metres, the beams have posters saying "DUCK" as well, but occasionally I still hit my head, too busy checking the scoring cards.

If you're unlucky, a shot lands near a ring line and you need help. You walk up to a tray of Greggs sausage rolls[\[1\]](#fn1) (best gourmet pastries this side of the pond, and they also do doughnuts!) - we're in the North, and so are our sponsors - find a wooden box which holds brass plugs in every size, choose the right one, carefully push it into the hole (ideally only once, to avoid tearing), and where it sits is your score.

![An example of using a scoring gauge; via A Primer on Scoring Gauges by Gary Anderson, DCM](https://drobinin.com/assets/scoring-gauge-usage.jpg)

The bullet pushes paper inwards, so even if ring lines are untouched, as long as the flange extends beyond the outer ring you get a lower score.

The shooting part is fun. The score-counting-head-hitting-plug-pushing ritual had to end.

* * *

The reason I was there was cooking.

I got into it decades ago and gradually became more obsessed: from shy attempts at recreating dishes from every fine-dining restaurant I'd visited to [building automated curing chambers](https://drobinin.com/posts/designing-software-for-things-that-rot/). Not buying koji but growing the mold, hydrating ramen dough in a chamber vacuum, heating protease and grasshoppers in an immersion circulator to make garum.

Then I got into charcuterie, which meant getting whole animal carcasses and butchering them myself. As I decided to get serious about cooking meat, I figured I should learn to hunt. I'd never really held a gun, and while in the UK we love licences and don't like guns (we prefer knives), deer hunting requires neither a hunting nor a rifle licence[\[2\]](#fn2) (the Firearms Act lets a landowner hand you one of theirs if they "supervise" you using it, which is how folks have hunted on their estates for centuries - on that note, it's deer stalking: hunting is for rich twats on horses, shooting is for rich twats in tweeds). Red deer are essentially pests - they eat woodland faster than it regrows and have no natural predators, so culling them comes with almost no restrictions.

You do need the rifle though, and preferably to know how to use it[\[3\]](#fn3) - so there I was, on a mat twice a week. Not quite the same discipline as stalking a deer: shoot, change cards, have a doughnut, repeat. Half a year later I had gained a few pounds on my way to a venison steak I was yet to shoot, and spent most evenings searching for the right-sized scoring gauge.

Bored as I was, I figured I might as well automate it.

## Negative space [¶](#negative-space)

I am an iOS engineer, so I started with vanilla iOS: Apple's Vision framework has been around for a while, with ready-to-use detectors for objects, person segmentation, text recognition, and even barcode scanning - but it kept tagging random parts of the image as bullet holes, from the dot in the target's centre to pieces of one of the scoring rings.

A bullet hole is negative space. Object detectors are trained on the thing that should be there, so it's not obvious how to use them to find something that was there before and then got removed.

![A close-up of a single NSRA bull card with pink computer-vision overlays drawn across every detected feature: the concentric scoring rings, the central aiming cross, the small printed scoring numerals, and the annotation squares at the cardinal points. Actual bullet holes on the card are small and mostly unmarked.](https://drobinin.com/assets/notch-vision-overdetection.jpg)

Vision's ring and object detectors applied to an NSRA card.

I tried a few more obvious things: grayscale, inverting the image, adding and removing noise. But even when everything else worked, shots landing on ring lines turned into fragments too small to register.

A better approach would be to treat the target as an object with known geometry: find the ring structure first, then look for holes inside it. I accepted that I wouldn't be reinventing the wheel this time, and looked up alternatives.

## Port and whatnot [¶](#port-and-whatnot)

### A 2012 paper [¶](#a-2012-paper)

Scored shooting targets are boring enough as a computer-vision problem that somebody has published on it. I found [Automatic Scoring of Shooting Targets with Tournament Precision](https://ebooks.iospress.nl/DOI/10.3233/978-1-61499-105-2-324), by Rudzinski and Luckner at Warsaw University of Technology, published in 2012 and promising 99% of holes detected.

There were a few caveats: the approach was optimised for low-resolution pictures, but it required flat ISSF targets[\[4\]](#fn4), a low camera angle, and annotations not easily confused with holes, and it was generally designed for pellet shooting. Air-rifle pellets make a cookie-cutter hole in paper, while a .22 bullet at twenty-five yards leaves ragged edges.

I reproduced the paper step-by-step. If you don't fancy reading the publication, it boils down to four steps: erase the ring lines, flood-fill to find the hole shapes, run a Prewitt edge detector, and fit circles with a Hough transform.

![Grayscale of the bull: a white scoring ring line runs straight across two bullet holes.](https://drobinin.com/assets/notch-stage-1-grayscale.png)

Starting with a grayscale target

![Ring lines removed: each bullet hole is now two crescent fragments on either side of where the ring used to be.](https://drobinin.com/assets/notch-stage-2-rings-erased.png)

1\. Erase ring lines: these holes split into two crescents.

![Flood-fill result: four small dark fragments on white, each below the minimum-region threshold.](https://drobinin.com/assets/notch-stage-3-flood-fill.png)

2\. Flood-fill

![Prewitt edges of the fragments: only their outlines, no complete circles for Hough to fit.](https://drobinin.com/assets/notch-stage-4-prewitt.png)

3\. Detect edges and 4. Use Hough transform to fit circles.

The Vision framework doesn't have a Prewitt edge detector, so I brought in OpenCV as well, and the first three steps worked well. But step four has a catch: NSRA cards print ring scores at the cardinal points - a "9" north, east, south, and west of the 9 ring - and Hough fits circles to those digits too.
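Step 3 is the only one of the four without a ready-made Vision counterpart. As a minimal, dependency-free sketch of what the Prewitt operator does (in practice OpenCV's `filter2D` with the same kernels does this at full speed; the tiny test image is made up):

```python
# Prewitt kernels: horizontal and vertical gradient.
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    """Per-pixel gradient magnitude of a 2D grayscale image (borders stay 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PREWITT_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(PREWITT_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A hard vertical edge: left half black, right half white.
img = [[0, 0, 255, 255]] * 4
edges = prewitt_magnitude(img)
assert edges[1][1] == 765.0  # strong response on the edge
assert edges[0][0] == 0.0    # untouched border
```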

Occasionally the crescents left after ring erasure were also too small for flood-fill, so I ended up using a V-value radial-intensity profile - pick a strip from the bull's centre outward, sample brightness along it, and look for the spikes where the white ring lines cross. The spike positions are the ring radii.

![A black bullseye with concentric white scoring rings, overlaid with an orange horizontal strip running from the centre outward to the right edge.](https://drobinin.com/assets/notch-vradial-1-strip.png)

1\. Pick a strip.

![The same bullseye and strip; small orange-rimmed white circles mark each point where the strip crosses a white ring line.](https://drobinin.com/assets/notch-vradial-2-samples.png)

2\. Sample brightness.

![A bar chart of brightness along the strip: five tall orange bars (one per ring crossing) sitting on a flat baseline; the last bar is paler.](https://drobinin.com/assets/notch-vradial-3-spikes.png)

3\. Plot the spikes.

Bundled with Vision's `VNDetectContoursRequest` and a perimeter filter, this got me an average of four detected shots out of five per card - 80% accuracy, still a long way to go, and we hadn't even got to edge cases like overlapping shots yet.
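The radial-profile step itself fits in a few lines. A sketch, assuming a 1-D brightness sample along a strip from the bull's centre outward (the threshold is my own illustrative value, not the one the app uses):

```python
def ring_radii(profile, threshold=200):
    """Indices where brightness first spikes above threshold; consecutive
    above-threshold samples collapse into a single ring crossing."""
    radii, in_spike = [], False
    for r, value in enumerate(profile):
        if value >= threshold and not in_spike:
            radii.append(r)
            in_spike = True
        elif value < threshold:
            in_spike = False
    return radii

# A dark bull with white ring lines crossing the strip at radii 4, 9 and 14.
profile = [20] * 20
for r in (4, 9, 14):
    profile[r] = 240
assert ring_radii(profile) == [4, 9, 14]
```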

*Want Vision & CoreML in your app too? [Let's chat →](https://drobinin.com/consulting/coreml-on-device-ml/?ref=brassplug)*

### Adding machine learning [¶](#adding-machine-learning)

Shooters do things to target cards. They write names, add dates, and circle close shots. Most cards have torn staple holes, and sometimes multiple bullets tear the paper so close together that it turns into a single flood-fill region.

My first attempt was to add a heuristic for each, come back every week with another set of shot cards, and keep tuning - but that was hardly scalable or sustainable.

So I went back to Google and came across a paper published in late 2023[\[5\]](#fn5). The authors promised 96.5% average precision but focused on hole detection and read scores off the bounding-box class.

I couldn't afford to manually prepare bounding-box classes for my cards, but I already had working geometry, thanks to Rudzinski and Luckner. What I didn't have was reliable hole detection. The YOLO paper did hole detection well but left the geometry to an assumption that the card is perfectly aligned.

![A bullseye target with three white bullet holes on the 9 and 8 rings, overlaid with a dashed grey grid that divides the image into equal square cells.](https://drobinin.com/assets/notch-yolo-1-grid.png)

1\. Image becomes a grid: every patch is one cell.

![The same gridded target. Three grid cells - each containing the centre of a bullet hole - are tinted green with a tick mark.](https://drobinin.com/assets/notch-yolo-2-cells-vote.png)

2\. Cells vote: hole inside me?

![The green yes-cells now also have pink dashed bounding boxes drawn around each hole. The boxes are slightly offset from each other and overlap one another.](https://drobinin.com/assets/notch-yolo-3-boxes-proposed.png)

3\. Yes-cells draw their own boxes; neighbouring cells' boxes are slightly different and overlap.

![The clean final result: one solid green bounding box per bullet hole, no grid, no overlapping proposals.](https://drobinin.com/assets/notch-yolo-4-nms.png)

4\. NMS (non-maximum suppression) keeps the strongest box per hole.

Naturally, I merged the two approaches: OpenCV does the structural geometry - bulls, ellipses, rings, the perspective transform - and YOLOv8 does hole localisation, same architecture as the MDPI paper, fine-tuned on my own dataset. The learned model's class prediction is discarded at inference; the score comes from the distance to the bull centre compared against the geometric ring radii.
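With the class prediction discarded, scoring reduces to a distance comparison. A sketch of that logic, assuming the geometry stage has produced the bull centre and ring radii, and the detector a hole centre (the names and pixel values are illustrative, not from the app):

```python
import math

def score_shot(hole, centre, ring_radii, max_score=10):
    """ring_radii: scoring-ring radii, smallest (the 10 ring) first."""
    dist = math.dist(hole, centre)
    for i, radius in enumerate(ring_radii):
        if dist <= radius:
            return max_score - i
    return 0  # missed the scoring rings entirely

# Made-up ring radii in pixels: 10-ring at 10px, 9-ring at 20px, and so on.
rings = [10, 20, 30, 40, 50]
assert score_shot((105, 100), (100, 100), rings) == 10
assert score_shot((100, 115), (100, 100), rings) == 9
assert score_shot((200, 200), (100, 100), rings) == 0
```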

Once labelled and trained, I used `coremltools` to export the model to CoreML - the final package weighs 22.4 MB after Xcode imports it.

![Xcode's CoreML model preview for 'BulletHoleDetector', 22.4 MB, targeting iOS 15+. On the right, a .22 target card with handwritten VAZ #5 annotations upside-down. Five white bounding boxes mark the detected shots - one high above the bull, three near the centre touching each other, and one low-left.](https://drobinin.com/assets/notch-coreml-preview.png)

The packaged detector in Xcode's CoreML preview, running on one of my own cards.

## Scoring [¶](#scoring)

### Mapping back [¶](#mapping-back)

With both pieces wired up, I expected scoring to be the easy bit. Mostly it was, except I kept losing time to coordinate spaces and had completely forgotten about perspective.

![A ten-bull NSRA competition card photographed at a slight angle. Green ellipses are drawn around each detected bull; dozens of small red circles are scattered across the margins where decorative printed dots have been tagged as bullet holes. Total score displayed top-left: 40.](https://drobinin.com/assets/notch-rings-skewed.jpg)

Photos at an angle turn each bull into an ellipse, and the radial-intensity profile needs post-processing to fit them properly.

Vision returns bounding boxes in normalised 0-1 coordinates with the origin at the bottom-left, but UIKit puts the origin at the top-left, so it's very easy to get off-by-a-ring errors that look plausible enough to blame anything but the transformation method.
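The fix is a single y-flip, easier to reason about written out than debugged visually. A sketch of the conversion (on-device this is a few lines of CGRect math, e.g. `VNImageRectForNormalizedRect` plus the flip; the Python here just mirrors the arithmetic):

```python
def vision_to_uikit(box, img_w, img_h):
    """box = (x, y, w, h) in Vision's normalised space, origin bottom-left.
    Returns (x, y, w, h) in pixels, origin top-left."""
    x, y, w, h = box
    return (x * img_w,
            (1.0 - y - h) * img_h,  # flip: Vision's y grows upward from the bottom
            w * img_w,
            h * img_h)

# A box hugging Vision's bottom edge lands at the bottom of the UIKit image too.
assert vision_to_uikit((0.0, 0.0, 0.5, 0.25), 400, 800) == (0.0, 600.0, 200.0, 200.0)
```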

Once the coordinates line up, the math looks simple. The scoring gauge exists for a reason though: the detected bullet hole is smaller than the bullet that made it, because paper tears and gets pushed inwards. CoreML returns a bounding box around the visibly torn paper, not the bullet, so for scoring I need the centre of the hole plus the bullet's radius - the furthest point the bullet reached.

### Bullet radius [¶](#bullet-radius)

A .22 bullet is 0.22" across (duh), and an NSRA .22 card is 2.05", so geometrically the bullet radius should be 10.87% of the bull diameter.

The paper doesn't tear that cleanly though. I also did terribly in both physics and machine learning in the past, so after re-reading both papers I gave up and started tuning the multiplier empirically - that is, changing the constant and re-running tests on cards I had scored manually. 30% is what worked for me (so 14.13% of the bull's diameter), but I'd love to learn a proper way to do this - please [drop me a message](mailto:nsra@drobinin.com) if you know it.
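As a sketch, the whole correction is one constant on top of the geometry - the 10.87% ratio and the 30% fudge are the numbers above; the function name and pixel values are illustrative:

```python
GEOMETRIC_RATIO = 0.1087  # bullet radius as a fraction of the bull diameter
FUDGE = 1.30              # the empirically tuned 30% on top

def reach_radius(bull_diameter_px):
    """Scoring radius added around the detected hole centre."""
    return bull_diameter_px * GEOMETRIC_RATIO * FUDGE

# On a 200px bull, every shot scores as if it reached ~28px past its centre.
r = reach_radius(200)
assert abs(r - 200 * 0.14131) < 1e-9
```

Re-tuning means changing `FUDGE` and re-running the test suite against manually scored cards.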

![The lower NSRA card from earlier with each bullet hole annotated: a red rectangle tightly bounds the torn paper region of every hole (the bounding box CoreML detects), and a smaller orange circle sits inside each rectangle at the bullet's actual centre (the reach used for scoring).](https://drobinin.com/assets/notch-bbox-vs-reach.jpg)

Red rectangles around each torn region are what CoreML detects; the yellow circles are the bullet placements used for scoring.

## Beyond the gauge [¶](#beyond-the-gauge)

Six months in, I still think I am bad at shooting, but now most of the time I know why - posture, trigger pressure, or breathing through the shot. These are things you feel rather than measure though, which is why beginners shoot grouping cards in the first place. The shape of the group on a card tells you a lot - from common issues like trigger pull and breathing through the shot to issues with the rifle[\[6\]](#fn6).

![A target bull with concentric scoring rings. A tight cluster of about ten white holes sits low-right of centre, touching the 8 and 9 rings.](https://drobinin.com/assets/notch-group-pull-trigger.png)

Tight cluster, low-right of centre - trigger pull.

![A target bull with concentric scoring rings. Shots form a vertical string running from above the 10 ring down through the centre to below it.](https://drobinin.com/assets/notch-group-breathing.png)

Vertical string through the centre - breathing through the shot.

Competition cards are shot bull-by-bull in a known order, so once you know which bull went first you can plot accuracy against position to spot trends. I often see myself drifting in the middle and then tightening back up at the end, when I notice the card is running out.

I embarked on this quest trying to retire the gauge (and to stop bringing home piles of shot cardboard), but by the time the scoring worked I had got far more interested in automating not only the scoring but the feedback - after a couple of months' worth of scored cards I could stack all shots on a cumulative heat map to see trends, or compare my performance across all four communal rifles.

I could even finally prove that eating a doughnut minutes before my turn drops my performance by 7 points out of 100 on average - this one might be placebo, but my working theory is that it raises my sugar levels (as you can tell, I am bad not only at physics but also at biology - I am doing my best though).

I ended up wrapping it all up as [a small offline-first app](https://apps.apple.com/app/apple-store/id6747980153?pt=126702974&ct=drobinin.com-brass&mt=8) - originally for my mates back at the club, but really for anyone keen to make their ~~cooking~~ shooting routine a bit more fun. I did learn that most of the world doesn't care much for NSRA targets though, so I keep slowly adding support for other disciplines and cards.

![A short loop of the Notch app on iPhone: a competition card photographed from above, bulls auto-detected, holes localised, and scores written next to each bull.](https://drobinin.com/assets/notch-demo.gif)

Notch on iPhone, scoring a competition card.

* * *

I left for Canada before I felt confident enough to go deer stalking.

Somebody else lies on that mat[\[7\]](#fn7) in Prestonpans now on Monday and Wednesday evenings, ducking the same beams and counting breaths between cards. The wooden box of scoring gauges is still on the table - most of the lads said it's not the first time someone has reckoned they could do better than the good ol' brass plug.

The scoring gauge's real job is settling disputes. When two shooters disagree on a borderline shot, the score is whatever the gauge says it is, because that's the rule. Neither of them cares for my state-of-the-art computer-vision models.

The chances are, you've got your own equivalent somewhere - a tool that's been doing its job since before you were around, and that would be silly to try to replace.

We might fail to retire them. But I aspire to build things that end up on somebody else's table twenty years from now, lasting long enough that somebody's silly enough to try to replace them too.

*Fancy adding an on-device ML to your app? [I can help →](https://drobinin.com/consulting/coreml-on-device-ml/?ref=nsra)*

* * *

1. Best gourmet pastries this side of the pond (and they also do doughnuts!). [↩︎](#fnref1)
2. The Firearms Act lets a landowner hand you one of theirs if they "supervise" you using it, which is how folks have hunted on their estates for centuries (on that note, it's deer stalking - hunting is for rich twats on horses, shooting is for rich twats in tweeds). [↩︎](#fnref2)
3. I shot myself in the foot too many times writing code, imagine what I could do with a firearm. [↩︎](#fnref3)
4. Compared to NSRA, ISSF targets have differently sized bulls, a different number of rings, different ring markings, and a different background  
   ![ISSF Air Rifle target, Guvava, CC BY-SA 3.0 via Wikimedia Commons](https://upload.wikimedia.org/wikipedia/commons/d/da/AR_target_paper.jpg) [↩︎](#fnref4)
5. Z. Ali et al., "Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards", *AI*, vol. 5, no. 1, 72-90, 2024. [10.3390/ai5010005](https://www.mdpi.com/2673-2688/5/1/5). [↩︎](#fnref5)
6. A clean tight group sitting a few rings off the bull usually means you'd hit the bull perfectly if it wasn't for the tune - an explanation I find very comforting (and my mates disagree with). [↩︎](#fnref6)
7. I'll be back to it eventually.  
   ![The shooting range in Prestonpans: a long indoor lane with shooting mats lined up in front of paper-target stands.](https://drobinin.com/assets/notch-range-prestonpans.jpg) [↩︎](#fnref7)

---

## [HN-TITLE] 11. Your hex editor should color-code bytes

- **Source**: [https://simonomi.dev/blog/color-code-your-bytes/](https://simonomi.dev/blog/color-code-your-bytes/)
- **Site**: simonomi.dev
- **Submitter**: tobr (Hacker News)
- **Published**: 2026-03-31
- **HN activity**: 518 points · [144 comments](https://news.ycombinator.com/item?id=47846688)
- **Length**: 6.2K words (~28 min read)
- **Language**: en

[![an icon meant to depict a blog](https://simonomi.dev/images/blog.svg)](https://simonomi.dev/blog "blog") [![an icon of a home](https://simonomi.dev/images/home.svg)](https://simonomi.dev/ "home")

alice pellerin • 2026-03-31

too often, i see hex editors[1](#footnote:1) that look like this:

```
00000000  00 00 02 00  28 00 00 00  88 15 00 00  C4 01 00 00  ⋄⋄•⋄(⋄⋄⋄×•⋄⋄×•⋄⋄
00000010  14 00 00 00  03 00 00 00  00 01 00 00  03 00 00 00  •⋄⋄⋄•⋄⋄⋄⋄•⋄⋄•⋄⋄⋄
00000020  3C 00 00 00  C4 0A 00 00  50 00 00 00  18 00 00 00  <⋄⋄⋄×⏎⋄⋄P⋄⋄⋄•⋄⋄⋄
00000030  14 00 00 10  00 00 00 00  18 00 00 20  00 00 00 00  •⋄⋄•⋄⋄⋄⋄•⋄⋄ ⋄⋄⋄⋄
00000040  20 00 00 30  00 00 00 00  51 00 00 00  48 00 00 00   ⋄⋄0⋄⋄⋄⋄Q⋄⋄⋄H⋄⋄⋄
00000050  10 00 00 80  00 00 00 00  00 00 00 A0  00 00 00 00  •⋄⋄×⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄⋄
00000060  01 00 00 A0  01 00 00 00  02 00 00 A0  02 00 00 00  •⋄⋄×•⋄⋄⋄•⋄⋄×•⋄⋄⋄
00000070  03 00 00 A0  03 00 00 00  04 00 00 A0  04 00 00 00  •⋄⋄×•⋄⋄⋄•⋄⋄×•⋄⋄⋄
00000080  05 00 00 A0  05 00 00 00  06 00 00 A0  06 00 00 00  •⋄⋄×•⋄⋄⋄•⋄⋄×•⋄⋄⋄
00000090  20 00 00 30  00 00 00 00  53 00 00 00  00 DE 00 00   ⋄⋄0⋄⋄⋄⋄S⋄⋄⋄⋄×⋄⋄
000000a0  5D FA 01 44  E1 3A 9A 0F  52 00 00 00  FC 14 00 00  ]×•D×:×•R⋄⋄⋄×•⋄⋄
000000b0  1B 20 2A 2B  00 80 00 00  00 80 00 00  00 80 00 00  • *+⋄×⋄⋄⋄×⋄⋄⋄×⋄⋄
000000c0  FF 7F 00 00  00 00 33 52  00 00 00 00  29 10 15 10  ╳•⋄⋄⋄⋄3R⋄⋄⋄⋄)•••
000000d0  80 00 1F 00  03 00 00 00  02 00 00 00  40 14 22 23  ×⋄•⋄•⋄⋄⋄•⋄⋄⋄@•"#
000000e0  03 00 00 00  06 00 00 00  23 00 9D 05  6B FA C0 05  •⋄⋄⋄•⋄⋄⋄#⋄×•k××•
000000f0  C8 03 00 00  14 22 23 14  05 00 00 00  2E 00 9E 06  ×•⋄⋄•"#••⋄⋄⋄.⋄×•
```

every time i do, i feel bad for the poor person having to use it (especially if that person is me!). a plain list of bytes makes it hard to notice interesting things in the data. go ahead, try to find the single `C0` in these bytes:

```
00000000  15 29 21 25  03 2F 2E 2B  15 11 24 3F  10 14 3B 13  •)!%•/.+••$?••;•
00000001  32 25 09 01  10 02 01 23  26 1E 25 2D  24 2F 23 3E  2%␣••••#&•%-$/#>
00000002  05 0F 33 2D  18 29 3E 1E  16 3B 29 0D  24 0B 3E 38  ••3-•)>••;)␍$•>8
00000003  33 3C 1E 2C  28 31 C0 1D  11 32 14 05  10 17 3F 01  3<•,(1×••2••••?•
00000004  1E 32 0A 14  2B 2F 0B 14  3E 27 39 0A  17 23 1B 39  •2⏎•+/••>'9⏎•#•9
00000005  18 0B 3B 13  25 14 2C 3B  33 3C 19 10  21 0F 2C 34  ••;•%•,;3<••!•,4
00000006  2F 0C 1D 2C  2E 22 11 28  0D 0A 1F 37  27 39 35 21  /••,."•(␍⏎•7'95!
00000007  23 39 21 2B  37 23 28 16  30 28 02 04  25 22 37 1F  #9!+7#(•0(••%"7•
00000008  36 2F 2D 25  12 25 01 31  3B 39 2D 35  26 37 30 2A  6/-%•%•1;9-5&70*
00000009  06 0D 11 1F  25 0A 1E 29  15 0B 0A 2A  2E 2C 21 16  •␍••%⏎•)••⏎*.,!•
0000000a  1D 37 0F 16  12 03 2C 02  0B 22 24 11  1A 3B 0D 0B  •7••••,••"$••;␍•
0000000b  0D 13 30 2D  3B 15 05 15  32 19 20 30  3C 0E 3D 0B  ␍•0-;•••2• 0<•=•
0000000c  17 24 22 3E  1E 22 18 0D  21 06 29 38  3E 20 3B 12  •$">•"•␍!•)8> ;•
0000000d  06 1F 19 17  29 35 1E 3B  1E 01 31 08  13 0C 27 20  ••••)5•;••1•••' 
0000000e  08 24 2E 32  16 06 1F 3D  35 35 19 16  02 07 31 13  •$.2•••=55••••1•
0000000f  31 33 30 36  14 32 07 05  05 34 19 0B  18 16 12 3C  1306•2•••4•••••<
```

compare that to one with colors:

```
00000000  37 2D 08 13  0D 0B 18 1D  02 1A 2D 12  2A 0D 0F 27  7-••␍•••••-•*␍•'
00000001  04 2A 25 32  0F 17 32 11  2F 2A 2A 0A  0A 16 04 1D  •*%2••2•/**⏎⏎•••
00000002  32 13 09 01  2B 26 1A 30  3D 26 13 39  09 0D 38 3E  2•␣•+&•0=&•9␣␍8>
00000003  0A 0D 1D 0B  36 30 02 36  0E 0B 2F 09  26 1E 33 03  ⏎␍••60•6••/␣&•3•
00000004  3C 3C 08 0A  1E 36 12 11  1B 17 05 09  0B 37 0C 0E  <<•⏎•6•••••␣•7••
00000005  31 05 09 17  2D 1D 05 16  25 03 3E 0A  1A 01 0C 2B  1•␣•-•••%•>⏎•••+
00000006  13 37 17 14  37 03 18 34  2D 03 30 11  2B 19 04 0B  •7••7••4-•0•+•••
00000007  04 2A 18 26  21 25 3F 23  1D 0F 2F 2B  35 0C 09 37  •*•&!%?#••/+5•␣7
00000008  25 33 19 1C  12 1E 2E 38  3A 3A 3C 28  39 0A 30 23  %3••••.8::<(9⏎0#
00000009  21 08 09 24  0B 0E 13 26  04 30 06 20  10 18 15 3C  !•␣$•••&•0• •••<
0000000a  10 3C 30 34  28 28 1D 31  22 23 22 38  0E 12 25 15  •<04((•1"#"8••%•
0000000b  3B 1F 30 0D  26 0E 15 32  1C 2B 12 1A  32 1C 02 07  ;•0␍&••2•+••2•••
0000000c  35 2E 06 13  1F 33 3D 16  05 1C 2A 0F  34 34 21 26  5.•••3=•••*•44!&
0000000d  0C 17 3D 02  27 39 21 17  3F 07 1A 2F  38 0D 2D 1E  ••=•'9!•?••/8␍-•
0000000e  32 0C C0 14  0E 20 25 0E  2E 2D 0D 21  27 13 2C 07  2•×•• %•.-␍!'•,•
0000000f  14 0A 20 31  15 13 2C 3B  0F 12 1A 2D  0C 11 32 11  •⏎ 1••,;•••-••2•
```

it’s much easier to pick out the unique byte when it’s a different color! human brains are really good at spotting visual patterns—given the right format
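one plausible way to bucket bytes for coloring - a sketch assuming the common null / printable / control / high split (the class names and boundaries here are my guesses, not necessarily this editor's scheme):

```python
def byte_class(b):
    """bucket a byte value into a color class."""
    if b == 0x00:
        return "null"
    if 0x20 <= b <= 0x7E:
        return "ascii"    # printable: shows up literally in the text column
    if b < 0x20:
        return "control"  # low control bytes
    return "high"         # 0x7F and above

assert byte_class(0x00) == "null"
assert byte_class(0x41) == "ascii"    # 'A'
assert byte_class(0x0A) == "control"  # newline
assert byte_class(0xC0) == "high"     # the needle from the example above
```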

here are a few more examples:

### example 1

no color

```
00000000  4B 50 53 00  0A 00 00 00  0C 00 00 00  01 00 00 00  KPS⋄⏎⋄⋄⋄•⋄⋄⋄•⋄⋄⋄
00000010  00 00 00 00  B4 00 00 00  46 00 00 00  64 00 00 00  ⋄⋄⋄⋄×⋄⋄⋄F⋄⋄⋄d⋄⋄⋄
00000020  46 00 00 00  02 00 00 00  00 00 00 00  DC 00 00 00  F⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄
00000030  50 00 00 00  A0 00 00 00  50 00 00 00  03 00 00 00  P⋄⋄⋄×⋄⋄⋄P⋄⋄⋄•⋄⋄⋄
00000040  00 00 00 00  FA 00 00 00  5A 00 00 00  B4 00 00 00  ⋄⋄⋄⋄×⋄⋄⋄Z⋄⋄⋄×⋄⋄⋄
00000050  5A 00 00 00  04 00 00 00  00 00 00 00  18 01 00 00  Z⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄••⋄⋄
00000060  64 00 00 00  C8 00 00 00  64 00 00 00  05 00 00 00  d⋄⋄⋄×⋄⋄⋄d⋄⋄⋄•⋄⋄⋄
00000070  00 00 00 00  4A 01 00 00  78 00 00 00  F0 00 00 00  ⋄⋄⋄⋄J•⋄⋄x⋄⋄⋄×⋄⋄⋄
00000080  78 00 00 00  06 00 00 00  00 00 00 00  90 01 00 00  x⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄×•⋄⋄
00000090  8C 00 00 00  18 01 00 00  8C 00 00 00  07 00 00 00  ×⋄⋄⋄••⋄⋄×⋄⋄⋄•⋄⋄⋄
000000a0  00 00 00 00  F4 01 00 00  B4 00 00 00  68 01 00 00  ⋄⋄⋄⋄×•⋄⋄×⋄⋄⋄h•⋄⋄
000000b0  B4 00 00 00  08 00 00 00  00 00 00 00  58 02 00 00  ×⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄X•⋄⋄
000000c0  DC 00 00 00  B8 01 00 00  DC 00 00 00  09 00 00 00  ×⋄⋄⋄×•⋄⋄×⋄⋄⋄␣⋄⋄⋄
000000d0  E7 03 00 00  E7 03 00 00  00 00 00 00  E7 03 00 00  ×•⋄⋄×•⋄⋄⋄⋄⋄⋄×•⋄⋄
000000e0  E7 03 00 00  00 00 00 00  00 00 00 00  00 00 00 00  ×•⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
000000f0  00 00 00 00  00 00 00 00  00 00 00 00               ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
```

color

```
00000000  4B 50 53 00  0A 00 00 00  0C 00 00 00  01 00 00 00  KPS⋄⏎⋄⋄⋄•⋄⋄⋄•⋄⋄⋄
00000010  00 00 00 00  B4 00 00 00  46 00 00 00  64 00 00 00  ⋄⋄⋄⋄×⋄⋄⋄F⋄⋄⋄d⋄⋄⋄
00000020  46 00 00 00  02 00 00 00  00 00 00 00  DC 00 00 00  F⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄
00000030  50 00 00 00  A0 00 00 00  50 00 00 00  03 00 00 00  P⋄⋄⋄×⋄⋄⋄P⋄⋄⋄•⋄⋄⋄
00000040  00 00 00 00  FA 00 00 00  5A 00 00 00  B4 00 00 00  ⋄⋄⋄⋄×⋄⋄⋄Z⋄⋄⋄×⋄⋄⋄
00000050  5A 00 00 00  04 00 00 00  00 00 00 00  18 01 00 00  Z⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄••⋄⋄
00000060  64 00 00 00  C8 00 00 00  64 00 00 00  05 00 00 00  d⋄⋄⋄×⋄⋄⋄d⋄⋄⋄•⋄⋄⋄
00000070  00 00 00 00  4A 01 00 00  78 00 00 00  F0 00 00 00  ⋄⋄⋄⋄J•⋄⋄x⋄⋄⋄×⋄⋄⋄
00000080  78 00 00 00  06 00 00 00  00 00 00 00  90 01 00 00  x⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄×•⋄⋄
00000090  8C 00 00 00  18 01 00 00  8C 00 00 00  07 00 00 00  ×⋄⋄⋄••⋄⋄×⋄⋄⋄•⋄⋄⋄
000000a0  00 00 00 00  F4 01 00 00  B4 00 00 00  68 01 00 00  ⋄⋄⋄⋄×•⋄⋄×⋄⋄⋄h•⋄⋄
000000b0  B4 00 00 00  08 00 00 00  00 00 00 00  58 02 00 00  ×⋄⋄⋄•⋄⋄⋄⋄⋄⋄⋄X•⋄⋄
000000c0  DC 00 00 00  B8 01 00 00  DC 00 00 00  09 00 00 00  ×⋄⋄⋄×•⋄⋄×⋄⋄⋄␣⋄⋄⋄
000000d0  E7 03 00 00  E7 03 00 00  00 00 00 00  E7 03 00 00  ×•⋄⋄×•⋄⋄⋄⋄⋄⋄×•⋄⋄
000000e0  E7 03 00 00  00 00 00 00  00 00 00 00  00 00 00 00  ×•⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
000000f0  00 00 00 00  00 00 00 00  00 00 00 00               ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
```

this file starts with the [magic bytes](https://en.wikipedia.org/wiki/List_of_file_signatures) `KPS`, then a bunch of ([little-endian](https://en.wikipedia.org/wiki/Endianness)) 32-bit integers that range from 0 to 999 (`0x3E7`). the colors make it quick to recognize that every 32-bit integer is relatively small, as the two high bytes are always `00 00`. if you look closely, you may notice other patterns, like the numbers counting up every `0x18` bytes starting at `0xC`
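the same decoding is a one-liner with python's `struct` module - the bytes below are the first row of the dump, and the field meanings are only what the dump itself shows, not the real KPS layout:

```python
import struct

# first sixteen bytes of the dump: magic, then three little-endian uint32s
data = bytes.fromhex("4b505300" "0a000000" "0c000000" "01000000")

magic = data[:3]
values = struct.unpack_from("<3I", data, 4)

assert magic == b"KPS"
assert values == (10, 12, 1)  # small numbers: the two high bytes are always 00 00
```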

if you're curious about this particular file format, [the code that parses it](https://github.com/simonomi/carbonizer/blob/6c311b6a2801576033cd42a8ba95461cee2ac6d1/Sources/Carbonizer/files/ff1/KPS.swift#L4-L25) is pretty simple, even if you're not a programmer. there's even a [wiki page](https://simonomi.dev/fftechwiki/file-formats/KPS/) for the data it represents, if you're into [Fossil Fighters](https://en.wikipedia.org/wiki/Fossil_Fighters)

### example 2

no color

```
00000000  44 41 4C 00  59 06 00 00  F4 07 00 00  F5 01 00 00  DAL⋄Y•⋄⋄×•⋄⋄×•⋄⋄
00000010  14 00 00 00  E8 07 00 00  08 08 00 00  44 08 00 00  •⋄⋄⋄×•⋄⋄••⋄⋄D•⋄⋄
00000020  84 08 00 00  C8 08 00 00  04 09 00 00  40 09 00 00  ×•⋄⋄×•⋄⋄•␣⋄⋄@␣⋄⋄
00000030  7C 09 00 00  B8 09 00 00  F8 09 00 00  34 0A 00 00  |␣⋄⋄×␣⋄⋄×␣⋄⋄4⏎⋄⋄
00000040  70 0A 00 00  AC 0A 00 00  EC 0A 00 00  30 0B 00 00  p⏎⋄⋄×⏎⋄⋄×⏎⋄⋄0•⋄⋄
00000050  6C 0B 00 00  A8 0B 00 00  E8 0B 00 00  24 0C 00 00  l•⋄⋄×•⋄⋄×•⋄⋄$•⋄⋄
00000060  60 0C 00 00  9C 0C 00 00  D8 0C 00 00  14 0D 00 00  `•⋄⋄×•⋄⋄×•⋄⋄•␍⋄⋄
00000070  50 0D 00 00  8C 0D 00 00  CC 0D 00 00  08 0E 00 00  P␍⋄⋄×␍⋄⋄×␍⋄⋄••⋄⋄
00000080  48 0E 00 00  84 0E 00 00  C4 0E 00 00  08 0F 00 00  H•⋄⋄×•⋄⋄×•⋄⋄••⋄⋄
00000090  44 0F 00 00  80 0F 00 00  C0 0F 00 00  04 10 00 00  D•⋄⋄×•⋄⋄×•⋄⋄••⋄⋄
000000a0  40 10 00 00  80 10 00 00  C4 10 00 00  00 11 00 00  @•⋄⋄×•⋄⋄×•⋄⋄⋄•⋄⋄
000000b0  3C 11 00 00  7C 11 00 00  B8 11 00 00  F4 11 00 00  <•⋄⋄|•⋄⋄×•⋄⋄×•⋄⋄
000000c0  34 12 00 00  70 12 00 00  B0 12 00 00  F4 12 00 00  4•⋄⋄p•⋄⋄×•⋄⋄×•⋄⋄
000000d0  30 13 00 00  70 13 00 00  B4 13 00 00  F0 13 00 00  0•⋄⋄p•⋄⋄×•⋄⋄×•⋄⋄
000000e0  2C 14 00 00  68 14 00 00  A4 14 00 00  E4 14 00 00  ,•⋄⋄h•⋄⋄×•⋄⋄×•⋄⋄
000000f0  20 15 00 00  5C 15 00 00  9C 15 00 00  E0 15 00 00   •⋄⋄\•⋄⋄×•⋄⋄×•⋄⋄
00000100  1C 16 00 00  58 16 00 00  98 16 00 00  DC 16 00 00  ••⋄⋄X•⋄⋄×•⋄⋄×•⋄⋄
00000110  18 17 00 00  58 17 00 00  9C 17 00 00  D8 17 00 00  ••⋄⋄X•⋄⋄×•⋄⋄×•⋄⋄
00000120  14 18 00 00  54 18 00 00  90 18 00 00  D0 18 00 00  ••⋄⋄T•⋄⋄×•⋄⋄×•⋄⋄
00000130  14 19 00 00  50 19 00 00  8C 19 00 00  C8 19 00 00  ••⋄⋄P•⋄⋄×•⋄⋄×•⋄⋄
00000140  04 1A 00 00  40 1A 00 00  7C 1A 00 00  B8 1A 00 00  ••⋄⋄@•⋄⋄|•⋄⋄×•⋄⋄
00000150  F4 1A 00 00  30 1B 00 00  6C 1B 00 00  AC 1B 00 00  ×•⋄⋄0•⋄⋄l•⋄⋄×•⋄⋄
00000160  F0 1B 00 00  2C 1C 00 00  68 1C 00 00  A8 1C 00 00  ×•⋄⋄,•⋄⋄h•⋄⋄×•⋄⋄
00000170  EC 1C 00 00  28 1D 00 00  68 1D 00 00  AC 1D 00 00  ×•⋄⋄(•⋄⋄h•⋄⋄×•⋄⋄
00000180  E8 1D 00 00  28 1E 00 00  6C 1E 00 00  A8 1E 00 00  ×•⋄⋄(•⋄⋄l•⋄⋄×•⋄⋄
00000190  E8 1E 00 00  2C 1F 00 00  68 1F 00 00  A8 1F 00 00  ×•⋄⋄,•⋄⋄h•⋄⋄×•⋄⋄
000001a0  EC 1F 00 00  28 20 00 00  68 20 00 00  AC 20 00 00  ×•⋄⋄( ⋄⋄h ⋄⋄× ⋄⋄
000001b0  E8 20 00 00  30 21 00 00  6C 21 00 00  A8 21 00 00  × ⋄⋄0!⋄⋄l!⋄⋄×!⋄⋄
000001c0  E4 21 00 00  24 22 00 00  68 22 00 00  A4 22 00 00  ×!⋄⋄$"⋄⋄h"⋄⋄×"⋄⋄
000001d0  E0 22 00 00  1C 23 00 00  5C 23 00 00  A0 23 00 00  ×"⋄⋄•#⋄⋄\#⋄⋄×#⋄⋄
000001e0  DC 23 00 00  18 24 00 00  58 24 00 00  9C 24 00 00  ×#⋄⋄•$⋄⋄X$⋄⋄×$⋄⋄
000001f0  D8 24 00 00  18 25 00 00  54 25 00 00  94 25 00 00  ×$⋄⋄•%⋄⋄T%⋄⋄×%⋄⋄
00000200  D8 25 00 00  14 26 00 00  54 26 00 00  90 26 00 00  ×%⋄⋄•&⋄⋄T&⋄⋄×&⋄⋄
...
```

color

```
00000000  44 41 4C 00  59 06 00 00  F4 07 00 00  F5 01 00 00  DAL⋄Y•⋄⋄×•⋄⋄×•⋄⋄
00000010  14 00 00 00  E8 07 00 00  08 08 00 00  44 08 00 00  •⋄⋄⋄×•⋄⋄••⋄⋄D•⋄⋄
00000020  84 08 00 00  C8 08 00 00  04 09 00 00  40 09 00 00  ×•⋄⋄×•⋄⋄•␣⋄⋄@␣⋄⋄
00000030  7C 09 00 00  B8 09 00 00  F8 09 00 00  34 0A 00 00  |␣⋄⋄×␣⋄⋄×␣⋄⋄4⏎⋄⋄
00000040  70 0A 00 00  AC 0A 00 00  EC 0A 00 00  30 0B 00 00  p⏎⋄⋄×⏎⋄⋄×⏎⋄⋄0•⋄⋄
00000050  6C 0B 00 00  A8 0B 00 00  E8 0B 00 00  24 0C 00 00  l•⋄⋄×•⋄⋄×•⋄⋄$•⋄⋄
00000060  60 0C 00 00  9C 0C 00 00  D8 0C 00 00  14 0D 00 00  `•⋄⋄×•⋄⋄×•⋄⋄•␍⋄⋄
00000070  50 0D 00 00  8C 0D 00 00  CC 0D 00 00  08 0E 00 00  P␍⋄⋄×␍⋄⋄×␍⋄⋄••⋄⋄
00000080  48 0E 00 00  84 0E 00 00  C4 0E 00 00  08 0F 00 00  H•⋄⋄×•⋄⋄×•⋄⋄••⋄⋄
00000090  44 0F 00 00  80 0F 00 00  C0 0F 00 00  04 10 00 00  D•⋄⋄×•⋄⋄×•⋄⋄••⋄⋄
000000a0  40 10 00 00  80 10 00 00  C4 10 00 00  00 11 00 00  @•⋄⋄×•⋄⋄×•⋄⋄⋄•⋄⋄
000000b0  3C 11 00 00  7C 11 00 00  B8 11 00 00  F4 11 00 00  <•⋄⋄|•⋄⋄×•⋄⋄×•⋄⋄
000000c0  34 12 00 00  70 12 00 00  B0 12 00 00  F4 12 00 00  4•⋄⋄p•⋄⋄×•⋄⋄×•⋄⋄
000000d0  30 13 00 00  70 13 00 00  B4 13 00 00  F0 13 00 00  0•⋄⋄p•⋄⋄×•⋄⋄×•⋄⋄
000000e0  2C 14 00 00  68 14 00 00  A4 14 00 00  E4 14 00 00  ,•⋄⋄h•⋄⋄×•⋄⋄×•⋄⋄
000000f0  20 15 00 00  5C 15 00 00  9C 15 00 00  E0 15 00 00   •⋄⋄\•⋄⋄×•⋄⋄×•⋄⋄
00000100  1C 16 00 00  58 16 00 00  98 16 00 00  DC 16 00 00  ••⋄⋄X•⋄⋄×•⋄⋄×•⋄⋄
00000110  18 17 00 00  58 17 00 00  9C 17 00 00  D8 17 00 00  ••⋄⋄X•⋄⋄×•⋄⋄×•⋄⋄
00000120  14 18 00 00  54 18 00 00  90 18 00 00  D0 18 00 00  ••⋄⋄T•⋄⋄×•⋄⋄×•⋄⋄
00000130  14 19 00 00  50 19 00 00  8C 19 00 00  C8 19 00 00  ••⋄⋄P•⋄⋄×•⋄⋄×•⋄⋄
00000140  04 1A 00 00  40 1A 00 00  7C 1A 00 00  B8 1A 00 00  ••⋄⋄@•⋄⋄|•⋄⋄×•⋄⋄
00000150  F4 1A 00 00  30 1B 00 00  6C 1B 00 00  AC 1B 00 00  ×•⋄⋄0•⋄⋄l•⋄⋄×•⋄⋄
00000160  F0 1B 00 00  2C 1C 00 00  68 1C 00 00  A8 1C 00 00  ×•⋄⋄,•⋄⋄h•⋄⋄×•⋄⋄
00000170  EC 1C 00 00  28 1D 00 00  68 1D 00 00  AC 1D 00 00  ×•⋄⋄(•⋄⋄h•⋄⋄×•⋄⋄
00000180  E8 1D 00 00  28 1E 00 00  6C 1E 00 00  A8 1E 00 00  ×•⋄⋄(•⋄⋄l•⋄⋄×•⋄⋄
00000190  E8 1E 00 00  2C 1F 00 00  68 1F 00 00  A8 1F 00 00  ×•⋄⋄,•⋄⋄h•⋄⋄×•⋄⋄
000001a0  EC 1F 00 00  28 20 00 00  68 20 00 00  AC 20 00 00  ×•⋄⋄( ⋄⋄h ⋄⋄× ⋄⋄
000001b0  E8 20 00 00  30 21 00 00  6C 21 00 00  A8 21 00 00  × ⋄⋄0!⋄⋄l!⋄⋄×!⋄⋄
000001c0  E4 21 00 00  24 22 00 00  68 22 00 00  A4 22 00 00  ×!⋄⋄$"⋄⋄h"⋄⋄×"⋄⋄
000001d0  E0 22 00 00  1C 23 00 00  5C 23 00 00  A0 23 00 00  ×"⋄⋄•#⋄⋄\#⋄⋄×#⋄⋄
000001e0  DC 23 00 00  18 24 00 00  58 24 00 00  9C 24 00 00  ×#⋄⋄•$⋄⋄X$⋄⋄×$⋄⋄
000001f0  D8 24 00 00  18 25 00 00  54 25 00 00  94 25 00 00  ×$⋄⋄•%⋄⋄T%⋄⋄×%⋄⋄
00000200  D8 25 00 00  14 26 00 00  54 26 00 00  90 26 00 00  ×%⋄⋄•&⋄⋄T&⋄⋄×&⋄⋄
...
```

this excerpt, starting at `0x14`, has a long series of increasing 32-bit integers ([little-endian](https://en.wikipedia.org/wiki/Endianness) again). each one is an index to a later point in the file, to [a structure](https://github.com/simonomi/carbonizer/blob/6c311b6a2801576033cd42a8ba95461cee2ac6d1/Sources/Carbonizer/files/ff1/DAL.swift#L20-L107) usually about `0x3C` bytes long. the roughly-evenly-spaced indices make for some very pretty rainbow gradients
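a quick sketch of reading that index table - the bytes are the first four entries copied from the dump, starting at `0x14`:

```python
import struct

# first four table entries: little-endian uint32 offsets into the file
data = bytes.fromhex("e8070000" "08080000" "44080000" "84080000")
offsets = struct.unpack("<4I", data)

assert offsets == (0x7E8, 0x808, 0x844, 0x884)
assert all(a < b for a, b in zip(offsets, offsets[1:]))  # strictly increasing
```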

### example 3

no color

```
...
00000030  0F 80 00 00  00 01 C1 82  82 83 01 05  04 82 03 82  •×⋄⋄⋄•××××•••×•×
00000040  0F 82 07 C2  0C C2 0B 82  0A 0D 08 02  09 C0 0E 06  •×•×•×•×⏎␍••␣×••
00000050  56 05 E8 43  01 64 52 F5  A4 8D A1 33  D5 98 BF C6  V•×C•dR××××3××××
00000060  63 EB 4C 8C  C6 C3 F8 1A  6A 2A 46 2B  C5 F8 15 F3  c×L××××•j*F+××•×
00000070  60 42 8A 71  E6 56 0C 2A  D5 4C 0C 2B  5F 31 A9 18  `B×q×V•*×L•+_1×•
00000080  4C 8C 55 CC  5B 30 C6 D6  18 37 86 7D  BB C3 8F CD  L×U×[0××•7×}××××
00000090  1E B9 BB BB  91 FA 22 23  9E 71 7A 8B  35 6F F3 84  •×××××"#×qz×5o××
000000a0  38 DE B7 C9  58 76 A4 9C  D7 C5 F8 63  CF A2 B4 BE  8×××Xv×××××c××××
000000b0  B2 45 BC 8D  F7 6A 35 EF  E2 B9 CD A7  46 F7 F9 AD  ×E×××j5×××××F×××
000000c0  7F 6F D7 BC  72 DD DB 9D  6B DE 8F EE  C6 35 EF B7  •o××r×××k××××5××
000000d0  AE 6B E4 9A  AE E9 9B 6B  AF 23 8E 66  B0 2D 22 47  ×k×××××k×#×f×-"G
```

color

```
...
00000030  0F 80 00 00  00 01 C1 82  82 83 01 05  04 82 03 82  •×⋄⋄⋄•××××•••×•×
00000040  0F 82 07 C2  0C C2 0B 82  0A 0D 08 02  09 C0 0E 06  •×•×•×•×⏎␍••␣×••
00000050  56 05 E8 43  01 64 52 F5  A4 8D A1 33  D5 98 BF C6  V•×C•dR××××3××××
00000060  63 EB 4C 8C  C6 C3 F8 1A  6A 2A 46 2B  C5 F8 15 F3  c×L××××•j*F+××•×
00000070  60 42 8A 71  E6 56 0C 2A  D5 4C 0C 2B  5F 31 A9 18  `B×q×V•*×L•+_1×•
00000080  4C 8C 55 CC  5B 30 C6 D6  18 37 86 7D  BB C3 8F CD  L×U×[0××•7×}××××
00000090  1E B9 BB BB  91 FA 22 23  9E 71 7A 8B  35 6F F3 84  •×××××"#×qz×5o××
000000a0  38 DE B7 C9  58 76 A4 9C  D7 C5 F8 63  CF A2 B4 BE  8×××Xv×××××c××××
000000b0  B2 45 BC 8D  F7 6A 35 EF  E2 B9 CD A7  46 F7 F9 AD  ×E×××j5×××××F×××
000000c0  7F 6F D7 BC  72 DD DB 9D  6B DE 8F EE  C6 35 EF B7  •o××r×××k××××5××
000000d0  AE 6B E4 9A  AE E9 9B 6B  AF 23 8E 66  B0 2D 22 47  ×k×××××k×#×f×-"G
```

this data is compressed using a [Huffman code](https://en.wikipedia.org/wiki/Huffman_coding), specifically one compatible with [the Nintendo DS BIOS](https://problemkaputt.de/gbatek.htm#biosdecompressionfunctions). it starts with `0x20` bytes encoding the Huffman tree used, then `0x90` bytes of compressed bitstream—the actual compressed file contents

there's a big difference between the two parts that can be hard to notice without the help of colors. the tree mostly has bytes in the range `00`–`0F` (plus some low `80`s and `C0`s), but the bitstream has bytes evenly distributed throughout the entire range of `00`–`FF`

the bitstream is much more colorful and chaotic because good compression algorithms output data that looks visually random. ideally, any patterns you would've noticed in the data were already found by the algorithm, and then used to make the compressed output smaller
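
one way to quantify that visual randomness is Shannon entropy. this sketch uses illustrative stand-in data (not the actual file contents) to show the gap between a tree-like section and a bitstream-like one:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # bits per byte; 8.0 means indistinguishable from uniform random
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# illustrative stand-ins, not the actual file contents:
# a Huffman tree section reuses a handful of low byte values...
tree_like = bytes([0x00, 0x01, 0x02, 0x03, 0x82, 0xC1] * 16)
# ...while a good compressor's bitstream covers the whole 00-FF range
bitstream_like = bytes(range(256))

print(shannon_entropy(tree_like))       # ~2.58 bits/byte
print(shannon_entropy(bitstream_like))  # 8.0 bits/byte
```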

### example 4

no color

```
...
00000028  00 00 00 00  00 00 00 00  88 00 00 00  00 00 00 00  ⋄⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄⋄⋄⋄⋄
00000038  00 00 00 00  80 80 80 78  46 77 80 08  00 00 00 00  ⋄⋄⋄⋄×××xFw×•⋄⋄⋄⋄
00000048  00 00 00 00  88 44 68 12  21 55 46 74  00 00 00 00  ⋄⋄⋄⋄×Dh•!UFt⋄⋄⋄⋄
00000058  00 00 00 70  25 41 33 53  65 13 54 54  08 00 00 00  ⋄⋄⋄p%A3Se•TT•⋄⋄⋄
00000068  00 00 70 27  22 13 43 B7  9B 67 54 32  76 08 00 00  ⋄⋄p'"•C××gT2v•⋄⋄
00000078  00 00 26 22  76 76 98 BA  AA BA 59 21  44 75 00 00  ⋄⋄&"vv××××Y!Du⋄⋄
00000088  00 80 D2 71  99 AA 99 AA  A9 AB 99 88  48 43 85 00  ⋄××q××××××××HC×⋄
00000098  00 60 12 A5  A9 9A 99 A9  AA 99 99 CA  48 55 07 00  ⋄`•×××××××××HU•⋄
000000a8  00 38 42 B9  AA 99 9A A9  99 99 89 88  77 78 88 00  ⋄8B×××××××××wx×⋄
000000b8  00 36 86 AA  99 99 B9 AA  AA 99 78 78  77 46 75 00  ⋄6××××××××xxwFu⋄
000000c8  80 67 66 A9  99 A9 AA BB  BB AA 78 67  57 44 02 08  ×gf×××××××xgWD••
000000d8  80 23 45 98  A9 AB CB BB  BB AA 89 77  57 12 95 00  ×#E××××××××wW•×⋄
000000e8  58 2E 55 98  99 BA BB CC  BB AB 79 67  56 54 98 00  X.U×××××××ygVT×⋄
000000f8  50 52 87 AA  A9 BA BB BB  CB BB 89 66  56 55 97 00  PR×××××××××fVU×⋄
00000108  48 43 A5 AA  BA BB CC CC  CB 9A 88 66  55 34 84 00  HC×××××××××fU4×⋄
00000118  70 44 A8 99  B9 CB CC CC  AC 8A 56 45  55 33 05 08  pD××××××××VEU3••
00000128  00 77 CB A9  AA BC CC CC  BC 69 45 43  43 22 A5 08  ⋄w×××××××iECC"×•
00000138  80 67 A8 99  BA BB BC CC  AB 58 44 33  32 43 A8 00  ×g×××××××XD32C×⋄
00000148  00 34 74 A9  AA BB BB BB  7A 45 23 22  23 41 99 08  ⋄4t×××××zE#"#A×•
00000158  80 46 74 99  99 AA BA AC  7A 34 22 12  23 41 87 80  ×Ft×××××z4"•#A××
00000168  00 17 52 99  89 AA AA BB  58 34 23 21  E2 4E A7 09  ⋄•R×××××X4#!×N×␣
00000178  00 36 73 99  99 98 98 A9  68 35 22 12  12 4E A9 00  ⋄6s×××××h5"••N×⋄
00000188  70 44 88 87  99 88 78 88  66 45 32 21  E1 62 AA 07  pD××××x×fE2!×b×•
00000198  70 86 69 65  88 88 68 77  56 44 23 12  21 A7 0A 00  p×ie××hwVD#•!×⏎⋄
000001a8  00 90 57 52  85 77 77 66  66 44 33 D1  42 99 00 00  ⋄×WR×wwffD3×B×⋄⋄
000001b8  00 00 70 56  41 55 65 67  54 35 12 21  63 09 00 00  ⋄⋄pVAUegT5•!c␣⋄⋄
000001c8  00 00 00 8A  44 32 22 22  1E 11 12 43  85 80 00 00  ⋄⋄⋄×D2""•••C××⋄⋄
000001d8  00 00 80 A0  57 55 12 EE  2F 22 32 54  85 08 00 00  ⋄⋄××WU•×/"2T×•⋄⋄
000001e8  00 00 00 80  99 57 33 45  75 57 66 78  A8 00 00 00  ⋄⋄⋄××W3EuWfx×⋄⋄⋄
000001f8  00 00 00 00  08 99 A9 0A  9A A0 A9 9A  08 00 00 00  ⋄⋄⋄⋄•××⏎××××•⋄⋄⋄
00000208  00 00 00 00  00 90 80 00  80 00 87 80  00 00 00 00  ⋄⋄⋄⋄⋄××⋄×⋄××⋄⋄⋄⋄
00000218  00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00  ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
...
```

color

```
...
00000028  00 00 00 00  00 00 00 00  88 00 00 00  00 00 00 00  ⋄⋄⋄⋄⋄⋄⋄⋄×⋄⋄⋄⋄⋄⋄⋄
00000038  00 00 00 00  80 80 80 78  46 77 80 08  00 00 00 00  ⋄⋄⋄⋄×××xFw×•⋄⋄⋄⋄
00000048  00 00 00 00  88 44 68 12  21 55 46 74  00 00 00 00  ⋄⋄⋄⋄×Dh•!UFt⋄⋄⋄⋄
00000058  00 00 00 70  25 41 33 53  65 13 54 54  08 00 00 00  ⋄⋄⋄p%A3Se•TT•⋄⋄⋄
00000068  00 00 70 27  22 13 43 B7  9B 67 54 32  76 08 00 00  ⋄⋄p'"•C××gT2v•⋄⋄
00000078  00 00 26 22  76 76 98 BA  AA BA 59 21  44 75 00 00  ⋄⋄&"vv××××Y!Du⋄⋄
00000088  00 80 D2 71  99 AA 99 AA  A9 AB 99 88  48 43 85 00  ⋄××q××××××××HC×⋄
00000098  00 60 12 A5  A9 9A 99 A9  AA 99 99 CA  48 55 07 00  ⋄`•×××××××××HU•⋄
000000a8  00 38 42 B9  AA 99 9A A9  99 99 89 88  77 78 88 00  ⋄8B×××××××××wx×⋄
000000b8  00 36 86 AA  99 99 B9 AA  AA 99 78 78  77 46 75 00  ⋄6××××××××xxwFu⋄
000000c8  80 67 66 A9  99 A9 AA BB  BB AA 78 67  57 44 02 08  ×gf×××××××xgWD••
000000d8  80 23 45 98  A9 AB CB BB  BB AA 89 77  57 12 95 00  ×#E××××××××wW•×⋄
000000e8  58 2E 55 98  99 BA BB CC  BB AB 79 67  56 54 98 00  X.U×××××××ygVT×⋄
000000f8  50 52 87 AA  A9 BA BB BB  CB BB 89 66  56 55 97 00  PR×××××××××fVU×⋄
00000108  48 43 A5 AA  BA BB CC CC  CB 9A 88 66  55 34 84 00  HC×××××××××fU4×⋄
00000118  70 44 A8 99  B9 CB CC CC  AC 8A 56 45  55 33 05 08  pD××××××××VEU3••
00000128  00 77 CB A9  AA BC CC CC  BC 69 45 43  43 22 A5 08  ⋄w×××××××iECC"×•
00000138  80 67 A8 99  BA BB BC CC  AB 58 44 33  32 43 A8 00  ×g×××××××XD32C×⋄
00000148  00 34 74 A9  AA BB BB BB  7A 45 23 22  23 41 99 08  ⋄4t×××××zE#"#A×•
00000158  80 46 74 99  99 AA BA AC  7A 34 22 12  23 41 87 80  ×Ft×××××z4"•#A××
00000168  00 17 52 99  89 AA AA BB  58 34 23 21  E2 4E A7 09  ⋄•R×××××X4#!×N×␣
00000178  00 36 73 99  99 98 98 A9  68 35 22 12  12 4E A9 00  ⋄6s×××××h5"••N×⋄
00000188  70 44 88 87  99 88 78 88  66 45 32 21  E1 62 AA 07  pD××××x×fE2!×b×•
00000198  70 86 69 65  88 88 68 77  56 44 23 12  21 A7 0A 00  p×ie××hwVD#•!×⏎⋄
000001a8  00 90 57 52  85 77 77 66  66 44 33 D1  42 99 00 00  ⋄×WR×wwffD3×B×⋄⋄
000001b8  00 00 70 56  41 55 65 67  54 35 12 21  63 09 00 00  ⋄⋄pVAUegT5•!c␣⋄⋄
000001c8  00 00 00 8A  44 32 22 22  1E 11 12 43  85 80 00 00  ⋄⋄⋄×D2""•••C××⋄⋄
000001d8  00 00 80 A0  57 55 12 EE  2F 22 32 54  85 08 00 00  ⋄⋄××WU•×/"2T×•⋄⋄
000001e8  00 00 00 80  99 57 33 45  75 57 66 78  A8 00 00 00  ⋄⋄⋄××W3EuWfx×⋄⋄⋄
000001f8  00 00 00 00  08 99 A9 0A  9A A0 A9 9A  08 00 00 00  ⋄⋄⋄⋄•××⏎××××•⋄⋄⋄
00000208  00 00 00 00  00 90 80 00  80 00 87 80  00 00 00 00  ⋄⋄⋄⋄⋄××⋄×⋄××⋄⋄⋄⋄
00000218  00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00  ⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄⋄
...
```

this final excerpt is from the [bitmap data](https://github.com/simonomi/carbonizer/blob/6c311b6a2801576033cd42a8ba95461cee2ac6d1/Sources/Carbonizer/models/Texture.swift#L5-L28) for the following image:

![a shallow, top-down, pixel-art hole in the ground](https://simonomi.dev/images/color-code-your-bytes/ana.png)

like all the other examples, it comes from the Nintendo DS game [Fossil Fighters](https://en.wikipedia.org/wiki/Fossil_Fighters). specifically, the hole the player makes when digging for fossils:

![a screenshot from Fossil Fighters showing the player character holding a pickax, with the hole from before on the ground in front of him](https://simonomi.dev/images/color-code-your-bytes/diggy-diggy-hole.png)

because the bitmap uses 4-bit color indices, each digit of the hexdump encodes exactly one pixel of the image. i think the result mostly speaks for itself, but i'd specifically like to point out the highlight at the bottom right of the hole. in the plain hexdump, you might be able to pick out the general shape of the hole—especially if you look at the character panel on the right—but with color, you can pick up an incredible amount of detail!
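
a minimal sketch of that digit-to-pixel mapping, using one row from the excerpt above (i'm assuming the low nibble comes first in each byte; treat the ordering as illustrative):

```python
# one row of the bitmap excerpt above
row = bytes.fromhex("00000070254133536513545408000000")

# each byte holds two 4-bit palette indices, so each hex digit is one pixel
pixels = []
for byte in row:
    pixels.append(byte & 0x0F)  # low nibble
    pixels.append(byte >> 4)    # high nibble

print(pixels[:8])  # [0, 0, 0, 0, 0, 0, 0, 7]
```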

## what colors are best?

if you’ve used a hex editor with color-coding before, you may have noticed something different about the way i’m choosing to color-code bytes

most colorful hex editors have a few categories they sort bytes into, like `00` bytes, printable ASCII, ASCII whitespace, other ASCII, non-ASCII, or `FF` bytes

[`hexyl`](https://github.com/sharkdp/hexyl), for example, uses the following categories by default:

```
⋄ NULL bytes (0x00)
a ASCII printable characters (0x20 - 0x7E)
_ ASCII whitespace (0x09 - 0x0D, 0x20)
• ASCII control characters (except NULL and whitespace)
× Non-ASCII bytes (0x80 - 0xFF)
```

which end up looking something like this:

```
00 01 10 20 30 40 50 60 70 80 90 A0 B0 C0 D0 E0 F0 FF
```

full hexdump with `hexyl` colors

```
00000000  00 01 02 03  04 05 06 07  08 09 0A 0B  0C 0D 0E 0F  ⋄••••••••__•__••
00000010  10 11 12 13  14 15 16 17  18 19 1A 1B  1C 1D 1E 1F  ••••••••••••••••
00000020  20 21 22 23  24 25 26 27  28 29 2A 2B  2C 2D 2E 2F   !"#$%&'()*+,-./
00000030  30 31 32 33  34 35 36 37  38 39 3A 3B  3C 3D 3E 3F  0123456789:;<=>?
00000040  40 41 42 43  44 45 46 47  48 49 4A 4B  4C 4D 4E 4F  @ABCDEFGHIJKLMNO
00000050  50 51 52 53  54 55 56 57  58 59 5A 5B  5C 5D 5E 5F  PQRSTUVWXYZ[\]^_
00000060  60 61 62 63  64 65 66 67  68 69 6A 6B  6C 6D 6E 6F  `abcdefghijklmno
00000070  70 71 72 73  74 75 76 77  78 79 7A 7B  7C 7D 7E 7F  pqrstuvwxyz{|}~•
00000080  80 81 82 83  84 85 86 87  88 89 8A 8B  8C 8D 8E 8F  ××××××××××××××××
00000090  90 91 92 93  94 95 96 97  98 99 9A 9B  9C 9D 9E 9F  ××××××××××××××××
000000a0  A0 A1 A2 A3  A4 A5 A6 A7  A8 A9 AA AB  AC AD AE AF  ××××××××××××××××
000000b0  B0 B1 B2 B3  B4 B5 B6 B7  B8 B9 BA BB  BC BD BE BF  ××××××××××××××××
000000c0  C0 C1 C2 C3  C4 C5 C6 C7  C8 C9 CA CB  CC CD CE CF  ××××××××××××××××
000000d0  D0 D1 D2 D3  D4 D5 D6 D7  D8 D9 DA DB  DC DD DE DF  ××××××××××××××××
000000e0  E0 E1 E2 E3  E4 E5 E6 E7  E8 E9 EA EB  EC ED EE EF  ××××××××××××××××
000000f0  F0 F1 F2 F3  F4 F5 F6 F7  F8 F9 FA FB  FC FD FE FF  ××××××××××××××××
```

these broad categories are enough to pick out common patterns like repeated null bytes and ASCII strings. they also create enough variation to track visually when scrolling, which i find quite helpful. it can be really disorienting to scroll around a fully monochrome hexdump

i, however, am going further, with 18 total groups: one for each leading [nybble](https://en.wikipedia.org/wiki/Nibble) (`0X`, `1X`, `2X`...), plus two extras for `00` and `FF`:

```
00 01 10 20 30 40 50 60 70 80 90 A0 B0 C0 D0 E0 F0 FF
```

full hexdump with my colors

```
00000000  00 01 02 03  04 05 06 07  08 09 0A 0B  0C 0D 0E 0F  ⋄••••••••→⏎••␍••
00000010  10 11 12 13  14 15 16 17  18 19 1A 1B  1C 1D 1E 1F  ••••••••••••••••
00000020  20 21 22 23  24 25 26 27  28 29 2A 2B  2C 2D 2E 2F   !"#$%&'()*+,-./
00000030  30 31 32 33  34 35 36 37  38 39 3A 3B  3C 3D 3E 3F  0123456789:;<=>?
00000040  40 41 42 43  44 45 46 47  48 49 4A 4B  4C 4D 4E 4F  @ABCDEFGHIJKLMNO
00000050  50 51 52 53  54 55 56 57  58 59 5A 5B  5C 5D 5E 5F  PQRSTUVWXYZ[\]^_
00000060  60 61 62 63  64 65 66 67  68 69 6A 6B  6C 6D 6E 6F  `abcdefghijklmno
00000070  70 71 72 73  74 75 76 77  78 79 7A 7B  7C 7D 7E 7F  pqrstuvwxyz{|}~•
00000080  80 81 82 83  84 85 86 87  88 89 8A 8B  8C 8D 8E 8F  ××××××××××××××××
00000090  90 91 92 93  94 95 96 97  98 99 9A 9B  9C 9D 9E 9F  ××××××××××××××××
000000a0  A0 A1 A2 A3  A4 A5 A6 A7  A8 A9 AA AB  AC AD AE AF  ××××××××××××××××
000000b0  B0 B1 B2 B3  B4 B5 B6 B7  B8 B9 BA BB  BC BD BE BF  ××××××××××××××××
000000c0  C0 C1 C2 C3  C4 C5 C6 C7  C8 C9 CA CB  CC CD CE CF  ××××××××××××××××
000000d0  D0 D1 D2 D3  D4 D5 D6 D7  D8 D9 DA DB  DC DD DE DF  ××××××××××××××××
000000e0  E0 E1 E2 E3  E4 E5 E6 E7  E8 E9 EA EB  EC ED EE EF  ××××××××××××××××
000000f0  F0 F1 F2 F3  F4 F5 F6 F7  F8 F9 FA FB  FC FD FE FF  ×××××××××××××××╳
```
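
the grouping itself is simple to express in code; a quick sketch:

```python
def color_group(byte: int) -> str:
    # 18 groups: one per leading nybble, plus dedicated groups for 00 and FF
    if byte == 0x00:
        return "00"
    if byte == 0xFF:
        return "FF"
    return f"{byte >> 4:X}x"  # e.g. 0x6C falls in group "6x"

print(color_group(0x00), color_group(0x6C), color_group(0xFF))  # 00 6x FF
assert len({color_group(b) for b in range(256)}) == 18
```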

having more colors makes it possible to recognize more complex patterns, like the ascending offsets from [example 2](#example-2) or the different sections in [example 3](#example-3). ASCII text is still recognizable, but instead of solid cyan, it's a variegated green and orange:

my colors

```
00000000  6C 6F 6F 6B  20 6D 61 2C  20 69 27 6D  20 41 53 43  look ma, i'm ASC
00000010  49 49 21 20  6C 6F 72 65  6D 20 69 70  73 75 6D 20  II! lorem ipsum 
00000020  61 6E 64 20  61 6C 6C 20  74 68 61 74  20 69 67     and all that ig
```

`hexyl`'s colors

```
00000000  6C 6F 6F 6B  20 6D 61 2C  20 69 27 6D  20 41 53 43  look ma, i'm ASC
00000010  49 49 21 20  6C 6F 72 65  6D 20 69 70  73 75 6D 20  II! lorem ipsum 
00000020  61 6E 64 20  61 6C 6C 20  74 68 61 74  20 69 67     and all that ig
```

non-ASCII UTF-8, on the other hand, looks completely different, with its own unique pattern that's only visible if you have a large number of color groups:

my colors

```
00000000  73 6F 6D 65  20 55 54 46  2D 38 3A 20  E3 81 93 E3  some UTF-8: ××××
00000010  82 93 E3 81  AB E3 81 A1  E3 81 AF E3  80 81 E3 82  ××××××××××××××××
00000020  A2 E3 83 AA  E3 82 B9 E3  81 A7 E3 81  99 EF BC 81  ××××××××××××××××
```

`hexyl`'s colors

```
00000000  73 6F 6D 65  20 55 54 46  2D 38 3A 20  E3 81 93 E3  some UTF-8: ××××
00000010  82 93 E3 81  AB E3 81 A1  E3 81 AF E3  80 81 E3 82  ××××××××××××××××
00000020  A2 E3 83 AA  E3 82 B9 E3  81 A7 E3 81  99 EF BC 81  ××××××××××××××××
```
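
the pattern comes from UTF-8's structure: for Japanese text, every character encodes as a lead byte in `E0`–`EF` followed by two continuation bytes in `80`–`BF`, which is why the same few color groups repeat in lockstep. a quick check:

```python
data = "こんにちは".encode("utf-8")
print(data.hex(" ").upper())  # E3 81 93 E3 82 93 E3 81 AB E3 81 A1 E3 81 AF

# three-byte sequences: lead byte E0-EF, then two continuation bytes 80-BF
for i in range(0, len(data), 3):
    assert 0xE0 <= data[i] <= 0xEF
    assert all(0x80 <= b <= 0xBF for b in data[i + 1 : i + 3])
```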

there are a million more examples i could give, like negative numbers in [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) (`BD FF FF FF`), machine code, encrypted data, color palettes, transformation matrices, and so on, but hopefully the ones i've given are enough to get my point across
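
as one last quick check, that `BD FF FF FF` really does read back as a small negative number once interpreted as a little-endian signed 32-bit integer; with color, the run of `FF`s at the tail is the giveaway:

```python
import struct

# little-endian signed 32-bit: 0xFFFFFFBD == -0x43
value = struct.unpack("<i", bytes.fromhex("BDFFFFFF"))[0]
print(value)  # -67
```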

colorful output in a hexdump is useful for the same reason that syntax highlighting for code is useful: it takes advantage of our brains' powerful visual pattern recognition. it lets us notice details in the data just as quickly as we notice details in the environment around us. color-coded bytes should be as prevalent in hex editors as syntax highlighting is in code editors today

## so what can you do about it?

there are lots of tools out there that use color; here are some that i know of:

hex viewers:

- [`hexyl`](https://github.com/sharkdp/hexyl)
  
  - byte categories by default, gradient option
- [xcd-rgb](https://hacktivis.me/projects/xcd-rgb)
  
  - full rainbow byte coloring
- [hevi](https://codeberg.org/arnauc/hevi)
  
  - uses colors to indicate sections for certain file types
- `xxd`
  
  - option for byte categories, off[2](#footnote:2) by default

hex editors:

- [Hexerator](https://crumblingstatue.github.io/hexerator-book/0.4.0/hexerator.html)
  
  - full rainbow byte coloring, and tons of other features
- [REHex](https://rehex.solemnwarning.net)
  
  - multiple color options (including custom), off by default
- [Hex Fiend](https://hexfiend.com)
  
  - option for byte categories, off by default
  - [custom colors if you're willing to work for it](https://lobste.rs/c/jspwpw)

if you know any other good ones, please let me know! if you work on any tools that show hexdumps, i highly recommend adding colors, ideally with a large number of groups (feel free to copy [mine](https://github.com/simonomi/simonomi.github.io/blob/aa55ea855c17d5253d85d0440f41871aadc27b83/_includes/code.css#L51-L68)!). at the very least, making `00`s more subtle than other bytes is extremely helpful

the main goal of this article is to spread awareness that this feature exists. it provides a lot of utility with practically no downside, and more people should be asking for it. if you'd like to submit a feature request for the tool you use most, i hope this article can serve as an explanation for why it's worth adding

while writing this article, i actually started making my own custom hex editor, called [hexapoda](https://github.com/simonomi/hexapoda) >\_<. it takes inspiration from [Helix](https://helix-editor.com) and [Teehee](https://lib.rs/crates/teehee) (among others), with modal editing, multiple cursors, and selection-first operations (written in Rust, with [Ratatui](https://ratatui.rs)!). if enough people want, i might polish it up and write some docs so anyone can use it, but for now, it's just for me :3

* * *

1. and also tools like `xxd` or `hexyl` that show hex but don't let you edit it [⏎](#footnote-return:1)
2. by default, `xxd`'s color output is set to "auto", which doesn't output any color for me, so i'm not sure what it's doing [⏎](#footnote-return:2)

entirely human-made,  
please don't hesitate to [report a mistake](https://github.com/simonomi/simonomi.github.io/issues) or [suggest a fix](https://github.com/simonomi/simonomi.github.io/edit/main/blog/color-code-your-bytes.html)!

discuss on [Mastodon](https://mstdn.social/@simonomi/116324768496611305) or [Lobsters](https://lobste.rs/s/hssl4e/your_hex_editor_should_color_code_bytes)

---

## [HN-TITLE] 12. UK Biobank health data keeps ending up on GitHub

- **Source**: [https://biobank.rocher.lc](https://biobank.rocher.lc)
- **Site**: biobank.rocher.lc
- **Submitter**: Cynddl (Hacker News)
- **Submitted**: 2026-04-23 13:58 UTC (Hacker News)
- **HN activity**: 82 points · [21 comments](https://news.ycombinator.com/item?id=47875843)
- **Length**: 967 words (~5 min read)
- **Language**: en

[UK Biobank](https://www.ukbiobank.ac.uk/) holds genetic, health, and lifestyle data on half a million British volunteers. It has given 20,000 researchers around the world access under strict agreements that prohibit sharing data further. And yet, researchers are repeatedly uploading participant data by mistake to public GitHub repositories.

According to [The Guardian](https://www.theguardian.com/science/2026/mar/14/confidential-health-records-exposed-online-uk-biobank), UK Biobank has been closely monitoring the situation, contacting researchers directly and then issuing takedown notices when repositories are not deleted—sometimes against researchers and students Biobank never gave data to in the first place.

This tracker monitors the 110 notices filed so far, targeting 197 code repositories by 170 developers across the world, using public data from GitHub's [DMCA archive](https://github.com/github/dmca).

From only two pieces of information (approximate date of birth and date of a single major surgery), the Guardian was able to re-identify a volunteer in one of the exposed datasets. For [BMJ](http://bmj.com/cgi/content/full/bmj.s660?ijkey=dEot4dJZGZGXeG1&keytype=ref), Jess Morley and I argue that UK Biobank is harming participants by dismissing re-identification risks while nonetheless advising them to limit what they share online. Institutions like Biobank must demonstrate humility, a commitment to listening to privacy experts, and a willingness to learn.

Built by [Luc Rocher](https://rocher.lc), Oxford Internet Institute, University of Oxford

## What is UK Biobank trying to take down?

UK Biobank uses copyright takedown notices, a mechanism often associated with removing pirated software and stolen code, to remove health data from GitHub. The UK has no equivalent of DMCA for privacy breaches that would compel a platform to act so quickly.

Looking at the takedown notices, we often see specific files being targeted rather than entire repositories—possibly to substantiate the copyright-infringement claim a takedown notice requires. Nearly half are Jupyter or R notebooks, which can contain a few rows of data. A quarter are genetic and genomic data files (PLINK, BOLT-LMM, BGEN) that directly encode participant genotypes or association results. Tabular datasets (CSV, TSV, Excel, and serialised R objects) account for another large share and could contain phenotype or health records. The remainder includes analysis scripts, documentation, and compressed archives.

## Timeline of takedown notices

The first takedown notice was filed in July 2025. Since then, the pace has been steady, with a total of 110 requests to GitHub. Interestingly, the requests stopped in January, February, and most of March 2026. It's hard to believe that no researcher mistakenly uploaded UK Biobank data during those months. The notices restarted at the end of March, just after the Guardian's investigation revealed the ongoing data exposure and the ineffectiveness of takedowns.

## Where in the world

Developers targeted by UK Biobank's takedown notices are based in at least 14 countries. The true number is likely higher: of the 170 developers identified in the notices, only 75 list a location on their GitHub profile. Most appear to be from the United States and China.

- 24 United States
- 21 China
- 7 United Kingdom
- 5 Germany
- 4 Hong Kong
- 4 Australia
- 3 Spain
- 1 South Korea
- 1 Greece
- 1 Qatar
- 1 United Arab Emirates
- 1 Switzerland
- 1 India
- 1 Netherlands

Methodology

To build this webpage, I used data from the [github/dmca](https://github.com/github/dmca) repository, where GitHub publishes the full text of every DMCA takedown notice it receives. When a rights holder asks GitHub to remove content that infringes their copyright, the notice is posted publicly as a Markdown file in this repository. According to [The Guardian](https://www.theguardian.com/science/2026/mar/14/confidential-health-records-exposed-online-uk-biobank), UK Biobank has used this process to request the removal of files or repositories that contain (or that it believes contain) participant data covered by its data access agreements.

To identify UK Biobank-related notices, I match filenames containing the slug "uk-biobank" (the convention GitHub uses when naming notice files). Just in case, I also search the full text of every other notice file for the phrases "UK Biobank" or "UKBiobank" (case-insensitive) to catch notices filed under different slugs, such as those submitted on behalf of UK Biobank. From each matching notice, I extract the filing date (parsed from the filename, which follows GitHub's `YYYY-MM-DD-slug.md` convention) and all GitHub repository URLs mentioned in the notice body. URLs pointing to GitHub's own infrastructure (e.g. github.com/contact or github.com/site) are excluded.
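
That matching step can be sketched as follows (hypothetical filenames; the real script also scans notice bodies for the "UK Biobank" phrases):

```python
import re

# hypothetical filenames following GitHub's YYYY-MM-DD-slug.md convention
filenames = [
    "2025-07-14-uk-biobank.md",
    "2026-03-30-uk-biobank-2.md",
    "2025-08-02-unrelated-notice.md",
]

# match the "uk-biobank" slug and parse the filing date from the filename
pattern = re.compile(r"^(\d{4}-\d{2}-\d{2})-.*uk-biobank.*\.md$")
matches = [(m.group(1), name) for name in filenames if (m := pattern.match(name))]
print(matches)  # two matches, each paired with its filing date
```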

For each unique GitHub username found in the notices, I query the GitHub REST API (`GET /users/{username}`) to retrieve the user's public profile, specifically the self-reported location field. This is a free-text string that users enter voluntarily. It may be a city, a country, a university name, or left blank entirely. Deleted accounts return a 404 and are not included further.

I derive countries from the raw location strings by hand. When a user's GitHub profile does not include a location, I also determine their country by inspecting their GitHub profile and associated email address domains. This process is inherently imperfect: some locations are ambiguous (e.g. "Cambridge" could refer to the UK or the US), and many users do not provide any location at all. Of the 170 unique developers in the dataset, only 75 have a location that could be resolved to a country.

The data is regularly refreshed by re-running the collection script against the latest state of the github/dmca repository. This page does not make any claims about the content of the targeted repositories, including whether they contained actual participant data, derived datasets, analysis code, or just documentation. It reports only what is visible in the public DMCA notices filed by UK Biobank.

## Further reading

The exposure of Biobank data on GitHub is the latest in a series of governance challenges for UK Biobank.

Mar 2026

[Confidential health records exposed online](https://www.theguardian.com/science/2026/mar/14/confidential-health-records-exposed-online-uk-biobank) — The Guardian  
Investigation revealing that UK Biobank participant data had been uploaded to public GitHub repositories by researchers sharing their code. With a volunteer's consent, journalists successfully matched their record in an exposed dataset using only their month and year of birth and the date of a single major surgery.

---

## [HN-TITLE] 13. Show HN: Agent Vault – Open-source credential proxy and vault for agents

- **Source**: [https://github.com/Infisical/agent-vault](https://github.com/Infisical/agent-vault)
- **Site**: GitHub
- **Submitter**: dangtony98 (Hacker News)
- **Submitted**: 2026-04-22 16:25 UTC (Hacker News)
- **HN activity**: 80 points · [28 comments](https://news.ycombinator.com/item?id=47865822)
- **Length**: 895 words (~4 min read)
- **Language**: en

[![Agent Vault](https://github.com/Infisical/agent-vault/raw/main/assets/banner.png)](https://github.com/Infisical/agent-vault/blob/main/assets/banner.png)

**HTTP credential proxy and vault**

An open-source credential broker by [Infisical](https://infisical.com) that sits between your agents and the APIs they call.  
Agents should not possess credentials. Agent Vault eliminates credential exfiltration risk with brokered access.

**New here? The [launch blog post](https://infisical.com/blog/agent-vault-the-open-source-credential-proxy-and-vault-for-agents) has the full story behind Agent Vault.**

[Documentation](https://docs.agent-vault.dev) | [Installation](https://docs.agent-vault.dev/installation) | [CLI Reference](https://docs.agent-vault.dev/reference/cli) | [Slack](https://infisical.com/slack)

[![Agent Vault demo](https://github.com/Infisical/agent-vault/raw/main/assets/agent-vault.gif)](https://github.com/Infisical/agent-vault/blob/main/assets/agent-vault.gif)

Traditional secrets management relies on returning credentials directly to the caller. This breaks down with AI agents, which are non-deterministic systems vulnerable to prompt injection and can be fooled into leaking their secrets.

Agent Vault takes a different approach: **Agent Vault never reveals vault-stored credentials to agents**. Instead, agents route HTTP requests through a local proxy that injects the right credentials at the network layer.

- **Brokered access, not retrieval** - Your agent gets a scoped session and a local `HTTPS_PROXY`. It calls target APIs normally, and Agent Vault injects the right credential at the network layer. Credentials are never returned to the agent.
- **Works with any agent** - Custom Python/TypeScript agents, sandboxed processes, and coding agents like Claude Code, Cursor, and Codex. Anything that speaks HTTP.
- **Encrypted at rest** - Credentials are encrypted with AES-256-GCM using a random data encryption key (DEK). An optional master password wraps the DEK via Argon2id, so rotating the password does not re-encrypt credentials. A passwordless mode is available for PaaS deploys.
- **Request logs** - Every proxied request is persisted per vault with method, host, path, status, latency, and the credential key names involved. Bodies, headers, and query strings are not recorded. Retention is configurable per vault.
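
The wrapped-DEK design in the "Encrypted at rest" bullet is a form of envelope encryption. A stdlib-only illustration of why password rotation never touches the bulk ciphertext (`scrypt` stands in for Argon2id, and a one-time-pad XOR stands in for the real AES-based key wrap; all names are hypothetical):

```python
import hashlib
import os
import secrets

def wrap_key(dek: bytes, password: bytes, salt: bytes) -> bytes:
    # stand-in for a real key wrap: XOR with a password-derived key
    kek = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return bytes(a ^ b for a, b in zip(dek, kek))

# a random data-encryption key (DEK) encrypts the credentials themselves
dek = secrets.token_bytes(32)

# the master password only wraps the DEK...
salt = os.urandom(16)
wrapped = wrap_key(dek, b"old-password", salt)

# ...so rotating the password re-wraps 32 bytes and never touches
# the bulk ciphertext encrypted under the DEK
recovered = wrap_key(wrapped, b"old-password", salt)  # XOR unwrap == wrap
assert recovered == dek
new_salt = os.urandom(16)
rewrapped = wrap_key(recovered, b"new-password", new_salt)
assert wrap_key(rewrapped, b"new-password", new_salt) == dek
```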

## Installation

See the [installation guide](https://docs.agent-vault.dev/installation) for full details.

### Script (macOS / Linux)

```
curl -fsSL https://get.agent-vault.dev | sh
agent-vault server -d
```

Supports macOS (Intel + Apple Silicon) and Linux (x86\_64 + ARM64).

### [Docker](https://docs.agent-vault.dev/self-hosting/docker)

```
docker run -it -p 14321:14321 -p 14322:14322 -v agent-vault-data:/data infisical/agent-vault
```

For non-interactive environments (Docker Compose, CI, detached mode), pass the master password as an env var:

```
docker run -d -p 14321:14321 -p 14322:14322 \
  -e AGENT_VAULT_MASTER_PASSWORD=your-password \
  -v agent-vault-data:/data infisical/agent-vault
```

### From source

Requires [Go 1.25+](https://go.dev/dl/) and [Node.js 22+](https://nodejs.org/).

```
git clone https://github.com/Infisical/agent-vault.git
cd agent-vault
make build
sudo mv agent-vault /usr/local/bin/
agent-vault server -d
```

The server starts the HTTP API on port `14321` and a TLS-encrypted transparent HTTPS proxy on port `14322`. A web UI is available at `http://localhost:14321`.

## Quickstart

### CLI — local agents (Claude Code, Cursor, Codex, OpenClaw, Hermes, OpenCode)

Wrap any local agent process with `agent-vault run` (long form: `agent-vault vault run`). Agent Vault creates a scoped session, sets `HTTPS_PROXY` and CA-trust env vars, and launches the agent — all HTTPS traffic is transparently proxied and authenticated:

```
agent-vault run -- claude
agent-vault vault run -- agent
agent-vault vault run -- codex
agent-vault vault run -- opencode
```

The agent calls APIs normally (e.g. `fetch("https://api.github.com/...")`). Agent Vault intercepts the request, injects the credential, and forwards it upstream. The agent never sees secrets.

For **non-cooperative** sandboxing — where the child physically cannot reach anything except the Agent Vault proxy, regardless of what it tries — launch it in a Docker container with egress locked down by iptables:

```
agent-vault run --sandbox=container --share-agent-dir -- claude
```

`--share-agent-dir` bind-mounts your host's `~/.claude` into the container so the sandboxed agent reuses your existing login. Currently Claude-only; support for other agents is coming soon.

See [Container sandbox](https://docs.agent-vault.dev/guides/container-sandbox) for the threat model and flags.

### SDK — sandboxed agents (Docker, Daytona, E2B)

For agents running inside containers, use the SDK from your orchestrator to mint a session and pass proxy config into the sandbox:

```
npm install @infisical/agent-vault-sdk
```

```
import { AgentVault, buildProxyEnv } from "@infisical/agent-vault-sdk";

const av = new AgentVault({
  token: "YOUR_TOKEN",
  address: "http://localhost:14321",
});
const session = await av
  .vault("default")
  .sessions.create({ vaultRole: "proxy" });

// certPath is where you'll mount the CA certificate inside the sandbox.
const certPath = "/etc/ssl/agent-vault-ca.pem";

// env: { HTTPS_PROXY, NO_PROXY, NODE_USE_ENV_PROXY, SSL_CERT_FILE,
//         NODE_EXTRA_CA_CERTS, REQUESTS_CA_BUNDLE, CURL_CA_BUNDLE,
//         GIT_SSL_CAINFO, DENO_CERT }
const env = buildProxyEnv(session.containerConfig!, certPath);
const caCert = session.containerConfig!.caCertificate;

// Pass `env` as environment variables and mount `caCert` at `certPath`
// in your sandbox — Docker, Daytona, E2B, Firecracker, or any other runtime.
// Once configured, the agent inside just calls APIs normally:
//   fetch("https://api.github.com/...") — no SDK, no credentials needed.
```

See the [TypeScript SDK README](https://github.com/Infisical/agent-vault/blob/main/sdks/sdk-typescript/README.md) for full documentation.

## Development

```
make build      # Build frontend + Go binary
make test       # Run tests
make web-dev    # Vite dev server with hot reload (port 5173)
make dev        # Go + Vite dev servers with hot reload
make docker     # Build Docker image
```

## Open-source vs. paid

This repo is available under the [MIT expat license](https://github.com/Infisical/infisical/blob/main/LICENSE), with the exception of the `ee` directory, which will contain premium enterprise features requiring an Infisical license.

If you are interested in Infisical or exploring a more commercial path for Agent Vault, take a look at [our website](https://infisical.com/) or [book a meeting with us](https://infisical.cal.com/vlad/infisical-demo).

## Contributing

Whether it's big or small, we love contributions. Agent Vault follows the same contribution guidelines as Infisical.

Check out our guide to see how to [get started](https://infisical.com/docs/contributing/getting-started).

Not sure where to get started? You can:

- Join our [Slack](https://infisical.com/slack), and ask us any questions there.

## We are hiring!

[](#we-are-hiring)

If you're reading this, there is a strong chance you like the products we created.

You might also make a great addition to our team. We're growing fast and would love for you to [join us](https://infisical.com/careers).

* * *

> **Preview.** Agent Vault is in active development and the API is subject to change. Please review the [security documentation](https://docs.agent-vault.dev/learn/security) before deploying.

---

## [HN-TITLE] 14. Astronomers find the edge of the Milky Way

- **Source**: [https://skyandtelescope.org/astronomy-news/astronomers-find-the-edge-of-the-milky-way/](https://skyandtelescope.org/astronomy-news/astronomers-find-the-edge-of-the-milky-way/)
- **Site**: skyandtelescope.org
- **Submitter**: bookofjoe (Hacker News)
- **Submitted**: 2026-04-23 18:11 UTC (Hacker News)
- **HN activity**: 89 points · [13 comments](https://news.ycombinator.com/item?id=47879239)

> scrape failed: http 403

---

## [HN-TITLE] 15. A programmable watch you can actually wear

- **Source**: [https://www.hackster.io/news/a-diy-watch-you-can-actually-wear-8f91c2dac682](https://www.hackster.io/news/a-diy-watch-you-can-actually-wear-8f91c2dac682)
- **Site**: Hackster.io
- **Author**: Nick Bild
- **Submitted**: 2026-04-21 08:52 UTC (Hacker News)
- **HN activity**: 146 points · [76 comments](https://news.ycombinator.com/item?id=47846307)
- **Length**: 461 words (~3 min read)
- **Language**: en

Driven by a desire to break free from walled gardens, many hardware hackers have designed their own smartwatches. Instead of proprietary hardware and software platforms, these devices typically use highly accessible components like ESP32 microcontrollers and custom-built firmware. So far, so good; however, commercial smartwatches still beat them in one very important way — durability. DIY solutions don’t hold up well (or at all) to the conditions — like rain — that we regularly run into in our everyday lives. This factor alone makes homebrew smartwatches more of a toy than anything practical.

But now, there is a new smartwatch developed by LILYGO called the T-Watch Ultra. It’s got about everything you would expect from a smartwatch (and a few extras) included onboard, and it can be programmed using common development platforms such as Arduino IDE and ESP-IDF. Beyond its internal specifications, the T-Watch Ultra is housed in an IP65-rated case, so you don’t need to be concerned about rain, spills, or dust while you are wearing it.

An overview of the features (📷: LILYGO)

At the core of the device is an ESP32-S3 from Espressif Systems, featuring a dual-core Tensilica LX7 CPU running at up to 240 MHz. With 16MB of flash and 8MB of PSRAM, the watch has significantly more memory than many hobbyist wearables, making it suitable for more complex applications, including edge AI tasks. The inclusion of vector instructions for AI acceleration further supports this functionality.

The display is a 2.01-inch AMOLED panel with a sharp 410×502 resolution and full capacitive touch support. Combined with a 1,100mAh battery — an upgrade over earlier models — this provides both improved usability and longer runtime.

In addition to Wi-Fi and Bluetooth 5.0 LE, the watch includes a Semtech SX1262 LoRa transceiver, enabling long-range, low-power communication. This opens the door to applications like Meshtastic nodes and off-grid messaging systems — capabilities rarely seen in smartwatches.

What's in the box (📷: LILYGO)

A u-blox MIA-M10Q GNSS module provides accurate location tracking, while a Bosch BHI260AP smart sensor enables motion-based AI features. Additional hardware includes NFC via an ST25R3916 chip, a real-time clock, a vibration motor driven by a DRV2605 controller, and a microSD card slot for expanded storage.

Audio support is handled through a built-in microphone and a MAX98357A amplifier, and power management is overseen by an AXP2101 PMU. The device also features a USB Type-C port for charging and programming, making development workflows straightforward.

With support for Arduino, MicroPython, and ESP-IDF — and an ecosystem of example code and libraries — the T-Watch Ultra makes development easy. LILYGO is [now taking pre-orders](https://lilygo.cc/en-us/products/t-watch-ultra) for $78.32, and the device should be available any day.


[Nick Bild](https://www.hackster.io/nickbild)

R&D, creativity, and building the next big thing you never knew you wanted are my specialties.

---

## [HN-TITLE] 16. Show HN: Honker – Postgres NOTIFY/LISTEN Semantics for SQLite

- **Source**: [https://github.com/russellromney/honker](https://github.com/russellromney/honker)
- **Site**: GitHub
- **Submitter**: russellthehippo (Hacker News)
- **Submitted**: 2026-04-23 11:53 UTC (Hacker News)
- **HN activity**: 239 points · [58 comments](https://news.ycombinator.com/item?id=47874647)
- **Length**: 2.7K words (~12 min read)
- **Language**: en

`honker` is a SQLite extension + language bindings that add Postgres-style `NOTIFY`/`LISTEN` semantics to SQLite, with built-in durable pub/sub, task queue, and event streams, without client polling or a daemon/broker. Any language that can `SELECT load_extension('honker')` gets the same features.

honker ships as a [Rust crate](https://crates.io/crates/honker) (`honker`, plus `honker-core`/`honker-extension`), a [SQLite loadable extension](#sqlite-extension-any-sqlite-39-client), and language packages: Python (`honker`), Node (`@russellthehippo/honker-node`), Bun (`@russellthehippo/honker-bun`), Ruby (`honker`), Go, Elixir, C++. The on-disk layout is defined once in Rust; every binding is a thin wrapper around the loadable extension.

`honker` works by replacing a polling interval with event notifications on SQLite's WAL file, achieving push semantics and enabling cross-process notifications with single-digit millisecond delivery.

> Experimental. API may change.

SQLite is increasingly the database for shipped projects. Those inevitably require pubsub and a task queue. The usual answer is "add Redis + Celery." That works, but it introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker.

honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means `INSERT INTO orders` and `queue.enqueue(...)` commit in the same transaction. Rollback drops both. The queue is just rows in a table with a partial index.

Prior art: [`pg_notify`](https://www.postgresql.org/docs/current/sql-notify.html) (fast triggers, no retry/visibility), [Huey](https://github.com/coleifer/huey) (SQLite-backed Python), [pg-boss](https://github.com/timgit/pg-boss) and [Oban](https://github.com/sorentwo/oban) (the Postgres-side gold standards we're chasing on SQLite). If you already run Postgres, use those, as they are excellent.

## At a glance

[](#at-a-glance)

```
import honker

db = honker.open("app.db")
emails = db.queue("emails")

# Enqueue
emails.enqueue({"to": "alice@example.com"})

# Consume (worker process)
async for job in emails.claim("worker-1"):
    send(job.payload)
    job.ack()
```

Any enqueue can be atomic with a business write. Rollback drops both.

```
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)
```

## Features

[](#features)

Today:

- Notify/listen across processes on one `.db` file
- Work queues with retries, priority, delayed jobs, and a dead-letter table
- Any send can be atomic with your business write (commit together or roll back together)
- Single-digit millisecond cross-process reaction time, no polling
- Handler timeouts, declarative retries with exponential backoff
- Delayed jobs, task expiration, named locks, rate-limiting
- Crontab-style periodic tasks with a leader-elected scheduler
- Opt-in task result storage (`enqueue` returns an id, worker persists the return value, caller awaits `queue.wait_result(id)`)
- Durable streams with per-consumer offsets and configurable flush interval
- SQLite loadable extension so any SQLite client can read the same tables
- Bindings: Python, Node.js, Rust, Go, Ruby, Bun, Elixir

Deliberately not built: task pipelines/chains/groups/chords, multi-writer replication, workflow orchestration with DAGs.

## Quick start

[](#quick-start)

### Python: queue (durable at-least-once work)

[](#python-queue-durable-at-least-once-work)

```
pip install honker
```

```
import honker
db = honker.open("app.db")
emails = db.queue("emails")

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)   # atomic with order

# Then in a worker, do: 
async for job in emails.claim("worker-1"):               # wakes on any WAL commit
    try:
        send(job.payload); job.ack()
    except Exception as e:
        job.retry(delay_s=60, error=str(e))
```

`claim()` is an async iterator. Each iteration is one `claim_batch(worker_id, 1)`. Wakes on any WAL commit, falls back to a 5 s paranoia poll only if the WAL watcher can't fire. For batched work, call `claim_batch(worker_id, n)` explicitly and ack with `queue.ack_batch(ids, worker_id)`. Defaults: visibility 300 s.
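At the SQL level, a batched claim-then-ack reduces to an indexed read-and-mark plus a `DELETE`. A minimal stdlib-`sqlite3` sketch with a simplified stand-in for the `_honker_live` schema (column names here are illustrative, not the extension's exact layout):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE _honker_live (
    id INTEGER PRIMARY KEY, queue TEXT, payload TEXT,
    state TEXT DEFAULT 'pending', priority INTEGER DEFAULT 0,
    run_at REAL DEFAULT 0, claimed_by TEXT, visible_until REAL)""")
# Partial index over live states only -- dead/acked rows never slow claims.
db.execute("""CREATE INDEX live_claim ON _honker_live
    (queue, priority DESC, run_at, id)
    WHERE state IN ('pending', 'processing')""")
for i in range(5):
    db.execute("INSERT INTO _honker_live (queue, payload) VALUES ('emails', ?)",
               ('{"n": %d}' % i,))
db.commit()

def claim_batch(worker_id, n, visibility_s=300):
    """Mark up to n pending jobs as processing; return their ids."""
    now = time.time()
    ids = [r[0] for r in db.execute(
        """SELECT id FROM _honker_live
           WHERE queue = 'emails' AND state = 'pending' AND run_at <= ?
           ORDER BY priority DESC, run_at, id LIMIT ?""", (now, n))]
    db.executemany(
        "UPDATE _honker_live SET state='processing', claimed_by=?, "
        "visible_until=? WHERE id=?",
        [(worker_id, now + visibility_s, job_id) for job_id in ids])
    db.commit()
    return ids

def ack_batch(ids, worker_id):
    """Ack = one DELETE per job, guarded by the claiming worker id."""
    db.executemany("DELETE FROM _honker_live WHERE id=? AND claimed_by=?",
                   [(job_id, worker_id) for job_id in ids])
    db.commit()

claimed = claim_batch("worker-1", 3)
ack_batch(claimed, "worker-1")
remaining = db.execute("SELECT COUNT(*) FROM _honker_live").fetchone()[0]
```

The guard on `claimed_by` in the ack is what makes the "honest bool" semantics possible: a reclaim by another worker changes the owner, so a stale ack deletes nothing.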

### Python: tasks (Huey-style decorators)

[](#python-tasks-huey-style-decorators)

If you want a function call to turn into an enqueued job without wrapping `queue.enqueue` by hand:

```
@emails.task(retries=3, timeout_s=30)
def send_email(to: str, subject: str) -> dict:
    ...
    return {"sent_at": time.time()}

# Caller
r = send_email("alice@example.com", "Hi")   # enqueues, returns a TaskResult
print(r.get(timeout=10))                    # blocks until worker runs it
```

Worker side, either in-process or as its own process:

```
python -m honker worker myapp.tasks:db --queue=emails --concurrency=4
```

Auto-name is `{module}.{qualname}` (Huey/Celery convention). Explicit names with `@emails.task(name="...")` are recommended in prod so renames don't orphan pending jobs. Periodic tasks use `@emails.periodic_task(crontab("0 3 * * *"))`. Full details in [`packages/honker/examples/tasks.py`](https://github.com/russellromney/honker/blob/main/packages/honker/examples/tasks.py).

### Python: stream (durable pub/sub)

[](#python-stream-durable-pubsub)

```
stream = db.stream("user-events")

with db.transaction() as tx:
    tx.execute("UPDATE users SET name=? WHERE id=?", [name, uid])
    stream.publish({"user_id": uid, "change": "name"}, tx=tx)

async for event in stream.subscribe(consumer="dashboard"):
    await push_to_browser(event)
```

Each named consumer tracks its own offset in the `_honker_stream_consumers` table. `subscribe` replays rows past the saved offset, then transitions to live delivery on WAL wake. The iterator auto-saves offset at most every 1000 events or every 1 second (whichever first) so a high-throughput stream doesn't hammer the single-writer slot. Override with `save_every_n=` / `save_every_s=`, or set both to 0 to disable auto-save and call `stream.save_offset(consumer, offset, tx=tx)` yourself (atomic with whatever you just did in that tx). At-least-once: a crash re-delivers in-flight events up to the last flushed offset.

### Python: notify (ephemeral pub/sub)

[](#python-notify-ephemeral-pubsub)

```
async for n in db.listen("orders"):
    print(n.channel, n.payload)

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99.99])
    tx.notify("orders", {"id": 42})
```

Listeners attach at current `MAX(id)`; history is not replayed. Use `db.stream()` if you need durable replay. The notifications table is not auto-pruned. Call `db.prune_notifications(older_than_s=…, max_keep=…)` from a scheduled task. Task payloads have to be valid JSON so a Python writer and Node reader can share a channel.

### Node.js

[](#nodejs)

```
const { open } = require('@russellthehippo/honker-node');
const db = open('app.db');

// Atomic: business write + notify commit together
const tx = db.transaction();
tx.execute('INSERT INTO orders (id) VALUES (?)', [42]);
tx.notify('orders', { id: 42 });
tx.commit();

// Listen wakes on WAL commits, filters by channel
for await (const n of db.listen('orders')) {
  handle(n.payload);
}
```

### SQLite extension (any SQLite 3.9+ client)

[](#sqlite-extension-any-sqlite-39-client)

```
.load ./libhonker_ext
SELECT honker_bootstrap();
INSERT INTO _honker_live (queue, payload) VALUES ('emails', '{"to":"alice"}');
SELECT honker_claim_batch('emails', 'worker-1', 32, 300);    -- JSON array
SELECT honker_ack_batch('[1,2,3]', 'worker-1');              -- DELETEs; returns count
SELECT honker_sweep_expired('emails');                       -- count moved to dead
SELECT honker_lock_acquire('backup', 'me', 60);              -- 1 = got it, 0 = held
SELECT honker_lock_release('backup', 'me');                  -- 1 = released
SELECT honker_rate_limit_try('api', 10, 60);                 -- 1 = under, 0 = at limit
SELECT honker_rate_limit_sweep(3600);                        -- drop windows >1h old
SELECT honker_cron_next_after('0 3 * * *', unixepoch());     -- unix ts of next fire
SELECT honker_scheduler_register('nightly', 'backups',
  '0 3 * * *', '"go"', 0, NULL);                         -- register periodic task
SELECT honker_scheduler_tick(unixepoch());                   -- JSON: fires due
SELECT honker_scheduler_soonest();                           -- min next_fire_at
SELECT honker_scheduler_unregister('nightly');               -- 1 = deleted
SELECT honker_stream_publish('orders', 'k', '{"id":42}');    -- returns offset
SELECT honker_stream_read_since('orders', 0, 1000);          -- JSON array
SELECT honker_stream_save_offset('worker', 'orders', 42);    -- monotonic upsert
SELECT honker_stream_get_offset('worker', 'orders');         -- offset or 0
SELECT honker_result_save(42, '{"ok":true}', 3600);          -- save w/ 1h TTL
SELECT honker_result_get(42);                                -- value or NULL
SELECT honker_result_sweep();                                -- prune expired
SELECT notify('orders', '{"id":42}');
```

The extension shares `_honker_live`, `_honker_dead`, and `_honker_notifications` with the Python binding, so a Python worker can claim jobs that any other language pushed via the extension. Schema compatibility is pinned by `tests/test_extension_interop.py`.
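The "monotonic upsert" behind `honker_stream_save_offset` comes down to an `ON CONFLICT` update that never moves an offset backwards. A stdlib-`sqlite3` illustration with a simplified stand-in for the `_honker_stream_consumers` table (not the extension's actual DDL):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE _honker_stream_consumers (
    consumer TEXT, stream TEXT, "offset" INTEGER,
    PRIMARY KEY (consumer, stream))""")

def save_offset(consumer, stream, offset):
    # Monotonic upsert: a stale save can never rewind a consumer.
    db.execute("""INSERT INTO _honker_stream_consumers VALUES (?, ?, ?)
        ON CONFLICT (consumer, stream)
        DO UPDATE SET "offset" = MAX("offset", excluded."offset")""",
        (consumer, stream, offset))
    db.commit()

save_offset("dashboard", "user-events", 42)
save_offset("dashboard", "user-events", 7)   # stale write; silently ignored
offset = db.execute(
    'SELECT "offset" FROM _honker_stream_consumers').fetchone()[0]
```

The `MAX(...)` in the `DO UPDATE` is why a crashed consumer replaying old events can't clobber progress another instance already flushed.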

## Design

[](#design)

This repo includes the `honker` SQLite loadable extension and bindings for Python, Node, Rust, Go, Ruby, Bun, and Elixir.

For most applications, [SQLite alone is sufficient](https://www.epicweb.dev/why-you-should-probably-be-using-sqlite). There are already great libraries that leverage SQLite for durable messaging. [Huey](https://github.com/coleifer/huey) is the one honker draws the most from. This project is inspired by it and seeks to do something similar across languages and frameworks by moving package logic into a SQLite extension.

For Postgres-backed apps, [`pg_notify`](https://www.postgresql.org/docs/current/sql-notify.html) + [pg-boss](https://github.com/timgit/pg-boss) or [Oban](https://hexdocs.pm/oban/) is the equivalent. This library is for apps where SQLite is the primary datastore.

The extension has three primitives that tie it together: ephemeral pub/sub (`notify()`), durable pub/sub with per-consumer offsets (`stream()`), at-least-once work queue (`queue()`). All three are INSERTs inside your transaction, which lets a task "send" be atomic with your business write, and rollback drops everything.

The explicit goal is to do `NOTIFY`/`LISTEN` semantics without constant polling, to achieve single-digit ms reaction time. If you use your app's existing SQLite file containing business logic, it will notify workers on every WAL commit. This means that most triggers will not result in anything happening: instead, workers just read the message/queue with no result. This "overtriggering" is on purpose and is the tradeoff for push semantics and fast reaction time.

### WAL-only by design

[](#wal-only-by-design)

honker requires `journal_mode = WAL` on every database it manages. `honker_bootstrap()` refuses to run on a file-backed DB that isn't in WAL mode, and the language bindings set `PRAGMA journal_mode = WAL` in their default open path.

- Workers hold open read views (WAL subscription channels, listener iterators) for their whole lifetime. In DELETE / TRUNCATE modes, writers take an EXCLUSIVE lock; every active reader blocks until release. A single worker actively claiming would serialize every `enqueue()` / `notify()` in the system behind it. WAL lets readers and writers coexist.
- The `.db-wal` sidecar grows on every commit and only shrinks at checkpoint. Stat-polling it gives a monotonic, unambiguous change signal. The rollback-journal sidecar (`.db-journal`) in DELETE mode appears mid-transaction and vanishes on commit, making it a poor stat-poll target.
- With `wal_autocheckpoint = 10000`, WAL performs one fsync per 10k pages instead of per-commit. Most of the throughput win comes from that.

If you need a SQLite database that never enters WAL mode (e.g. for a backup target, or to avoid the `.db-wal` / `.db-shm` sidecars in a shared filesystem), honker is not the right tool. Use plain SQLite and live without the NOTIFY/LISTEN semantics.

The library/extension is a small coordination layer built on the properties of SQLite and single-server architecture.

- One `.db` + one `.db-wal` is the entire system. You get every benefit of SQLite (embedded, local, durable, snapshot-able) that your app already uses.
- WAL mode gives one writer and concurrent readers. Claim is one `UPDATE … RETURNING` via a partial index, ack is one `DELETE`.
- The WAL file grows on every commit, so `(size, mtime)` is the cross-process commit signal.
- SQLite has no wire protocol. Consumers must initiate reads; server-push is impossible. Wake signal = file change → `SELECT`.
- Transactions are cheap, so jobs, events, and notifications are rows in the caller's open `with db.transaction()` block in an "outbox"-type pattern.
- We use `stat(2)` cross-platform instead of the technically better `FSEvents`/`inotify`/`kqueue`. FSEvents drops same-process writes on macOS, meaning a listener and enqueuer in the same Python process would never see each other. `stat(2)` works identically on Linux/macOS/Windows at ~1 ms granularity for negligible CPU. Cost: ~0.5 ms of latency vs kernel notifications.
- Single machine, single writer. SQLite's locking is designed for a single host. Two servers writing one `.db` over NFS will corrupt it. Shard by file, or switch to Postgres.

## Architecture

[](#architecture)

### Wake path

[](#wake-path)

- One `stat(2)` thread per `Database`, polls `.db-wal` every 1 ms
- `(size, mtime)` change → fan out a tick to each subscriber's bounded channel
- Each subscriber runs `SELECT … WHERE id > last_seen` against a partial index, yields rows, returns to wait
- 100 subscribers = 1 stat thread
- Idle listeners run zero SQL queries

Idle cost is a single `stat(2)` per millisecond per database. Listener count scales for free because the wake signal is a file stat instead of a polling query.

`SharedWalWatcher` (in `honker-core`) owns the poll thread and fans out to N subscribers via bounded `SyncSender<()>` channels keyed by subscriber id. Each `db.wal_events()` call registers a subscriber and returns a handle whose `Drop` auto-unsubscribes, so a dropped listener causes the bridge thread's `rx.recv() -> Err` and exits cleanly.
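The fan-out shape is straightforward to mirror with stdlib threads and bounded queues. A hypothetical Python analogue of `SharedWalWatcher` (the real one is Rust, in `honker-core`; names and cadence here are illustrative):

```python
import queue
import threading
import time

class SharedWatcher:
    """One poll thread; N subscribers, each behind a bounded queue of ticks."""

    def __init__(self, poll, cadence_s=0.001):
        self._poll = poll                  # callable returning a change signal
        self._last = poll()                # captured before the thread starts
        self._subs = {}
        self._lock = threading.Lock()
        threading.Thread(target=self._run, args=(cadence_s,),
                         daemon=True).start()

    def subscribe(self, sub_id):
        # maxsize=1: a burst of commits coalesces into one pending tick.
        q = queue.Queue(maxsize=1)
        with self._lock:
            self._subs[sub_id] = q
        return q

    def unsubscribe(self, sub_id):
        with self._lock:
            self._subs.pop(sub_id, None)

    def _run(self, cadence_s):
        while True:
            cur = self._poll()
            if cur != self._last:
                self._last = cur
                with self._lock:
                    for q in self._subs.values():
                        try:
                            q.put_nowait(())   # never block the poll thread
                        except queue.Full:
                            pass               # subscriber already has a tick
            time.sleep(cadence_s)

signal = [0]                             # stand-in for (size, mtime) of .db-wal
watcher = SharedWatcher(lambda: signal[0])
ticks = watcher.subscribe("listener-1")
signal[0] += 1                           # simulate a WAL commit
tick = ticks.get(timeout=2)
```

The bounded queue plus `put_nowait` is the key property: a slow subscriber loses duplicate ticks, never correctness, because each wake is followed by a catch-up `SELECT`.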

### Queue schema

[](#queue-schema)

- `_honker_live`: pending + processing rows
- Partial index: `(queue, priority DESC, run_at, id) WHERE state IN ('pending','processing')`
- Claim = one `UPDATE … RETURNING` via that index
- Ack = one `DELETE`
- Retry-exhausted → `_honker_dead` (never scanned by claim path)

A partial index on state means the claim hot path is bounded by the *working-set* size rather than the *history* size. A queue with 100k dead rows claims as fast as a queue with zero.

### Claim iterator

[](#claim-iterator)

- `async for job in q.claim(id)` yields one job at a time via `claim_batch(id, 1)`
- `Job.ack()` is one `DELETE` in its own transaction. Return is an honest bool: `True` iff the claim was still valid, `False` if the visibility window elapsed and another worker reclaimed.
- Wakes on WAL commit from any process; a 5 s paranoia poll is the only fallback.

For batched work, call `claim_batch(worker_id, n)` directly and ack with `queue.ack_batch(ids, worker_id)`. The library doesn't hide batching behind the iterator. The per-tx cost and the at-most-once visibility semantics are easier to reason about when the API doesn't try to be clever.

### Transactional coupling

[](#transactional-coupling)

- `notify()` is a SQL scalar function registered on the writer connection
- INSERTs into `_honker_notifications` under the caller's open tx
- `queue.enqueue(…, tx=tx)` and `stream.publish(…, tx=tx)` do the same
- Rollback drops the job/event/notification with the rest of the tx

This is the transactional outbox pattern, by default, without a library to install. Business write and side-effect enqueue commit or roll back together. There is no separate dispatch table and no separate dispatcher process: the side-effect row *is* the committed row, and any process watching the WAL picks it up within ~1 ms.

### Over-triggering quickly is better than over-triggering from polling

[](#over-triggering-quickly-is-better-than-over-triggering-from-polling)

- A WAL change wakes *every* subscriber on that `Database`, not just the ones whose channel committed
- Each wasted wake = one indexed SELECT (microseconds)
- A missed wake = a silent correctness bug

The library prefers waking ten listeners that don't care over missing one that does. Channel filtering happens in the `SELECT` path instead of the trigger notification. [Many small queries are efficient in SQLite](https://www.sqlite.org/np1queryprob.html).

### Retention

[](#retention)

- Queue jobs persist until ack; retry-exhausted rows move to `_honker_dead`
- Stream events persist; each named consumer tracks its own offset
- Notify is fire-and-forget and not auto-pruned

The caller chooses retention per primitive. `db.prune_notifications(older_than_s=…, max_keep=…)` is a tool you invoke. This keeps retention policy visible in the caller's code instead of inherited from a library default.

## Crash recovery

[](#crash-recovery)

- Rollback drops jobs/events/notifications with your business write (SQLite ACID).
- SIGKILL mid-tx is safe. WAL rollback on next open leaves no stale state. Verified in `tests/test_crash_recovery.py` (subprocess killed pre-COMMIT, `PRAGMA integrity_check == 'ok'`, fresh notifies still flow).
- If a worker crashes mid-job, the claim expires after `visibility_timeout_s` (default 300 s) and another worker reclaims. `attempts` increments. After `max_attempts` (default 3), the row moves to `_honker_dead`.
- Listeners offline during a prune miss the pruned events. For durable replay, use `db.stream()`, which tracks per-consumer offsets.
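The reclaim-then-dead-letter path can be sketched as two statements over simplified tables — a hypothetical reduction of the sweep, not the extension's actual SQL:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE _honker_live (id INTEGER PRIMARY KEY, queue TEXT,
    state TEXT, attempts INTEGER DEFAULT 0, visible_until REAL);
CREATE TABLE _honker_dead (id INTEGER PRIMARY KEY, queue TEXT,
    attempts INTEGER);
""")
now = time.time()
# A job claimed by a worker that died: visibility window already elapsed.
db.execute("INSERT INTO _honker_live VALUES (1, 'emails', 'processing', 2, ?)",
           (now - 1,))

MAX_ATTEMPTS = 3
# Expired claims go back to 'pending' with attempts incremented...
db.execute("""UPDATE _honker_live
    SET state='pending', attempts=attempts+1, visible_until=NULL
    WHERE state='processing' AND visible_until < ?""", (now,))
# ...and rows past max_attempts move to the dead-letter table.
db.execute("""INSERT INTO _honker_dead
    SELECT id, queue, attempts FROM _honker_live WHERE attempts >= ?""",
    (MAX_ATTEMPTS,))
db.execute("DELETE FROM _honker_live WHERE attempts >= ?", (MAX_ATTEMPTS,))
db.commit()

live = db.execute("SELECT COUNT(*) FROM _honker_live").fetchone()[0]
dead = db.execute("SELECT COUNT(*) FROM _honker_dead").fetchone()[0]
```

Because both statements run in the same database, the move to `_honker_dead` and the delete from `_honker_live` can share one transaction; a crash between them never loses the row.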

## Wiring into your web framework

[](#wiring-into-your-web-framework)

Honker ships no framework plugins. The API is small and the integration is a few lines of glue:

```
# FastAPI: enqueue in a request, run workers via lifespan.
@app.on_event("startup")
async def _start_workers():
    async def worker_loop():
        async for job in db.queue("emails").claim("worker"):
            await honker._worker.run_task(
                job, send_email, timeout=30, retries=3, backoff=2.0
            )
    app.state._worker = asyncio.create_task(worker_loop())

@app.post("/orders")
async def create_order(order: dict):
    with db.transaction() as tx:
        tx.execute("INSERT INTO orders (user_id) VALUES (?)", [order["user_id"]])
        db.queue("emails").enqueue({"to": order["email"]}, tx=tx)
    return {"ok": True}
```

SSE endpoints are ~30 lines of `async def stream(...): yield f"data: ...\n\n"` over `db.listen(channel)` or `db.stream(name).subscribe(...)`. For Django/Flask, run the worker as a dedicated CLI process (same pattern as Celery/RQ).
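The SSE glue can be sketched framework-agnostically. Here `fake_listen` is a stand-in for `db.listen(channel)` (the real iterator blocks on WAL wakes; this one just yields two notifications and ends):

```python
import asyncio
import json
from types import SimpleNamespace

async def fake_listen(channel):
    # Stand-in for db.listen(channel): two notifications, then exhaustion.
    for payload in ({"id": 1}, {"id": 2}):
        yield SimpleNamespace(channel=channel, payload=payload)

async def sse_events(notifications):
    # Each notification becomes one SSE frame: "data: <json>\n\n".
    async for n in notifications:
        yield f"data: {json.dumps(n.payload)}\n\n"

async def collect():
    return [frame async for frame in sse_events(fake_listen("orders"))]

frames = asyncio.run(collect())
```

In a real endpoint you would hand `sse_events(db.listen("orders"))` to your framework's streaming response type and let the WAL wake drive delivery.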

## Performance

[](#performance)

Handles thousands of messages per second on a modern laptop, with cross-process wake latency bounded by the 1 ms stat-poll cadence (~1–2 ms median on M-series). Run `bench/wake_latency_bench.py` and `bench/real_bench.py` to measure on your hardware.

## Development

[](#development)

Layout:

```
honker-core/              # Rust rlib shared across all bindings (in-tree, published on crates.io)
honker-extension/         # SQLite loadable extension (cdylib, published on crates.io)
packages/
  honker/                 # Python package (PyO3 cdylib + Queue/Stream/Outbox/Scheduler)
  honker-node/            # napi-rs Node.js binding           [git submodule]
  honker-rs/              # ergonomic Rust wrapper            [git submodule]
  honker-go/              # Go binding                        [git submodule]
  honker-ruby/            # Ruby binding                      [git submodule]
  honker-bun/             # Bun binding                       [git submodule]
  honker-ex/              # Elixir binding                    [git submodule]
  honker-cpp/             # C++ binding                       [git submodule]
tests/                    # integration tests (cross-package)
bench/                    # benches
site/                     # honker.dev (Astro)                [git submodule]
```

Each binding repo is published independently (PyPI / npm / crates.io / Hex / RubyGems) and pinned here as a git submodule; `honker-core` + `honker-extension` live in-tree since they're the shared foundation every binding depends on. Clone with `git clone --recursive` or run `git submodule update --init --recursive` after a normal clone.

```
make test                   # default: rust + python + node (fast, ~10s)
make test-python-slow       # soak + real-time cron tests (~2 min)
make test-all               # everything including slow marks

make build                  # PyO3 maturin develop + loadable extension

python bench/wake_latency_bench.py --samples 500
python bench/real_bench.py --workers 4 --enqueuers 2 --seconds 15
python bench/ext_bench.py
```

### Coverage

[](#coverage)

One-time: `make install-coverage-deps` (installs `coverage.py` + `cargo-llvm-cov`).

```
make coverage               # both HTML reports into coverage/
make coverage-python        # honker python paths
make coverage-rust          # honker-core Rust unit tests
```

Python coverage reflects the full honker test suite (~92% of `packages/honker/`). Rust coverage reflects only `cargo test`. Many `honker_ops.rs` paths (`honker_enqueue`, `honker_claim_batch`, etc.) are only exercised via the Python test suite and won't show up in the Rust report. Combined cross-language coverage is non-trivial (LLVM profile-data merging across PyO3 boundaries) and is deferred.

## License

[](#license)

Apache 2.0. See [LICENSE](https://github.com/russellromney/honker/blob/main/LICENSE).

---

## [HN-TITLE] 17. Incident with multiple GitHub services

- **Source**: [https://www.githubstatus.com/incidents/myrbk7jvvs6p](https://www.githubstatus.com/incidents/myrbk7jvvs6p)
- **Site**: githubstatus.com
- **Submitter**: bwannasek (Hacker News)
- **Submitted**: 2026-04-23 16:21 UTC (Hacker News)
- **HN activity**: 218 points · [107 comments](https://news.ycombinator.com/item?id=47877644)
- **Length**: 169 words (~1 min read)
- **Language**: en

## Resolved

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Posted Apr 23, 2026 - 17:30 UTC

## Update

Webhooks is operating normally.

Posted Apr 23, 2026 - 17:10 UTC

## Update

Many services are mitigated and we are validating the remaining services.

Posted Apr 23, 2026 - 17:04 UTC

## Update

The degradation affecting Actions and Copilot has been mitigated. We are monitoring to ensure stability.

Posted Apr 23, 2026 - 17:03 UTC

## Update

We have identified the root problem and are working on mitigation.

Posted Apr 23, 2026 - 16:52 UTC

## Update

Actions is experiencing degraded performance. We are continuing to investigate.

Posted Apr 23, 2026 - 16:34 UTC

## Update

We are investigating multiple unavailable services.

Posted Apr 23, 2026 - 16:19 UTC

## Investigating

We are investigating reports of degraded availability for Copilot and Webhooks.

Posted Apr 23, 2026 - 16:12 UTC

This incident affected: Webhooks, Actions, and Copilot.

---

## [HN-TITLE] 18. Used La Marzocco machines are coveted by cafe owners and collectors

- **Source**: [https://www.nytimes.com/2026/04/20/dining/la-marzocco-espresso-machine.html](https://www.nytimes.com/2026/04/20/dining/la-marzocco-espresso-machine.html)
- **Site**: nytimes.com
- **Submitter**: mitchbob (Hacker News)
- **Submitted**: 2026-04-21 03:17 UTC (Hacker News)
- **HN activity**: 43 points · [76 comments](https://news.ycombinator.com/item?id=47844085)

> scrape failed: http 403

---

## [HN-TITLE] 19. French government agency confirms breach as hacker offers to sell data

- **Source**: [https://www.bleepingcomputer.com/news/security/french-govt-agency-confirms-breach-as-hacker-offers-to-sell-data/](https://www.bleepingcomputer.com/news/security/french-govt-agency-confirms-breach-as-hacker-offers-to-sell-data/)
- **Site**: BleepingComputer
- **Author**: Bill Toulas
- **Published**: 2026-04-21
- **HN activity**: 361 points · [122 comments](https://news.ycombinator.com/item?id=47877366)
- **Length**: 483 words (~3 min read)
- **Language**: en-us

![French govt agency confirms breach as hacker offers to sell data](https://www.bleepstatic.com/content/hl-images/2026/04/21/Titres.jpg)

France Titres, the French government agency for issuing and managing administrative documents, has disclosed a data breach after a threat actor claimed responsibility for the attack and the theft of citizen data.

Also known as Agence nationale des titres sécurisés (ANTS), the administrative body operates under the French Ministry of the Interior, serving as the managing authority for official identity and registration documents in France. This includes driver’s licenses, national ID cards, passports, and immigration documents.

According to an announcement the agency published yesterday, the attack occurred last week, and while the investigation is still ongoing, several data types for an undisclosed number of individuals may have been exposed.


“On Wednesday, April 15, 2026, the National Agency for Secure Documents (ANTS) detected a security incident that may involve the disclosure of data from individual and professional accounts on the ants.gouv.fr portal,” [reads ANTS’s announcement](https://ants.gouv.fr/toute-l-actualite/incident-de-securite-relatif-au-portail-antsgouvfr).

The types of data that may have been exposed are:

- Login ID
- Full name
- Email address
- Date of birth
- Unique account identifier
- Postal address (for some)
- Place of birth (for some)
- Phone number (for some)

ANTS stated that it is currently in the process of notifying those identified as impacted.

The agency noted that the exposed information does not allow unauthorized access to its electronic portals. However, the same data can be used in phishing and social engineering attacks.

“No action is required from users. However, they are advised to remain highly vigilant regarding any suspicious or unusual messages they may receive (SMS, phone calls, emails, etc.) that appear to come from ANTS,” the agency warned.

ANTS has notified the data protection authority (CNIL), the Paris Public Prosecutor, and has also involved the national cybersecurity agency (ANSSI) in the response effort. The agency warned that the sale or dissemination of the data is illegal.

### 19 million records claimed stolen

On April 16, a threat actor using the moniker ‘breach3d’ claimed the attack on ANTS on hacker forums, alleging to be holding up to 19 million records.

The threat actor claims that the stolen data contains full names, contact details, birth data, home addresses, account metadata, and gender and civil status.

The data is being offered for sale for an undisclosed amount, which means it has not been broadly leaked yet.

ANTS says that users do not need to take any action but recommends exercising "extreme caution" regarding suspicious or unusual communications over SMS, voice, and email that appear to come from the agency.

BleepingComputer has contacted ANTS to ask about the threat actor’s allegations, but we have not received a response as of publishing.


---

## [HN-TITLE] 20. I spent years trying to make CSS states predictable

- **Source**: [https://tenphi.me/blog/why-i-spent-years-trying-to-make-css-states-predictable/](https://tenphi.me/blog/why-i-spent-years-trying-to-make-css-states-predictable/)
- **Site**: Andrey Yamanov
- **Author**: Andrey Yamanov
- **Published**: 2026-04-23
- **HN activity**: 53 points · [17 comments](https://news.ycombinator.com/item?id=47875025)
- **Length**: 1.5K words (~7 min read)
- **Language**: en

Have you ever changed the order of two CSS rules and broken a component without changing the logic?

```
.btn:hover     { background: dodgerblue; }
.btn[disabled] { background: gray; }
```

Both selectors have specificity `(0, 1, 1)`. When a button is both hovered and disabled, the browser falls back to source order. If the `:hover` rule comes last, the disabled button turns blue. If the `[disabled]` rule comes last, it stays gray.

That sounds small, but it points to a bigger problem: component state in CSS often works by overlap.

As long as a component has only one or two states, that overlap feels manageable. Once you add `:hover`, `:active`, `disabled`, dark mode, breakpoints, data attributes, container queries, and overrides, it stops feeling manageable very quickly. You are no longer just writing styles. You are maintaining a resolution system in your head.

And that showed up not only as accidental conflicts, but as a growing difficulty in customizing existing components safely as real requirements piled up.

That was the problem I kept running into while building component systems. Not on toy examples, but on real buttons, inputs, panels, dropdowns, and design-system primitives. The hardest part was not writing the first version of a component. It was extending it later without reopening the entire state-resolution problem.

At some point I stopped asking, “How do I write this selector?” and started asking a better question:

**What if component state could be expressed declaratively, while the compiler handled the selector logic needed to make it deterministic?**

That question eventually became [Tasty](https://tasty.style).

## The idea in one minute

Instead of writing selectors that compete through cascade and specificity, I wanted to describe a property’s possible states as a map:

```
import { tasty } from '@tenphi/tasty';

const Button = tasty({
  as: 'button',
  styles: {
    fill: {
      '': '#primary',
      ':hover': '#primary-hover',
      ':active': '#primary-pressed',
      '[disabled]': '#surface',
    },
  },
});
```

Applied in order of priority, this means:

- when disabled use `#surface`
- otherwise, on active use `#primary-pressed`
- otherwise, on hover use `#primary-hover`
- otherwise use `#primary`

The important part is what happens next.

Tasty compiles that state map into selectors that cannot overlap:

```
/* [disabled] wins outright */
.t0[disabled]                                { background: var(--surface-color); }
/* :active is excluded when disabled */
.t0:active:not([disabled])                   { background: var(--primary-pressed-color); }
/* :hover is excluded when :active or disabled */
.t0:hover:not(:active):not([disabled])       { background: var(--primary-hover-color); }
/* default is excluded when anything above matches */
.t0:not(:hover):not(:active):not([disabled]) { background: var(--primary-color); }
```

Now there is no argument for the cascade to settle. No two branches can match at the same time.

And the real payoff comes later. Extending or changing this map is far easier than reopening the equivalent selector logic in traditional CSS.

That is the whole idea:

**If the author already defined the priority, the generated selectors should make that priority unambiguous.**
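The exclusion pattern above can be sketched mechanically: walk the state map from highest to lowest priority and append a `:not()` for every higher-priority state already emitted. The following is a simplified Python illustration of that idea, not Tasty's actual compiler:

```python
def compile_states(prop, states):
    """Compile a priority-ordered state map into mutually exclusive CSS rules.

    `states` is a list of (selector_suffix, value) pairs, highest priority
    first; '' marks the default state. Each rule excludes every
    higher-priority state with :not(), so no two rules can match the same
    element at the same time. Illustrative sketch only.
    """
    rules = []
    higher = []  # higher-priority suffixes seen so far
    for suffix, value in states:
        exclusions = "".join(f":not({h})" for h in reversed(higher))
        rules.append(f".t0{suffix}{exclusions} {{ {prop}: {value}; }}")
        if suffix:
            higher.append(suffix)
    return rules

rules = compile_states("background", [
    ("[disabled]", "var(--surface-color)"),
    (":active", "var(--primary-pressed-color)"),
    (":hover", "var(--primary-hover-color)"),
    ("", "var(--primary-color)"),
])
for rule in rules:
    print(rule)
```

With the button map from earlier, this prints the same four mutually exclusive selectors shown above, default branch included.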

## Why this matters more than the button example suggests

A hovered disabled button is just the easiest way to see the problem. The real pain starts when states intersect in less obvious ways.

Maybe dark mode can come from a root attribute, or from `prefers-color-scheme`, or from both. Maybe spacing changes inside a narrow container, but only on tablet widths. Maybe a destructive variant behaves differently on hover but not when loading. Maybe a parent theme toggles a child override.

Each one of those rules is understandable in isolation. The difficult part is the interaction surface between them.

That interaction surface is where CSS starts feeling fragile. Small edits can change which branches overlap. A harmless refactor can turn into a source-order bug. Extending an existing component can mean reopening selector logic you thought was already settled.

I wanted a model where adding a new state did not mean mentally re-deriving the whole selector matrix.

## Why it took so long

The core idea is simple. Turning it into a real tool was the hard part.

Getting from “this works for simple state conditions” to “this can support real-world component systems” took several years and hundreds of iterations.

The hard part was never producing one clever selector. The hard part was building a system that stayed coherent when all of these showed up together:

- pseudo-classes like `:hover` and `:active`
- attributes, boolean modifiers, and value-based modifiers
- root-level state
- media queries
- container queries
- nested and compound selectors
- extending styles and overriding them safely
- typed APIs on top of the styling model

Every time the model got broader, I had to check whether the original idea still held up. Sometimes it did. Sometimes it very much did not.

There were stretches where I had to break the DSL, rethink how states should be represented internally, and rebuild large parts of the compiler to preserve the same promise: if the author defines priority, the generated selectors should make that priority unambiguous.

Some of the difficulty was technical. Some of it was conceptual.

The technical side was about parsing, normalization, selector generation, caching, extension rules, and making the output fast enough to be practical.

The conceptual side was harder. I had to keep deciding what Tasty actually was.

Was it a nicer CSS object format? An atomic CSS generator? A design-system language? A compiler for stateful component styles? In practice it kept becoming all of those at once, which meant the boundaries had to be redrawn again and again before the whole thing felt internally consistent.

For a long time I honestly did not know whether the idea could scale cleanly enough to justify the effort. It worked in pockets early. Turning it into something I could trust across a design system was the long part.

And this is not just an experiment in the abstract. Tasty has powered [Cube UI Kit](https://github.com/cube-js/cube-ui-kit) from the beginning. That system now spans 100+ components and powers [Cube Cloud](https://cube.dev/product/cube-cloud), a real enterprise product. Early versions were absolutely experimental internally. But the model earned its shape under production pressure and team feedback.

## The part I care about most

I do not think “mutually exclusive selectors” are interesting because they are clever.

I think they are interesting because they remove a category of ambiguity that should not be the author’s job in the first place.

When I style a component, I want to describe what it should look like in each meaningful state. I do not want to manually encode the browser’s tie-breaker logic every time those states intersect.

That is the payoff Tasty is chasing:

- predictable component behavior
- fewer accidental regressions from source order
- easier extension of existing components
- a styling model that gets more valuable as the design system gets more complex

If you are styling a small landing page, this is probably too much machinery. Plain CSS is often the right answer.

But if you are building components that need to survive years of iteration, variant growth, theme expansion, and multiple authors, predictability starts compounding in a very practical way.

## A slightly bigger example

Here is the same idea with a few more moving parts:

```
const Panel = tasty({
  styles: {
    flow: {
      '': 'column',
      '@media(w >= 768px)': 'row',
    },
    fill: {
      '': '#surface',
      'theme=danger & :hover': '#danger-hover',
      '@root(schema=dark)': '#surface-dark',
    },
    padding: {
      '': '4x',
      '@(sidebar, w < 300px)': '2x',
    },
  },
});
```

This is the point where I find the model becomes more useful than ordinary selector authoring.

Three properties, each with a different set of concerns — media queries, container queries, modifiers, root state, pseudo-classes — and the author never has to think about how they interact with each other. The compiler already knows.

## What this post is, and what it is not

This is not the full Tasty tour.

Tasty also has typed component APIs, sub-elements, SSR integrations, zero-runtime extraction, editor tooling, linting, tokens, recipes, and more. Those all matter, and they are part of why the tool is useful in practice.

But they are downstream of the main idea.

The main idea is still this:

**component states should be easy to describe and hard to make ambiguous.**

That sentence took years to turn into a tool I was comfortable releasing.

## If this resonates

You can try Tasty in the browser with the [playground](https://tasty.style/playground), or read the [docs](https://tasty.style/docs) if you want the full language and feature set.

If you do try it, I would genuinely love feedback. The most useful feedback is rarely “this is cool.” It is usually something more specific:

- where the model clicked immediately
- where it felt unfamiliar
- where naming was confusing
- where the docs skipped a mental step
- where the abstraction solved a real problem, or failed to

That kind of feedback has shaped the project from the beginning, and it still does. If something feels confusing, awkward, or missing, the best place to share it is [GitHub Issues](https://github.com/tenphi/tasty/issues).

If you made it all the way to the end, thank you for reading. This one means a lot to me, because it is really about a problem I spent years trying to solve.

**Links**: [Docs](https://tasty.style) | [Playground](https://tasty.style/playground) | [GitHub](https://github.com/tenphi/tasty) | [npm](https://www.npmjs.com/package/@tenphi/tasty)

---

## [HN-TITLE] 21. Arch Linux Now Has a Bit-for-Bit Reproducible Docker Image

- **Source**: [https://antiz.fr/blog/archlinux-now-has-a-reproducible-docker-image/](https://antiz.fr/blog/archlinux-now-has-a-reproducible-docker-image/)
- **Site**: Robin Candau
- **Author**: Robin Candau
- **Published**: 2026-04-21
- **HN activity**: 317 points · [106 comments](https://news.ycombinator.com/item?id=47871519)
- **Length**: 515 words (~3 min read)
- **Language**: en


As a follow-up to the [similar milestone reached for our WSL image a few months ago](https://antiz.fr/blog/the-archlinux-wsl-image-is-now-reproducible/), I’m happy to share that Arch Linux now has a bit-for-bit reproducible Docker image!

This bit-for-bit reproducible image is distributed under a new [“repro” tag](https://hub.docker.com/layers/archlinux/archlinux/repro).  
The reason for this is one *noticeable* caveat: to ensure reproducibility, the pacman keys have to be stripped from the image, meaning that pacman is not usable *out of the box* in this image. While waiting for a suitable solution to this technical constraint, we are providing this reproducible image under a dedicated tag as a first milestone.

In practice, that means that users will need to (re)generate the pacman keyring in the container before being able to install and update packages via `pacman`, by running: `pacman-key --init && pacman-key --populate archlinux` (whether interactively at first start or from a `RUN` statement in a Dockerfile if using this image as base).  
Distrobox users can run this as a pre-init hook: `distrobox create -n arch-repro -i docker.io/archlinux/archlinux:repro --pre-init-hooks "pacman-key --init && pacman-key --populate archlinux"`

The bit-for-bit reproducibility of the image is confirmed by digest equality across builds (via `podman inspect --format '{{.Digest}}' <image>`) and by using [diffoci](https://github.com/reproducible-containers/diffoci) to compare builds.  
Documentation to reproduce this Docker image is available [here](https://gitlab.archlinux.org/archlinux/archlinux-docker/-/blob/master/REPRO.md).

Building the base rootFS for the Docker image in a deterministic way was the main challenge, but it reuses [the same process as for our WSL image](https://gitlab.archlinux.org/archlinux/archlinux-wsl/-/commit/7c0340e26358048f3f8ee03b3ab3aea666751712) (as both share the same rootFS build system).

The main Docker-specific adjustments include (see also the related `diffoci` reports):

- Set `SOURCE_DATE_EPOCH` and honor it in the `org.opencontainers.image.created` LABEL in the Dockerfile

```
TYPE    NAME                  INPUT-0    INPUT-1
Cfg     ctx:/config/config    ?          ?
```

- Remove the ldconfig auxiliary cache file (which introduces non-determinism) from the built image in the Dockerfile:

```
TYPE    NAME                            INPUT-0                                                             INPUT-1
File    var/cache/ldconfig/aux-cache    656b08db599dbbd9eb0ec663172392023285ed6598f74a55326a3d95cdd5f5d0    ffee92304701425a85c2aff3ade5668e64bf0cc381cfe0a5cd3c0f4935114195
```

- Normalize timestamps during `docker build` / `podman build` using the `--source-date-epoch=$SOURCE_DATE_EPOCH` and `--rewrite-timestamp` options:

```
TYPE    NAME                 INPUT-0                          INPUT-1
File    etc/                 2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    etc/ld.so.cache      2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    etc/os-release       2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    sys/                 2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    var/cache/           2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    var/cache/ldconfig/  2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    proc/                2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
File    dev/                 2026-03-31 07:57:46 +0000 UTC    2026-03-31 07:59:21 +0000 UTC
```
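The effect of timestamp normalization can be illustrated in miniature: an uncompressed tar archive of identical content hashes differently when member mtimes vary, and bit-identically once every mtime is pinned to a fixed epoch. This is a standalone Python sketch of the principle, not the actual image build:

```python
import hashlib
import io
import tarfile

def build_tar(mtime: int) -> bytes:
    """Build an uncompressed tar with one file, pinning all other metadata."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        data = b"hello reproducible world\n"
        info = tarfile.TarInfo(name="etc/example")
        info.size = len(data)
        info.mtime = mtime          # the only varying field in this sketch
        info.uid = info.gid = 0
        info.uname = info.gname = "root"
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Two "builds" at different wall-clock times produce different digests...
assert digest(build_tar(1_700_000_000)) != digest(build_tar(1_700_000_100))

# ...but pinning mtime to a fixed SOURCE_DATE_EPOCH makes them bit-identical.
SOURCE_DATE_EPOCH = 1_700_000_000
assert digest(build_tar(SOURCE_DATE_EPOCH)) == digest(build_tar(SOURCE_DATE_EPOCH))
```

The `--source-date-epoch` / `--rewrite-timestamp` build options apply the same normalization to every file in the image layers.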

You can check [the related change set in our archlinux-docker repository](https://gitlab.archlinux.org/archlinux/archlinux-docker/-/merge_requests/96/diffs) for more details.  
Thanks to [Mark](https://hegreberg.io/) for his help on that front!

This represents yet another meaningful achievement regarding our general “reproducible builds” efforts and I’m already looking forward to the next step! 🤗

For what it’s worth, I’m considering eventually setting up a rebuilder for this Docker image (as well as for [the WSL image](https://gitlab.archlinux.org/archlinux/archlinux-wsl/-/blob/main/REPRO.md) and any future reproducible images) on my server in order to periodically and automatically rebuild the latest available image, verify its reproducibility status, and share build logs / results publicly somewhere (if I find the time to get to it 👼).

---

## [HN-TITLE] 22. Advanced Packaging Limits Come into Focus

- **Source**: [https://semiengineering.com/advanced-packaging-limits-come-into-focus/](https://semiengineering.com/advanced-packaging-limits-come-into-focus/)
- **Site**: Semiconductor Engineering
- **Author**: Gregory Haley
- **Published**: 2026-03-19
- **HN activity**: 31 points · [5 comments](https://news.ycombinator.com/item?id=47849628)
- **Length**: 3.1K words (~14 min read)
- **Language**: en-US

**Key Takeaways:**

- Packaging is now a performance variable. Substrate, bonding, and process sequence determine what can be built at scale.
- Warpage underlies most advanced packaging failures and gets harder to control as package sizes grow.
- Every proposed solution, such as glass, panel processing, and backside power, solves one problem while creating another.

* * *

Moore’s Law has shifted toward advanced packaging over the past few years, but the limits of that approach are just now coming into focus.

AI and HPC designs are growing larger and more complex, pushing the next barriers toward package mechanics and process control rather than interconnect density alone. Warpage, glass fragility, hybrid-bond yield, temporary bonding variation, and substrate limitations are becoming increasingly difficult to manage as structures get thinner, larger, and more heterogeneous.

These issues were a recurring theme at this year’s iMAPS conference, and have cropped up in recent interviews, all pointing to the same conclusion — packaging is entering a phase in which mechanical and process-control problems are complicating continued scaling.

That matters because packaging now sits much closer to the center of system performance. It no longer makes sense to talk about the architecture of advanced AI systems as if the package were a passive shell wrapped around the real innovation. Power delivery, thermals, interconnect density, substrate behavior, and process sequence all affect what can be built and what can be manufactured economically.

“What really drives performance today is not really the number of flops, the teraflops, or the petaflops per GPU, but rather the system architecture and the system performance as a whole,” said Sandeep Razdan, director of the Advanced Technology Group at NVIDIA, during his keynote at iMAPS.

Once system architecture becomes the performance driver, packaging stops being a downstream implementation detail and becomes part of the performance equation. The substrate, the carrier, the bonding interface, the thermal path, and even the order in which process steps are performed all matter more.

Those elements are deeply connected. Warpage affects chucking and alignment. Alignment affects bonding yield. Glass can improve flatness and dimensional stability, but it also introduces brittleness and different failure modes. Thinning for backside processing depends on temporary bonding materials, grinding uniformity, and clean debonding. Even substrate shortages are only partly a supply problem. They also reflect broader uncertainty, about which platforms can still scale mechanically, electrically, and economically for advanced AI packages.

**Warpage moves to center stage**  
Warpage may be the most useful place to start, because it sits beneath so many of the other problems. It’s not just a nuisance that shows up late in assembly. More often, it is the visible result of deeper material and structural imbalances built into the stack from the beginning. Those imbalances become more severe as package sizes grow, as more silicon is placed on top of organic materials, and as more layers with different thermal and mechanical behavior are pushed through increasingly complex process flows.

“Panel warpage is fundamentally driven by thermo-mechanical CTE mismatch and stiffness imbalances across the stack,” said Hamed Gholami Derami, strategic technologist for advanced semiconductor packaging at [Brewer Science](https://semiengineering.com/entities/brewer-science/). “There are several different types of polymers with different glass transition temperatures used in the same stack. Going above the Tg (glass transition temperature) of any of these materials causes a sharp drop in modulus and an increase in CTE (coefficient of thermal expansion), which increases warpage. Other factors that affect the panel warpage are layer thickness (direct correlation), cure shrinkage of polymers (causes residual stress and increases warpage), and copper/metal density in the stack (more copper leads to more warpage).”

What this means is advanced packages are no longer relatively simple structures made from a narrow set of materials with reasonably predictable interactions. They are mechanically asymmetrical systems. Different layers expand, soften, shrink, and store stress differently. A stack may seem stable at one temperature and become unstable at another. A cure step that improves one material can distort another. A copper-rich region that improves electrical performance can alter the stiffness balance and increase deformation. This becomes much more consequential when the package gets larger, and the alignment budgets tighten.

“In the packaging world, it’s the worst of all worlds,” said Mike Kelly, vice president, chiplets/FCBGA integration at [Amkor](https://semiengineering.com/entities/amkor-technology/). “You start with those organic substrates with high CTE, and then you’re putting lots of low-CTE silicon on top. So it’s imbalanced, and when it heats up it’s going to be anything but flat.”
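The scale of that imbalance is easy to ballpark. The CTE figures below are illustrative textbook-order values, not numbers from the article, but they show why the mismatch dwarfs micron-scale alignment budgets:

```python
# Back-of-the-envelope CTE mismatch (illustrative values, not from the article).
cte_si = 2.6e-6        # silicon, 1/K
cte_organic = 17e-6    # organic substrate, 1/K (typical order of magnitude)
span_mm = 100.0        # package edge length
delta_t = 200.0        # reflow-scale temperature excursion, K

# Free (unconstrained) expansion of each layer over the span:
expansion_si = cte_si * delta_t * span_mm
expansion_organic = cte_organic * delta_t * span_mm

# The difference is what the bonded stack must absorb -- as stress,
# as bowing (warpage), or as failure.
mismatch_mm = expansion_organic - expansion_si
print(f"mismatch over {span_mm:.0f} mm: {mismatch_mm * 1000:.0f} µm")
```

With these numbers the mismatch comes out to roughly 288 µm across a 100 mm package, which is why a heated organic-plus-silicon stack "is going to be anything but flat."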

This is why panel-scale discussions and glass discussions regularly overlap at conferences. As module sizes increase, wafer-scale economics and yield become less compelling, prompting greater interest in panel-scale processing.

“Glass is a totally different material than silicon, with a totally different manufacturing process,” said Lang Lin, principal product manager at [Synopsys](https://semiengineering.com/entities/synopsys-inc/). “The larger the glass panel you’re trying to make, the more warpage you will see. Today we talk about micrometers of warpage, but with glass it could be even larger. Warpage and residual stress in semiconductor packaging processes involving glass panels are cumulative.”

That concern showed up repeatedly in iMAPS presentations, whether the immediate subject was fan-out, glass carriers, or more advanced die stacking. At larger sizes and finer pitches, a slight bow that once might have been corrected through process adjustment can cascade into alignment problems, handling difficulties, and lower yield.

“We do a certain level of modeling to model the warpage beforehand, and then there are certain levers you can pull to control the warpage,” said Knowlton Olmstead, senior manager in the Wafer Services Business Unit at Amkor. “Some warpage can be tolerated during the assembly process, but if the warpage is too high it can lead to issues.”

Warpage is not merely a simulation output or a materials-science abstraction. At some point, it becomes a simple question of whether the structure can still be held, aligned, and processed repeatably.

**Glass solves some problems but creates others**  
Warpage is one of the big reasons glass keeps surfacing as a panel option in advanced packaging flows. On paper, it offers several attractive properties. It is flat, dimensionally stable, and can be matched much more closely to silicon than many organic materials can. In carrier form, it also creates useful optical options for debonding and alignment.

“Glass is very stable and very level,” said Wiwy Wudjud, engineering program manager at [ASE](https://semiengineering.com/entities/ase/). “It matches very closely to the CTE of silicon wafers. That’s why, using a glass carrier, we can reduce the warpage significantly in the process.”

A flatter structure is easier to bond accurately. A closer thermal match to silicon reduces one of the major sources of stress. For fine-pitch processes, both can directly improve alignment accuracy and process repeatability. Glass also offers transparency, which makes it attractive for optical alignment and for carrier applications that rely on UV or laser debonding.

But glass does not eliminate mechanical problems so much as shift them. While it reduces warpage, it introduces a more brittle material with different failure modes and much less tolerance for mishandling. As glass carriers get larger and are used more extensively in advanced packaging flows, edge damage, chipping, microcracks, and process-induced defects become harder to ignore.

“A glass carrier is no longer an alternative material,” said Wudjud in an iMAPS presentation. “It offers many benefits, but glass is inherently brittle in nature, which introduces reliability concerns, especially cracking and microcracking at the edge of the wafer, which is the weakest point.”

Materials can be flat, stable, and thermally attractive while still failing in ways that are difficult to detect early. Edge damage, microcracks, and cumulative handling defects matter much more when the material has a low tolerance for local damage. The problem becomes even more serious if carriers are reclaimed and reused, because small defects can propagate over time, reducing toughness before a more obvious failure occurs.

ASE focused on that issue in a presentation at iMAPS, emphasizing that edge-related damage in glass is not always captured well by conventional methods. The company even developed a pendulum impact test to evaluate edge toughness under conditions that more closely simulate real handling and packaging stresses.

“The weakest point is at the edge,” said Wudjud. “Failure in brittle materials like glass quickly initiates there, and conventional tests do not fully capture the edge-related damage or real handling conditions.”

**Hybrid bonding gets harder as pitch shrinks**  
Hybrid bonding often gets framed as the next logical step in density scaling, and in many ways it is. It offers the interconnect density and electrical performance needed for tighter die-to-die integration, especially as AI and HPC architectures continue to push for more bandwidth in less space. But the manufacturing challenges are changing as the pitch shrinks. At larger pitches, yield is still heavily influenced by defects and contamination. At smaller pitches, stress begins to dominate in ways that are less visible and often harder to control.

“For pitch sizes above 5 microns, the yield is mostly determined by defects,” said Brewer Science’s Derami. “However, as we shrink the pitch size, we gradually transition to a stress-driven regime, where below a 2 to 3 micron pitch, the yield is primarily stress-driven. This is mostly due to higher copper density at lower pitch sizes, which increases mechanical stress due to copper expansion and dielectric constraints.”

That distinction matters because it changes the dynamics of hybrid bonding. It is still true that contamination and topography control are critical, but once copper density increases and the interface becomes more mechanically constrained, the package can encounter a different class of problems. Stress becomes a part of the dominant failure physics, meaning it is no longer just a secondary concern riding behind cleanliness. As a result, improvements in defect control may no longer be sufficient to maintain yield as pitches continue to shrink.

“Copper hybrid bonding is super sensitive to any kind of particulate contamination because it’s essentially a glass-to-glass interface,” said Kelly. “There are no organics for compliance, so it only takes one nano-sized particle, and you basically lift the glass off and mess up a whole bunch of units on the wafer.”

In a more compliant structure, a small local defect may be partly absorbed or tolerated. In copper hybrid bonding, that tolerance is much lower. The challenge is not only to keep the surfaces clean, but to also manage planarity, oxide and copper topography, annealing behavior, and the mechanical interaction of a denser interconnect structure.

“When you look at the IC architecture side of things, this is where we start to get into hybrid bonding, because it’s required,” said Mark Gerber, group director for IC packaging and product management at [Cadence](https://semiengineering.com/entities/cadence-design-systems/), in a presentation at iMAPS. “You have to have hybrid bonding, and the primary driver for that is the timing considerations. When you’re doing silicon design and integration on the different IP blocks, speed and the timing sensitivity of these is very, very critical.”

![](https://i0.wp.com/semiengineering.com/wp-content/uploads/2026/03/Greg-Fig-1.jpg?resize=655%2C273&ssl=1)

**Fig. 1: Cadence’s Mark Gerber discusses 3D die/wafer stacking at iMAPS. Source: Gregory Haley/Semiconductor Engineering**

Hybrid bonding is not being pursued because it is easy. It is being pursued because more traditional interconnect schemes increasingly fall short in the face of bandwidth, latency, and power demands. As a result, packaging engineers are being pushed toward a process that becomes more sensitive in two directions at once. It remains highly vulnerable to contamination, while also becoming more vulnerable to stress as pitches shrink. The engineering burden is shifting from solving a single dominant problem to solving several tightly coupled problems simultaneously.

That also helps explain why simulation and process co-optimization are taking on a larger role. Companies need to model warpage and stress before manufacturing failures show up in yield, a point that applies especially to hybrid bonding, where small geometric or mechanical variations can propagate into larger integration problems downstream.

**Backside handling becomes part of the precision budget**  
The move toward thinner, denser, higher-performance structures that render hybrid bonding attractive also makes backside handling harder. As dies are thinned more aggressively, the support material beneath them becomes part of the precision budget. Grinding, temporary bonding, debonding, and cleaning are no longer secondary steps that can tolerate broad process variation.

“As devices get thinner, the grinding process becomes more critical and more challenging,” said Derami. “The total thickness variation of the temporary bonding materials directly affects the quality and uniformity of the thinned device and should be low enough to allow for extreme thinning, especially for HBM DRAM dies.”

Temporary bonding materials used to be discussed more as enabling layers, helpful but largely in the background. As device thickness keeps shrinking, that is no longer the case. If the temporary bonding layer varies too much in thickness, the grinding result will vary with it. That variation then affects downstream alignment, mechanical stability, and yield. The carrier and adhesive system are helping to define the precision limits, not simply facilitating the process.

Advanced packaging no longer consists of a series of independent unit processes that can each be optimized in isolation. It is becoming a cumulative mechanical history. Stress introduced in one step affects the margin available in the next. A slight positional shift after one process can narrow alignment tolerance in the next. A warpage problem that seems manageable early can become much harder to correct later, after additional layers and thermal excursions have been added.

“Each step will introduce some kind of stress into the system,” said Synopsys’ Lin. “You have to make sure each step does not create too much stress so the next step can proceed.”
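Lin’s point about per-step stress can be sketched as a toy budget model (all step names and numbers here are invented for illustration, not real process data):

```python
# Toy model (invented numbers): each packaging step consumes part of a shared
# mechanical budget, so later steps only remain viable if margin is left over.
STEPS = [
    ("grind",  0.30),   # aggressive thinning
    ("bond",   0.25),   # hybrid bonding
    ("debond", 0.15),   # carrier removal
    ("anneal", 0.20),   # thermal excursion
]

def remaining_margin(steps, budget=1.0):
    """Subtract each step's stress contribution from the total budget,
    failing fast when the budget is exhausted mid-flow."""
    for name, cost in steps:
        budget -= cost
        if budget < 0:
            raise RuntimeError(f"budget exhausted at step {name!r}")
    return budget

print(f"margin left for downstream variation: {remaining_margin(STEPS):.2f}")
```

The point of the sketch is only that the budget is shared: shaving stress in one step buys margin for every step after it.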

Backside processing offers a clever routing innovation, but it also creates a manufacturing burden. It changes how device structures are supported, cleaned, aligned, and kept intact. Exposed or thinned silicon can help with thermal path design, but it also makes the package more mechanically imbalanced and difficult to manage during later steps.

“With backside power, you put a carrier chip on top because you end up thinning the bulk silicon down to something like five microns,” said Amkor’s Kelly. “You’re almost removing it all, and then you bring power and I/O out the same side, but it’s the opposite side we’re used to.”

Residue and contamination make that burden heavier. Temporary bonding layers can leave residues after debonding, and if cleaning is not done properly, those residues can introduce downstream problems. The physical act of thinning is only part of the challenge. The assembly also has to emerge from the support and debonding sequence clean enough to continue through the rest of the process without adding new yield limiters.

**Substrate shortages are really substrate limits**  
Substrate shortages have been discussed for years as a supply-chain problem, and that remains part of the story, but the issue is now larger than mere availability. Advanced packaging is also pressing against the limits of what traditional substrate platforms can do gracefully as modules grow in size, power, and complexity.

“Everybody’s chasing that technology, but there’s just not enough 200-millimeter substrates around,” said Joe Roybal, senior vice president and general manager of the mainstream business unit at Amkor.

Demand remains high, and capacity does not always line up cleanly with what advanced package programs need. Package size is growing faster than confidence in the mechanical and economic margins of existing approaches.

“As the module size keeps increasing, you cannot fit a lot of units in a wafer, and the cost and the yield numbers don’t make sense at a wafer scale,” said Poulomi Mukherjee, process integration engineer at Applied Materials, in an iMAPS presentation. “If you want to keep up with the demand, we have to move to a higher form factor, which is the panel scale processing.”

**![](https://i0.wp.com/semiengineering.com/wp-content/uploads/2026/03/Greg-Fig-2.jpg?resize=826%2C314&ssl=1)**

**Fig. 2: Poulomi Mukherjee of Applied Materials discusses glass substrate challenges. Source: Gregory Haley/iMAPS**

That is one reason glass, panel processing, and alternative substrate concepts keep resurfacing in the same conversations. The industry is looking for a platform that can support larger modules, tighter integration, and more difficult thermal and power-delivery requirements without collapsing under its own mechanical complexity. The problem is that each proposed solution solves one class of issues while exposing another. Panel processing may improve economics, but it amplifies warpage and cumulative stress. And backside approaches may improve electrical performance, but they require more aggressive thinning and tight process control.

It is also clear that adoption of new platforms will not be uniform across applications. The enthusiasm around glass at iMAPS was driven largely by AI, HPC, and advanced integration discussions, but that does not mean every market is ready to make the same move. “I don’t see glass happening in automotive,” said Amkor’s Roybal.

Automotive packaging has very different qualification, reliability, and cost expectations than AI accelerators or bleeding-edge HPC modules. In the auto market, proven package types and long-term reliability tend to carry more weight than the promise of a new substrate platform.

**Conclusion**  
The clearest lesson from this year’s packaging discussions is that the next stage of scaling will depend less on any single breakthrough than on whether the whole process stack can be made stable enough to scale. Warpage affects alignment and handling. Handling affects crack formation and edge damage. Thinning affects uniformity, stress, and contamination risk. Hybrid bonding improves density and bandwidth, but it is highly sensitive to both particles and stress as pitch shrinks. What used to look like separate issues are now co-dependent parts of the same manufacturing problem.

The industry’s roadblocks no longer look purely electrical. Engineers can certainly envision more advanced package architectures; the challenge is creating architectures that can be built repeatably, cleanly, and economically enough to move into sustained production. The real constraint is process integration discipline across materials, mechanical behavior, thermal history, and yield management.

That challenge is already reshaping how experts talk about the field. The move to larger modules and tighter die-to-die integration is forcing a more holistic view in which substrate choice, carrier strategy, panel flatness, copper density, debonding cleanliness, and process sequence are all considered together. It is no longer enough to solve one problem locally if the solution creates a larger mechanical penalty two steps later. Scaling increasingly depends on anticipating how the whole structure will behave before the process window closes.

* * *

**Related Articles**  
[Making Hybrid Bonding Better](https://semiengineering.com/making-hybrid-bonding-better/)  
Why this technology is so essential for multi-die assemblies, and how it can be improved.  
[Reliability Risks Shift To The Materials Stack](https://semiengineering.com/reliability-risks-shift-to-the-materials-stack/)  
How polymer behavior, panel mechanics, and thermal coupling affect reliability in 3D integration.  
[Ensuring Reliability Becomes Harder In Multi-Die Assemblies](https://semiengineering.com/reliability-risks-shift-to-the-materials-stack/)  
Materials interactions over long-term use play an increasingly important role.

---

## [HN-TITLE] 23. Girl, 10, finds rare Mexican axolotl under Welsh bridge

- **Source**: [https://www.bbc.com/news/articles/c9d4zgnqpqeo](https://www.bbc.com/news/articles/c9d4zgnqpqeo)
- **Site**: BBC News
- **Author**: Oscar Edwards
- **Published**: 2026-04-23
- **HN activity**: 185 points · [150 comments](https://news.ycombinator.com/item?id=47880189)
- **Length**: 952 words (~5 min read)
- **Language**: en-GB

## Girl, 10, finds rare Mexican axolotl under Welsh bridge


Oscar Edwards, BBC Wales, and Niki Cardwell

Dippy the axolotl has found a new home in Leicester

A nature-loving 10-year-old girl who found an endangered amphibian under a bridge has left her mum in "shock, surprise and disbelief".

Melanie Hill said her daughter, Evie, discovered the nine-inch Mexican axolotl as they spent the day near the River Ogmore in Bridgend.

She said Evie was "always finding things" like newts and bugs, but said the axolotl discovery was a surprise.

It is the first documented discovery of an axolotl in the wild in the UK, with only [50 to 1,000](https://www.conservation.org/learning/axolotl-conservation) left globally, according to experts.

Axolotls as pets have seen a surge in popularity in recent years after they were introduced to video games such as Minecraft and Roblox.


Evie spotted the axolotl nestled in the rocks after lifting up a discarded mat in the shallows of the River Ogmore.

She was playing in the water under the "Dipping Bridge" at the entrance to Merthyr Mawr village when she noticed the creature had damage to its tail and stomach.

"I went down to the bank and there was this axolotl there," said Evie. "I caught it and brought it back."

Melanie said they were touring Wales in a camper van at the time and had seen people recommending the beauty spot online.

"The kids were down at the water having a nose and suddenly everything changed.

"You can imagine my surprise, I couldn't believe it," she added.

The family decided to cut their holiday short to take the axolotl back to their home in Leicester, naming it Dippy as a tribute to where Evie found it.

Chris Newman, director of the National Centre for Reptile Welfare, said Evie probably saved Dippy's life.

![Melanie Hill Evie holding the axolotl in a container filled with water](https://ichef.bbci.co.uk/news/480/cpsprodpb/97c5/live/ac5dc860-3eeb-11f1-bd52-e755d604ece4.jpg.webp)Melanie Hill

Evie said everyone at school finds her new pet "fascinating"

The find was initially not a surprise for Melanie, who said her daughter has a fascination with nature.

But it quickly dawned on her that this was not a typical find on a day trip to south Wales.

"I've been telling Evie all this time that those creatures she watches on YouTube, they're not real.

"Here I am with one in my kitchen," she said.

Melanie said she did not realise axolotls "could grow that big". They can reach 12 inches (30cm) in length but on average, [grow to about 9 inches](https://animals.sandiegozoo.org/animals/axolotl) (23cm), according to experts.

![Satellite-style map of south Wales showing the coastline and surrounding countryside. A red label marks “The Dipping Bridge” inland between the coastal settlements of Merthyr Mawr and Bridgend, west of Cardiff and north of Barry. An inset outline map of Wales in the top right highlights the location area.](https://ichef.bbci.co.uk/news/480/cpsprodpb/eab4/live/b45eefa0-3f20-11f1-b55d-0f258dce1735.jpg.webp)

Melanie said they had "spent hours" researching ways to keep the axolotl healthy and that they had "no regrets" about bringing it home.

"We've got a much bigger tank and we plan to get that set up so it can be transferred," she added.

After seeking expert advice, the family has been told they can keep the axolotl.

Dippy has also been a big hit at Evie's school.

"Everybody at school is fascinated about the story of Dippy," she said.

"I think it's really interesting."

## What is an axolotl?

The axolotl is a type of salamander that does not go through metamorphosis to become an adult, according to the [Natural History Museum](https://www.nhm.ac.uk/discover/axolotls-amphibians-that-never-grow-up.html).

Salamanders are amphibians that, like frogs and newts, start off living in water.

Typically, this type of creature will adapt as it ages, replacing water-breathing gills with air-breathing lungs that enable it to live on land.

But axolotls never make this transition, retaining their frilly external gills and living in the water for their entire lifecycle.

Like many species of salamander, they have the remarkable ability to regenerate parts of their bodies, including limbs, eyes and even parts of their brains.

## 'Challenging' to look after

There has been a surge in keeping axolotls as pets in recent years due to games like Minecraft and Roblox, which feature them.

The RSPCA said [this was a concern](https://www.bbc.co.uk/newsround/60156366) as people underestimated how difficult they are to look after, meaning some owners were unable to care for the amphibian properly.

Chris Newman, the National Centre for Reptile Welfare (NCRW) director, said the manner in which Dippy was found suggested its owner had released it due to a "change in circumstances".

"First of all, it's illegal to release a non-native species into the wild - and it's not good from a welfare point of view either," he added.

Experts have warned axolotls should never be bought on impulse as they can be "very challenging" to look after.

This is because they have the same environmental, dietary and behavioural needs in captivity as they do in the wild.

![Melanie Hill Evie in the River Ogmore under a bridge](https://ichef.bbci.co.uk/news/480/cpsprodpb/9d42/live/668b7e40-3efa-11f1-95e9-c9f2031e3375.jpg.webp)Melanie Hill

Evie made the discovery near the "Dipping Bridge" in Merthyr Mawr

## What should you do with an endangered animal?

Axolotls used to be found in abundance in Mexico but urban expansion and the decline of the chinampas - agricultural islands - have [drastically reduced](https://www.bbc.co.uk/news/articles/cm2xr2jzelyo) their habitat.

They have flourished in captivity and are commonly used as aquarium pets, zoo attractions and even feature on Mexican currency. But in the wild they are dangerously close to vanishing forever.

Discoveries like Dippy should be reported to the government through organisations such as the NCRW.

Newman said there were no previous recorded sightings of Mexican axolotls in the wild in the UK, or anywhere else in the world, adding that Evie probably saved its life.

"This is quite a unique situation, and I think the young female has a keen eye to actually spot it," he said.

"I think she did remarkably to find him."

Without her help, Newman said the axolotl had little chance of living very long, so she "did him a real favour" by catching him.

"That itself is no mean feat," he said. "They're quite slippery, so I think she did really well."


---

## [HN-TITLE] 24. Alberta startup sells no-tech tractors for half price

- **Source**: [https://wheelfront.com/this-alberta-startup-sells-no-tech-tractors-for-half-price/](https://wheelfront.com/this-alberta-startup-sells-no-tech-tractors-for-half-price/)
- **Site**: Wheel Front
- **Author**: Wheel Front Team
- **Published**: 2026-04-20
- **HN activity**: 2155 points · [738 comments](https://news.ycombinator.com/item?id=47865868)
- **Length**: 650 words (~3 min read)
- **Language**: en-US


Four hundred inquiries from American farmers poured in after a single interview. Not for a John Deere. Not for a Case IH. For a tractor built in Alberta with a remanufactured 1990s diesel engine and zero electronics.

Ursa Ag, a small Canadian manufacturer, is assembling tractors powered by 12-valve Cummins engines — the same mechanically injected workhorses that [powered combines and pickup](https://wheelfront.com/toyota-plans-to-unveil-new-compact-pickup-for-2027-hybrid-power-meets-versatile-design/ "Toyota Plans to Unveil New Compact Pickup for 2027: Hybrid Power Meets Versatile Design") trucks decades ago — and selling them for roughly half the price of comparable machines from established brands. The 150-horsepower model starts at $129,900 CAD, about $95,000 USD. The range-topping 260-hp version runs $199,900 CAD, around $146,000.

Try finding a similarly powered John Deere for that money.

Owner Doug Wilson isn’t pretending this is cutting-edge technology. That’s the entire point. The 150-hp and 180-hp models use remanufactured 5.9-liter Cummins engines, while the 260-hp gets an 8.3-liter unit.

All are fed by Bosch P-pumps — purely mechanical fuel injection, no ECU, no proprietary software handshake required. The cabs are sourced externally and stripped to essentials: an air ride seat, mechanically connected controls, and nothing resembling a touchscreen.

This plays directly into a fight that has been simmering for years. John Deere’s right-to-repair battles became a national story when farmers discovered they couldn’t fix their own equipment without dealer-authorized software. Lawsuits followed, then legislation.


Deere eventually made concessions, but the damage was done. A generation of farmers learned exactly how much control they’d surrendered by buying machines loaded with proprietary code.

Wilson saw the gap and drove a tractor through it. The 12-valve Cummins is arguably the most widely understood diesel engine in North America. Every independent shop, every shade-tree mechanic with a set of wrenches, [every farmer who grew up turning](https://wheelfront.com/volvo-turns-every-ex30-into-a-portable-power-station-with-a-single-software-push/ "Volvo Turns Every EX30 Into a Portable Power Station With a Single Software Push") bolts has encountered one.

Parts sit on shelves in thousands of stores. Downtime — the thing that actually costs a farmer money during planting or harvest — shrinks dramatically when you don’t need a factory technician with a laptop to diagnose a fuel delivery problem.

Ursa Ag’s dealer network remains tiny, and the company sells direct. Wilson admitted they haven’t scaled up distribution because they can’t keep shelves stocked as it stands. He says 2026 production will exceed the company’s entire cumulative output, which is a bold claim from a small operation, and whether they can actually deliver is the single biggest question hanging over this story.

The U.S. market is where things get interesting. Ursa Ag has no American distributors yet, though Wilson says that’s likely to change. “The easiest answer is yes, we can [ship to the United](https://wheelfront.com/stellantis-ships-1-4m-units-in-q1-now-prove-it-paid-off/ "Stellantis Ships 1.4M Units in Q1. Now Prove It Paid Off.") States,” he told reporters.


Those 400 American inquiries after one Farms.com segment suggest the appetite is real. Farmers who have been buying 30-year-old equipment to avoid modern complexity now have a new alternative — a machine with fresh sheet metal, a warranty, and an engine philosophy rooted firmly in the past.

There’s a reason the used tractor market has been so robust. Plenty of operators looked at a $300,000 machine full of sensors and software and decided a well-maintained older unit was the smarter bet. Ursa Ag is manufacturing that bet from scratch.

Whether a small Alberta company can scale fast enough to meet demand from an entire continent is another matter. The big manufacturers have supply chains, dealer networks, and financing arms that took decades to build. Wilson has remanufactured Cummins engines and a value proposition that resonates with anyone who has ever waited three days for a dealer tech to show up with a diagnostic cable.

The farm equipment industry spent 20 years adding complexity and cost. Ursa Ag is wagering that a significant number of farmers never wanted any of it.


---

## [HN-TITLE] 25. Writing a C Compiler, in Zig (2025)

- **Source**: [https://ar-ms.me/thoughts/c-compiler-1-zig/](https://ar-ms.me/thoughts/c-compiler-1-zig/)
- **Site**: Abdul Rahman Sibahi
- **Submitter**: tosh (Hacker News)
- **Submitted**: 2026-04-23 09:20 UTC (Hacker News)
- **HN activity**: 141 points · [41 comments](https://news.ycombinator.com/item?id=47873694)
- **Length**: 111 words (~1 min read)

[⏏️](https://ar-ms.me/)

First foray into the Zig programming language

2025-06-17

This is a series of articles I wrote while writing [`paella`](https://github.com/asibahi/paella), following Nora Sandler's [Writing a C Compiler](https://norasandler.com/2022/03/29/Write-a-C-Compiler-the-Book.html). It was both an exercise to learn Zig and a way to waste time instead of looking for work, as I am currently "between jobs". I did not edit them as I collected them here, apart from fixing some broken links.

- [Chapter 1: Intro](https://ar-ms.me/paella/c1/)
- [Chapter 2: Unary](https://ar-ms.me/paella/c2/)
- [Chapter 3: Binary](https://ar-ms.me/paella/c3/)
- [Chapter 4: Logic](https://ar-ms.me/paella/c4/)
- [Chapter 5: Variables](https://ar-ms.me/paella/c5/)
- [Chapter 6: Conditions](https://ar-ms.me/paella/c6/)
- [Chapter 7: Blocks](https://ar-ms.me/paella/c7/)
- [Chapter 8: Loops](https://ar-ms.me/paella/c8/)
- [Chapter 9: Functions](https://ar-ms.me/paella/c9/)
- [Chapter 10: Linking](https://ar-ms.me/paella/c10/)

If/when I continue with the book, I shall post the following writeups here.

---

## [HN-TITLE] 26. Using the internet like it's 1999

- **Source**: [https://joshblais.com/blog/using-the-internet-like-its-1999/](https://joshblais.com/blog/using-the-internet-like-its-1999/)
- **Site**: joshblais.com
- **Author**: Joshua Blais
- **Published**: 2026-04-23
- **HN activity**: 107 points · [68 comments](https://news.ycombinator.com/item?id=47881198)
- **Length**: 2.1K words (~10 min read)
- **Language**: en

![netscape browser](https://cella.b-cdn.net/joshblais/netscape.jpeg)

If you only use social media and video hosting frontends - getting fed by algorithms and visiting the same 5 sites every day in a constant [doomscroll](https://en.wikipedia.org/wiki/Doomscrolling) - then the [internet has never been alive for you](https://en.wikipedia.org/wiki/Dead_Internet_theory). That experience is perhaps ~3-5% of what the internet could be.

For the vast majority of people, yes - the internet is dying: living inside an algorithmically controlled echo chamber that they will never get out of, they live and die by what they are “supposed to see”. But it does not have to be like this.

With the influx of slop that will be created (and [already](https://www.bbc.co.uk/news/articles/c9wx2dz2v44o) [has](https://en.wikipedia.org/wiki/AI_slop) [been](https://www.theguardian.com/technology/2025/aug/11/cat-soap-operas-and-babies-trapped-in-space-the-ai-slop-taking-over-youtube) [created](https://www.pcmag.com/news/over-21-of-youtube-is-now-ai-slop-says-report)) with LLMs, the signal-to-noise ratio on these platforms keeps getting worse. That means less depth of content, less interesting information, and less of the **human** - none of which is positive in any regard.

> I had the displeasure of scrolling the tiktok feed on desktop for 30 seconds the other day, and it is a wonder to me how some of us have any attention span left at all. The content was designed to literally suck your soul from your body. AI generated “fruit love island” - It was too much for me. I shook my head and closed the browser tab.

We can use the internet as it was actually intended to be used: go to the protocol layer and interact with the data at its source. Throw off the facade of the modern social platform, and we start to see that freedom of information is within grasp.

**The only way to actually use the internet in a way that is going to be beneficial to you is to disregard much of it**. Using technologies from yesteryear, we can solve the problems we face today on the modern advertisement riddled, javascript focused, LLM slop, distracting, pointless, attention-seeking, corporate hellscape that is the web.

**I believe the time is now (and has always been) to use the internet like it’s 1999.**

## 1999[#](#1999)

In 1999, the internet was figuring itself out. There was no social media, no algorithm, hell, Google was just starting up. [Only about 4% of the world’s population was online](https://ourworldindata.org/grapher/share-of-individuals-using-the-internet) (compared to almost 75% today). But I am not going to suggest we all log off and touch grass (though we should be doing more of that!). My thesis is that we must return to being [citizens of the web](https://en.wikipedia.org/wiki/Netizen), instead of users in some database - we must reclaim agency over our attention, and the technologies presented in the 90s and early 2000s allow us to do just this.

This was by constraint more than by design, but the idea behind how the internet should be used is what we are looking to re-instill. The HTTP, XMPP/IRC, and email (SMTP) protocols are genuinely **good**, hence their staying power. What we are directly assaulting here is the perversion of those protocols: the frontend and platform portion of the upper [layer 7](https://www.cloudflare.com/en-gb/learning/ddos/what-is-layer-7/) (an ironic Cloudflare link, as their monopoly also runs against the principles we will discuss). The browser used to work for *you* instead of actively subverting your security and privacy with hundreds of tracking cookies and scripts on every page load.

The internet was (and never stopped being) [a series of tubes](https://en.wikipedia.org/wiki/Series_of_tubes) that transmits data. That data is accessible and transparent to anyone, and the way we ingest, manipulate, and work with said data is that which we can change to benefit the individual. Let’s discuss.

I have [largely embraced RSS feeds](https://joshblais.com/blog/a-fully-soverign-feed-system/) as the only way to follow blogs/news/video creators/etc. as I don’t want an algorithm to feed me content. I want to make the decision for myself as to what it is that I actually care to consume, and that should not be content that is meant to make the platform the most amount of ad revenue by emotionally manipulating the viewer into spending more time scrolling.

Nor should anything I look at be LLM generated slop: the moment I find something that crosses my desk which starts with “[it’s not this, it’s THIS](https://www.reddit.com/r/ChatGPT/comments/1sqqrjw/the_its_not_just_a_this_its_a_that_sentence/)”, I immediately click off and move on. I want real people, real creators, and real content in my feed, not LLM slop. I have found no better way to “curate reality” than this.

If you only take one suggestion from this article, let it be this: set up [miniflux](https://miniflux.app/), find feeds of creators and persons you enjoy following, add their feeds to miniflux, and sit back and relax as the content now comes to you.
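To see why feeds make “going to the protocol layer” possible: an RSS feed is just an XML document served over plain HTTP, parseable with nothing beyond a standard library. A minimal Python sketch (the sample feed is invented; a real reader like miniflux does far more, this only shows the shape of the data):

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 document, as a reader would fetch it over HTTP.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>https://example.com/first</link>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second</link>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

for title, link in parse_feed(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

No algorithm decides the order, no script tracks the read: the feed is the whole interface.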

## IRC and XMPP[#](#irc-and-xmpp)

We have a [budding community on IRC](https://joshblais.com/community/) that I think is far more interesting than most online communities I have seen simply because of the (small) barrier to entry that is IRC. [Internet Relay Chat](https://en.wikipedia.org/wiki/IRC) has been around since the late 80s, and it is still a protocol which is simply plain text - meaning a higher signal-to-noise ratio (see the pattern?) than a platform that allows images, video, “upvotes”, and the like.

XMPP is a more modern enhancement of IRC, and is the protocol on which many of the major chat applications are built. But it is best when you host it yourself, for you and your friends to participate in group chats and direct peer-to-peer conversation. Using [OMEMO encryption](https://xmpp.org/extensions/xep-0384.html) (support now in [jabber.el](https://thanosapollo.org/projects/jabber/) in emacs!) gives you end-to-end encryption between parties; the keys live on the clients rather than the server, so even the host can’t read the conversation. Nice.

*Note on Element/Matrix: I don’t recommend using the Matrix protocol. It doesn’t solve anything over and beyond XMPP/IRC and I don’t personally trust it. Plus electron app - no thanks*

## Search engines[#](#search-engines)

You can negate much of the slop du jour by [using your own search engine](https://searx.github.io/searx/), as well as using my [small guide on how to use search engines](https://joshblais.com/blog/using-search-engines-properly/). They are still powerful, they still get you the information you need, but you cannot use them how the 99% does. You have to actually search with intention, using them methodically and professionally. You will not get good results from “learn go programming” but will get much better results from “before<2025> net/http go language”. Ask better questions, get better results.

## Archiving[#](#archiving)

A large problem with the internet has always been [link rot](https://en.wikipedia.org/wiki/Link_rot) - where a bookmark or link that you liked is gone tomorrow for any of various reasons. You can and should download useful information locally to keep for posterity. I have a [function in my emacs configuration to do just this](https://github.com/joshuablais/studium-emacs/blob/main/lisp/custom/download-media.el), shipping content to a syncthing controlled directory, to push content across my devices, including my phone ([which doesn’t have a browser](https://joshblais.com/blog/what-is-on-my-phone-in-2026/)). You can also use the [Internet Archive’s link tool](https://chromewebstore.google.com/detail/wayback-machine/fpnmgdkabkmnadcjpehmlllkndpkmiak) for creating backups that will live on their servers.
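As a rough illustration of the “download it locally” habit, here is a hypothetical Python helper (the filename scheme and `archive` directory are my own invention, not the author’s emacs setup) that fetches a URL and writes the raw bytes to a date-stamped file:

```python
import datetime
import pathlib
import urllib.request

def archive_page(url, dest_dir="archive"):
    """Fetch a URL and save its raw bytes under a date-stamped filename.

    Returns the path written. The naming scheme is arbitrary; adjust to taste.
    """
    raw = urllib.request.urlopen(url).read()
    stamp = datetime.date.today().isoformat()
    # Crude slug so different URLs archived the same day stay distinguishable.
    slug = "".join(c if c.isalnum() else "-" for c in url)[-40:]
    out = pathlib.Path(dest_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"{stamp}-{slug}.html"
    path.write_bytes(raw)
    return path

# Demo with a data: URL so the example runs without touching the network.
saved = archive_page("data:text/html,<h1>hello</h1>")
print(saved)
```

Pair something like this with a synced directory and a local copy outlives the original link.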

## Email[#](#email)

When people DM me on various platforms, I generally just tell them to email me. I know it is annoying for most, but the reason is well merited: by chatting on platforms, you and I do not own the conversation. Worse, that conversation is likely being monitored and parsed so that we can be encouraged to consoom product at a later date. I’d rather just talk to you directly.

Email is a point of contact that is not being farmed for keywords by platforms to then serve us ads (you’re not using gmail, right?). Those that know me have my email or phone number; those that don’t could very easily get it. But the friction of writing an email and sending it is too much for many people, and is a natural filter.

[PGP](https://www.openpgp.org/) is a great way to make sure your email is read only by those that you intend to read it. Use it.

You can find my public key [here](https://joshblais.com/contact).

## Push only - POSSE method[#](#push-only---posse-method)

Most people consume feeds on social media, and while I would rather not use socials at all, the fact of the matter is that we can spread the good word via social media, using it as a push platform, not a pull one. So, I use APIs and tools to get content out on social media platforms. [I don’t consume social media](https://joshblais.com/blog/how-to-use-social-media/), nor do I spend more than ~5 minutes per week on it (only answering DMs by giving out my email/phone mostly).

There is a tenet of the [IndieWeb](https://indieweb.org/) called [POSSE](https://indieweb.org/POSSE) (Publish on your Own Site, Syndicate Elsewhere): you own the content on your own platform, and then ship it to other locations to increase your reach. I would recommend doing this.

## Gopher/Gemini[#](#gophergemini)

In addition to the IndieWeb, we can look to the [SmolWeb](https://smolweb.org/) for some inspiration on how to use the internet. [Gemini](https://en.wikipedia.org/wiki/Gemini_%28protocol%29) is newer and a bit of a middle finger to the modern web, whereas [Gopher](https://en.wikipedia.org/wiki/Gopher_%28protocol%29) is the old guard; both protocols are tremendously light and focus on text as the primitive for all communication. While I would agree with some of the sentiment that [Gemini is solutionism](https://xn--gckvb8fzb.com/gemini-is-solutionism-at-its-worst/), it is still interesting to see what can be done when we take text and make it the focus of a platform.

However, I think that [HTTP](https://en.wikipedia.org/wiki/HTTP) is not the villain, not by a long shot - it is just how we treat it. Instead of bloating Chrome tabs up to hundreds of megabytes (more than some Linux distributions), we could be expanding upon it and building out something that focuses on the good that the web can do. So, while a fun aside, I don’t spend a ton of time on Gopher or Gemini these days.

## General internet tips[#](#general-internet-tips)

On your router, you can and should [set up](https://blocklistproject.github.io/Lists/adguard/porn-ags.txt) [blocklists](https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt) for various malicious and nefarious domains, advertisements, [adult content](https://nsfw.oisd.nl/), etc. This is not “1999-esque” in practice, but it is a requirement for the modern web.

I recommend [using a text-only browser](https://joshblais.com/blog/emacs-as-my-browser/), but if you do use a regular browser, then [disabling JavaScript](https://disable-javascript.org/) and using [uBlock Origin](https://ublockorigin.com/) are both recommended mitigations.

[Don’t use social media as a consumer](https://joshblais.com/blog/how-to-use-social-media/), don’t argue with people online, and generally seek out information and interesting people, which leads us to…

## Embrace the Human[#](#embrace-the-human)

Finally, I only want to promote, consume, and talk with real humans. Using the internet as if it were the 90s or early 00s means focusing on the human, because nowadays the internet is not real; it is a figment of our collective imaginations as to what we think is real. It is an ugly place if we are not careful to be deeply intentional with what we watch, read, and listen to.

I am still a [bloomer](https://knowyourmeme.com/memes/bloomer) when it comes to the internet at large, even though we have been doing our collective best to make it worse since the inception (give or take) of Facebook.

Authenticity is in short supply and seems to be the only way forward, for so much of what we see is manufactured, tailored, and designed to show something that doesn’t exist. Imperfection is the mark of the human: the spelling mistakes, the last-minute word addition because you misspoke on a video - it is all more **real** because of this. We can strive to glorify the Creator with creation, and that will always be more enjoyable than the sterile veil over what could be authentic.

## Conclusion[#](#conclusion)

The internet as it was conceived was perhaps humanity’s greatest achievement and has created so much good. It has taken people out of poverty; it has given us information on topics of any and all kinds. I would not be the person I am today without it, as I have made great friendships, seen what community can do, and helped to (hopefully) create value for the thousands of people who read these words daily or watch my videos.

That doesn’t negate the fact that the internet is also a great double-edged sword: while you can learn anything, you can also be taken over by meaningless, infinite distraction, manipulated into seeing the world in certain ways, and lose your humanity if you are not careful. We took a wrong turn by locking ourselves into content silos and embracing comfort instead of seeking truth, and it will not end well unless we make a hard U-turn back to authenticity and sovereignty. As we continue in this perpetual lockstep toward making the internet a worse place, I will be, hopefully with a few of you, using the internet as if it were 1999.

How are you using the internet as if it were a more sane time in history? Post a comment or send me an email.

As always, God bless, and until next time.

If you enjoyed this post, consider [Supporting my work](https://joshblais.com/support/), [Checking out my book](https://mountainthebook.com), [Working with me](https://joshblais.com/work-with-me), or sending me an [Email](mailto:josh@joshblais.com) to tell me what you think.

---

## [HN-TITLE] 27. 2026 Ruby on Rails Community Survey

- **Source**: [https://railsdeveloper.com/survey/](https://railsdeveloper.com/survey/)
- **Site**: Planet Argon
- **Author**: Planet Argon team
- **Submitted**: 2026-04-24 03:00 UTC (Hacker News)
- **HN activity**: 3 points · [0 comments](https://news.ycombinator.com/item?id=47884967)
- **Length**: 85 words (~1 min read)
- **Language**: en

In 2024, 2,709 Rails developers shared how they actually work: the tools they trust, the teams they're on, where they're deploying, and what's keeping them up at night.

For 2026, we're going deeper, including how AI is fitting into Rails workflows, and whether it actually is. Your perspective helps complete that picture. The more voices in the data, the more useful it is for everyone who reads the results, including you. And we publish everything, free, for the whole community.

Ten minutes. Anonymous. Worth it.

---

## [HN-TITLE] 28. WireGuard for Windows Reaches v1.0

- **Source**: [https://lists.zx2c4.com/pipermail/wireguard/2026-April/009580.html](https://lists.zx2c4.com/pipermail/wireguard/2026-April/009580.html)
- **Site**: lists.zx2c4.com
- **Submitter**: zx2c4 (Hacker News)
- **Submitted**: 2026-04-21 21:26 UTC (Hacker News)
- **HN activity**: 121 points · [7 comments](https://news.ycombinator.com/item?id=47854710)
- **Length**: 1.4K words (~6 min read)

**Jason A. Donenfeld** (Jason at zx2c4.com)  
*Sat Apr 18 16:23:52 UTC 2026*

- Previous message (by thread): [WireGuard Windows 0.6.1 - Timeline of issues (tunnels lost & import still broken)](https://lists.zx2c4.com/pipermail/wireguard/2026-April/009581.html)
- **Messages sorted by:** [\[ date \]](https://lists.zx2c4.com/pipermail/wireguard/2026-April/date.html#9580) [\[ thread \]](https://lists.zx2c4.com/pipermail/wireguard/2026-April/thread.html#9580) [\[ subject \]](https://lists.zx2c4.com/pipermail/wireguard/2026-April/subject.html#9580) [\[ author \]](https://lists.zx2c4.com/pipermail/wireguard/2026-April/author.html#9580)

* * *

```
Hey again,

I’m happy to announce the v1.0 release of WireGuardNT and WireGuard
for Windows. The final “1.0 blockers” have been completed at last, and
I’m quite happy to have reached this milestone. It should now be
available from the built-in updater. And you can download it fresh
from:

- https://download.wireguard.com/windows-client/wireguard-installer.exe
- https://www.wireguard.com/install/

And to learn more about each of these two Windows projects:
- https://git.zx2c4.com/wireguard-windows/about/
- https://git.zx2c4.com/wireguard-nt/about/

Before I say more, I wanted to note that the WireGuard Project runs on
support from large companies and individuals alike. You can help out
at: https://www.wireguard.com/donations/ . If your company uses
WireGuard, consider talking to your employer about becoming a large
donor and appearing on that page. If you use a VPN from a VPN
provider, consider writing to them to suggest they donate to the
project. It does make a difference and is the reason the project is
able to live on.

The 1.0 release of WireGuardNT is a pile of bug fixes, after having
done a big read through of the source code and countless hours of new
testing. But it also has two big improvements, which have long been
considered release blockers for me.

Firstly, 1.0 now makes use of
NdisWdfGetAdapterContextFromAdapterHandle(). WireGuard’s IOCTL works
by piggybacking on the NDIS device node, so that it inherits NDIS’
setup and permissions. Each IOCTL is thus passed through the device’s
“functional device object”. There’s no documented function to go from
a pointer in the functional device object to the WireGuard-specific
state allocated. The functional device object’s DeviceExtension field
points to the NDIS_MINIPORT_BLOCK structure, which itself has a
pointer to the WireGuard-specific state. But that latter pointer is at
a potentially unstable offset, as it’s not within the documented part
of NDIS_MINIPORT_BLOCK. So, previously, I was using the “Reserved”
member of the functional device object to stuff a pointer in, but who
knows when that was to be used by something, a ticking time bomb.
Fortunately, every Windows 10 version since the first one has the
NdisWdfGetAdapterContextFromAdapterHandle() function, originally added
for NetAdapterCx, which means it’s not going away any time soon and
its behavior won’t change. This function simply goes to the right
offset in NDIS_MINIPORT_BLOCK where the driver-specific state is
stored. Put together, we get this handy function:

static WG_DEVICE *
WgDeviceFromFdo(_In_ DEVICE_OBJECT *DeviceObject)
{
    if (DeviceObject->DeviceType != FILE_DEVICE_PHYSICAL_NETCARD ||
        !DeviceObject->DeviceExtension)
        return NULL;
    return NdisWdfGetAdapterContextFromAdapterHandle(DeviceObject->DeviceExtension);
}

This seems to work well and will hopefully ensure reliability into the future.

The second big 1.0 blocker that’s been solved is proper MTU change
notifications. As you may or may not know, WireGuard pads packets to
the nearest 16 bytes, but only up to the MTU of the interface, in
order to protect against traffic analysis attacks. This means the
WireGuard driver needs to know its own MTU. On Linux, we have full
access to this information, as it’s considered a property of the
network interface itself, so we can extract it trivially with
`skb->dev->mtu`, and do various calculations. But on Windows, the MTU
is a combined property of the network adapter’s minimum and maximum,
the tcp/ip interface’s selected MTU, which splits into v4 and v6
cases, and the same split cases for the tcp/ip interface’s
subinterfaces. This is sort of complicated, but I guess it fit a
device model that at one point made sense. The driver is responsible
for controlling the adapter’s minimum and maximum MTU. PowerShell’s
Set-NetIpInterface will change the interface-level MTU (via
SetIpInterfaceEntry()), while netsh.exe will change the
subinterface-level MTU; both of these wind up affecting the other in
subtle ways, and the net result is the same.

Typically, the normal way of getting notifications about these
changes, from userspace or from kernel space, is with
NotifyIpInterfaceChange(), which calls a callback function with
MibParameterNotification when something has changed. But, the callback
never fires for MTU changes! That’s the only one missing. The struct
the callback receives has a field for the MTU, but still, it’s never
fired. Somebody on the relevant team at Microsoft told me in 2021,
“this is a plain oversight and should be fixed,” and somebody else
mentioned backporting the fix to the 2019 release. But for whatever
reason, this never happened, and now it’s 2026. In the interim period,
I had a really horrific, but still stable, workaround: I started a
thread, and every 3 seconds I called GetIpInterfaceEntry() on the LUID
of every running WireGuard adapter. You heard that right… I just
polled with a sleep. Gross dot net. But it was the only documented way
of doing this! At the same time, I wrote a little program I could run
on each new release of Windows to see at which point they fixed the
bug, so that I could adjust the version check to avoid the poll loop
on old versions.

Unfortunately, the bug never got fixed. But I didn’t quite feel
comfortable shipping a 1.0 with such a distasteful workaround. So I
got to work… All userspace updates to the MTU go through a file called
\Device\Nsi. The NSI driver is then responsible for dispatching this
out to the various interfaces, and also keeping current with the
various changes in the various interfaces. After attaching to
\Device\Nsi using the standard NT filter-style pattern with
IoAttachDeviceToDeviceStack(), I then intercept the
IOCTL_NSI_SET_ALL_PARAMETERS message that I reverse engineered,
looking at the NSI_SET_ALL_PARAMETERS struct, matching on object types
NlInterfaceObject and NlSubInterfaceObject, and reading out the NlMTU
parameter from NSI_IP_INTERFACE_RW and NSI_IP_SUBINTERFACE_RW. The
parts of these structures we care about seem extremely stable. It
appears to work well, and now the WireGuard driver can adapt to new
MTU changes instantly, rather than within 3 seconds. And there’s no
ugly polling loop. You can peruse this code in driver/nsi.c and
driver/undocumented.h if you’re curious.

That’s a lot of work – it’s a whole separate .c file in the repo – for
just getting access to one value. But that’s how things go, and it’s
information that simply must be had in order to implement WireGuard
properly.

Finally, there are a bunch of other little changes and fixes and now
we compile in C23 mode, so we have access to the typeof() keyword. We
also in theory could move to using alignas(n) instead of
__declspec(align(n)), but C standard alignas() doesn’t work on types,
only members of structs and on variables, which makes it sort of
uglier to use. If you want a struct to always be aligned, then you put
the alignas(n) on the first member. I find this awkward, so we’re
sticking with __declspec(align(n)), which also seems pretty close to
gcc’s __attribute__((aligned(n))) (which is how Linux defines its
__aligned(n) macro).

On the WireGuard for Windows front – WireGuardNT, just discussed, is
the bundled driver component of that – there are 42 bug and
correctness fixes of various varieties. And then there’s one nice
improvement for older versions of Windows 10. Windows 10 1809 added
support for SetInterfaceDnsSettings(), for setting the system DNS
server programmatically. Before that, the only documented way was to
shell out to netsh.exe, which is what we did. It was pretty ugly, and
the way of doing that involved some really gnarly parsing.
Fortunately, newer Windows doesn’t need to do this. But it occurred to
me – since these older versions of Windows are essentially complete, I
can just reverse engineer what netsh.exe is doing under the hood, and
then do that myself, and not worry about that ever changing, since
that’s only a fallback path used for these old Windows versions. It
turns out to be pretty easy – set two variables in a normal part of
the registry and send ControlService(SERVICE_CONTROL_PARAMCHANGE) to
the Dnscache service. Easy peasy.

Anyway, please let me know how it goes and if you encounter any issues.

Jason
```


---

## [HN-TITLE] 29. Palantir employees are starting to wonder if they're the bad guys

- **Source**: [https://www.wired.com/story/palantir-employees-are-starting-to-wonder-if-theyre-the-bad-guys/](https://www.wired.com/story/palantir-employees-are-starting-to-wonder-if-theyre-the-bad-guys/)
- **Site**: WIRED
- **Author**: Makena Kelly
- **Published**: 2026-04-23
- **HN activity**: 776 points · [527 comments](https://news.ycombinator.com/item?id=47878633)
- **Length**: 1.9K words (~9 min read)
- **Language**: en-US

It took just a few months of President Donald Trump’s second term for [Palantir](https://www.wired.com/story/palantir-what-the-company-does/) employees to question their company’s [commitments to civil liberties](https://www.wired.com/story/palantir-ice-dhs-alex-pretti-killing-workers-slack-minneapolis/). Last fall, Palantir seemed to become [the technological backbone](https://www.wired.com/story/ice-palantir-immigrationos/) of Trump’s immigration enforcement machinery, providing software for identifying, tracking, and helping deport immigrants on behalf of the Department of Homeland Security (DHS), when current and former employees started ringing the alarm.

Around that time, two former employees reconnected by phone. Right as they picked up the call, one of them asked, “Are you tracking Palantir’s descent into fascism?”

“That was their greeting,” the other former employee says. “There’s this feeling not of ‘Oh, this is unpopular and hard,’ but, ‘This feels wrong.’”

Palantir was founded—with initial venture capital investment from the CIA—at a moment of national consensus following the September 11, 2001 attacks, when many saw fighting terrorism abroad as the most critical mission facing the US. The company, which was cofounded by tech billionaire Peter Thiel, sells software that acts as a high-powered [data aggregation and analysis tool](https://www.wired.com/story/palantir-what-the-company-does/) powering everything from private businesses to the US military’s targeting systems.

For the last 20 years, employees could accept the intense external criticism and awkward conversations with family and friends about working for a company named after J. R. R. Tolkien’s corrupting all-seeing orb. But a year into Trump’s second term, as Palantir deepens its relationship with an administration many workers fear is wreaking havoc at home, employees are finally raising these concerns internally, as the US’s war on immigrants, war in Iran, and even company-released manifestos have forced them to rethink the role they play in it all.

“We hire the best and brightest talent to help defend America and its allies and to build and deploy our software to help governments and businesses around the world. Palantir is no monolith of belief, nor should we be,” a Palantir spokesperson said in a statement. “We all pride ourselves on a culture of fierce internal dialogue and even disagreement over the complex areas we work on. That has been true from our founding and remains true today.”

**Got a Tip?** Are you a current or former government employee who wants to talk about what's happening? We'd like to hear from you. Using a nonwork phone or computer, contact the reporters securely on Signal at makenakelly.32.

“The broad story of Palantir as told to itself and to employees was that coming out of 9/11 we knew that there was going to be this big push for safety, and we were worried that that safety might infringe on civil liberties,” one former employee tells WIRED. “And now the threat’s coming from within. I think there's a bit of an identity crisis and a bit of a challenge. We were supposed to be the ones who were preventing a lot of these abuses. Now we're not preventing them. We seem to be enabling them.”

Palantir has always had a secretive reputation, forbidding employees from speaking to the press and requiring alumni to sign [non-disparagement agreements](https://www.npr.org/2025/05/05/nx-s1-5387514/palantir-workers-letter-trump). But throughout the company’s history, management has always at least appeared to be open to engagement and internal criticism, multiple employees say. Over the last year, however, much of that feedback has been met by philosophical soliloquies and redirection. “It’s never been really that people are afraid of speaking up against Karp. It’s more a question of what it would do, if anything,” one current employee tells WIRED.

While internal tensions within Palantir have grown over the last year, they reached a boiling point in January after the violent killing of [Alex Pretti](https://www.wired.com/story/the-instant-smear-campaign-against-border-patrol-shooting-victim-alex-pretti/), a nurse who was shot and killed by federal agents during protests against Immigration and Customs Enforcement (ICE) in Minneapolis. Employees from across the company commented in a Slack thread dedicated to the news demanding more information about the company’s relationship with ICE from management and CEO Alex Karp.

“Our involvement with ice has been internally swept under the rug under Trump2 too much,” one person wrote in a Slack message [WIRED reported at the time](https://www.wired.com/story/palantir-ice-dhs-alex-pretti-killing-workers-slack-minneapolis/). “We need an understanding of our involvement here.”

Around this time, Palantir started wiping Slack conversations after seven days in at least one channel where most of the internal debate takes place, #palantir-in-the-news. Because the decision wasn’t formally announced before the policy rolled out, one worker who noticed the deletions asked in the channel why the company was removing “relevant internal discourse on current events.”

A member of Palantir’s cybersecurity team responded, writing that the decision was made in response to leaks.

This period led Palantir management to release an updated wiki, or a collection of blog posts explaining the ICE contract, where the company defended its work with DHS. Management wrote that the technology the company provides “is making a difference in mitigating risks while enabling targeted outcomes.”

Palantir management ran defense by holding a handful of AMA (ask me anything) forums across the company with leadership like chief technology officer Shyam Sankar and members of its privacy and civil liberties (PCL) teams.

At least one of these AMAs was organized independently of PCL leadership by two team leads, including one who worked directly on the ICE contract for a period of time. “This was very rogue,” a PCL employee who worked on the ICE contract said in a February AMA, a recording of which was obtained by WIRED. “Courtney \[Bowman, head of the privacy and civil liberties team] doesn’t know that I’m spending three hours this week talking to IMPLs \[Palantir terminology for its client-facing product teams], but I think this is the only real way to start going in the right direction.”

Throughout the lengthy call, employees working on a variety of Palantir’s defense projects posed hard questions. Could ICE agents delete audit logs in Palantir’s software? Could agents create harmful workflows on their own without the company’s help? What is the most malicious thing that could come out of this work?

Answering these questions, the PCL employee who worked on the ICE contract said that “a sufficiently malicious customer is, like, basically impossible to prevent at the moment” and could only be controlled through “auditing to prove what happened” and legal action after the fact if the customer breached the company’s contract.

At one point during the call, one of the employees tried to level with the group, explaining that Palantir’s work with ICE was a priority for Karp and something that likely wouldn’t change any time soon.

“Karp really wants to do this and continuously wants this,” they said. “We’re largely at the role of trying to give him suggestions and trying to redirect him, but it was largely unsuccessful and we seem to be on a very sharp path of continuing to expand this workflow.”

Around the time of these forums, Karp sat down for a prerecorded interview with Bowman, seemingly to discuss Palantir’s contracts with ICE, but refused to broach the topic directly. Instead, Karp suggested that employees interested in the work sign [nondisclosure agreements](https://www.wired.com/story/palantir-ceo-alex-karp-employee-questions-on-ice/) before receiving more detailed information.

Then came [the deadly February 28 missile strike](https://www.nytimes.com/2026/03/05/world/middleeast/iran-school-us-strikes-naval-base.html) on an Iranian elementary school on the first full day of the Trump administration and Israel’s war in Iran. The US is the only known country in the conflict to use that specific type of missile. More than 120 children were killed when a Tomahawk missile struck the school, kicking off a series of investigations that concluded that the US was responsible and that surveillance tools like Palantir’s Maven system [had been used](https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/) during that day’s strikes. For a company full of employees already reeling over its work with ICE, possible involvement in the death of children was a breaking point.

“I guess the root of what I'm asking is … were we involved, and are doing anything to stop a repeat if we were,” one employee asked in the Palantir news Slack channel. Some employees posed similar questions in the thread, while others criticized them for discussing what could be considered classified information in a Slack channel open to the entire company. The investigation is [ongoing](https://www.nytimes.com/2026/04/10/world/middleeast/iran-us-missle-strike-civilians-lamerd.html).

The Palantir spokesperson said the company was “proud” to support the US military “across Democratic and Republican administrations.”

In March, [Karp gave an interview to CNBC](https://www.theguardian.com/commentisfree/2026/mar/14/palantir-ai-marco-rubio-afghanistan-katy-perry) claiming that AI could undermine the power of “humanities-trained—largely Democratic—voters” and increase the power of working-class male voters. While [critics reacted](https://donmoynihan.substack.com/p/palantir-wants-power-without-accountability) to the piece, calling the statements concerning, so did employees internally: “Is it true that AI disruption is going to disproportionately negatively affect women and people who vote Democrat? and if it is, why are we cool with that?” one worker asked on Slack in a channel dedicated to news about Palantir.

Palantir’s leadership incensed workers yet again this week after the company posted [a Saturday afternoon manifesto](https://x.com/PalantirTech/status/2045574398573453312?s=20) reducing Karp’s recent book, *The Technological Republic*, to 22 points. The post—which includes many of Karp’s long-standing beliefs on how Silicon Valley could better serve US national interests—goes as far as suggesting that the US should consider reinstating the draft. Critics called the manifesto [fascist](https://bsky.app/profile/gilduran.com/post/3mjwqsyj54s2a).

Internally, the post alarmed some workers who huddled in a Slack thread on Monday morning, questioning leadership over its decision to post it in the first place.

“I’m curious why this had to be posted. Especially on the company account. On the practical level every time stuff like that gets posted it gets harder for us to sell the software outside of the US (for sure in the current political climate), and I doubt we need this in the US?” wrote one frustrated employee. The message received more than 50 “+1” emojis.

“Wether \[sic] we acknowledge it or not, this impacts us all personally,” another worker wrote on Monday. “I’ve already had multiple friends reach out and ask what the hell did we post.” This message received nearly two dozen “+1” emoji reactions.

“Yeah it turns out that short-form summaries of the book’s long-form ideas are easy to misrepresent. It’s like we taped a ‘kick me’ sign on our own backs,” a third worker wrote. “I hope no one who decided to put this out is surprised that we are, in fact, getting kicked.”

These conversations involving shame and uncertainty from workers have seemingly popped up in internal channels whenever Palantir has been in the news over the last year. “I think the only thing not different is a lot of folks are still incredibly wary about leaks and talking to the press,” one current employee tells WIRED, describing how the internal company culture has evolved over the last year.

All of this dissent doesn’t seem to bother Karp, who recently told workers that the company is [“behind the curve internally”](https://www.wired.com/story/palantir-ceo-alex-karp-employee-questions-on-ice/) when it comes to popularity. Here, he’s been consistent; in March 2024 Karp told [a CNBC reporter](https://www.cnbc.com/2024/03/13/palantir-ceo-says-outspoken-pro-israel-views-led-employees-to-leave-.html) that “if you have a position that does not cost you ever to lose an employee, it’s not a position.”

But for employees, the culture shift feels intentional. “I don’t want to assert that I have knowledge of what’s going on in their internal mind,” one former worker tells WIRED. “But maybe it's gotten to a place where encouraging independent thought and questioning leads to some bad conclusions.”

---

## [HN-TITLE] 30. Isopods of the world

- **Source**: [https://isopod.site/](https://isopod.site/)
- **Site**: Isopod Site - All About Terrestrial Isopods
- **Submitter**: debesyla (Hacker News)
- **Submitted**: 2026-04-20 20:56 UTC (Hacker News)
- **HN activity**: 156 points · [58 comments](https://news.ycombinator.com/item?id=47840520)
- **Length**: 318 words (~2 min read)
- **Language**: en-US

Explore Their Beauty Up Close

## Isopods of the World

[All Isopods](https://isopod.site/isopod/)

## Isopod Anatomy

Isopods are relatively poorly studied compared to other invertebrates. This leads to many misidentified species in the isopod keeping hobby. Isopod Site aims to give an introduction to proper isopod identification and this starts from understanding the basic anatomy of isopods.

[Isopod Anatomy](https://isopod.site/isopod-anatomy-and-biology/)

Porcellio echinatus

![Porcellio echinatus](https://cdn.isopod.site/2022/02/P1154733b.jpg)

#### Identification

Isopods are identified by referencing peer-reviewed scientific literature rather than relying on superficial similarities. This is the recommended way for any identification of wildlife.

#### Photography

A lot of time went into photographing the isopods on this website with the aim of documenting the key characters of each species.

#### Isopods as Pets

Isopods are considerably low-maintenance pets. Keeping them is getting increasingly popular, so this site aims to cover common topics related to isopod-keeping.

Connect With Me

## Taxonomic Discussions

If you see any incorrect identifications, please feel free to [contact me](https://isopod.site/contact/). The placements are obviously not perfect as they were based on photos alone, so I would be more than happy to go into a deeper discussion on this.

## Selective Breeding for Isopod Morphs

For advanced isopod keepers, selective breeding is an important part of the hobby. Occasionally, a unique morph may occur in the colony's offspring. These individuals with unique traits may then be separated from the main colony to boost that trait in a new lineage.

[Selective Breeding](https://isopod.site/selective-breeding-for-isopod-morphs/)

"Merulanella" sp. "Ember Bee"

![P6021386](https://cdn.isopod.site/2023/06/P6021386.jpg)

About the Photos

## Macro Photography

Most photos were taken with an [Olympus E-M10 mark 4](https://amzn.to/2KxYw1i), [Laowa 50mm 2:1](https://www.macrodojo.com/product/laowa-50mm-f-2-8-2x-ultra-macro-apo-lens-for-micro-four-thirds/) and a single flash with a DIY diffuser. A lot of time and money goes into photographing the isopods so all photos are copyrighted. Copyright agents will automatically detect usage on social media and websites before taking legal action for any unauthorised use. It is nothing personal! Please refrain from using or uploading any photo from this site without permission.

![Laureola sp. Ivory Spiky](https://cdn.isopod.site/2022/02/P1256519.jpg)

## Isopod Listing

A random selection of isopods from the database...

[![Armadillidae - Cubaris sp. Amber Ducky](https://cdn.isopod.site/2022/10/PA193178b-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-amber-ducky/)

[![Armadillidae - Cubaris sp. Amber Panda](https://cdn.isopod.site/2022/10/P9280216x-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-amber-panda/)

[![Armadillidae - Cubaris sp. Apricot](https://cdn.isopod.site/2022/10/P9279733-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-apricot/)

[![P5057995](https://cdn.isopod.site/2023/05/P5057995-1024x769.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-blonde-ducky/)

[![Armadillidae - Cubaris sp. Blue Pigeon](https://cdn.isopod.site/2022/10/PB237918-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-blue-pigeon/)

[![Armadillidae - Cubaris sp. Blue Whale](https://cdn.isopod.site/2025/12/PC178275-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-blue-whale/)

[![Armadillidae - Cubaris sp. Bumblebee](https://cdn.isopod.site/2022/10/P9280251x-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-bumblebee/)

[![Armadillidae - Cubaris sp. Cappucino](https://cdn.isopod.site/2022/10/PA010774x-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-cappucino/)

[![Armadillidae - Cubaris sp. Caramel Creme](https://cdn.isopod.site/2022/10/P9280492-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-caramel-creme/)

[![Armadillidae - Cubaris sp. Copper](https://cdn.isopod.site/2022/10/PA193108b-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-copper/)

[![Armadillidae - Cubaris sp. Emperor Bee](https://cdn.isopod.site/2022/10/PD090056-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-emperor-bee/)

[![Armadillidae - Cubaris sp. Firefly](https://cdn.isopod.site/2022/10/P9279818x-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-firefly/)

[![P5057930](https://cdn.isopod.site/2023/05/P5057930-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-flame-white-ducky/)

[![P5068072](https://cdn.isopod.site/2023/05/P5068072-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-giant-raichu/)

[![Armadillidae - Cubaris sp. Green Laser](https://cdn.isopod.site/2022/10/P9279667b-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-green-laser/)

[![Armadillidae - Cubaris sp. Happy Nun](https://cdn.isopod.site/2022/10/PB237826-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-happy-nun/)

[![Armadillidae - Cubaris sp. Hong Tiger](https://cdn.isopod.site/2022/10/PB166531b-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-hong-tiger/)

[![Armadillidae - Cubaris sp. Jupiter](https://cdn.isopod.site/2022/10/PB025197b-1024x768.jpg)](https://isopod.site/isopod/armadillidae-cubaris-sp-jupiter/)

