Hacker News Top 30 — 2026-04-26
Generated on 2026-04-26 03:16 UTC

1. Amateur armed with ChatGPT solves an Erdős problem
Source: https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-problem/
Site: Scientific American
Author: Joseph Howlett
Published: 2026-04-24
HN activity: 78 points · 35 comments
Length: 1.1K words (~5 min read)
Language: en

An amateur just solved a 60-year-old math problem—by asking AI
An AI model accessed through ChatGPT has proved a conjecture with a method no human had thought of. Experts believe it may have further uses.
By Joseph Howlett, edited by Lee Billings

Liam Price just cracked a 60-year-old problem that world-class mathematicians have tried and failed to solve. He’s 23 years old and has no advanced mathematics training. What he does have is a ChatGPT Pro subscription, which gives him access to the latest large language models from OpenAI.

Artificial intelligence has recently made headlines for solving a number of “Erdős problems,” conjectures left behind by the prolific mathematician Paul Erdős. But experts have warned that these problems are an imperfect benchmark of artificial intelligence’s mathematical prowess. They range dramatically in both significance and difficulty, and many AI solutions have turned out to be less original than they appeared.

The new solution—which Price got in response to a single prompt to GPT-5.4 Pro and posted on www.erdosproblems.com, a website devoted to the Erdős problems, just over a week ago—is different. The problem it solves has eluded some prominent minds, lending it a certain esteem. More importantly, the AI seems to have used a totally new method for problems of this kind. It’s too soon to say with certainty, but this LLM-conceived connection may be useful for broader applications—something hard to find among recently touted AI triumphs in math.
“This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”

The question Price solved—or prompted ChatGPT to solve—concerns special sets of whole numbers in which no number can be evenly divided by any other. Erdős called these “primitive sets” because of their connection to similarly indivisible prime numbers. “A number is prime if it has no other divisors, and this is kind of generalizing that definition from an individual number to a collection of numbers,” says Jared Lichtman, a mathematician at Stanford University. Any set of prime numbers is automatically primitive, because primes have no factors (except themselves and the number one).

Erdős also came up with the Erdős sum, a “score” you can calculate for any primitive set. He showed that this sum could never exceed roughly 1.6—and conjectured that the maximum is achieved by the (infinite) set of all prime numbers. Lichtman proved Erdős right as part of his doctoral thesis in 2022.

Erdős also noticed that the score drops if all of a set’s numbers are large—the larger the numbers, the lower the score. He guessed that the lowest this score could be was exactly one, a limit the score would approach as the set’s numbers approached infinity. Lichtman tried to prove this, too, but got stuck like everyone else before him.
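The definitions above translate directly into code. A minimal sketch (assuming the standard formulation of the Erdős sum, the sum of 1/(a·ln a) over a set's elements, which the article paraphrases as a "score"):

```python
import math
from itertools import combinations

def is_primitive(s):
    """True if no element of the set evenly divides another."""
    return all(b % a != 0 for a, b in combinations(sorted(s), 2))

def erdos_sum(s):
    """The Erdős sum of a set: sum of 1/(a * ln a), defined for elements >= 2."""
    return sum(1.0 / (a * math.log(a)) for a in s)

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, ok in enumerate(sieve) if ok]

print(is_primitive({2, 3, 5, 7}))   # True: primes never divide one another
print(is_primitive({2, 4, 5}))      # False: 2 divides 4

# Erdős conjectured (and Lichtman proved in 2022) that the primes maximize
# the sum, at about 1.64; the partial sum over primes below 10,000 already
# exceeds 1.4 but converges very slowly.
print(erdos_sum(primes_up_to(10_000)))
```

The second conjecture in the article concerns the other direction: if every element of a primitive set is forced to be large, this same sum gets squeezed down toward 1.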
Price wasn’t aware of this history when he entered the problem into ChatGPT on an idle Monday afternoon. “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”

He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge. The duo had jump-started the AI-for-Erdős craze late last year by prompting a free version of ChatGPT with open problems chosen at random from the Erdős problems website. (An AI researcher subsequently gifted them each a ChatGPT Pro subscription to encourage their “vibe mathing.”) Reviewing Price’s message, Barreto realized what they had was special, and experts whom he notified quickly took notice.

“There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight. More importantly, they already see other potential applications of the AI’s cognitive leap.

“We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”

Lichtman is hopeful because ChatGPT’s discovery validates a sense he’s had since graduate school. “I had the intuition that these problems were kind of clustered together and they had some kind of unifying feel to them,” he says.
“And this new method is really confirming that intuition.”

--------------------------------------------------------------------------------

2. Why has there been so little progress on Alzheimer's disease?
Source: https://freakonomics.com/podcast/why-has-there-been-so-little-progress-on-alzheimers-disease/
Site: freakonomics.com
Submitter: chiefalchemist (Hacker News)
Submitted: 2026-04-26 00:12 UTC (Hacker News)
HN activity: 108 points · 46 comments
Scrape failed: http 403

--------------------------------------------------------------------------------

3.
USB Cheat Sheet (2022)
Source: https://fabiensanglard.net/usbcheat/index.html
Site: fabiensanglard.net
Submitter: gwerbret (Hacker News)
Submitted: 2026-04-25 21:51 UTC (Hacker News)
HN activity: 187 points · 47 comments
Length: 569 words (~3 min read)

May 05, 2022

USB Cheat Sheet

I spent time investigating a non-existent bug today because I misunderstood a USB term. So I made myself a cheat sheet. Maybe it will save someone some time.

| Marketing name          | Also known as                            | Signal      | Signal MiB/s | Wires | Max cable |
| USB 1.1                 | Full Speed                               | 12 Mbps     | 1.5 MiB/s    | 4     | 4m        |
| USB 2.0                 | Hi-Speed                                 | 480 Mbps    | 60 MiB/s     | 4     | 4m        |
| SuperSpeed USB 5Gbps    | USB 3.0, USB 3.1 Gen 1, USB 3.2 Gen 1    | 5,000 Mbps  | 625 MiB/s    | 8     | 3m        |
| SuperSpeedPlus USB 10Gbps | USB 3.1 Gen 2, USB 3.2 Gen 2           | 10,000 Mbps | 1,250 MiB/s  | 8     | 2m        |
| SuperSpeedPlus USB 20Gbps | USB 3.2 Gen 2x2                        | 20,000 Mbps | 2,500 MiB/s  | 12    | 1m        |
| USB4 20Gbps             | USB4 Gen 2x2                             | 20,000 Mbps | 2,500 MiB/s  | 12    | 0.8m      |
| USB4 40Gbps             | USB4 Gen 3x2                             | 40,000 Mbps | 5,000 MiB/s  | 12    | 0.8m      |

Gen naming convention, lanes, and speed: USB Gen AxB, where A = generation and B = number of lanes used.

| Name            | Signal      | Sig. total (a) | Encoding  | Effective bits (b) | Effective bytes (b) | Real life (c)  |
| USB 3.2 Gen 1x1 | 5,000 Mbps  | 5,000 Mbps     | 8b/10b    | 4,000 Mbps         | 500 MiB/s           | 400 MiB/s [1]  |
| USB 3.2 Gen 1x2 | 5,000 Mbps  | 10,000 Mbps    | 8b/10b    | 8,000 Mbps         | 1,000 MiB/s         | 800 MiB/s      |
| USB 3.2 Gen 2x1 | 10,000 Mbps | 10,000 Mbps    | 128b/132b | 9,696 Mbps         | 1,212 MiB/s         | 780 MiB/s [2]  |
| USB 3.2 Gen 2x2 | 10,000 Mbps | 20,000 Mbps    | 128b/132b | 19,392 Mbps        | 2,424 MiB/s         | 1,600 MiB/s [4]|
| USB4 Gen 2x2    | 10,000 Mbps | 20,000 Mbps    | 128b/132b | 19,392 Mbps        | 2,424 MiB/s         | 1,600 MiB/s    |
| USB4 Gen 3x2    | 20,000 Mbps | 40,000 Mbps    | 128b/132b | 38,787 Mbps        | 4,848 MiB/s         | 2,700 MiB/s [5]|

Note: multi-lane systems use lane striping (on TX) and lane bonding (on RX).
a - What they put on the box.
b - Rate after encoding overhead (e.g. 8b/10b = 20% overhead).
c - Real-life sequential read rate.

Cables
- 4 wires: PWR, GND, D+, D-.
- 8 wires: PWR, GND, D+, D-, RX+, RX-, TX-, TX+.
- 12 wires: PWR, GND, D+, D-, RX1+, RX1-, RX2-, RX2+, TX1+, TX1-, TX2-, TX2+.
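The "Effective" columns in the Gen-naming table are just the per-lane signal rate times the lane count, minus the line-encoding overhead. A quick sketch of that arithmetic (following the cheat sheet's own convention of dividing Mbps by 8 to get its "MiB/s" figures):

```python
def effective(signal_mbps, lanes, payload_bits, total_bits):
    """Effective throughput after line-encoding overhead.

    Returns (effective Mbps, effective "MiB/s" as the cheat sheet
    computes it, i.e. Mbps / 8).
    """
    mbps = signal_mbps * lanes * payload_bits / total_bits
    return mbps, mbps / 8

# USB 3.2 Gen 1x1: one 5 Gbps lane, 8b/10b encoding (20% overhead)
print(effective(5000, 1, 8, 10))    # -> (4000.0, 500.0)

# USB 3.2 Gen 2x2: two 10 Gbps lanes, 128b/132b encoding (~3% overhead)
mbps, mib = effective(10000, 2, 128, 132)
print(round(mbps), round(mib))      # ~19394 Mbps, ~2424 MiB/s
# (the table's 19,392 Mbps comes from rounding the per-lane rate first)
```

As the "Real life" column shows, protocol overhead and storage hardware typically keep measured transfer rates well below even these effective figures.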
Note: 1 USB lane = 1 twisted wire pair (+/-).
Note: 4 wires = 1 half-duplex lane; 8 wires = 2 lanes (one up, one down); 12 wires = 4 lanes (two up, two down).

USB-A/B connectors (4/8 wires): Type-A 4-wire, Type-A 8-wire, Type-B 4-wire, Type-B 8-wire.

USB-C connectors (12 wires): only the USB Type-C connector has enough pins to support two lanes.
- CC1 and CC2 handle downstream-facing port (DFP) and upstream-facing port (UFP) detection. Also used for power negotiation and alt-mode switching.
- SBU1 and SBU2 are secondary bus wires, used for the DisplayPort AUX channel and hot-plug detection (HPD).

Charge rates / cable types

| Specification                  | Max. voltage | Max. current | Max. power |
| USB 2.0                        | 5V           | 500mA        | 2.5W       |
| USB 3.0 / USB 3.1              | 5V           | 900mA        | 4.5W       |
| USB Battery Charging (BC) 1.2  | 5V           | 1.5A         | 7.5W       |
| USB-C Current Mode (non-PD)    | 5V           | 3A           | 15W        |
| USB-C Power Delivery (PD 1/2)  | 20V          | 5A           | 100W       |
| USB-C PD 3.1 (EPR)             | 48V          | 5A           | 240W       |

Specifications: USB 1.0 (Jan 1996), USB 1.1 (Sep 1998), USB 2.0 (Apr 2000), USB 3.0 (Nov 2008), USB 3.1 (Jul 2013), USB 3.2 (Sep 2017), USB4 (Aug 2019).

References
[1] Universal Serial Bus Revision 3.0 Specification
[2] Real-world USB 3.2 Gen 2 Performance
[3] USB 3.1 Tested: Performance
[4] World’s First USB 3.2 Demonstration | Synopsys
[5] USB4.0 M.2 NVMe Enclosure Review

--------------------------------------------------------------------------------

4. The Free Universal Construction Kit
Source: https://fffff.at/free-universal-construction-kit/
Site: F.A.T.
Author: fffffat
Published: 2012-03-19
HN activity: 284 points · 55 comments
Length: 2.7K words (~12 min read)
Language: en-US

Ever wanted to connect your Legos and Tinkertoys together? Now you can — and much more. Announcing the Free Universal Construction Kit: a set of adapters for complete interoperability between 10 popular construction toys.

Fig. 1. The Free Universal Construction Kit.
Contents: Overview · Motivation · Download · Implementation · Legal and Commercial Implications · License and Disclaimers · Credits, Contact and Acknowledgements · Keywords

Overview

Video by Riley Harmon for F.A.T. Lab + Sy-Lab.

F.A.T. Lab and Sy-Lab are pleased to present the Free Universal Construction Kit: a matrix of nearly 80 adapter bricks that enable complete interoperability between ten* popular children’s construction toys. By allowing any piece to join to any other, the Kit encourages totally new forms of intercourse between otherwise closed systems—enabling radically hybrid constructive play, the creation of previously impossible designs, and ultimately, more creative opportunities for kids. As with other grassroots interoperability remedies, the Free Universal Construction Kit implements proprietary protocols in order to provide a public service unmet—or unmeetable—by corporate interests.

The Free Universal Construction Kit offers adapters between Lego, Duplo, Fischertechnik, Gears! Gears! Gears!, K’Nex, Krinkles (Bristle Blocks), Lincoln Logs, Tinkertoys, Zome, and Zoob. Our adapters can be downloaded from Thingiverse.com and other sharing sites as a set of 3D models in .STL format, suitable for reproduction by personal manufacturing devices like the Makerbot (an inexpensive, open-source 3D printer).

Motivation

Our kids are already doing it! And when we were growing up, ourselves, we did it too—or we tried to, anyway. Connecting our toys together. Because: what if we want to make a construction which is half-Tinkertoys, half-K’Nex? Why shouldn’t we be able to? We dreamed about this possibility years ago, when we were small, and we knew then, as we know now, that we’d need some adapters to help. The advent of low-cost 3D printing has made such adapters possible, and with it, a vast new set of combinatorial possibilities for children’s creative construction toys. Opening doors to new creative worlds is one major reason we created the Free Universal Construction Kit.
Another is that we believe expertise shouldn’t be disposable — and that children’s hard-won creative fluency with their toys shouldn’t become obsolete each Christmas. By allowing different toy systems to work together, the Free Universal Construction Kit makes possible new forms of “forward compatibility”, extending the value of these systems across the life of a child. Thus, with the Kit’s adapters, playsets like Krinkles (often enjoyed by toddlers) can still retain their use-value for older children using Lego, and for even older tweens using Zome.

The Kit offers a “best of all worlds” approach to play and learning that combines the advantages of each toy system. We selected construction sets for inclusion based on their significant level of market penetration, as well as for the diversity of features they brought to the Kit’s collection. Some of the supported construction systems, for example, offer great mechanical strength, or the ability to build at large scales; others offer the means to design kinetic movements; and still others permit the creation of a wide range of crystallographic geometries and symmetries. Using these classic toys as a foundation, the Free Universal Construction Kit offers a “meta-mashup system” ideally provisioned for the creation of transgressive architecture and chimeric readymades.

Finally, in producing the Free Universal Construction Kit, we hope to demonstrate a model of reverse engineering as a civic activity: a creative process in which anyone can develop the necessary pieces to bridge the limitations presented by mass-produced commercial artifacts. We hope that the Kit will not only prompt people to create new designs, but more importantly, to reflect on our relationship with material mass-culture—and the rapidly evolving ways in which we can better adapt it to our imaginations.
Download

The Free Universal Construction Kit 3D models are freely available in .STL format from three locations:

- Individual adapters from the Free Universal Construction Kit may be downloaded from Thingiverse.com — the world’s foremost website dedicated to the free sharing and remixing of user-created digital design files.
- The complete Free Universal Construction Kit can also be downloaded in its entirety*, as a 29MB .zip archive from the F.A.T. Lab web site, here. Note: all units are in inches.
- We expect the Kit to be available shortly from The Pirate Bay, as a torrent in TPB’s new “physibles” (physical downloadables) channel.

In addition to the Kit itself, we also offer for download this attractive B1 poster (4.5MB PDF, in two versions: gray background / white background).

Figure 2. The Free Universal Construction Kit adapter matrix. (PDFs: Gray, White)

We (F.A.T. Lab and Sy-Lab) neither sell nor distribute physical copies of the Free Universal Construction Kit. Please do not ask us to do so. Individuals seeking their own physical copies of the Kit, in whole or in part, are encouraged to download our files and reproduce them with open-hardware desktop 3D printers like the Makerbot, RepRap, Ultimaker, or Printrbot. Alternatively, copies for private use may be available from a personal fabrication service bureau; for awesome service, international/anywhere shipping, and quick turnaround, we highly recommend Ponoko.com for personalized 3D printing in a wide variety of materials. Shapeways and QuickParts are good, too. You may also find a 3D printer in the architecture, industrial design, and/or mechanical engineering departments of your local university.

Please note that our license for the Free Universal Construction Kit prohibits commercial use of these designs in mass production; note, however, that we encourage individuals to contract with fabrication service bureaus for the creation of personal copies. For more information, see our license and disclaimers, below.
Implementation

The Free Universal Construction Kit comprises nearly 80 two-way adapters. These allow each of the different construction toys (Lego, Tinkertoy, Fischertechnik, etc.) to interface with any of the other supported systems. Prior to modeling, the dimensions of the various toy connectors were reverse-engineered with an optical comparator fitted with a digital read-out accurate to less than one ten-thousandth of an inch (0.0001 in., or 2.54 microns).

Figure 3. A Bristle Block being measured in the optical comparator.

The resulting precision ensures that the Free Universal Construction Kit “actually works”, enabling tight snap-fits between custom and commercial components.

Figure 4. The Kit in use, connecting four different systems together.

Below is a partial gallery of assorted Kit adapters, respectively compatible with (clockwise from top left): Lego, Zoob, Tinkertoys, and Gears! Gears! Gears!. Click on the images for higher-resolution photographs.

In addition to its many one-to-one adapters, the Free Universal Construction Kit also includes a special fist-sized Universal Adapter Brick which provides connectivity between all of the supported construction systems:

Fig. 9. The Universal Adapter Brick.

Producing physical prints from our provided 3D models prompts certain fabrication considerations. According to Wikipedia, the precision of Lego pieces is less than 10 microns. As of early 2012, however, standard Makerbot printers have an XY resolution of 100 microns (0.1mm) and a default layer thickness of 360 microns (0.36mm). We thus caution that fabrication of the Free Universal Construction Kit with current (2012-era) solutions for DIY 3D printing, such as the Makerbot, Printrbot, or RepRap, may lack the precision required for reliable or satisfactory coupling with standard commercial pieces. A great deal depends on how well-tuned the printer is; thus, your mileage may vary.
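The precision gap described above is easy to quantify. A small sketch using only the figures quoted in the text (comparator read-out accuracy, the Lego molding tolerance attributed to Wikipedia, and 2012-era Makerbot resolution):

```python
# All lengths in microns; figures are the ones quoted in the article.
IN_TO_UM = 25_400                      # 1 inch = 25,400 microns

comparator_um = IN_TO_UM / 10_000      # 0.0001 in read-out accuracy
lego_tolerance_um = 10                 # molding precision of Lego pieces
makerbot_xy_um = 100                   # 2012 Makerbot XY resolution
makerbot_layer_um = 360                # 2012 Makerbot default layer height

print(comparator_um)                          # -> 2.54: measurement beats the target
print(makerbot_xy_um / lego_tolerance_um)     # -> 10.0: printing is ~10x too coarse
print(makerbot_layer_um / lego_tolerance_um)  # -> 36.0: worse still across layers
```

In other words, the measurements were several times finer than the target tolerance, while 2012-era DIY printing was an order of magnitude coarser than it, which is exactly why the authors caution that snap-fit quality will vary.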
In any case, we expect this situation will improve gradually, but inexorably, in tandem with improvements to these vibrantly evolving fabrication platforms. The artist’s proof shown here was created in a UV-cured white resin using a commercial-grade Objet (“polyjet”) 3D printer, which has a horizontal resolution of 42 microns and a layer thickness of 16 microns. Ponoko.com and other private fabrication services offer printing from Objet machines and other high-resolution devices.

Legal and Commercial Implications

Consider the frustrating experience of purchasing a new computer (a Mac, say) and discovering that it will not play your aunt’s Windows Media video of your little cousins. Likewise, imagine your aunt’s corresponding annoyance when she finds that her PC will not play the Apple QuickTime video you sent her of your cats. This humiliating little episode isn’t an accident; it’s just a skirmish in a never-ending battle between giant commercial entities, played out thousands of times every day in exactly such micro-punishments to customers like you. If you’re well-informed, you may happen to know about VLC — a free, open-source video player, developed by independent hackers as a grassroots remedy for exactly this problem.

Until the advent of ubiquitous 3D printing, software remedies like VLC weren’t readily available for hardware products, like toys. That’s changing. Today’s manufacturers have little or no intrinsic motivation to make their products compatible with anyone else’s. Indeed—despite obvious benefits to users everywhere—the implementation of cross-brand interoperability can be nearly impossible, given the tangled restrictions of patents, design rights, and trademarks involved in doing so. So we stepped up. The Free Universal Construction Kit is the VLC of children’s playsets. As we can see from the example above, interoperability is a question of power and market dominance.
Most market leaders regard interoperability as an anti-competitive nuisance, a regulatory check on their ambition, or a concession to the whining of lesser players. Quite simply, interoperability is the request of the disenfranchised. And which end-user, in so many ways, is less enfranchised than a preliterate child?

The simple fact is that no toy company would ever make the Free Universal Construction Kit. Instead, each construction toy wants (and indeed, pretends) to be your only playset. Within this worldview, the other manufacturers’ construction sets are just so many elephants in the room, competing for your attention on the shelves of Toys-R-Us. No longer. The Free Universal Construction Kit presents what no manufacturer could: a remedy providing extensible, post-facto syntactic interoperability for construction toys. Let the fun begin!

Some may express concern that the Free Universal Construction Kit infringes such corporate prerogatives as copyright, design right, trade dress, trademarks, or patents of the supported toy systems. We encourage those eager to enforce these rights to please think of the children (or perhaps the Streisand effect) — and we assert that the home printing of the Free Universal Construction Kit constitutes protected fair use. Simon Bradshaw et al., writing in “The Intellectual Property Implications of Low-Cost 3D Printing”, conclude that the public is legally allowed to make 3D prints that mate with proprietary parts, especially in cases (the “Must Fit Exception”) where a piece’s shape “is determined by the need to connect to or fit into or around another product”:

“Even where a registered design is copied via a 3D printer this would not be an infringement if it were done ‘privately and for purposes which are not commercial’. Both criteria must be met; it is insufficient that copying is not done for profit.
Purely personal use of a 3D printer to make items will thus not infringe a registered design.”

*In fact, the Free Universal Construction Kit deliberately avoids patent infringement. Part of our strategy for doing so is our choice to support older (“classic”) playsets: of the ten toy systems supported by the Kit, eight are no longer protected by active (20-year) patents. To take a few examples: Lego was patented in 1958; Lincoln Logs, in 1920; and Tinkertoys, in 1932. There are, however, two instances in which toy systems nominally supported by the Kit are still protected (as of this writing) by active patents: Zoob (patented 1996) and ZomeTool (patented 2002). For the Zoob and Zome systems, please note that we have delayed the release of the pertinent adapter models until December 2016 and November 2022, respectively.

The Free Universal Construction Kit is simply one “toy” illustration of a coming grassroots revolution, in which everyday people can—with desktop tools—overcome arbitrary restrictions in mass-manufactured physical culture. The burgeoning possibility of freely shared downloadable adapters has significant implications for industries where the attempt to create “technological lock-in” is a common business practice. For more on this subject, and the legal horizons of reproducing commercial products with home fabrication systems, please see:

- Bradshaw, Simon; A. Bowyer and P. Haufe. “The Intellectual Property Implications of Low-Cost 3D Printing”. 7:1 SCRIPTed 5, 2010.
- de Bruijn, Erik. “Fab It Yourself: Adapters & Consumer Lock-In”. Blog.erikdebruijn.nl, 13 September 2010.
- Hanna, Peter. “The next Napster? Copyright questions as 3D printing comes of age”. Arstechnica.com, April 2011.
- Ross, Valerie. “Can You Patent a Shape? 3D Printing on Collision Course With Intellectual Property Law”. Discover Magazine, 7 April 2011.
- Weinberg, Michael. “3D Printing Settlers of Catan is Probably Not Illegal: Is This a Problem?”. PublicKnowledge.org, 28 January 2011.
- Weinberg, Michael. “It Will Be Awesome if They Don’t Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology”. PublicKnowledge.org, 10 November 2010.

In addition to the writers above, we tip our hats to Thingiverse user Zydac, whose related project (a Duplo-to-Brio track adapter) led us to these legal writings; to Andrew Plumb (Clothbot), who has probed the legal and practical implications of Lego-compatible bricks for some time; and to Daan van den Berg, who has explored 3D-printed remixes of branded forms as a mode of critical artistic practice.

License and Disclaimers

The Free Universal Construction Kit and its associated media are licensed under and subject to the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/legalcode). The official URL for the Free Universal Construction Kit is https://fffff.at/free-universal-construction-kit. You are free to copy, distribute, and transmit the Kit, and to remix and/or adapt the Kit; in doing so, you must attribute the Kit to “F.A.T. Lab and Sy-Lab”, and include a link to the project using the URL above. We especially welcome extensions to the Kit which provide compatibility with as-yet-unsupported play systems. Please note that extensions to the Kit require the same or a similar license. You may not use the Kit in commercial mass production; however, we permit individuals to contract with fabrication service bureaus (e.g. Ponoko, Shapeways, etc.) for personal copies.

Lego®, Duplo®, Fischertechnik®, Gears! Gears! Gears!®, K’Nex®, Krinkles®, Bristle Blocks®, Lincoln Logs®, Tinkertoys®, Zome®, ZomeTool® and Zoob® are trademarks of their respective owners. The Free Universal Construction Kit is not associated or affiliated with, or endorsed, sponsored, certified or approved by, any of the foregoing owners or their respective products.
We are not a commercial company; we are artists, hackers, and activists. The Kit is not a product; it is a provocation. F.A.T. Lab and Sy-Lab, in cooperation with Adapterz LLC, (1) perform solely the service of publishing the Free Universal Construction Kit, (2) do not participate in any production, public manufacture, or sale of the items displayed here, and (3) offer no opinion, warranty, or representation as to the safety, quality, or functionality of the Kit. F.A.T. Lab, Sy-Lab, and Adapterz LLC therefore offer no warranty of any kind, express or implied.

Please cite the Free Universal Construction Kit, and/or this article, as follows: Free Art and Technology [F.A.T.] Lab and Sy-Lab. “The Free Universal Construction Kit.” Fffff.at, 20 March 2012.

WARNING: CHOKING HAZARD! Small parts. Not for children under 3 years.

Credits, Contact and Acknowledgements

For press or other inquiries about the Free Universal Construction Kit, please contact info@adapterz.org. The Kit was conceived and developed by the F.A.T. (Free Art and Technology) Lab in collaboration with Sy-Lab, and is represented, for legal purposes, by Adapterz, LLC. The Kit’s “advertisement” video was created by Riley Harmon.

The creators express gratitude to: our families; our lawyers; the children appearing in our demonstration video, and their families; Jean Aw, Eric Brockmeyer, David Familian, Andy Flowers, Michael Joaquin Grey, Mark Gross, Riley Harmon, Marcie and Lawrence Hayhurst, Allie Oswell, Eric Paulos, Bre Pettis, Kent Sheely, Michael Weinberg, and the STUDIO for Creative Inquiry. The Kit files are sportingly hosted by Thingiverse.com.

Keywords

Toys, kits, construction sets, construction toys, construction systems, Lego, Duplo, Fischertechnik, Gears! Gears!
Gears!, K’Nex, Krinkles, Bristle Blocks, Lincoln Logs, Tinkertoys, Zome, ZomeTool, Zoob, constructivist learning, play, connectors, adaptors, adapter piece, adapter brick, adapters, universal translator, gender changer, modularity, interoperability, interoperability remedy, compatibility layer, technological lock-in, post-facto plug-and-play syntactic interoperability, shim, computer-aided design, 3D models, STL files, physibles, rapid prototyping, 3D printing, Makerbot, RepRap, Printrbot, Thingiverse, Ponoko, F.A.T. Lab, Sy-Lab, fair use, remix, hybrid, mashup.

The commons and the public good are continually threatened by narrow interests seeking private gain. Please continue to support and protect the free, open, and non-proprietary exchange and development of ideas and information online.

--------------------------------------------------------------------------------

5. Flickr: The first and last great photo platform
Source: https://petapixel.com/2026/04/22/flickr-the-first-and-last-great-photo-platform/
Site: PetaPixel
Author: Guest Author
Published: 2026-04-22
HN activity: 57 points · 27 comments
Length: 2.9K words (~13 min read)
Language: en

As the global population of photographers swells, so do their digital libraries, leaving everyone with the same question: where and how to share their best work. Flickr was among the first online communities designed to address that dilemma, and it remains one of the best. Some demand sweeping overhauls or argue the price isn’t justified. However, Flickr’s refusal to chase fleeting trends—opting instead for iterative improvements—is actually one of its greatest strengths. And while its annual Pro subscription is on the pricier side, ultimately, the benefits continue to outweigh the costs.

Editor’s Note: This article was written largely as a rebuttal to Matt Payne’s January 2026 article, “Empty Promises: A Deep Dive into Flickr Pro for 2026.” It is worth familiarizing yourself with that perspective before diving into Mr.
Weinstein’s response below.

A Brief History

Launched in 2004 with an iconically missing vowel, Flickr pioneered the Web 2.0 era of social photo sharing before enduring a decade of minor and cosmetic changes amid corporate stasis under Yahoo. In 2013, Yahoo made a splashy announcement that it was refreshing the user interface and would offer all users one terabyte of free photo space. But the longer Yahoo held onto Flickr, the more the platform’s continued existence was in question.

After years of neglect, SmugMug acquired the platform in 2018. Don MacAskill, SmugMug’s CEO, said “[w]e’ll work very hard to not ruin Flickr. After successfully not ruining it, we’ll work even hard[er] to make it better than its already awesome self,” and “Flickr’s community is unique in the world and on the Internet. That’s where we’d like to invest.” So, what are the results of those investments, and is Flickr Pro still worth it?

Flickr in 2026

The Social Core

In stark contrast to the majority of photo-focused services, Flickr remains primarily a simple photo-sharing website where one can find friends and view their work in a clean, chronological stream. While the platform supports video, the feature feels like a quiet afterthought—a logical choice for a site built by and for photography enthusiasts. There is simply no chance that Flickr will suddenly pivot to video to chase short-form trends.

Groups & Discovery

Flickr groups exist for countless topics, including street photography.

The heart of the Flickr community lies in its Groups, many of which cater to highly specific niches that you won’t find elsewhere. These range from technical communities focused on specific lenses, camera bodies, or brands, to aesthetic enclaves for analog purists, black-and-white enthusiasts, and quirkier corners like Stick Figures in Peril.

Metadata & Organization

Flickr’s EXIF data and geotags help users see where and how photos were taken.
The platform’s utility is bolstered by its robust handling of tags and geotagging, allowing for a level of searchability that modern social media often lacks. Users can manage their libraries through Sets, Galleries, and Albums, making it easy to organize thousands of images by subject matter, location, person, or era. Flickr preserves and displays comprehensive EXIF data, including detailed camera and lens information for every shot. Integration & Syndication Flickr also retains its early web roots: every user has an RSS feed, and the site maintains open APIs and makes it simple to create embeds for other websites—a lingering reminder of the flexible features that made early Flickr such a vital tool for bloggers and curators. Explore Explore has the potential to bring thousands of viewers to a photo. Of course, there’s also Explore, Flickr’s way of highlighting 500 photos each day. When a photo is selected for Explore—driven by an inscrutable, often mercurial algorithm—it typically receives thousands of views and a surge of engagement. Pro Benefits In 2026, the leap from a free account to Flickr Pro primarily allows a user to present a long-term or large body of work publicly. The most immediate benefit is the removal of the 1,000-photo cap (which also limits free users to a mere 50 non-public photos), replaced by unlimited, full-resolution JPEG storage. For those who use Flickr as a portfolio, the Pro status also ensures an ad-free experience—not just for the photographer, but for anyone visiting their photostream, ensuring the work remains the sole focus without the distraction of third-party banners. Pro users also gain access to Advanced Stats, providing granular data on the sources of views and traffic, including which specific groups or tags are driving traffic. Pro members get a suite of partner perks, including savings on Adobe Creative Cloud, Blurb photo books, Phlearn memberships, and SmugMug plans, and a significant 5% off gear at KEH. 
Additionally, Pro members gain access to exclusive savings on a wide range of classes and education. These are, at best, fringe benefits, but a user who spends a bit under $2,000 at KEH in a year will have essentially justified the entire cost of the Pro membership through the discount. Why Flickr is Still Great in 2026 There are certainly cheaper ways in 2026 to host an ad-free, public portfolio on the open web. Yet, few to none meet those criteria while simultaneously offering an active, built-in community of dedicated photography enthusiasts seeking out high quality photography. I suspect that’s the value proposition that keeps many Flickr users paying for Pro in 2026, myself included. Other options are better positioned to present a professional photographer’s work to the world exactly as they want it seen. But Flickr Pro shouldn’t be confused with “Flickr for professionals,” just like the iPhone Pro isn’t intended for “professional smartphone users.” Most Flickr users are serious—or not-so-serious—hobbyists. But more generally, Flickr is great precisely because it isn’t trying to become the next Instagram, TikTok, crypto play, metaverse experiment, or AI training ground. While it’s always nice to have exposure on Flickr, the platform is largely devoid of the “influencers” who dominate other networks. In an era of algorithm-driven content, Flickr remains a sanctuary for photography enthusiasts who are genuinely excited to see what their peers are up to. The community remains very active; while you’ll encounter the occasional robotic “Great shot!” comment, the platform still fosters engaged discussion, honest feedback, and shared tips that are hard to find on more transactional social networks. If it feels like a ghost town, consider joining new groups and interacting with new users whose work you enjoy and might learn from. The robust tagging and geotagging systems make Flickr an underappreciated platform for location scouting. 
Before heading to a new area, a user can search within the area or for specific landmarks to see how a location looks at different times of day, in varying weather conditions, or across different seasons. Furthermore, the full EXIF data display makes Flickr a great place to learn. There is no better place to see what a different lens or camera body can produce in the hands of real photographers. Flickr makes it easy to assign a Creative Commons license to photos. One of Flickr's most underrated power features is the Organize tool. It provides a high-level view of your entire library, allowing you to batch-edit titles, tags, and permissions with a simple drag-and-drop interface, ensuring every photo has the exact attributes you want it to have. Flickr offers robust features to limit who sees your work, allowing you to hide specific photos from public searches while still sharing them with a select circle via private links. And it's easy to change the license associated with photos in bulk, for instance to assign a Creative Commons license so others can share or reuse your work if you so choose. To support the sense of community, Flickr regularly hosts free photography competitions that celebrate its members' talent, including the annual Your Best Shot contest and themed events like the World Photography Day Contest. Flickr often hands out prizes, big and small, in conjunction with popular photo-related brands. And photos entered into contests often get a boost in interaction from other participants—a nice consolation prize. Flickr organized photo walks for various anniversaries of the platform, including for its 10th anniversary. Flickr supports its community in the real world too. The site facilitates photo walks, sponsors Photoville in New York City, and maintains a presence at major photography gatherings. These events are excellent opportunities to meet like-minded photographers, swap stories about gear, and discover new subjects to shoot. 
I've personally met avid Flickr users in places like New York City, Atlanta, and London; it's a true global network. While it's a rarely used feature, if a photo uploaded to the site contains another Flickr member, you can tag that user directly, making it easy to keep track of friends and collaborators from real-world photowalks. The site is also heavily promoting MODE by Flickr, a three-day photography festival taking place in Minneapolis from September 18–20, 2026. Billed as a "photographer's playground," MODE is designed to bring the community away from their devices and into the physical world through workshops, darkroom sessions, and city-wide photowalks. At a minimum of $330 for admission, plus airfare to and lodging in Minnesota, MODE may prove to be a one-time experiment, but it's a genuine effort to invigorate the community, which is worthy of praise. And while Explore is and has been algorithmically curated for years, the site is generally free of artificial intelligence, both with respect to the content users upload and to useless features shoehorned into the service. Flickr's Terms make clear that users own the copyright to their photos: You retain all intellectual property rights in and to any User Content you post, upload or otherwise make available through the Services, including the copyright in and to your photos and videos. SmugMug does not claim any ownership, right, title or interest in and to your User Content. While users grant SmugMug the right to reproduce users' images to provide the service, there's little risk—at least under the current Terms—that Flickr will turn into an AI-focused platform, mining its users' photos. Of course, third parties may take a different view and scrape the full Flickr corpus, but there's only so much Flickr, like virtually every website operator, can do with respect to that scenario. While Flickr has dabbled in allowing users to license photos, commerce has never been the core element of the service. 
Today, rather than acting as a middleman for stock sales, as do many of its competitors, Flickr focuses on providing the infrastructure for photographers to manage their own destinies. Ultimately, Flickr’s greatest strength in 2026 is its refusal to pivot or sell out. It’s Not Perfect Tech Issues While Flickr has an impressive list of attributes, it is far from flawless. When SmugMug acquired the service and migrated its massive library to Amazon Web Services (AWS), the platform entered a period of relative instability. Even in 2026, users occasionally encounter the dreaded “bad panda”—Flickr’s internal parlance for a site error or outage—and intermittent slow-loading pages remain an unfortunate reality of the browsing experience. A fully functional platform is table stakes, especially for the price Pro users pay. Stagnant Community Hubs A meetup of New York City-based Flickr users. Flickr Groups used to feature robust conversations, but much of that energy has migrated to platforms like Reddit or Facebook. While many groups remain active—specifically those centered around local photography clubs, specific social organizations, and regional events—the broader “global” discussion feels quieter than it once was. Similarly, the internal FlickrMail messaging system has not seen a significant update in years; it lacks conveniences like multi-person threads or the ability to easily embed photos and map locations directly into a chat. The SmugMug management promised improvements to the community aspects of Flickr, and more is needed—beyond a pricey, experimental festival in Minnesota—before they can declare success on this front. Rusty Features Some of the site’s most beloved legacy features are beginning to show their age. 
The Camera Finder, for example, is still a useful resource for seeing trending gear, but it lacks granular data or the ability to filter in any useful way. It used to be possible to filter photos taken by a specific camera by genre (e.g., landscape, sports). Restoring this feature—and building out robust searchability by camera body, lens, and exact settings—would be a massive win for the community. The World Map lets users scout locations around the world well before arrival. The World Map could also use attention. While geotags are a fantastic resource, the World Map currently lacks the filtering and searchability that would make it a much more powerful and useful way to find photos with certain keywords at a specific place at a specific time. The "Interestingness" Algorithm The "Interestingness" algorithm—which powers the Explore page—can be enigmatic. While tastes vary, virtually everyone can agree that the algorithm sometimes rewards objectively mundane photos as more "interesting" than more captivating work. I suspect that the algorithm is tuned to reward certain user behaviors that Flickr considers desirable at the expense of showcasing truly "interesting" photos. While some users have long since learned to game the system, complaining about Explore is an old cliché—and it ultimately represents only a fraction of the platform's value. Nonetheless, improvements would be welcome. Beyond JPEG Flickr supports photos with wide embedded color spaces, including ProPhoto RGB, so modern displays can show photos uploaded to Flickr with extremely rich colors. Flickr allows Pro users to showcase their work at full resolution, but as of 2026, JPEG is over 30 years old, and camera and display hardware has surpassed its limitations. 
While Flickr doesn't overly compress photos and does support modern color profiles—allowing the service to take advantage of wide gamuts like Display P3 used by high-end smartphones and monitors—it still lacks native support for next-generation formats like JPEG XL, HEIC, or AVIF. These formats are increasingly supported and commonplace, offer better compression and greater bit depths, and adding them would significantly modernize the platform's technical foundation. The Cost of Independence There is an old adage in tech: "If you're not paying for the product, you are the product." Through that lens, Flickr Pro users are definitively not the product. Currently, Flickr Pro costs $82 when billed once per year, which is a significant jump from its early days. To put that in perspective, 500px is $59.94 per year, and Glass, a recent entrant in the field sometimes considered Flickr's closest competitor, costs roughly $40 per year. On the other hand, they lack the full feature set described above, and they don't offer their Pro-level users an ad-free gallery space open to the public that doesn't generate its profit by profiling its users for advertisers. A 100-Year Vision Hosting petabytes of high-resolution data is an expensive endeavor—Yahoo should have never offered terabytes of storage for free. MacAskill addressed this balance directly when speaking to the community about two years ago: "Flickr is the healthiest it's ever been. More active users, more engagement, more connections, more revenue, more of everything – except people treating it like a photo dump. Most importantly, our members are ecstatic about it, it's now profitable and cash flow positive, so not in imminent danger (and we're trying to build it, sustainably, for 100+ years). IMHO, it's not nearly enough, yet, but the trajectory is awesome. It's working. 
And it's working without invading people's privacy, unlike nearly every other social media platform." He's also been clear very recently that SmugMug is "not planning on selling Flickr." Ultimately, while the site may feel rusty in a few places, its trajectory suggests a platform that is finally stable. For those who value privacy, a long-term home for their work, and an ad-free portfolio-like space, the Pro price tag is the cost of ensuring Flickr survives into the next decade and beyond. It's not officially a part of Flickr, but the closely affiliated non-profit Flickr Foundation is working on projects like the Data Lifeboat, which aims to be a "user-friendly archiving solution to ensure memories on Flickr can be enjoyed by future generations, in easily browsable packages." Flickr may seem like an anachronism in 2026, but the things that made it great decades ago continue to make it the best platform for sharing photos today. If you're looking for the next big thing, Flickr may not be for you. Flickr is great because—in contrast to virtually all of its competitors—it offers the features photography enthusiasts care about while avoiding distractions and forgoing monetization of its Pro users via advertising. It's a community with virtual and real-world events. It's a place to post and seek out your favorite photos. It's a place to be inspired. Because it isn't (currently) beholden to massive shareholder demands, it hasn't needed to "move fast and break things." Instead, it has moved deliberately, maintaining and improving the tools that matter. I expect to see more of that going forward and will willingly pay the (admittedly high) fee necessary to keep this little slice of the early, purer web alive—not for the sake of nostalgia, but because things actually were better back when the web connected real people, and platforms didn't aspire to take over the world. In short, if it's not broken, why fix it? 
About the author: Brett Weinstein is an amateur photographer and will mark 20 years of Flickr membership this year. His work is featured in the Smithsonian National Museum of African American History and Culture, he was the Photography Editor at the Emory Wheel and the 2008 Southeast Journalism Conference Best Press Photographer, and his photos have been listed with Getty and featured in press and advertising. By day, he is a privacy and consumer protection lawyer. The opinions expressed above are solely those of the author. -------------------------------------------------------------------------------- 6. OpenAI Privacy Filter Source: https://openai.com/index/introducing-openai-privacy-filter/ Site: OpenAI Submitter: tanelpoder (Hacker News) Submitted: 2026-04-23 00:14 UTC (Hacker News) HN activity: 126 points · 20 comments Length: 1.4K words (~7 min read) Language: en-US Today we’re releasing OpenAI Privacy Filter, an open-weight model for detecting and redacting personally identifiable information (PII) in text. This release is part of our broader effort to support a more resilient software ecosystem by providing developers practical infrastructure for building with AI safely, including tools⁠ and models⁠ that make strong privacy and security protections easier to implement from the start. Privacy Filter is a small model with frontier personal data detection capability. It is designed for high-throughput privacy workflows, and is able to perform context-aware detection of PII in unstructured text. It can run locally, which means that PII can be masked or redacted without leaving your machine. It processes long inputs efficiently, making redaction decisions in a quick, single pass. At OpenAI, we use a fine-tuned version of Privacy Filter in our own privacy-preserving workflows. We developed Privacy Filter because we believe that with the latest AI capabilities, we could raise the standard for privacy beyond what was already on the market. 
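Running redaction locally, as described above, comes down to replacing detected spans with placeholders. A minimal sketch of that final masking step (span offsets and labels are illustrative, not OpenAI's code):

```python
def mask_spans(text: str, spans: list[tuple[int, int, str]]) -> str:
    """Replace character spans (start, end, label) with [LABEL] placeholders.

    Spans are applied right-to-left so earlier offsets stay valid
    after each substitution.
    """
    for start, end, label in sorted(spans, reverse=True):
        text = text[:start] + f"[{label.upper()}]" + text[end:]
    return text

# Hypothetical detector output: the phone number at characters 11-19.
masked = mask_spans("call me at 555-0124", [(11, 19, "private_phone")])
```

Because no unredacted text leaves the process, a pipeline built this way can keep raw data on device, with only the masked output forwarded elsewhere.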
The version of Privacy Filter we are releasing today achieves state-of-the-art performance on the PII-Masking-300k benchmark, when corrected for annotation issues we identified during evaluation. With this release, developers can run Privacy Filter in their own environments, fine tune it to their own use cases, and build stronger privacy protections into training, indexing, logging, and review pipelines. Privacy protection in modern AI systems depends on more than pattern matching. Traditional PII detection tools often rely on deterministic rules for formats like phone numbers and email addresses. They can work well for narrow cases, but they often miss more subtle personal information and struggle with context. Privacy Filter is built with deeper language and context awareness for more nuanced performance. By combining strong language understanding with a privacy-specific labeling system, it can detect a wider range of PII in unstructured text, including cases where the right decision depends on context. It can better distinguish between information that should be preserved because it is public, and information that should be masked or redacted because it relates to a private individual. The result is a model that is strong enough to deliver frontier-level privacy filtering performance. At the same time, the model is small enough to be run locally–meaning data that has yet to be filtered can remain on device, with less risk of exposure, rather than needing to be sent to a server for de-identification. Privacy Filter is a bidirectional token-classification model with span decoding. It begins from an autoregressive pretrained checkpoint and is then adapted into a token classifier over a fixed taxonomy of privacy labels. Instead of generating text token by token, it labels an input sequence in one pass and then decodes coherent spans with a constrained Viterbi procedure. 
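The deterministic baseline being contrasted here is easy to see in miniature. A sketch of the rule-based approach (the patterns and label names are illustrative, not from OpenAI's taxonomy); note that it catches formatted identifiers like emails and phone numbers but has no way to flag a name like "Maya Chen", which is exactly the gap a context-aware model targets:

```python
import re

# Illustrative deterministic rules for the two formats mentioned above.
PATTERNS = {
    "private_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "private_phone": re.compile(r"\+?1?\s*\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}"),
}

def rule_based_pii(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_text) pairs found by the fixed patterns."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits
```

Such rules work for narrow, well-formatted cases but cannot use surrounding context to decide whether a string is private, which is the distinction the post draws.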
This architecture gives Privacy Filter a few useful properties for production use:

- Fast and efficient: all tokens are labeled in a single forward pass.
- Context aware: the language prior enables PII spans to be detected based on surrounding context.
- Long-context: the released model supports up to 128,000 tokens of context.
- Configurable: developers can tune operating points to trade off recall and precision depending on their workflow.

The released model has 1.5B total parameters with 50M active parameters. Privacy Filter predicts spans across eight categories: private_person, private_address, private_email, private_phone, private_url, private_date, account_number, and secret. The account_number category helps mask a wide variety of account numbers, including banking info like credit card numbers and bank account numbers, while secret helps mask things like passwords and API keys. These labels are decoded with BIOES span tags, which helps produce cleaner and more coherent masking boundaries.

Example input text:

Subject: Q2 Planning Follow-Up

Hi Jordan,

Thanks again for meeting earlier today. I wanted to follow up with the revised timeline for the Q2 rollout and confirm that the product launch is scheduled for September 18, 2026. For reference, the project file is listed under 4829-1037-5581. If anything changes on your side, feel free to reply here at maya.chen@example.com or call me at +1 (415) 555-0124.

Best,
Maya Chen

Text after masking personal identifiers:

Subject: Q2 Planning Follow-Up

Hi [PRIVATE_PERSON],

Thanks again for meeting earlier today. I wanted to follow up with the revised timeline for the Q2 rollout and confirm that the product launch is scheduled for [PRIVATE_DATE]. For reference, the project file is listed under [ACCOUNT_NUMBER]. If anything changes on your side, feel free to reply here at [PRIVATE_EMAIL] or call me at [PRIVATE_PHONE].

Best,
[PRIVATE_PERSON]

We developed Privacy Filter in several stages. 
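The BIOES scheme mentioned above (Begin, Inside, Outside, End, Single) is what turns per-token labels into clean spans. A simplified decoder, without the constrained Viterbi step the post describes and not OpenAI's implementation, shows the idea:

```python
def decode_bioes(tags: list[str]) -> list[tuple[int, int, str]]:
    """Decode BIOES tags into (start, end, label) spans; end is exclusive.

    Tags look like "B-account_number", "S-private_person", or "O".
    Malformed sequences (e.g. an E with no matching B) are dropped.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag == "O":
            start, label = None, None
            continue
        prefix, _, lab = tag.partition("-")
        if prefix == "S":                      # single-token span
            spans.append((i, i + 1, lab))
            start, label = None, None
        elif prefix == "B":                    # span begins
            start, label = i, lab
        elif prefix == "E" and start is not None and lab == label:
            spans.append((start, i + 1, lab))  # span ends
            start, label = None, None
        # "I" tags simply extend the current open span
    return spans
```

A constrained decoder additionally forbids invalid transitions (such as O followed directly by I), which is why the released model's mask boundaries come out coherent.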
First, we built a privacy taxonomy that defines the types of spans the model should detect. This includes personal identifiers, contact details, addresses, private dates, many different kinds of account numbers such as credit and banking information, and secrets such as API keys and passwords. Second, we converted a pretrained language model into a bidirectional token classifier by replacing the language modeling head with a token-classification head and post-training it with a supervised classification objective. Third, we trained on a mixture of publicly available and synthetic data designed to capture both realistic text and difficult privacy patterns. In parts of the public data where labels were incomplete, we used model-assisted annotation and review to improve coverage. We also generated synthetic examples to increase diversity across formats, contexts, and privacy subtypes. At inference time, the model's token-level predictions are decoded into coherent spans using constrained sequence decoding. This approach preserves the broad language understanding of the pretrained model while specializing it for privacy detection. We evaluated Privacy Filter on standard benchmarks and on additional synthetic and chat-style evaluations designed to test harder, more context-sensitive cases. On the PII-Masking-300k benchmark, Privacy Filter achieves an F1 score of 96% (94.04% precision and 98.04% recall). On a corrected version of the benchmark that accounts for dataset annotation issues identified during review, the F1 score is 97.43% (96.79% precision and 98.08% recall). We also found that the model can be adapted efficiently. Fine-tuning on even a small amount of data quickly improves accuracy on domain-specific tasks, increasing the F1 score from 54% to 96% and approaching saturation on the domain-adaptation benchmark we evaluated. Beyond benchmark performance, Privacy Filter is designed for practical privacy filtering in noisy, real-world text. 
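For reference, span-level F1 scores of the kind quoted above combine precision and recall as follows (a generic sketch; the benchmark's exact span-matching rules may differ). Note the quoted numbers are internally consistent: 2 x 94.04 x 98.04 / (94.04 + 98.04) is approximately 96.

```python
def span_prf(gold, pred):
    """Exact-match precision/recall/F1 over sets of (start, end, label) spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                        # spans matched exactly
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

Exact-match scoring is strict: a span that is off by one token counts as both a false positive and a false negative, which is one reason annotation issues in a benchmark can depress reported scores.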
That includes long documents, ambiguous references, mixed-format strings, and software-related secrets. The model card also reports targeted evaluation on secret detection in codebases and stress tests across multilingual, adversarial, and context-dependent examples. Privacy Filter is not an anonymization tool, a compliance certification, or a substitute for policy review in high-stakes settings. It is one component in a broader privacy-by-design system. Its behavior reflects the label taxonomy and decision boundaries it was trained on. Different organizations may want different detection or masking policies, and those policies may require in-domain evaluation or further fine-tuning. Performance may also vary across languages, scripts, naming conventions, and domains that differ from the training distribution. Like all models, Privacy Filter can make mistakes. It can miss uncommon identifiers or ambiguous private references, and it can over- or under-redact entities when context is limited, especially in short sequences. In high-sensitivity domains such as legal, medical, and financial workflows, human review and domain-specific evaluation and fine-tuning remain important. We are releasing OpenAI Privacy Filter to support stronger privacy protections across the ecosystem. The model is available today under the Apache 2.0 license on Hugging Face and GitHub. It is intended for experimentation, customization, and commercial deployment, and it can be fine-tuned for different data distributions and privacy policies. Alongside the model, we are sharing documentation covering the model architecture, label taxonomy, decoding controls, intended use cases, evaluation setup, and known limitations, so teams can understand both what the model does well and where it should be used carefully. Privacy protection for AI systems is an ongoing effort across research, product design, evaluation, and deployment. 
Privacy Filter reflects one direction we believe is important: small, efficient models with frontier capability in narrowly defined tasks that matter for real-world AI systems. We are releasing it because we think privacy-preserving infrastructure should be easier to inspect, run, adapt, and improve. Our goal is for models to learn about the world, not about private individuals. Privacy Filter helps make that possible. We’re releasing this preview of Privacy Filter to receive feedback from the research and privacy community and iterate further on model performance. -------------------------------------------------------------------------------- 7. 1-Bit Hokusai's "The Great Wave" (2023) Source: https://www.hypertalking.com/2023/05/08/1-bit-pixel-art-of-hokusais-the-great-wave-off-kanagawa/ Site: hypertalking.com Submitter: stephen-hill (Hacker News) Submitted: 2026-04-22 13:46 UTC (Hacker News) HN activity: 530 points · 88 comments Length: 399 words (~2 min read) 5 years ago I started a now completely stalled project (fingers crossed I can figure out how to restart soon) to draw all of Hokusai’s 36 views of Mount Fuji as 1-bit pixel art. Why? I started this project for no other reason than I love to get into the ‘flow state’ from this kind of creative endeavour, and obviously I love to use old Macintosh computers. It feels very satisfying to get each pixel to fall into place, capturing both the original vision of Hokusai and the aesthetic that Susan Kare mastered early on with ‘the Japanese lady’. Kare’s picture of course starred on the cover of every box of MacPaint and you can still buy beautiful prints of it today, directly from her. Another challenging aspect of this project is to make sure the images are the original Macintosh screen resolution of 512 x 342 pixels. Why do I do this to myself?! Well, it just felt ‘right’ and I guess I’m a glutton for punishment when I want to make things feel authentic. How? 
The idea is to recreate every one of Hokusai’s woodcut prints from the series on an early black and white Macintosh, using contemporary hardware and software. I usually use either my Quadra 700 or PowerBook 100, mostly because those are my reliable and easy to access computers (that run System 7, my favourite and most familiar OS of that era). Software-wise I use Aldus SuperPaint 3.0, which is what my family had when I was a kid. Yes, I’d say that all of this is 99% nostalgia-driven… Anyway, @polyducks urged me to share at least the first of these (although this was actually the 2nd or 3rd of the series I tackled, not sure why I did them ‘out of order’), “The Great Wave off Kanagawa”. Took me a while to get around to it, but here it is: 01 of 36 views of Mt. Fuji by hypertalking This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please, if you reproduce this or post it anywhere be sure to credit me and link back to this website! Bonus! As a little extra, if you have a Macintosh with a 640 x 480 screen, you can download a version (PNG | PICT as compressed .zip) to use as a desktop pattern. I’ll aim to post more from this project soon. -------------------------------------------------------------------------------- 8. America's Geothermal Breakthrough Source: https://oilprice.com/Alternative-Energy/Geothermal-Energy/Americas-Geothermal-Breakthrough-Could-Unlock-a-150-Gigawatt-Energy-Revolution.html Site: OilPrice.com Author: Felicity Bradstock Published: 2026-04-25 HN activity: 81 points · 91 comments Length: 846 words (~4 min read) Language: en The United States’ geothermal energy sector has gradually expanded in recent years, as states look to diversify their energy mix. While several renewable energy sectors have struggled to stay afloat under the Trump administration, the government has continued to show support for geothermal energy development. 
Further, innovations in enhanced geothermal technologies are expected to support greater sectoral expansion in the coming years. Geothermal energy is generated by drilling into heat pockets beneath the Earth's surface. The Earth's core has a temperature of around 5,200°C, while rock and water in the Earth's crust can reach temperatures of around 370°C. Energy operators typically drill into reservoirs just a few miles underground to access thermal energy in the rocks, as well as warm water deposits. The heat is used to drive turbines, producing carbon-free electricity. California is home to 53 of the 99 U.S. geothermal power plants, while Nevada hosts 32 power plants, Oregon and Utah each have four plants, Hawaii and Alaska have two each, and Idaho and New Mexico each have one. Several companies are now building upon existing techniques for accessing geothermal resources by integrating enhanced geothermal systems (EGS) into operations. While conventional geothermal systems produce energy using hot water or steam, pumped from naturally occurring hydrothermal reservoirs trapped in rock formations underground, EGS use innovative drilling technologies, such as those used in fracking operations, to drill horizontally and create hydrothermal reservoirs where they don't currently exist. EGS can help to further develop existing geothermal power generation sites and could support expansion to areas where geothermal resources cannot be easily accessed. The U.S. has a total summer capacity of around 2.7 GW of conventional geothermal power, contributing roughly 0.2 percent of the country's summer production capacity. The U.S. Geological Survey estimates that EGS offers 135 GW of clean energy production potential in the Great Basin of the U.S. Southwest alone, while other predictions suggest there could be up to 150 GW of production capacity. 
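As a rough sanity check on those figures (a back-of-envelope sketch using the article's rounded numbers, not an official statistic):

```python
# Article figures: 2.7 GW of conventional geothermal is ~0.2% of
# US summer capacity; the high-end EGS estimate adds 150 GW.
geothermal_gw = 2.7
share = 0.002                                    # roughly 0.2 percent
total_gw = geothermal_gw / share                 # implied US summer capacity
egs_gw = 150
# Geothermal's share if the full EGS potential were built out
# (total capacity grows by the same amount).
new_share = (geothermal_gw + egs_gw) / (total_gw + egs_gw)
```

The implied total is about 1,350 GW, and building out 150 GW of EGS would lift geothermal from a fraction of a percent to roughly a tenth of national summer capacity.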
The first EGS power generator in the United States is currently under development and is expected to launch in 2026. The Houston-based startup Fervo Energy is leading the race to produce geothermal power in the U.S., with plans to continue expanding operations in the coming years. Fervo signed a three-year deal with the power generation technology firm Turboden America, which will provide the geothermal company with 1.75 GW of organic Rankine cycle turbine capacity for its new geothermal projects. Paolo Bertuzzi, the president of Turboden America, said in a statement, "Geothermal energy will be essential in stabilising a strained power grid with clean, firm energy, and Fervo has shown strong leadership in advancing the sector." Bertuzzi added, "With this announcement, we are prepared to scale delivery in the U.S. market and add megawatts of new generation wherever and however they are required." Fervo will use the equipment to convert heat trapped underground into clean electricity to deliver power to the grid, as well as to run data centres. The company is currently developing the first 100 MW of its 500-MW Cape Station in Beaver County, Utah, which, once launched later this year, is expected to be the world's largest EGS. The Cape Station is thought to have as much as 4.3 GW of geothermal energy capacity. The firm is also developing an EGS in Nevada at its Corsac Station, which is expected to provide 115 MW of clean electricity for Google and the utility NV Energy. In April, Fervo filed a registration statement with the U.S. Securities and Exchange Commission for a proposed initial public offering (IPO). The startup said that it plans to list its Class A common stock on the Nasdaq under the ticker symbol "FRVO." The offering is now subject to market conditions and regulatory approval. This represents a major step forward for the U.S. geothermal energy sector. 
Fervo has said that it plans to expand its power plant portfolio significantly in the coming years, having leased almost 600,000 acres of public and private land in the U.S. West to date. The firm estimates that it has the potential to develop over 42 GW of total geothermal-energy capacity. Favourable federal energy policies from the Trump administration will support this expansion. Trump has shown support for geothermal energy projects in his second term in office, in contrast to his stance on other renewable energy sources, and in February the U.S. Department of Energy announced a $171.5 million funding opportunity to support next-generation geothermal field-scale tests for both electricity generation and exploration drilling, as part of President Trump's Unleashing American Energy executive order. The United States has significant potential to expand its geothermal energy production by using EGS, which could help diversify the country's energy mix, supporting greater energy security by reducing reliance on fossil fuels and driving down consumer energy bills. By Felicity Bradstock for Oilprice.com -------------------------------------------------------------------------------- 9. Using coding assistance tools to revive projects you never were going to finish Source: https://blog.matthewbrunelle.com/its-ok-to-use-coding-assistance-tools-to-revive-the-projects-you-never-were-going-to-finish/ Site: Matthew Brunelle's Blog Author: Matthew Brunelle Published: 2026-04-24 HN activity: 202 points · 118 comments Length: 1.4K words (~7 min read) Language: en Note: I initially drafted this before my last post on how Claude Code is getting worse. I'm putting it out now so I can reference it in a future post on OpenCode.
As you can imagine, my opinion on Claude Code has shifted since I wrote this. Long ago I attempted a personal project, but never finished due to life being busy. [1] Sort of like the Japanese word tsundoku, for the pile of books you intend to eventually read one day. We all have these projects, and they are good candidates for testing out AI coding assistance. After all, they were never going to get done anyway. The POC I put together was a shim between YouTube Music and the OpenSubsonic API. Explaining OpenSubsonic could be its own article, but for our purposes it's an API contract for nicely decoupling music streaming clients and servers. You can pick your own options for both. In my case I like Navidrome for the server, Feishin for desktop, and, as I mentioned in my post on GrapheneOS, Symfonium for Android. Anyways, the shim made YouTube Music conform to the API so I could add it to any of my clients. Under the hood I used ytmusicapi for metadata lookup and programmatically called yt-dlp to stream the music. Getting basic streaming working was pretty simple. However, there was a long tail of implementing all the endpoints in a conformant way. Then, as always, new shiny projects stole my attention away. Like that embedded Rust location project I promise I'll finish at some point. Maybe. Luckily, nothing was really novel in that streaming project, and there is a clear spec to implement, which is perfect for assisted coding. So a month and a half ago I thought I would test Claude Code with Opus 4.6 and see how it did implementing the project from scratch. After all, they gave me a free $50 in credit, so I might as well. The setup Since I had already written a proof of concept by hand, I had my own opinions about the implementation, and laying all of that out beforehand constrained the tool in a nice way. I did the following:
- Created a uv project with fastapi, pydantic, ytmusicapi and yt-dlp as dependencies.
- Changed main.py to the example FastAPI main file.
- Dropped the openapi spec for OpenSubsonic in the folder.
- Added a brief description in a readme file: "This project acts as a shim, exposing YouTube music as an opensubsonic client. It uses fastapi for its server with pydantic, ytmusicapi for metadata and yt-dlp for streaming. opensubsonic docs are available at: https://example.docsy.dev/docs/reference/ The openapi spec is in openapi.json."
- Added an empty TODO file.
- Generated a CLAUDE.md file using /init.

I also often add a section like this to the CLAUDE.md file:

## Conventions
- Methods should have type annotations for args and returns as well as docstrings.
- Use Pydantic for data modeling. Use modern Pydantic V2 conventions.
- Docstrings should use the Google style format with args and returns sections.
- Write unit tests with modern pytest style, e.g. top-level methods using `assert` and fixtures.

That's mostly based on past experience of what I have to repeatedly ask Claude Code not to do. I've bundled up this starting point into a git repository in case anyone else wants to try the experiment. Implementing the MVP With that setup done, I let Claude kick things off. The workflow I typically use is:
1. Enter plan mode.
2. Prompt for the next piece of work.
3. After getting the initial plan, look for gaps/problems and ask follow-up questions until I like the plan. Provide links to resources when Claude is off. Ask Claude to use the search tool to figure out what is idiomatic when there are multiple options and it is unclear to me which to take.
4. Use "Accept and clear context".
5. Repeat.

The first prompt I used was: Have a look at the openapi.json file. This is a spec for the opensubsonic api. Implement an async fastapi server that stubs out all of the methods. There are both older xml endpoints and newer style json endpoints. You only need to handle the newer json endpoints.
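For anyone curious what such stubs look like, here is a minimal sketch of the shared response envelope that Subsonic-style JSON endpoints return. The function names, fields, and version string are illustrative, not lifted from the spec or from the post's repository:

```python
from typing import Optional

# Every Subsonic-style JSON endpoint wraps its payload in a common
# "subsonic-response" envelope, so stubbed endpoints can share one helper.
# This is a sketch, not the project's actual code.

def subsonic_envelope(payload: Optional[dict] = None) -> dict:
    """Wrap a payload in the subsonic-response envelope."""
    body = {"status": "ok", "version": "1.16.1"}
    if payload:
        body.update(payload)
    return {"subsonic-response": body}

def ping() -> dict:
    # GET /rest/ping: the simplest stub, an empty but valid envelope.
    return subsonic_envelope()

def get_license() -> dict:
    # GET /rest/getLicense: stubbed as always valid.
    return subsonic_envelope({"license": {"valid": True}})
```

In a FastAPI app each of these would be registered as a route handler; the point is that "empty, but correctly structured" responses are enough to keep a client happy while the real logic is filled in later.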
For this kind of change I like to clear context after implementing and then ask a follow-up question: I implemented stubbed versions of all the methods specified in openapi.json. Double-check they are correct. Even with a spec, Claude Code makes mistakes the first time, but then will catch them (mostly) the second time through. Also, after implementing larger changes, I like to re-run /init to update the CLAUDE.md file to cover the new pieces. The next major prompt was: The methods for all endpoints are stubbed out now. I want to connect a subsonic client, search for a song, and stream it to the client. What is the minimum amount of functionality needed to implement that? Use ytmusicapi for searching YouTube music and yt-dlp for streaming. I got an implementation that looked reasonable pretty quickly, but it fell over when trying to actually connect with Feishin. At that point I iterated by testing the client and handing the server request logs to Claude Code. Even with a spec there are details that are not spelled out clearly, like how endpoints may have a .view suffix that needs to be stripped. Every time there was an error I generated new unit tests to cover it. I was shocked to hear the audio streaming through Feishin after only a couple of iterations. The main issues involved stubbed endpoints returning nothing; they mostly had to be updated to return empty, but correctly structured, responses. Just getting an MVP is the easy part though; it's not that far beyond what I implemented in my POC. Working through the long tail The rest of the work was the less interesting, more drudgery-filled part of making the project actually usable. From the docs, OpenSubsonic has ~80 endpoints spread over 15 different categories. For the MVP use case I only had to support:
- getLicense, getUser, getGenres and getMusicDirectories with empty, but valid, collections.
- getSong as a pass-through that returned the ID in the query params and default values.
- search3 with a very basic ytmusicapi call.
- stream with a yt-dlp call wrapped in an asyncio.to_thread to extract the URL for the "bestaudio" format.
- getCoverArt with a call to yt-dlp to extract the cover image URL.

To support the full functionality of a subsonic client I:
- Added simple in-memory caching for ytmusicapi calls to avoid hitting usage limits.
- Used sqlite for storing music metadata and implemented all the endpoints in the browsing category. Even getTopSongs, by querying for the top songs list.
- Saved the song to disk as it streamed to avoid redownloading songs. I had to add handling to clean up the incomplete file when a client disconnects from the stream endpoint before the file was fully downloaded.

I knew all these things had to be done to make my own POC more usable, and I could have done them, but never did. At the same time, since I never planned to release anything, I absolutely skipped the hard bits around authentication. Altogether I was able to get a working service that I could connect to from a subsonic client in a short evening. In the end I dubbed the project "Sub-standard". Is this good? I don't want to sound like an AI coding assist booster. I still have fears around deskilling from relying on these tools too much. That's why I still bang my head against the wall trying to learn Rust. In my mind there are different buckets for personal projects. One is things I do to learn and grow, and the other is things I really wish existed. [2] This kind of project falls into the second bucket. Using AI coding assist to reify those projects is sort of a form of wish fulfillment. I never would have gotten to it, but now I can have the project. One less metaphorical book sitting unread on the bookshelf. In the end I think the important thing is not whether you are doing projects in bucket 2, but whether you are also still doing the stretch projects in bucket 1. Or at least that is the excuse I tell myself. ↩︎ Also other buckets, I don't want to imply those are the only two.
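The stream endpoint described above (a blocking yt-dlp lookup pushed onto a worker thread with asyncio.to_thread) can be sketched as follows; extract_stream_url is a hypothetical stand-in for the real yt-dlp call:

```python
import asyncio

def extract_stream_url(video_id: str) -> str:
    # Hypothetical stand-in for the blocking yt-dlp call that resolves
    # the "bestaudio" format URL for a given video.
    return f"https://audio.example/{video_id}"

async def stream(video_id: str) -> str:
    # Run the blocking extraction on a worker thread so the async
    # server's event loop stays responsive while the lookup runs.
    return await asyncio.to_thread(extract_stream_url, video_id)

url = asyncio.run(stream("abc123"))
```

In the real shim the resolved URL is then proxied to the client (and saved to disk as it streams); the sketch only shows the to_thread hand-off.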
↩︎ -------------------------------------------------------------------------------- 10. The Joy of Folding Bikes Source: https://blog.korny.info/2026/04/19/the-joy-of-folding-bikes Site: Korny's Blog Author: Korny Sietsma Published: 2026-04-18 HN activity: 97 points · 59 comments Length: 790 words (~4 min read) Language: en April 19, 2026 3 minute read I was chatting to a friend about my folding bike and I had the urge to write about it - because this falls in the category of “Things I wish I’d had decades ago”. And maybe I can encourage some others to try these wonderful devices. Note: I’m 3 months into a new job so blogging has taken a back seat to drinking from a firehose of new domain knowledge, new people, new tech. I’m still playing with AI-assisted coding, but at a slower pace - I do hope to blog more about this when things calm down. 12 years ago I started cycling in London, commuting by train, and I used the bicycle hire scheme mis-named at the time “Boris Bikes”. It was OK but a bit of a hassle - bikes were heavy, payment was fiddly, and often the hire racks would be empty in the morning and full in the evening. So I followed the advice of other commuters and got this beautiful device - it cost £1000 at the time, a fair bit of money, but on a Ride to Work scheme I could pay this weekly over a year, so it was £4 a week, pre-tax, which made it quite affordable. It’s a Brompton - and they are a marvellous brand, but I don’t want to just say “Get a Brompton” as I’m sure other brands must be competing in this space - and Bromptons are pricey. So do your own research. I also (after a couple of annoying flats) got puncture-proof Schwalbe Marathon Plus tyres - and I haven’t had a single puncture since. And like I said at the start - I so wish I’d had something like this years and years ago. So many years of commuting in Melbourne where I’d walk slowly to a station, or drive to a station and have to cram into busy parking. 
So many years where my bike would languish in a shed, probably with flat tyres, because I only got it out on specific "exercise" attempts. The folding bike:
- Lives in my study. I have a nicer bike in the shed but almost never get it out, because the bike in my study is so convenient.
- Can be carried in one hand - it's heavy, about 12kg plus bags, but that's ok for short distances.
- Can go on the train - this is the biggest benefit; commuting is so much easier when you can go cycle -> train -> cycle. Most trains, even ones with "no bikes" rules, allow them - they aren't any bigger than a large suitcase.
- Never gets punctures.
- Can go in the boot of the car easily - when I get the car serviced, I drive to the garage, then cycle home, and cycle back to the garage at the end of the day.
- Can be carried into the office or cafes or shops - no locking it on the street; a big benefit in London, where bike thieves are everywhere and tend to carry bolt cutters or angle grinders!

I do have a lock - a folding 'silver' grade Abus Bordo lock that mounts on the bike. But I only really use it in my home town, where thieves are much rarer, or in the very rare case where I want to go in a cafe and there isn't room for the bike - and only if I can sit with the bike in eyeshot! I get it serviced every year or two. And after 11 years, it's had nothing major go wrong - a few cable replacements and the like, but it still has the original frame, wheels, and gears. That's pretty impressive for 11 years of commuting, though post-Covid I only tend to commute one day a week. For a lot of people this should be fairly simple economics. Our station parking is £10 a day - current Brompton prices start at £1400 - so even ignoring pre-tax schemes and savings on other transport like the Underground, a Brompton would pay for itself in 140 working days, or 28 weeks for the poor folks still commuting every day. Plus I just love the freedom of cycling, and the exercise!
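The break-even arithmetic from that last paragraph, spelled out with the prices quoted in the post:

```python
# Station parking vs. buying a folding bike, using the post's figures.
brompton_price = 1400      # GBP, current entry-level Brompton price
parking_per_day = 10       # GBP, station parking per day

break_even_days = brompton_price / parking_per_day   # 140 working days
break_even_weeks = break_even_days / 5               # 28 weeks at five days a week
```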
#protip: If you're cycling in one of the supported areas, the free CycleStreets app is marvellous. It uses OpenStreetMap data, so users can update it when roads change, and it lets you choose quiet vs fast routes. People ask me if cycling in London is safe - it's fine if you use an app like this to avoid the worst roads, ride sensibly with a bit of care about passing trucks or buses, and (gasp) actually obey traffic signals. -------------------------------------------------------------------------------- 11. Math Is Hard – OpenBSD Stories Source: http://miod.online.fr/software/openbsd/stories/vaxfp.html Site: miod.online.fr Author: Miod Vallat Submitted: 2026-04-23 15:29 UTC (Hacker News) HN activity: 52 points · 1 comment Length: 3.8K words (~17 min read) Language: en When developing software to run in a Unix environment, you will often be able to use the same system features and benefit from good developer tools, regardless of the particular platform you're working on, as most processors will provide a rich instruction set and virtual memory, among other things. When you're on the other side of the fence, working in the kernel, all the gory details which differ heavily across platforms can no longer be ignored, and sometimes the shortcomings of a given processor architecture can become a real pain in the arse. For example, if you have read the m88k saga, you might remember that the need for the operating system exception handler to perform all the pending loads and stores before returning from exception processing had been a source of problems for years. The 88100 processor is not the only processor which sometimes makes the kernel developer's life harder than it could have been. Let me tell you about a processor design choice which turned out to have a significant cost in the kernel (but in a rare situation.) The VAX architecture, introduced at the end of 1977, is one of the oldest 32-bit architectures.
The architecture has a large instruction set and plenty of addressing modes, but nothing fancy: no out-of-order execution, no branch delay slots, no register renaming, no hyper-threading, and even no cache memory on the earliest designs, which did not need any as they wouldn't run faster than the memory refresh cycle. (Back then, processor speeds were expressed as cycle times in micro- or nanoseconds rather than megahertz: a 5MHz processor would be described as having a 200ns cycle time. In comparison, memory refresh cycles would be around 120ns and, as progress was made, slowly decreased, with 100ns memory being common at the end of the 1980s, 80ns in the first half of the 1990s, and 70ns and 60ns later on.) The exception model of the VAX was also quite simple, with the ``Exceptions and Interrupts'' chapter of the VAX Architecture Reference Manual being only 36 pages long in the first edition (and 43 in the second edition, mostly because of a slightly larger font rather than extra text.) Quoting from it: A trap is an exception that occurs at the end of the instruction that caused the exception. Therefore the PC saved on the stack is the address of the next instruction that would normally have been executed. [...] A fault is an exception that occurs during an instruction and that leaves the registers and memory in a consistent state such that elimination of the fault condition and restarting the instruction will give correct results. After an instruction faults, the PC saved on the stack points to the instruction that faulted. So far, this is textbook processor design. If the processor encounters a situation which is not recoverable (and will cause your process to be killed), it's a trap. If, however, there is a chance that some recovery action can be done and the offending instruction given another chance, then it's a fault. For example, accessing a memory page which is not mapped will cause a fault.
If the address is legitimate, the appropriate page and its contents will be fetched from swap (or from the binary file you are running), and the operation can be restarted. If the address is not legitimate, then your process will be sent a SIGSEGV signal and die. Dividing by zero, on the other hand, is a trap. No matter how one may try to bend the laws of mathematics, there is no way for such a computation to ever deliver a meaningful result. Your process will be sent a SIGFPE (Floating-Point Exception) signal - even if this was an integer divide. (The siginfo_t extra information will let a hypothetical signal handler tell integer divide by zero (FPE_INTDIV) and floating-point divide by zero (FPE_FLTDIV) apart.) So far, so good - the VAX exception handler (trap() in sys/arch/vax/vax/trap.c) would let the VM system recover the missing page on faults, and would send a SIGFPE signal down the throat of your process for arithmetic traps. This code had remained almost unchanged since 3BSD. Did you know? Back in 1980, the illegal instruction signal nowadays known as SIGILL was called SIGINS, SIGSEGV was called SIGSEG, SIGKILL was called SIGKIL, SIGFPE was called SIGFPT, SIGTERM was called SIGTRM, and roads were uphill both ways... Excerpt from 3BSD sys/h/param.h, dated January 5th, 1980:

/*
 * signals
 * dont change
 */
#define NSIG 17
/*
 * No more than 16 signals (1-16) because they are
 * stored in bits in a word.
 */
#define SIGHUP  1   /* hangup */
#define SIGINT  2   /* interrupt (rubout) */
#define SIGQUIT 3   /* quit (FS) */
#define SIGINS  4   /* illegal instruction */
#define SIGTRC  5   /* trace or breakpoint */
#define SIGIOT  6   /* iot */
#define SIGEMT  7   /* emt */
#define SIGFPT  8   /* floating exception */
#define SIGKIL  9   /* kill, uncatchable termination */
#define SIGBUS  10  /* bus error */
#define SIGSEG  11  /* segmentation violation */
#define SIGSYS  12  /* bad system call */
#define SIGPIPE 13  /* end of pipe */
#define SIGCLK  14  /* alarm clock */
#define SIGTRM  15  /* Catchable termination */

In late April 2002, Todd Miller, who was - among other things - taking care of Perl in the OpenBSD base system, tried the latest Perl snapshot, which would eventually become Perl 5.8, and noticed it would fail to build on the i386 and vax platforms, because miniperl (a subset of Perl itself used during the build to produce various files needed by the full-blown Perl) would sometimes spin, apparently stuck but keeping the processor busy. Investigating, he managed to produce a standalone reproducer.

Date: Tue, 30 Apr 2002 16:24:50 -0600
From: Todd C. Miller
To: private OpenBSD mailinglist
Subject: i386 divide by zero bug

The following program hangs forever with:

29142 a.out PSIG SIGFPE caught handler=0x1 mask=0x0 addr=0x17ba trapno=8

Vax has similar behavior when you overflow a double.

 - todd

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>

int
main(int argc, char **argv)
{
	int i;

	signal(SIGFPE, SIG_IGN);
	i = 1 / 0;
	exit(0);
}

The i386 situation got taken care of quite quickly, but we were left with the vax situation. On May 7th, there was this very terse, but to the point, status report on the OpenBSD developers chatroom:

Todd, what about that SIGFPE stuff?
What about it? It's still fucked as far as I know
And that means that when perl gets updated, it won't work on vax...

One week later, this was still pending...

ok, so Todd, the new perl just wants a vax FPE fix eh?
Yes.
the correct behaviour should be?
The problem is that when you try to ignore SIGFPE and an overflow occurs the kernel keeps delivering the signal and doesn't stop. It should not deliver the signal at all since it is ignored.
and it should... do what?
advance over the instruction I suppose. I guess. There are ways to tell the vax to ignore FPU exceptions but I didn't find any real info on it.

The next day, I chimed in:

I was thinking about the SIGFPE-in-a-loop problem and found this note: When we get an arithmetic fault of types 8,9,10. The PC is backed up to point at the instruction causing the fault. If we just send a SIGFPE and return, and there is no SIGFPE hander, the program goes into an infinite loop
heh that might be what we are experiencing here
I'll check with the VARM this evening

(VARM here being the VAX Architecture Reference Manual.) This note was actually an excerpt from the Linux-vax project, which was not dead yet at that time. Its todo list is no longer online, but has been saved by the Wayback Machine. The complete text from which I quoted was: When we get an arithmetic fault of types 8,9,10. The PC is backed up to point at the instruction causing the fault. If we just send a SIGFPE and return, and there is no SIGFPE hander, the program goes into an infinite loop with the arith_fault handler, and the faulting instr. Should we a) try and advance PC, or b) send it a signal that kills it? After some tinkering, I had a crude diff which had a chance to solve the problem.

Date: Wed, 15 May 2002 19:44:14 +0000
From: Miod Vallat
To: Hugh Graham, Todd C. Miller
Subject: the vax SIGFPE problem, WIP

As told on ICB, I think I've found the reason behind the SIGFPE loop. Arithmetic fault can either be "traps", or restartable "faults". In the fault case, the frame pc points to the instruction that faulted, and not the following instruction, in case we could save the world and make it not fault again. Since we only deliver a signal in this case, it loops.
The workaround is to skip to the next instruction. I cooked the following diff, but it's not finished compiling, so be careful, it might not be a bright idea, but I think you might have comments on the way I'm doing it... Oh, and ddb needs fixes to properly recognize two-byte opcodes, but this will be a later diff. Miod [...]

The problem was indeed simple: if the arithmetic exception was a fault, as opposed to a trap, and the SIGFPE signal was ignored, then we had to resume process execution after the faulting instruction. But the VAX exception model does not give us the ability to return from the exception and skip that instruction, so the kernel had to skip the instruction by itself. VAX instructions are of variable length, depending on the actual operands and addressing modes used. This meant that, in order to compute the correct instruction length, the kernel had to disassemble the instruction to skip. Which is no simple task since, when using some of the most insane addressing modes, a VAX instruction can span more than 16 bytes! The high-level logic was simple and easy to document:

Index: vax/trap.c
===================================================================
RCS file: /cvs/src/sys/arch/vax/vax/trap.c,v
retrieving revision 1.22
diff -u -r1.22 trap.c
--- vax/trap.c	2002/03/14 03:16:02	1.22
+++ vax/trap.c	2002/05/15 19:38:24
@@ -313,8 +313,25 @@
 	}
 	if (trapsig) {
 		sv.sival_ptr = (caddr_t)frame->pc;
 		trapsignal(p, sig, frame->code, typ, sv);
+
+		/*
+		 * Arithmetic exceptions can be of two kinds:
+		 * - traps (codes 1..7), where pc points to the
+		 *   next instruction to execute.
+		 * - faults (codes 8..10), where pc points to the
+		 *   faulting instruction.
+		 * In the latter case, we need to advance pc by ourselves
+		 * to prevent a signal loop.
+		 *
+		 * XXX this is gross -- miod
+		 */
+		if (code == (T_ARITHFLT | T_USER) && frame->code >= 8) {
+			extern void *skip_opcode(void *);
+
+			frame->pc = skip_opcode(frame->pc);
+		}
 	}
 	if (umode == 0)

And all the gory details had to be put in that new skip_opcode function. About six hours later, I had an ugly workaround: I was reusing part of the disassembler code from the kernel debugger to parse the faulting instruction and compute its length.

Date: Wed, 15 May 2002 21:28:53 +0000
From: Miod Vallat
To: Hugh Graham, Todd C. Miller, Theo de Raadt
Subject: working vax sigfpe diff

As Hugh and Todd already know, the SIGFPE problem is very simple: Arithmetic fault can either be "traps", or restartable "faults". In the fault case, the frame pc points to the instruction that faulted, and not the following instruction, in case we could save the world and make it not fault again. Since we only deliver a signal in this case, it loops. The workaround is to skip to the next instruction. To do so, I'm borrowing some MD ddb code, hence a lot of ugly #ifdef to ensure that non-DDB kernels can have this fix and not bring too much stuff. Miod [...]

The feedback I received was mostly negative: while everyone acknowledged that this diff solved a real problem, and that there was no way to skip an instruction other than parsing it to compute its length, nobody wanted to involve the kernel debugger code in the fix, as we wanted to be able to build kernels without it, and computing an instruction length is not really part of a debugger's job anyway. So I reworked my changes to make skip_opcode completely independent from the debugger code, at the cost of duplicating a few lines of code.

Date: Thu, 16 May 2002 00:49:16 +0000
From: Miod Vallat
To: Theo de Raadt, Hugh Graham, Todd C. Miller
Subject: improved vax sigfpe diff with goodies

This new diff:
- does not interfere with ddb anymore, at the expense of a few lines in machdep.c
- features my improved db_disasm that correctly recognizes two-byte opcodes.

Builds with or without option DDB, passes the fpe regress test, no issues so far here. Comments? Miod [...]

There were no objections to that new version of the diff, and it went in shortly after.

Fix a long standing problem on vax: on "arithmetic fault" exceptions, we schedule a SIGFPE signal delivery to the faulting process. However, arithmetic faults come in two flavors: "traps" that are "regular" exceptions, and "faults" that are restartable exceptions. In the "fault" case, the frame pc points to the faulting instruction, instead of the next instruction, in case we could save the world by tweaking memory and make the instruction not fault again when restarted. In practice, this led to processes blocked in a SIGFPE loop madness. To avoid this, add a skip_opcode() routine to compute the address of the next opcode, effectively skipping the offending instruction; this routine is a very stripped-down db_disasm(). While there, enhance the ddb disassembler to correctly recognize and disassemble two-byte opcodes. ok hugh@, deraadt@

This fix made its way into NetBSD seven years later. However, two days later, Michael Hitch noticed a bug in this change and fixed it:

On the vax, the trapsignal() call will change frame->sp to point to a callg on the user's stack that calls the user's signal handler, so do the skip_opcode() before calling trapsignal(). A floating point overflow no longer causes a signal loop. This should stop the native compile hangs trying to compile src/lib/libm/complex/catan.ln.

This time, it was my turn to let this slip past my radar. I only carried the fix over to OpenBSD three years later, when the import of SQLite in the base system caused that bug to be triggered when building on vax.
When handling SIGFPE, do the `advance pc if exception is a fault (as opposed to a trap)' dance before invoking trapsignal(), which will mess with the pc too. My bug initially, can't believe I never noticed; fixed first in NetBSD. This makes libsqlite3 build.

So, all is well that ends well. But there remains an unanswered question: with BSD having been running on VAX hardware since 1979, how come this problem was not fixed until 2002? One possible reason is that few programs, if any, ignored SIGFPE (or attempted to handle it), so when SIGFPE got delivered, these programs would be terminated immediately, without looping on the offending instruction. But I think the real reason is different. Mind you, in the early years of the VAX, these arithmetic faults did not exist - there were only arithmetic traps, where the saved pc already points to the next instruction. Which is something one can only know if:
- one had been a Digital employee at that time,
- one had been a Digital customer who had a VAX system reworked by Digital technicians at that time, or
- one has a 2nd edition VAX Architecture Reference Manual (first published in 1991) and paid careful attention to the note on page 257.

I suppose very few of my readers will satisfy any of these three conditions, so I will explain.
In the first edition of the VAX Architecture Reference Manual, on page 231, table 5.1 lists the Arithmetic Exception Type Codes:

Table 5.1 Arithmetic Exception Type Codes

  Exception Type                        Mnemonic      Decimal  Hex
  Traps
    integer overflow                    SS$_INTOVF    1        1
    integer divide-by-zero              SS$_INTDIV    2        2
    floating overflow                   SS$_FLTOVF    3        3
    floating or decimal divide-by-zero  SS$_FLTDIV    4        4
    floating underflow                  SS$_FLTUND    5        5
    decimal overflow                    SS$_DECOVF    6        6
    subscript range                     SS$_SUBRNG    7        7
  Faults
    floating overflow                   SS$_FLTOVF_F  8        8
    floating divide-by-zero             SS$_FLTDIV_F  9        9
    floating underflow                  SS$_FLTUND_F  10       A

("Decimal" in the exception types above refers to computations involving data in "packed decimal" format, where each half-byte (nibble) stores one decimal digit.) Note that the three fault conditions also exist as trap conditions. In fact, their descriptions are quite similar. For example: Floating Overflow Trap -- A floating overflow trap is an exception that indicates that the last instruction executed resulted in an exponent greater than the largest representable exponent for the data type after normalization and rounding. [...] Floating Overflow Fault -- A floating overflow fault is an exception that indicates that the last instruction executed resulted in an exponent greater than the largest representable exponent for the data type after normalization and rounding. [...] This confusion gets cleared up in the second edition. The same table (now table 5.2, on page 255) only lists:

Table 5.2 Arithmetic-Exception Type Codes

  Exception Type             Mnemonic      Decimal  Hex
  Traps
    integer overflow         SS$_INTOVF    1        1
    integer divide-by-zero   SS$_INTDIV    2        2
    decimal divide-by-zero   SS$_FLTDIV    4        4
    decimal overflow         SS$_DECOVF    6        6
    subscript range          SS$_SUBRNG    7        7
  Faults
    floating overflow        SS$_FLTOVF_F  8        8
    floating divide-by-zero  SS$_FLTDIV_F  9        9
    floating underflow       SS$_FLTUND_F  10       A

Note how types 3 and 5 have disappeared, and the description of type 4 no longer mentions floating-point.
After the various traps and faults descriptions, the note finally gives us the clue: Note Floating overflow, floating underflow, and floating divide by zero were originally implemented as traps on the VAX-11/780, and had type codes 3, 4, and 5, respectively. The architecture was later modified to include only floating-point faults, and all VAX-11/780s were upgraded. Therefore, in the very beginning, when the only VAX systems were 11/780s, all arithmetic exceptions were traps, and could not get restarted. There was simply no need for the BSD kernel to skip the instruction, as the hardware had already done the work. The only other exceptions which were faults, not traps, were memory management exceptions, which would always behave as "either the operating system can fix the problem and restart the instruction, or this is a non-recoverable error and your program can't continue" (and if you ignore SIGSEGV, it is considered perfectly acceptable that your program spins in a SIGSEGV loop until you kill it.) When the architecture was changed to turn these into faults, I guess nobody paid enough attention to the consequences of that change to realize the need for the operating system to sometimes skip the faulting instruction, as one could not imagine software would mask SIGFPE or try to mess with the register values and restart the computation. It is also very likely that the consequences of this change were only considered from a VMS point of view, with Unix (well, BSD) being considered irrelevant by Digital at the time. But the net result is that the hardware does not provide any facility to get the address of the next instruction, in case a fault needs to be handled as a trap. After all, since the instruction has been executed, the address of the next one is known somewhere in the processor. I wish there had been a way to get that address easily (either from the trap frame or from a special processor register), as this would have made fixing the problem simpler.
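To make that work concrete, here is a toy model of what a routine like skip_opcode() has to do: with variable-length instructions, the only way to find the next pc is to decode the current instruction far enough to learn its total length. The opcode and operand-specifier tables below are invented for illustration; the real VAX tables are far larger, and real instructions can exceed 16 bytes.

```python
# Toy model of instruction-length computation. Invented encoding:
# one opcode byte, followed by one specifier byte per operand, where
# the specifier's high nibble (the "mode") says how many extra bytes
# of operand data follow. Not actual VAX tables.

OPERAND_COUNT = {0x01: 0, 0x90: 2, 0xC1: 3}   # opcode -> operand count
EXTRA_BYTES = {0x0: 0, 0x8: 1, 0xE: 4}        # mode -> extra bytes

def skip_opcode(mem: bytes, pc: int) -> int:
    """Return the address of the instruction following the one at pc."""
    opcode = mem[pc]
    length = 1  # the opcode byte itself
    for _ in range(OPERAND_COUNT[opcode]):
        mode = mem[pc + length] >> 4
        length += 1 + EXTRA_BYTES[mode]  # specifier byte + its operand data
    return pc + length

# A made-up stream: opcode 0x90 with two mode-0 specifiers, then opcode 0x01.
code = bytes([0x90, 0x00, 0x01, 0x01])
```

Even in this toy form, the kernel cannot know where the next instruction starts without consulting per-opcode and per-mode tables, which is exactly why the real fix ended up being a stripped-down disassembler.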
But this situation is rare enough that the cost of having the kernel do the work turned out to be acceptable. If you are a bit more curious about this, there are a few interesting documents which you can find on Bitsavers. The VAX-11 System Reference Manual, revision 5, dated February 1979, contains edit history information. The exceptions are described in chapter 6, and it seems that the change from traps to faults was documented in the 6th revision of the chapter, dated 31-Jan-79. This implies the actual processor rework took place sometime earlier. Since the second VAX model, the VAX-11/750, was only announced in 1980, this is also consistent with all the mentions that only the 780 models had the "every arithmetic exception is a trap" behaviour. VAX/VMS Internals and Data Structures, for VMS 2.2, published in April 1981, also mentions, on page 2-12 (page 71 of the pdf file), that

  On the VAX-11/750, these three floating point exceptions are faults. On the VAX-11/780, they are traps.

...which hints that modification of the VAX-11/780 systems had not started at the time of writing. Figuring out when VAX-11/780 installations started to be modified by Digital field engineers (and in which order) would be interesting detective work, but I doubt the paperwork trail of these reworks still exists somewhere, especially with Digital having been bought by Compaq and then later by HP. After all, we're talking about events having taken place about 45 years ago, which is an eternity in computing time... The VAX-11 Architecture Reference Manual, dated 20 May 1982, on page 6-6 (page 312 of the pdf file), when describing the FU bit in the PSW (Processor Status Word), which can be used to enable or disable floating-point underflow faults, mentions:

  On the original VAX-11/780 a trap occurs; on all other VAX Processors a fault occurs.

Although the complete document is dated 1982, this particular chapter is dated 12-Dec-80.
The first edition of the published VAX Architecture Reference Manual dates from 1987; this is the book on the left in the picture above (from which I learned almost all of my VAX knowledge). The DEC STD 032 VAX Architecture Standard document contains the same text as the second edition of the VAX Architecture Reference Manual, but typewritten and with crude drawings, while the second edition book uses a nice, much more readable font. The note about the 11/780 systems being modified can be found on page 5-12 (page 361 of the pdf file). Also, in the few VMS release notes which can be found there on Bitsavers, there does not seem to be any mention of an 11/780 rework being required (or advised). The VMS 1.5 release notes, dated February 1979, in section 4.3, refer to a "CVTTP FCO" (CVTTP being a VAX instruction processing decimal data, FCO being a Field Change Order, when a Field Engineer is required to apply hardware changes to the system) required for proper Cobol-74 operation; but this is not related to floating-point arithmetic exceptions, and thus not the change discussed here. -------------------------------------------------------------------------------- 12. Optimizing Datalog for the GPU Source: https://dl.acm.org/doi/10.1145/3669940.3707274 Site: dl.acm.org Submitter: tosh (Hacker News) Submitted: 2026-04-23 13:34 UTC (Hacker News) HN activity: 26 points · 3 comments Scrape failed: http 403 -------------------------------------------------------------------------------- 13. New 10 GbE USB adapters are cooler, smaller, cheaper Source: https://www.jeffgeerling.com/blog/2026/new-10-gbe-usb-adapters-cooler-smaller-cheaper/ Site: Jeff Geerling Submitter: calcifer (Hacker News) Published: 2026-04-24 HN activity: 552 points · 324 comments Length: 876 words (~4 min read) For years, the best way to get 10 gigabit networking on laptops was to buy an expensive, large, and hot 10 GbE Thunderbolt adapter.
With new RTL8159-based 10G USB 3.2 adapters coming onto the market, the bulky adapters might be a thing of the past. Just look at the size of the thing in comparison to my Thunderbolt adapters: 2.5G and even 5G USB adapters have been out for a while, but sometimes you need more bandwidth. The 10G adapter I'm testing is this $80 model from WisdPi. That's double the price of most 5G/2.5G adapters, but less than half what I paid for my Thunderbolt 10G adapters. If you need 10 gigs and you use RJ45 rather than SFP+, this might be the best option. If you don't need 10 gigs, a 2.5 or 5 Gbps adapter is still the best value. Also, you might not even get 10 Gbps with these new adapters, depending on your computer. I'll summarize why after the video:

USB is fast (and frustrating)

I tested this adapter on four computers:

  - Framework 13 with AMD Ryzen AI 5 340 (includes USB 4 / USB 3.2 Gen 2)
  - MacBook Neo (USB 3.1 and USB 2.0)
  - M4 MacBook Air (USB 4 / USB 3.1 Gen 2)
  - Desktop with AMD Ryzen 7900x with B650 motherboard (USB 3.2 Gen 2x2)

Getting those specific USB port specs is a bit of a chore (some websites don't even tell you if it's '3.2 Gen 2' or '3.0', and Windows itself only says "USB 3.0" when you plug in a USB 3.2 Gen 2x2 device like the 10 Gbps NIC!) I was only able to get full 10 Gbps speed (minus a little overhead) on the AMD Desktop, which has a single USB 3.2 Gen 2x2 port good for 20 Gbps of throughput. The other machines got around 6-7 Gbps: The Macs have the same per-port bandwidth (USB 3.1 Gen 2x1, for 10 Gbps), but the performance is consistently worse than the Framework. On the Macs, the adapter was correctly identified when I plugged it in, and worked straightaway, with no extra driver installation. The 'Hardware' tab in the Network settings incorrectly reported a connection speed of 2500Base-T. On Windows, the adapter was recognized when plugged in, but wouldn't connect to the network until I installed the latest Realtek driver, downloaded from their website.
Bidirectional bandwidth testing offered an interesting contrast; the Macs both handled traffic symmetrically, while the Framework was wildly disparate. The desktop PC gave a full 9.5 Gbps down, and around 5 Gbps up. The main takeaway is this adapter only reaches its full potential if you have a USB 3.2 Gen 2x2 20 Gbps port. And considering the mess of USB naming over the past decade (and the fact that Microsoft reports all USB 3.x connections as "3.0" in their Device Settings pane), good luck figuring out your own computer's support without glancing at spec sheets! A few computers I've seen actually label the USB port speed (e.g. '10' or '20'), but that seems fairly rare. Most manufacturers seem to follow Apple in eschewing labeling entirely! At least Apple has the negotiated port speed visible in the 'System Information' app—I couldn't find that detail anywhere on Windows.

5G and 2.5G a better value?

With reduced speed due to inadequate USB port bandwidth, would a 2.5 Gbps or 5 Gbps adapter be a better value? Testing the WisdPi 5 Gbps adapter pictured above on my M4 Air, it got 4.6 Gbps. The 10 Gbps adapter is 1.4x faster, but for more than 2x the price ($30 vs $80). I think if you already have a 10 Gbps network, use RJ45 rather than SFP+ connections, and want a more compact adapter (compared to the bulky, hot Thunderbolt adapters), it's a good deal. But if you need that full 10 Gbps or SFP+ support, Thunderbolt adapters are still the best if you have Thunderbolt ports that don't support USB 3.2 Gen 2x2. If you don't need 10 Gbps, though, stick to 2.5 or 5 Gbps adapters—they are still the best value right now.

Thermals and Power Draw

I also checked thermals and power draw—though my tests are not comprehensive. Measuring the absolute power draw is difficult because my USB-C power measurement devices downgrade the connection speed to USB 2, which means I'm not testing at full performance. At the slower USB 2 speed, the adapter uses about 0.86 Watts of power.
And it doesn't get that hot, which was surprising. All my Aquantia-based 10 gig adapters turn into little ovens. That's why they're so big: the enclosures are giant heatsinks. But the WisdPi only got up to 42.5°C after running a bidirectional iperf3 test for a few minutes. That's warm, but not so hot that I'd burn myself touching it like I have with other 10 gig adapters. Conclusion If $80 is too rich, this isn't the only option that uses the new chip; AliExpress is littered with alternatives. And you can get it on PCI Express cards, which bypasses the USB port requirement on desktop PCs. In the midst of all the price inflation in personal computing, it's nice to find a new device that's cheaper, faster, and (depending on your USB port) better. -------------------------------------------------------------------------------- 14. The Long Reply Source: https://ironicsans.ghost.io/the-long-reply/ Site: Ironic Sans Author: David Friedman Published: 2026-04-21 HN activity: 17 points · 0 comments Length: 972 words (~5 min read) Language: en Welcome! If you’re new here, please consider signing up for this free newsletter to get more random weird cultural stuff in your Inbox! A couple weeks ago, a post of mine went unexpectedly viral on Threads. As of this writing, it has been viewed 2,380,872 times. This is what I wrote: To which Noah replied: And then I wrote: I did not expect it to be any more popular than any other silly thing I might post on Threads. But for some reason it resonated with people. It might have been the wholesome topic in troubling times. A lot of people replied sharing pictures of their own trees, or commenting that they’d be back on June 9 for Noah’s update. A lot of people wrote that they were leaving a comment in the belief that engaging would make the algorithm more likely to show them any update that happens on June 9. 
And of course, when people engaged with the post, that made the algorithm more likely to show it to more people, and it snowballed over the day. I think people were also amused by the timescale of the reply. Five years is a long time to wait before replying to someone. When I got my reminder about Noah’s trees, I vaguely remembered having set it, but I certainly hadn’t given it any thought since then. Five years passed, but I only thought about it on two of those days: the first day and the last day. So it was a very low-effort win. But Noah is really the master of long-scale projects. You may remember him from his viral “Everyday” video, which he posted in 2006, featuring photos of himself taken every day. He had been doing it for six years already at that point, and unbelievably he is still doing it. Here’s his updated version after 20 years: Noah has other long-scale projects, too. He revisits the same spots and takes the same photos over time, like his Lumberland series, or his photos of a stone wall near his home, or this one tree that’s growing diagonally. So five years to reply? That’s nothing. And I have more examples to prove it. How long is too long to reply? In 2014, I asked that question on Twitter: After a year, Tim Chambers replied “One year?” After two years, he wrote, “Two years, probably.” Tim consistently replied every year on the anniversary of the original post. Other people did, also, but Tim was the most consistent. After five years, he wrote, “Not sure, but replying to this tweet may be the only reason I stay on Twitter.” And after ten years: "It would be funny if I replied to this tweet every year," I thought. Maybe I didn't expect Twitter to be around this long? Anyway, here we go, year ten. Ten years of annually replying to a single tweet! That’s impressive! But then Elon’s transition from Twitter to X was just about complete, and nobody has replied since then. That’s fine, because I wouldn’t even be there to see it.
A twenty-year note of congratulations Just a couple months ago, I got this nice note from Matt Maldre on Bluesky: Wow. I didn’t even realize that anniversary was coming up. It was nice to hear! And then, perhaps to assure me that he’s not a crazy stalker obsessed with my blog-turned-newsletter, he followed up with this: Well that’s a nice thing to do. It reminds me of how Paul Reubens (Pee-wee Herman) apparently kept track of birthdays of people he’d meet, and sent them texts on their birthdays. Twenty years is a long time. But I think I have one more long reply that beats all of these. Cindi’s letter In 1998, my friend Cindi wrote herself a letter on her 25th birthday to be opened when she turns 50. She sealed it in an envelope and asked me to hold on to it and give it back to her in 25 years. I put it in a box and forgot all about it. In 2017, I came across the letter while cleaning out a closet. I figured that if I’d held onto it that long, I might as well wait a few more years and send it to her. We had mostly fallen out of touch over 25 years, but of course we live in a world where social media means nobody is ever fully out of touch. So in 2023 I messaged her on Facebook and asked for her address, and dropped it in the mail. I took a picture of it first. I’d held on to it so long, I wanted some sort of record of it just in case the post office lost it. Cindi let me know that she got the letter, and that reading it was very emotional for her. I don’t know what it said. That’s between 25-year-old Cindi and 50-year-old Cindi. But she did let me know that 25-year-old Cindi told 50-year-old Cindi to tell me she says hi. I have my entire email archive going back to 1997. I’m tempted to see what the furthest-back email is that I didn’t reply to, and write that person back. It’ll probably be something like, “Sorry I didn’t get back to you sooner.
But yeah, that new movie The Matrix looks like it’ll be great!” So what’s the longest you’ve ever gone before sending or receiving a reply? If you’re reading this newsletter in a year or two, or even more, it’s not too late to let me know. Just hit Reply if it’s in your email, or leave a comment below if you’re reading on the website. And as always, thanks for reading! David P.S. For more of Noah, be sure to check out his excellent newsletter and YouTube channel. -------------------------------------------------------------------------------- 15. Simulacrum of Knowledge Work Source: https://blog.happyfellow.dev/simulacrum-of-knowledge-work/ Site: One Happy Fellow - blog Submitter: thehappyfellow (Hacker News) Submitted: 2026-04-25 17:20 UTC (Hacker News) HN activity: 107 points · 40 comments Length: 527 words (~3 min read) Language: en 25 Apr, 2026 How do you know the output is good without redoing the work yourself? You've received a report, a market analysis for the new product you're planning to launch. Reading through it you notice problems: the date on the report doesn't match the date you requested it on; it's from 6 months prior. Several paragraphs have obvious spelling errors. Some graphs are mislabeled and duplicated. The report is disregarded. The existence of typos and copy-paste errors which may not change the main conclusion of the report is enough to discard it. Someone who didn't put in enough care to make the report presentable on the surface level also didn't care enough to produce good research. You have judged the quality using a proxy measure: the superficial quality of the writing itself. It's not what you ultimately care about — what you care about is whether the report reflects reality and points you toward good decisions. But that's expensive to check. Surface quality is cheap, and it correlates well enough with the thing you can't easily measure. All of knowledge work has this problem.
It's hard to objectively judge the quality of someone's work without spending a lot of effort on it. Therefore everyone relies heavily on proxy measures. Proxy measures kept misaligned incentives in check. LLMs broke them. Large language models are great at simulating a style of writing without necessarily reproducing the quality of the work. You can ask ChatGPT to write you a market analysis report and it will look and read like a deliverable from a top-tier consulting firm written by Serious Professionals. A software engineer can write thousands of lines of code which look like high-quality code, at least if you have just a couple of seconds to skim through them. Their colleagues will ask AI to do a code review for them, the code review will uncover a lot of issues and potential problems, and these will be addressed. The ritual of working will be upheld with none of the underlying quality. We have built a working simulacrum of knowledge work. The incentives almost guarantee we are in big trouble. Many workers, quite rationally, want to do well on whatever dimension they are being measured on. If they are judged by the surface-level quality of their work, then it's no surprise most of "their" output will be written by LLMs. The LLMs have the same problem. The training doesn't evaluate "is the answer true" or "is the answer useful." It's either "is the answer likely to appear in the training corpus" or "is the RLHF judge happy with the answer." We are optimising LLMs to produce output which looks like high quality output. And we have very good optimisers. So here we are. We spent billions to create systems used to perform a simulacrum of work. Companies are racing to be the first on the tokens-spent leaderboard. The more LLM output workers produce, the less time anyone spends looking deeply at the output. All we have time for is to skim it, slap "LGTM" on it, and open our 17th Claude Code session. We've automated ourselves into Goodhart's law.
-------------------------------------------------------------------------------- 16. How Hard Is It to Open a File? Source: https://blog.sebastianwick.net/posts/how-hard-is-it-to-open-a-file/ Site: swick's blog Author: Sebastian Wick Published: 2026-04-23 HN activity: 68 points · 10 comments Length: 1.9K words (~9 min read) It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:

  - very simple, just call the standard library function
  - extremely hard, don’t trust anything

If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.

Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario, there are processes on either side of the security boundary which operate on a filesystem tree shared by both. Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus have the privileged process take a subpath relative to that directory. The first obvious problem is that the subpath can refer to files outside of the directory if it contains "..". If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and if we ever go outside of the directory, fail. The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine.
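The first "easy fix" above (normalize, then check containment) can be sketched in Python; the function name and paths are invented for illustration, and note that this purely textual check addresses only ".." traversal, not the symlink problem:

```python
# Minimal sketch of "normalize the path, fail if it leaves the directory".
# Purely textual: collapses "." and ".." components, but says nothing about
# symlinks or concurrent renames.
import posixpath

def resolve_subpath(base: str, subpath: str) -> str:
    """Join an untrusted subpath onto base, failing on traversal."""
    norm = posixpath.normpath(posixpath.join(base, subpath))
    if norm != base and not norm.startswith(base.rstrip("/") + "/"):
        raise PermissionError(f"subpath escapes {base}: {subpath!r}")
    return norm

print(resolve_subpath("/srv/data", "logs/app.log"))  # /srv/data/logs/app.log
try:
    resolve_subpath("/srv/data", "../.ssh/id_ed25519")
except PermissionError:
    print("rejected")                                # rejected
```

An absolute subpath like /etc/passwd is also rejected, since joining it replaces the base entirely and the containment check then fails.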
Easy fix: resolve the symlinks, expand the path, then normalize it. This is usually where most people think we’re done, opening a file is not that hard after all, we can all do more fun things now. Really, this is where the fun begins. The fix above works, as long as the less privileged process cannot change the file system tree anywhere in the file’s path while the more privileged process tries to access it. Usually this is the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can however, we have a classic TOCTOU (time-of-check to time-of-use) race. We have the path foo/id_ed25519, we resolve the smlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We just checked that the path resolves to a path inside the target directory though, and happily open the path foo/id_ed25519 which now points to your ssh key. Not an easy fix. So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed. The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors represent open files. It is true that they can do that, but fds opened with O_PATH do not require opening the file, but still provide a stable reference to an inode. The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource. 
Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves any symlinks that an attacker might have managed to place in that path. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match. With that being said, sometimes using paths is not entirely avoidable, so let’s look into that as well! In the scenario above, we have a directory within which we want all the paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it without the attacker being able to redirect it somewhere else. With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, openat does not follow it and instead opens the symlink inode itself. All the other components can still be symlinks, and they will still be followed. We can, however, split up the path, open a new file descriptor for each path segment, and resolve symlinks manually until we have walked the entire path.

libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them.
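The segment-by-segment walk just described can be sketched as follows, in Python rather than C. The helper name and the strict policy (reject ".." and any symlink outright, rather than resolving symlinks manually) are my choices for illustration; O_PATH and dir_fd are Linux-specific:

```python
# Sketch of a component-by-component walk: each segment is opened relative
# to the previous fd with O_NOFOLLOW, and ".." or symlink components are
# rejected. open_beneath() is an invented helper, not a libglnx API.
import os
import stat
import tempfile

def open_beneath(dirfd: int, path: str) -> int:
    """Return an O_PATH fd for `path`, resolved strictly beneath `dirfd`."""
    fd = os.dup(dirfd)
    try:
        for part in path.split("/"):
            if part in ("", "."):
                continue
            if part == "..":
                raise PermissionError("path escapes the base directory")
            # O_NOFOLLOW + O_PATH: a symlink yields an fd for the symlink
            # inode itself, which we detect below and reject.
            nfd = os.open(part, os.O_PATH | os.O_NOFOLLOW, dir_fd=fd)
            os.close(fd)
            fd = nfd
            if stat.S_ISLNK(os.fstat(fd).st_mode):
                raise PermissionError(f"symlink in path: {part!r}")
        return fd
    except BaseException:
        os.close(fd)
        raise

# Demo: a regular file beneath the base directory opens fine.
base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, "sub"))
with open(os.path.join(base, "sub", "f.txt"), "w") as f:
    f.write("ok")
basefd = os.open(base, os.O_PATH)
fd = open_beneath(basefd, "sub/f.txt")
print(stat.S_ISREG(os.fstat(fd).st_mode))   # True
```

On kernels that have openat2, the RESOLVE_BENEATH and RESOLVE_NO_SYMLINKS flags express this kind of policy in a single syscall.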
The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.” The most recent addition is glnx_chaseat, which provides safe path traversal; it was inspired by systemd’s chase(), and does precisely what was described above.

  int glnx_chaseat (int dirfd, const char *path, GlnxChaseFlags flags, GError **error);

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:

  typedef enum _GlnxChaseFlags {
    /* Default */
    GLNX_CHASE_DEFAULT = 0,
    /* Disable triggering of automounts */
    GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
    /* Do not follow the path's right-most component. When the path's right-most
     * component refers to symlink, return O_PATH fd of the symlink. */
    GLNX_CHASE_NOFOLLOW = 1 << 2,
    /* Do not permit the path resolution to succeed if any component of
     * the resolution is not a descendant of the directory indicated by dirfd. */
    GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
    /* Symlinks are resolved relative to the given dirfd instead of root. */
    GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
    /* Fail if any symlink is encountered. */
    GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
    /* Fail if the path's right-most component is not a regular file */
    GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
    /* Fail if the path's right-most component is not a directory */
    GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
    /* Fail if the path's right-most component is not a socket */
    GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
  } GlnxChaseFlags;

While it doesn’t sound too complicated to implement, a lot of details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested; it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.

An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths.
Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths. If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call. This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place. So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on. The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits: The fd-chasing approach works everywhere because it is a real filesystem managed by the kernel The filesystem becomes independent of GLib and can be used for example from Rust as well It stacks with other FUSE filesystems, such as the XDG Desktop Document Portal used by Flatpak Wait, Why Are You Talking About This? 
Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis on it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystems, and created libglnx because of it, most of the discovered issues were just about that. One of them (CVE-2026-34078) was a complete sandbox escape. flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept. The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app). Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them. If the GLib standard file and path APIs were secure, we would not have had this issue. Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes. Unfortunately with rather large changes comes a high likelihood of something going wrong. 
For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie! In the end, we managed to fix everything and made Flatpak more secure; the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well. -------------------------------------------------------------------------------- 17. Mine, an IDE for Coalton and Common Lisp Source: https://coalton-lang.github.io/mine/ Site: The Coalton Programming Language Submitter: varjag (Hacker News) Submitted: 2026-04-25 17:47 UTC (Hacker News) HN activity: 77 points · 27 comments Length: 364 words (~2 min read) Language: en mine is an integrated development environment for Coalton and Common Lisp for Windows, macOS, and Linux. 👉 Download the latest release. mine comes in two flavors: mine-app for Windows and macOS is a complete, all-in-one, packaged application with no dependencies. It Just Works™, or it’s a bug. mine-core for Windows, macOS, and Linux is a hacker-friendly “bring your own compliant terminal” variant. It allows you to use mine at the command line, but requires a terminal that has a Unicode font and supports the Kitty keyboard protocol. Coalton and Common Lisp Coalton? Common Lisp? Both? The editor is exclusive to neither, and both come built-in. If you want strong, static types with a flavor of functional programming, Coalton is available. If you want free-wheeling dynamicism and an advanced object system, Common Lisp is available. You can use one, the other, or mix-and-match as your project demands. Integrated REPL and Code Beaming The REPL is completely integrated, not a bolted-on afterthought. From functions to entire projects, beam your code to the REPL so you can immediately interact with it. Interactive Debugger When you encounter an error, a debugger will pop up with the error, options to correct it, and a stack trace for your reference.
Inline Diagnostics Beaming your code will flag errors and warnings, and they’ll show up right in your editor. In addition, optimization hints will be highlighted as well, flagging where your code may be sub-optimal in terms of efficiency. Type Hints and Auto-Complete When writing Coalton, the full data type of the function your cursor is on will be shown to you immediately. No guessing what arguments each function takes. If you don’t quite know the name of the function, just press tab. Structural Editing Lessons You’ve heard about structural editing, like ParEdit, but don’t want to read manuals and cheat sheets to learn it? Take the built-in structural editing lessons to learn how to do structural editing in about 5 minutes. Structural editing is completely optional, but vastly increases the efficiency of Coalton development. All-Native Code No virtual machines and no interpreters. All your code is compiled and optimized to the native binary code of your CPU for maximum performance. -------------------------------------------------------------------------------- 18. What async promised and what it delivered Source: https://causality.blog/essays/what-async-promised/ Site: Causality Submitter: zdw (Hacker News) Submitted: 2026-04-22 05:28 UTC (Hacker News) HN activity: 172 points · 194 comments Length: 2.5K words (~11 min read) Language: en OS threads are expensive: an operating system thread typically reserves a megabyte of stack space and takes roughly a millisecond to create. Context switches happen in kernel space and burn CPU cycles. A server handling thousands of concurrent connections and dedicating one thread per connection means thousands of threads each consuming memory and competing for scheduling. The system spends time managing threads that could be better spent doing useful work. This is the C10K problem, named by Dan Kegel in 1999.
If you were building a web server, a chat system, or anything with a large number of simultaneous connections, you needed a way to handle concurrency without a thread per connection. The answer came in waves, each solving the previous wave’s worst problem while introducing new ones. Previously we’ve looked at channels in Go and actors in Erlang. Now we turn to async, which is everywhere these days. Callbacks The first wave was straightforward: don’t block the thread. Instead of waiting for an i/o operation to complete, register a function to be called when it finishes and move on to the next piece of work. Event loops (select, poll, epoll, kqueue) multiplexed thousands of connections onto a handful of threads, and callbacks were the programmer’s interface to this machinery. Node.js built an entire ecosystem on this model, handling thousands of concurrent connections on a single thread. Nginx’s event-driven architecture was a major reason it displaced Apache for high-concurrency workloads. This nicely solved the performance problem, but at a cost: callbacks invert control flow. Instead of writing “do A, then B, then C” as three sequential statements, you write “do A, and when it’s done call this function, which does B, and when that’s done call this other function, which does C.” The programmer’s intent becomes scattered across nested closures. JavaScript developers named this “callback hell” and built an entire website to commiserate. Callbacks have deeper problems than aesthetics, such as fracturing error handling. Each callback needs its own error path. Errors can’t propagate naturally up the call stack because there is no call stack (callbacks run in a different context from where they are registered). Handling partial failure in a chain of callbacks means threading error state through every function in the chain. Plus, callbacks have no notion of cancellation. 
If you start an asynchronous operation and then decide you don’t need the result, there’s no general way to stop it. The callback will fire eventually, and your code needs to handle the case where it no longer cares about the result. Callbacks solved the resource problem (too many threads) by creating an ergonomics problem (code that’s hard to write, read, and get right). Promises and Futures The next wave started with a good idea: what if, instead of passing a callback for later invocation, an asynchronous operation immediately returned an object representing its eventual result? This is a promise (JavaScript) or future (Java, Rust, etc). The concept dates to Baker and Hewitt in 1977, but it took the C10K pressure of the 2010s to push it into mainstream programming. JavaScript standardized native Promises in ES2015 following the community-driven Promises/A+ spec, and Java 8 introduced CompletableFuture. Promises are more ergonomic than callbacks. First, promises are composable: promise.then(f).then(g) reads as a pipeline instead of a nested pyramid. Error handling also consolidates: a .catch() at the end of a chain handles failures from any step. And promises are values that you can store, pass around, and return from functions. A first-class handle to an eventual value moves the conversation away from raw threads and toward data dependencies. The idea that “this value depends on a computation that hasn’t finished yet” is a useful thing to be able to express. 
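The "first-class handle" idea can be sketched in a few lines of JavaScript; fetchConfig and withConfig are hypothetical names for illustration, not from the article:

```javascript
// A promise is an ordinary value: start the work, keep the handle,
// and pass it to whatever code eventually needs the result.
function fetchConfig() {
  // stands in for real i/o; resolves with a config object
  return Promise.resolve({ retries: 3 });
}

function withConfig(configPromise) {
  // expresses "this depends on the config, whenever it's ready"
  return configPromise.then(cfg => cfg.retries);
}

const pending = fetchConfig();            // the operation is already in flight
withConfig(pending).then(retries => console.log(retries));
```

Nothing here waits or blocks; the dependency between the two pieces of code is carried entirely by the promise object.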
Here’s JavaScript reading a user profile and then fetching their recent orders, first with callbacks, then with promises:

// Callbacks: nested, error handling at every level
getUser(userId, (err, user) => {
  if (err) return handleError(err);
  getOrders(user.id, (err, orders) => {
    if (err) return handleError(err);
    render(user, orders);
  });
});

// Promises: chained, error handling consolidated
getUser(userId)
  .then(user => getOrders(user.id).then(orders => [user, orders]))
  .then(([user, orders]) => render(user, orders))
  .catch(handleError);

The promise-based version is not a huge improvement on this small example, but the difference grows with complexity: five steps deep in callbacks is nearly unreadable, while five .then() calls chained together are at least linear. But promises introduced their own problems:

Promises are one-shot. A promise resolves exactly once. This makes them unsuitable for modeling streams, events, repeated messages, or any ongoing communication. A WebSocket that receives a stream of messages doesn’t map onto “a value that will exist later.” This forces a split: promises for request-response patterns, and something else (event emitters, observables, callbacks again) for everything else.

Composition is clunky. The example above hints at it: getting both user and orders into the final .then() requires nesting or awkward gymnastics with Promise.all. Two independent async operations are easy (Promise.all([a, b])), but anything more complex (conditional branching, loops over async operations, early exit) requires increasingly elaborate combinator patterns. These patterns work, but they’re a functional programming idiom grafted onto an imperative language, and they don’t feel natural.

Errors vanish silently. JavaScript promises that reject without a .catch() handler originally just swallowed the error. The value was lost, causing failures to be invisible. 
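A minimal sketch of the failure mode, with a hypothetical save operation (modern Node.js surfaces the lost error through the unhandledRejection event; browsers use the unhandledrejection event):

```javascript
// Installing a listener lets us observe a rejection that no .catch()
// ever handles; under the original ES2015 semantics it simply vanished.
process.on("unhandledRejection", err => {
  console.error("lost error:", err.message);
});

function save(record) {
  // stand-in for real i/o that fails
  return Promise.reject(new Error("disk full"));
}

save({ id: 1 }); // fire-and-forget: no handler attached, error is dropped
```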
This was bad enough that Node.js eventually changed unhandled rejections from a warning to a process crash, and browsers added unhandledrejection events. A feature designed to improve error handling managed to create an entirely new class of silent failures that didn’t exist with callbacks.

The type split. Every function now returns either a value or a promise of a value. So callers need to know which one they’re getting, and libraries need to decide which one to provide. A function that was synchronous becomes asynchronous when you add a database call to it, and now every caller needs to handle a promise instead of a value. This is a mild form of the coloring problem that the next wave would make even worse.

Async/Await

Promise chains still looked nothing like the sequential code developers wrote for everything else. Async/await, pioneered by C# in 2012 and adopted by JavaScript (ES2017), Python (3.5), Rust (1.39), Kotlin, Swift, and Dart, closed that gap:

// Promise chains
function loadDashboard(userId) {
  return getUser(userId)
    .then(user => getOrders(user.id)
      .then(orders => [user, orders]))
    .then(([user, orders]) => render(user, orders));
}

// Async/await
async function loadDashboard(userId) {
  const user = await getUser(userId);
  const orders = await getOrders(user.id);
  return render(user, orders);
}

The async/await version reads like sequential code. Variables bind naturally. You can use try/catch instead of .catch(). Loops work with await inside them. It’s an ergonomic win for linear sequences of asynchronous operations. The industry adopted it fast, with JavaScript frameworks going all-in, Python’s asyncio becoming the standard approach for concurrent i/o, and Rust stabilizing async/await as the path to high-performance networking. Within a few years, async/await was the default way to write concurrent i/o code in most mainstream languages. 
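The try/catch point can be sketched concretely; getUser and getOrders below are stand-in implementations (the article doesn't define them), assumed to resolve or reject like the earlier examples:

```javascript
// Stand-in async operations (hypothetical).
async function getUser(id) {
  if (id == null) throw new Error("no such user");
  return { id, name: "Ada" };
}

async function getOrders(userId) {
  return [{ id: 1, userId }];
}

// One try/catch covers every awaited step, playing the role of a
// single .catch() at the end of a promise chain.
async function loadDashboard(userId) {
  try {
    const user = await getUser(userId);
    const orders = await getOrders(user.id);
    return { user, orders };
  } catch (err) {
    return { error: err.message };
  }
}

loadDashboard(42).then(d => console.log(d));
loadDashboard(null).then(d => console.log(d));
```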
Paying the Function Coloring Tax In 2015, right as async/await was gaining steam, Bob Nystrom published “What Color is Your Function?”, a thought experiment about a language where every function is either “red” or “blue.” Red functions can call blue functions, but blue functions can’t call red functions without special ceremony. Every function must choose a color, and if you call a red function from a blue one, the blue one must become red, spreading virally throughout the codebase. This was an analogy to async/await: async functions are red, sync functions are blue. An async function can call a sync function without issue, but calling an async function from a sync function requires blocking the thread or restructuring the code. Every function in your program must choose a color, and that choice propagates through every caller. Nystrom’s post stuck because it put a name to something developers had been experiencing without a vocabulary for it. Function coloring reshapes entire codebases and ecosystems. The Rust async ecosystem fragmented around competing runtimes (Tokio, async-std, smol) that provide incompatible implementations of fundamental types like TCP streams and timers. A library written for Tokio can’t easily be used with async-std. The popular HTTP client reqwest simply requires Tokio, and if your project uses a different runtime, that’s your problem. Now library authors either pick Tokio (locking out alternatives) or attempt runtime-agnostic abstractions (adding complexity and sometimes performance overhead). Tokio’s dominance is function coloring at ecosystem scale. The tax shows up at other scales too: At the function level, adding a single i/o call to a previously synchronous function changes its signature, its return type, and its calling convention. Every caller must be updated, and their callers must be updated. The change ripples through the call graph until it hits a framework entry point or a main function. 
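The function-level ripple can be sketched as follows; the cache and db objects are hypothetical stand-ins, not from the article:

```javascript
// Before: a synchronous lookup returns a plain value.
const cache = { 7: { name: "Ada" } };
function getNameSync(id) {
  return cache[id].name; // callable from anywhere, no ceremony
}
console.log("hi " + getNameSync(7));

// After: one await inside, and the function changes color.
const db = { find: async (id) => ({ id, name: "Ada" }) };
async function getName(id) {
  const row = await db.find(id); // the new database call
  return row.name;
}

// Every caller must now await, and therefore become async itself.
async function greet(id) {
  return "hi " + (await getName(id));
}
greet(7).then(console.log);
```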
A one-line database lookup can require modifying dozens of files. At the library level, authors face a choice: write a sync library and exclude async users, or write an async library and force sync users to add runtime dependencies (or maintain both). Many choose “both,” doubling the API surface, the test matrix, and the maintenance burden. In Python, the requests library (sync) and aiohttp (async) are separate projects by separate authors doing the same thing. httpx eventually appeared to offer both interfaces from one package, an improvement only needed because of the split. At the ecosystem level, the Rust example above is the norm, not the exception. Every library that touches i/o must choose a color, and that choice limits which other libraries it can work with. The Rust async book itself notes that “sync and async code also tend to promote different design patterns, which can make it difficult to compose code intended for the different environments.” And the costs aren’t just logistical: async/await introduced entirely new categories of bugs that threads don’t have. O’Connor documents a class of async Rust deadlocks he calls “futurelocks”: a future acquires a lock, then stops being polled while another future tries to acquire the same lock. With threads, a thread holding a lock always makes progress toward releasing it (unless you do something everyone knows is dangerous, like SuspendThread). With async Rust, the standard tools like select!, buffered streams, and FuturesUnordered routinely stop polling futures that hold resources. The original futurelock at Oxide required core dumps and a disassembler to diagnose.

A Sequential Trap

A subtler cost that gets less attention is that async/await’s greatest strength, making asynchronous code look sequential, is also a cognitive trap. 
async function loadDashboard(userId) {
  const user = await getUser(userId);
  const orders = await getOrders(user.id);
  const recommendations = await getRecommendations(user.id);
  return render(user, orders, recommendations);
}

This fetches orders and recommendations sequentially: getRecommendations doesn’t start until getOrders finishes. But these two operations are independent, because recommendations don’t depend on orders. So they could run in parallel, but don’t. The code looks clean and correct while leaving performance on the table. The parallel version requires the programmer to explicitly break out of sequential style:

async function loadDashboard(userId) {
  const user = await getUser(userId);
  const [orders, recommendations] = await Promise.all([
    getOrders(user.id),
    getRecommendations(user.id)
  ]);
  return render(user, orders, recommendations);
}

The pattern scales poorly beyond small examples. In a real application with dozens of async calls, determining which operations are independent and can be parallelized requires the programmer to manually analyze dependencies and restructure the code accordingly. The sequential syntax actively obscures the dependency structure, i.e. the one piece of information that would tell you what can run in parallel. Async/await was introduced to make asynchronous code easier to write. It made “what can run concurrently” something the programmer must determine manually and express through combinator patterns that break the sequential flow that was the whole point.

What Async Got Right

To be fair, async abstractions did improve things. Async/await’s ergonomics for linear sequences are better than callbacks or promise chains. For code that’s inherently sequential but happens to include i/o, async/await removes real syntactic noise. It’s easier to read and debug than callback-based code. And some languages learned the right lessons from the coloring problem. 
For example, Go deliberately chose goroutines over async/await, accepting a heavier runtime in exchange for no function coloring at all. (Edit note Apr 24: Go actually introduced a form of coloring through context.Context, which propagates through calls for cancellation) Java’s Project Loom (virtual threads in Java 21) made the same bet: lightweight threads that look and behave like regular threads, so no code needs to change color. The Loom team explicitly cited function coloring as a problem they wanted to avoid. Zig went further: it removed its compiler-level async/await entirely and rebuilt around an Io interface parameter that i/o operations accept. The runtime (threaded, event-loop, whatever the user supplies) fulfills the interface. Function signatures don’t change based on how they’re scheduled, and async/await become library functions rather than language keywords. Though some argue that the Io parameter itself is a form of coloring. Language designers who studied the async/await experience in other ecosystems concluded that the costs of function coloring outweigh the benefits and chose different paths.

Accumulating Costs

Each solution solved a problem but introduced new costs. And those costs are structural, affecting the shape of every program, library, and API in the codebase.

Wave        | Solved                                              | Introduced
Callbacks   | Thread-per-connection resource exhaustion           | Inverted control flow, fragmented error handling, callback hell
Promises    | Nesting, error consolidation, values over callbacks | One-shot limitation, silent error swallowing, mild type split
Async/Await | Ergonomics for linear async sequences               | Function coloring, ecosystem fragmentation, new deadlock classes, sequential trap

Each wave made the local experience of writing async code more pleasant while making the global experience more complex. 
The developer writing a single async function has never had it better, while the team maintaining a large codebase with mixed sync/async code, managing dependency compatibility across runtimes, and trying to find parallelism opportunities hidden behind sequential-looking await chains is carrying a burden that didn’t exist before these abstractions were introduced. This isn’t a case of bad engineering. The people who designed callbacks, promises, and async/await were solving real problems, and each step was a reasonable response to the previous step’s failures. But fifteen years and several iterations in, the accumulated tax is sizable, and a pattern is visible: each fix treats symptoms while leaving the structure intact. The callbacks-to-promises-to-async/await arc may be the clearest illustration yet of a theme running through this series: approaches that start by asking “how do we manage concurrent execution?” keep generating new problems at every level of abstraction. You can watch this one play out in real time, across a single ecosystem, within a single decade.

References

Baker, Henry, and Carl Hewitt. “The Incremental Garbage Collection of Processes.” ACM SIGART Bulletin 64 (1977): 55–59.
Kegel, Dan. “The C10K Problem.” 1999.
Nystrom, Bob. “What Color is Your Function?” February 1, 2015.
Elizarov, Roman. “How Do You Color Your Functions?” Medium, November 18, 2019.
Cro, Loris. “Zig’s New Async I/O.” Blog post, 2025.
“Virtual Threads in Java.” Oracle Java Magazine.
Corrode Rust Consulting. “The State of Async Rust: Runtimes.” Blog post.
O’Connor, Jack. “Never Snooze a Future.” Blog post, 2026.

-------------------------------------------------------------------------------- 19. 
Desmond Morris has died Source: https://www.bbc.com/news/articles/c51y797v200o Site: BBC News Author: Sam Woodhouse Published: 2026-04-20 HN activity: 111 points · 19 comments Length: 1.6K words (~7 min read) Language: en-GB 6 days ago Sam Woodhouse BBC His book The Naked Ape was a controversial sensation when it was released in 1967 Desmond Morris, the zoologist, author, artist and television presenter, has died aged 98. Morris was best known for his book, The Naked Ape, which was published in 1967. It framed modern humans as still being fundamentally ape-like despite our technological advances and evolution. He was also a surrealist painter and exhibited his work around the world alongside artists such as Joan Miró. Morris's son Jason confirmed his death on 20 April, calling his father "a great man and an even better father and grandfather", who lived "a lifetime of exploration, curiosity and creativity". Getty Images Morris in his office at London Zoo, where he served as the curator of mammals "Sexual intercourse began," wrote Philip Larkin, "in 1963... between the end of the [Lady] Chatterley['s Lover] ban and the Beatles' first LP." In the years that followed the sexual revolution, a slew of books - in their different ways - found eager readers among the freshly liberated. There was Germaine Greer's The Female Eunuch, Alex Comfort's The Joy of Sex and - in the Summer of Love of 1967 itself - Desmond Morris' The Naked Ape. He wrote it in four frenetic weeks. It explained our habits and rituals, with the naughtiness of "naked" and the Darwinian thrill of "ape". It was mankind seen through the eyes of a zoologist - not an anthropologist. It framed our behaviour in the context of evolution - not culture. As a thesis, it was hotly contested, but it was wildly popular and had lasting influence. It was a bible of human actions for the Age of Aquarius, and plumbed for insight into the practice of modern sex. 
Getty Images Desmond Morris at London Zoo teaching children about animal behaviour, with the help of an orangutan and a chimpanzee Desmond John Morris was born on 24 January 1928 in the village of Purton, near Swindon. As a child, he watched his father die slowly of wounds received in World War One. It filled the young Desmond with hatred for what humans did to each other. He cut himself off from mankind at the family lake, carefully observing the animals, fish and waterfowl. At Birmingham University, he studied zoology but refused to do animal experiments. He discovered a new approach - called "ethology" - which prized objective study of their behaviour instead. His doctoral thesis involved years watching the aggressive mating dance of the 10-spined stickleback. Giving paint brushes to chimps Granada found in him a natural broadcaster - the man to take on the mighty David Attenborough's natural history shows on the BBC. A studio was built inside London Zoo itself, which irritated Attenborough, who thought he had a relationship with the zoo. But feelings soon thawed and the two great TV interpreters of animal behaviour eventually became friends. Morris became the zoo's curator of mammals. He launched an attempt to breed pandas in captivity but, to his despair, London's Chi Chi repeatedly spurned the attentions of Moscow's An An. Reared in isolation, she saw herself as human and was not interested in a bear. A talented artist, the young Morris had spent his national service lecturing soldiers in fine arts and had exhibited surrealist paintings alongside Joan Miró. Now, he experimented with animal concepts of aesthetics, giving a paintbrush to a chimp called Congo. Getty Images Morris gave a chimpanzee called Congo a paintbrush to see if artistic statements were exclusively human in origin It proved, he said, artistic expression was not exclusively human in origin. It delighted Picasso, who thereafter took delight in biting those who came to see him. 
The paintings later sold for thousands. At a party, Morris then met publisher Tom Maschler, and pitched him the book that would change his life. It would explain, said Morris: - why humans were the only hairless ape in the world - why man was proud of having the largest brain but hid his relatively huge penis - why women's breasts were biologically designed as much for attracting partners as producing milk Getty Images Congo's work went up for auction in 2005, selling for more than £14,000 Maschler was transfixed. He sent Morris a monthly telegram for years, begging him to write it. It was finally done in a month of frantic scribbling and, when complete, caused eyes to pop. The Naked Ape was an overnight sensation, eventually selling 20 million copies. It applied Darwinian logic to human activity - including fighting, feeding, comfort and sex. Copulation, Morris claimed, was not mainly about producing children. It was, he insisted, more to do with cementing the pair bond "by providing mutual reward for sexual partners". We were, he said, "a very sexy ape". He had taken a job running the Institute of Contemporary Arts but, now fabulously wealthy, he quit. He ignored his mother's advice to bank the money, bought a 27-room villa in the Mediterranean and thoroughly enjoyed himself - sailing in summer and writing in winter. Back home, his book was proving controversial. Some disliked his dismissal of religion as a biological tendency to submit to an alpha male. Feminists were furious with his portrayal of men as "risk-taking" hunter-gatherers who drove human evolution, while women sat at home in the cave. Getty Images Morris believed there was little human behaviour that could not be explained by closely observing animals For many, human beings have self-consciousness and language, which elevates Homo sapiens. There was more to us, they said, than you could tell by watching the other 192 species of monkeys and apes. But Morris was undeterred. 
He wrote The Human Zoo and Intimate Behaviour, in Malta, and then became fascinated by the expressive body language of the people of the Mediterranean. He decided to write about the meanings hidden in the way people waved their arms and gesticulated to make a point. "You look at people the way a bird-watcher looks at birds," said a friend. "Yes," said Morris, "you could call me a man-watcher." Getty Images "You look at people the way a bird-watcher looks at birds," a friend told Morris It took him three years to do the research for his new book and TV programme on the subject. Having done his best to spend his fortune, Morris returned to Oxford as a research fellow and travelled the world applying his techniques. Dragged to football matches by his son, Morris became fascinated by the passion of the fans on the terraces. He wrote about the rituals of chanting and synchronised clapping with his customary scientific insight. This was more than just sport, he felt; it was a form of male arena display. He continued to paint too, filling his cottage with surrealist depictions of life-forms he called "biomorphs". Many appeared to be engaged in complex rituals with sexual motives - the abstract expression of the primeval desires he was convinced had shaped mankind. Getty Images Desmond Morris exhibited his surrealist paintings around the world Morris branched out into light entertainment with The Animal Roadshow and Animal Country, alongside Sarah Kennedy. He exhibited his paintings in London, Amsterdam and Brussels, and wrote popular books on watching everything from babies to cats. The TV production company Endemol approached him with an idea for a new reality series, Big Brother. Morris was initially attracted to watching captive human interaction on such an industrial scale but, put off by the game-show element, he turned them down. "Silly me," he later said. 
A "personal view" In 1994 - nearly 30 years after The Naked Ape was published - Morris made the TV series he should have made to accompany it. The Human Animal was lavishly filmed in exotic locations, showing diverse customs and suggesting their common biological roots. In deference to his many critics, the BBC added a rider to the title - implying that this was not scientific mainstream thinking but, instead, "a personal view". At the end of the first episode, Morris spoke directly to them. "I've sometimes been accused of degrading mankind, or insulting human dignity, of making man beastly," he said. "This surprised me because I like animals and I feel proud to call myself one. I've never looked down upon them, so to call human beings animals is not, to me, degrading." Desmond Morris in his former home in Oxfordshire. When his wife Ramona died in 2018, he moved to Ireland to be near their son, Jason In truth, the objections went further than that. Many disputed his claim that only man had left the ancient cave to hunt for animals, leaving him with the "risk-taking" genes that made men better at business and art than women. And for every fellow scientist that found him inspiring, others - in the words of the writer Adam Rutherford - saw his work as "salacious guesswork and erotic fantasy". Men might find breasts attractive, Rutherford complained, but that did not mean that was their purpose. Science moves on. A great deal more is now known about genes and genetics than anyone could guess in 1967. Although, when he was invited to update the Naked Ape, Morris stubbornly updated the population of the Earth from three billion to six billion - and left it at that. For all these objections, Desmond Morris will be remembered as a tremendous populariser of science - a man who helped place humans in the scheme of nature on planet Earth. -------------------------------------------------------------------------------- 20. 
The George Business, by Roger Zelazny (1980) Source: https://www.eternal-flame.org/library/oldlibrary/georgebusiness.html Site: eternal-flame.org Author: ? Submitted: 2026-04-24 02:07 UTC (Hacker News) HN activity: 3 points · 0 comments Length: 2.6K words (~12 min read) The George Business by: Roger Zelazny, Scribed by: Sam theScholar Deep in his lair, Dart twisted his green and golden length about his small hoard, his sleep troubled by dreams of a series of identical armored assailants. Since dragons' dreams are always prophetic, he woke with a shudder, cleared his throat to the point of sufficient illumination to check on the state of his treasure, stretched, yawned and set forth up the tunnel to consider the strength of the opposition. If it was too great, he would simply flee, he decided. The hell with the hoard; it wouldn't be the first time. As he peered from the cave mouth, he beheld a single knight in mis-matched armor atop a tired-looking gray horse, just rounding the bend. His lance was not even couched, but still pointing skyward. Assuring himself that the man was unaccompanied, he roared and slithered forth. "Halt," he bellowed, "you who are about to fry!" The knight obliged. "You're the one I came to see," the man said. "I have-" "Why," Dart asked, "do you wish to start this business up again? Do you realize how long it has been since a knight and a dragon have done battle?" "Yes, I do. Quite a while. But I-" "It is almost invariably fatal to one of the parties concerned. Usually your side." "Don't I know it. Look, you've got me wrong-" "I dreamt a dragon dream of a young man named George with whom I must do battle. You bear him an extremely close resemblance." "I can explain. It's not as bad as it looks. You see-" "Is your name George?" "Well, yes. But don't let that bother you-" "It does bother me. You want my pitiful hoard? It wouldn't keep you in beer money for the season. Hardly worth the risk." 
"I'm not after your hoard-" "I haven't grabbed off a virgin in centuries. They're usually old and tough, anyhow, not to mention hard to find." "No one's accusing-" "As for cattle, I always go a great distance. I've gone out of my way, you might say, to avoid getting a bad name in my own territory." "I know you're no real threat here. I've researched it quite carefully-" "And do you think that armor will really protect you when I exhale my deepest, hottest flames?" "Hell, no! So don't do it, huh? If you'd please-" "And that lance... You're not even holding it properly." George lowered the lance. "On that you are correct," he said, "but it happens to be tipped with one of the deadliest poisons known to Herman the Apothecary." "I say! That's hardly sporting!" "I know. But even if you incinerate me, I'll bet I can scratch you before I go." "Now that would be rather silly-both of us dying like that-wouldn't it?" Dart observed, edging away. "It would serve no useful purpose that I can see." "I feel precisely the same way about it." "Then why are we getting ready to fight?" "I have no desire whatsoever to fight with you!" "I'm afraid I don't understand. You said your name is George, and I had this dream-" "I can explain it." "But the poisoned lance-" "Self-protection, to hold you off long enough to put a proposition to you." Dart's eyelids lowered slightly. "What sort of proposition?" "I want to hire you." "Hire me? Whatever for? And what are you paying?" "Mind if I rest this lance a minute? No tricks?" "Go ahead. If you're talking gold your life is safe." George rested his lance and undid a pouch on his belt. He dipped his hand into it and withdrew a fistful of shining coins. He tossed them gently, so that they clinked and shone in the morning light. "You have my full attention. That's a good piece of change there." "My life's savings. All yours-in return for a bit of business." "What's the deal?" George replaced the coins in his pouch and gestured. 
"See that castle in the distance-two hills away?" "I've flown over it many times." "In the tower to the west are the chambers of Rosalind, daughter of the Baron Maurice. She is very dear to his heart, and I wish to wed her." "There's a problem?" "Yes. She's attracted to big, brawny barbarian types, into which category I, alas, do not fall. In short, she doesn't like me." "That is a problem." "So, if I could pay you to crash in there and abduct her, to bear her off to some convenient and isolated place and wait for me, I'll come along, we'll fake a battle, I'll vanquish you, you'll fly away and I'll take her home. I am certain I will then appear sufficiently heroic in her eyes to rise from sixth to first position in her list of suitors. How does that sound to you?" Dart sighed a long column of smoke. "Human, I bear your kind no special fondness-particularly the armored variety with lances-so I don't know why I'm telling you this. ... Well, I do know actually. ... But never mind. I could manage it, all right. But, if you win the hand of that maid, do you know what's going to happen? The novelty of your deed will wear off after a time-and you know that there will be no encore. Give her a year, I'd say, and you'll catch her fooling around with one of those brawny barbarians she finds so attractive. Then you must either fight him and be slaughtered or wear horns, as they say." George laughed. "It's nothing to me how she spends her spare time. I've a girlfriend in town myself." Dart's eyes widened. "I'm afraid I don't understand...." "She's the old baron's only offspring, and he's on his last legs. Why else do you think an uncomely wench like that would have six suitors? Why else would I gamble my life's savings to win her?" "I see," said Dart. "Yes, I can understand greed." "I call it a desire for security." "Quite. In that case, forget my simple-minded advice. All right, give me the gold and I'll do it." Dart gestured with one gleaming vane. 
"The first valley in those western mountains seems far enough from my home for our confrontation." "I'll pay you half now and half on delivery." "Agreed, be sure to have the balance with you, though, and drop it during the scuffle. I'll return for it after you two have departed. Cheat me and I'll repeat the performance, with a different ending." "The thought had already occurred to me. -Now, we'd better practice a bit, to make it look realistic. I'll rush at you with the lance, and whatever side she's standing on I'll aim for it to pass you on the other. You raise that wing, grab the lance and scream like hell. Blow a few flames around, too." "I'm going to see you scour the tip of that lance before we rehearse this." "Right. -I'll release the lance while you're holding it next to you and rolling around. Then I'll dismount and rush toward you with my blade. I'll whack you with the flat of it-again, on the far side-a few times. Then you bellow again and fly away." "Just how sharp is that thing anyway?" "Damned dull. It was my grandfather's. Hasn't been honed since he was a boy." "And you drop the money during the fight?" "Certainly. -How does that sound?" "Not bad. I can have a few clusters of red berries under my wing, too. I'll squash them once the action gets going." "Nice touch. Yes, do that. Let's give it a quick rehearsal now and then get on with the real thing." "And don't whack too hard...." That afternoon, Rosalind of Maurice Manor was abducted by a green-and-gold dragon who crashed through the wall of her chamber and bore her off in the direction of the western mountains. "Never fear!" shouted her sixth-ranked suitor - who just happened to be riding by - to her aged father who stood wringing his hands on a nearby balcony. "I'll rescue her!" and he rode off to the west. Coming into the valley where Rosalind stood backed into a rocky cleft, guarded by the fuming beast of gold and green, George couched his lance. 
"Release that maiden and face your doom!" he cried. Dart bellowed, George rushed. The lance fell from his hands and the dragon rolled upon the ground, spewing gouts of fire into the air. A red substance dribbled from beneath the thundering creature's left wing. Before Rosalind's wide eyes, George advanced and swung his blade several times. "...and that!" he cried, as the monster stumbled to its feet and sprang into the air, dripping more red. It circled once and beat its way off toward the top of the mountain, then over it and away. "Oh George!" Rosalind cried, and she was in his arms. "Oh, George..." He pressed her to him for a moment. "I'll take you home now," he said. That evening as he was counting his gold, Dart heard the sound of two horses approaching his cave. He rushed up the tunnel and peered out. George, now mounted on a proud white stallion and leading the gray, wore a matched suit of bright armor. He was not smiling, however. "Good evening," he said. "Good evening. What brings you back so soon?" "Things didn't turn out exactly as I'd anticipated." "You seem far better accoutered. I'd say your fortunes had taken a turn." "Oh, I recovered my expenses and came out a bit ahead. But that's all. I'm on my way out of town. Thought I'd stop by and tell you the end of the story. -Good show you put on, by the way. It probably would have done the trick-" "But-?" "She was married to one of the brawny barbarians this morning, in their family chapel. They were just getting ready for a wedding trip when you happened by." "I'm awfully sorry." "Well, it's the breaks. To add insult, though, her father dropped dead during your performance. My former competitor is now the new baron. He rewarded me with a new horse and armor, a gratuity and a scroll from the local scribe lauding me as a dragon slayer. Then he hinted rather strongly that the horse and my new reputation could take me far. Didn't like the way Rosalind was looking at me now I'm a hero." "That is a shame. 
Well, we tried." "Yes. So I just stopped by to thank you and let you know how it all turned out. It would have been a good idea-if it had worked." "You could have hardly foreseen such abrupt nuptials. -You know, I've spent the entire day thinking about the affair. We did manage it awfully well." "Oh, no doubt about that. It went beautifully." "I was thinking... How'd you like a chance to get your money back?" "What have you got in mind?" "Uh-When I was advising you earlier that you might not be happy with the lady, I was trying to think about the situation in human terms. Your desire was entirely understandable to me otherwise. In fact, you think quite a bit like a dragon." "Really?" "Yes. It's rather amazing, actually. Now-realizing that it only failed because of a fluke, your idea still has considerable merit." "I'm afraid I don't follow you." "There is-ah-a lovely lady of my own species whom I have been singularly unsuccessful in impressing for a long while now. Actually, there are an unusual number of parallels in our situations." "She has a large hoard, huh?" "Extremely so." "Older woman?" "Among dragons, a few centuries this way or that are not so important. But she, too, has other admirers and seems attracted by the more brash variety." "Uh-huh. I begin to get the drift. You gave me some advice once. I'll return the favor. Some things are more important than hoards." "Name one." "My life. If I were to threaten her she might do me in all by herself, before you could come to her rescue." "No, she's a demure little thing. Anyway, it's all a matter of timing. I'll perch on a hilltop nearby-I'll show you where-and signal you when to begin your approach. Now, this time I have to win, of course. Here's how we'll work it..." George sat on the white charger and divided his attention between the distant cave mouth and the crest of a high hill off to his left. After a time, a shining winged form flashed through the air and settled upon the hill. 
Moments later, it raised one bright wing. He lowered his visor, couched his lance and started forward. When he came within hailing distance of the cave he cried out: "I know you're in there, Megtag! I've come to destroy you and make off with your hoard! You godless beast! Eater of children! This is your last day on earth!" An enormous burnished head with cold green eyes emerged from the cave. Twenty feet of flame shot from its huge mouth and scorched the rock before it. George halted hastily. The beast looked twice the size of Dart and did not seem in the least retiring. Its scales rattled like metal as it began to move forward. "Perhaps I exaggerated...." George began, and he heard the frantic flapping of giant vanes overhead. As the creature advanced, he felt himself seized by the shoulders. He was borne aloft so rapidly that the scene below him dwindled to toy size in a matter of moments. He saw his new steed bolt and flee rapidly back along the route they had followed. "What the hell happened?" he cried. "I hadn't been around for a while," Dart replied. "Didn't know one of the others had moved in with her. You're lucky I'm fast. That's Pelladon. He's a mean one." "Great. Don't you think you should have checked first?" "Sorry. I thought she'd take decades to make up her mind-without prompting. Oh, what a hoard! You should have seen it!" "Follow that horse. I want him back." They sat before Dart's cave, drinking. "Where'd you ever get a whole barrel of wine?" "Lifted it from a barge, up the river. I do that every now and then. I keep a pretty good cellar, if I do say." "Indeed. Well, we're none the poorer, really. We can drink to that." "True, but I've been thinking again. You know, you're a very good actor." "Thanks. You're not so bad yourself." "Now supposing-just supposing-you were to travel about. Good distances from here each time. Scout out villages, on the continent and in the isles. Find out which ones are well off and lacking in local heroes...." 
"Yes?" "...And let them see that dragon-slaying certificate of yours. Brag a bit. Then come back with a list of towns. Maps, too." "Go ahead." "Find the best spots for a little harmless predation and choose a good battle site-" "Refill?" "Please." "Here." "Thanks. Then you show up, and for a fee-" "Sixty-forty." "That's what I was thinking, but I'll bet you've got the figures transposed." "Maybe fifty-five and forty-five then." "Down the middle, and let's drink on it." "Fair enough. Why haggle?" "Now I know why I dreamed of fighting a great number of knights, all of them looking like you. You're going to make a name for yourself, George." This story copyright 1983 by The Amber Corporation No rights or affiliation are claimed by the transcriber -------------------------------------------------------------------------------- 21. Martin Galway's music source files from 1980's Commodore 64 games Source: https://github.com/MartinGalway/C64_music Site: GitHub Submitter: ingve (Hacker News) Submitted: 2026-04-25 10:46 UTC (Hacker News) HN activity: 164 points · 24 comments Length: 132 words (~1 min read) Language: en Music source files from 1980's Commodore 64 games So that folks can read through, analyse & understand the music players and how I went about doing my work. Feel free to re-assemble, modify & generate new music. Please credit the original author of this work, Martin Galway. I am the current copyright owner in all this music & programming code, but was not the owner at the time it was created in the 1980's. I acquired the rights from Infogrames later. "Wizball" used the "1st Generation" player, whose design had been in use since 1984 through about mid-1987. The 2nd Generation player was first used on "Athena" - written for that game, in fact - and later on games like Times Of Lore and Insects In Space -Martin Galway April 14th 2026 -------------------------------------------------------------------------------- 22. 
Her Life Savings Mysteriously Disappeared After a Systems Glitch Source: https://www.nytimes.com/2026/04/25/your-money/fidelity-investments-fraud-alert.html Site: nytimes.com Submitter: danso (Hacker News) Submitted: 2026-04-25 23:32 UTC (Hacker News) HN activity: 46 points · 37 comments Scrape failed: http 403 -------------------------------------------------------------------------------- 23. Lute: A Standalone Runtime for Luau Source: https://lute.luau.org/ Site: lute.luau.org Submitter: vrn-sn (Hacker News) Submitted: 2026-04-22 22:41 UTC (Hacker News) HN activity: 67 points · 11 comments Length: 272 words (~2 min read) Language: en-US LuteRun Luau Anywhere 🖥️ General-Purpose APIs Lute provides a rich set of built-in APIs for common tasks: file system access, HTTP networking, cryptography, process management, and more. 🛠️ First-Class Tooling Lute includes a suite of tools, including a test runner, a linter, and the Luau type checker — all accessible through the `lute` CLI. 👾 Compatible with Roblox Lute runs Luau code, just like Roblox, allowing you to easily run and test modules that don't depend on the game engine itself. What is Lute? ​ While Luau is a powerful scripting language, it is sandboxed and primarily embedded in a larger program, like the Roblox game engine. This means it lacks built-in capabilities for interacting with the outside world. Lute fills the gap by providing a standalone runtime for Luau, designed for general-purpose programming outside of game engines. Think of it like Node.js or Deno, but for Luau. How can I use it? ​ Lute provides a rich set of built-in APIs for common programming tasks: file system access, HTTP networking, cryptography, process management, and more. You can use these APIs to build a wide variety of applications, from command-line tools to web servers to automation scripts and more. 
These capabilities come in the form of a set of low-level libraries exposed to Luau under the @lute require alias, and a higher-level standard library built on top of those, exposed under the @std alias. For Roblox developers, we're working hard to ensure the Roblox game engine will support this same set of @std APIs in the future, so you can write code that runs both in Lute and Roblox with minimal changes. -------------------------------------------------------------------------------- 24. Discret 11, the French TV encryption of the 80s Source: https://fabiensanglard.net/discret11/ Site: fabiensanglard.net Submitter: adunk (Hacker News) Submitted: 2026-04-25 11:10 UTC (Hacker News) HN activity: 151 points · 27 comments Length: 1.6K words (~7 min read) June 7, 2020 Discret 11, the French TV encryption of the 80's I spent my childhood in France, playing a lot of soccer and watching way too much TV. In the 80s, there were three channels available. Two of them, Antenne 2 and FR3, were state funded and boring while TF1 was privatized and offered plenty of Japanese cartoons. My generation grew up with Captain Tsubasa, Saint Seiya, Captain Harlock, and Grendizer. There was no cable and no Internet, the TV signal was broadcast over the air and every house had an antenna on its roof to capture waves full of Kame-hame-has. Things changed in 1984 with the launch of a fourth channel. Canal Plus (Channel Plus) was to revolutionize the TV landscape with recent movies, international sports coverage, and no commercials. To fuel its ambitions, "Canal" was to be funded with monthly fees paid by subscribers. The technical difficulty was dead simple. How do you make sure only those who paid can watch when the signal is broadcast to everybody? Easy, you encrypt it with something called "Discret 11". The SECAM signal The French TV system did not use NTSC but SECAM which is a lot like PAL. The video part is made of a stream of frames transmitted at 25Hz. 
Each frame is made of 625 blocks (hence one block is allocated 64µs). The audio stream is interleaved at the end of the blocks. Each block contains data for the TV electron gun to draw one scanline. It proceeds from top left to the bottom right of the screen. Because the gun needs to reposition itself vertically (VSYNC) and the signal needs meta-data, out of the 625 blocks only 576 result in visible lines. The vertical resolution is fully discrete but the horizontal resolution is analogue[1]. Due to horizontal reset (HSYNC), out of the 64µs in a line only 52µs are available, resulting in a resolution of 704 points. Something that will come in handy later is to remember that not all TVs were of the highest quality. Some clipped the image and did not display the whole 704x576. There is this concept of invisible area (▮) which is never displayed, Action-safe area (▮) which may be displayed, and Title-safe area (▮) which is guaranteed to be displayed by all TVs. Encryption Discret 11 doesn't encrypt at the frame level but at the line level. Actually it does not even encrypt, it only delays a line by shifting it to the right and padding the left part with black. This is done by exploiting the analogue nature of the signal, delaying the line data and replacing it with blank. The beauty of this process is that it can be achieved with cheap analog hardware without need for an expensive digital system. To decide how much to shift a line, Discret 11 uses a secret 11-bit key (hence the name). The key is used as a seed in a Linear Feedback Shift Register (the same technique used in Wolfenstein 3D during Fizzlefade[2]) to generate a pseudo-random series of numbers. For each of the 576 lines, a number is obtained from the LFSR. Modulo 3 brings the value from range 0-2047 to 0-2. This tells by how much to delay (pad) a line to the right (0, 13, or 26 "pixels"). That's it. It is simple but highly efficient as you can see by this example. 
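The keystream just described can be sketched in a few lines of Python. This is only an illustration, not the real circuit: the article does not give the LFSR's feedback taps, so the polynomial below is an assumption; the 11-bit state, the 576 lines per frame, and the modulo-3 mapping to 0/13/26 "pixel" delays come from the text.

```python
def lfsr_stream(seed, n, taps=(10, 8)):
    """Yield n values from an 11-bit Fibonacci LFSR.

    The tap positions are illustrative assumptions; the article only
    says an LFSR seeded with the secret 11-bit key is used.
    """
    state = seed & 0x7FF  # keep 11 bits
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & 0x7FF
        yield state

def line_delays(key, lines=576):
    # Modulo 3 maps each 0-2047 value to 0, 1 or 2, i.e. a right shift
    # of 0, 13 or 26 "pixels" (0, 902 or 1804 ns of the 52 µs line).
    return [(v % 3) * 13 for v in lfsr_stream(key, lines)]

delays = line_delays(0b10100110101)  # arbitrary example key
assert len(delays) == 576 and set(delays) <= {0, 13, 26}
```

Decryption is simply the inverse: a decoder holding the same key regenerates the same keystream and shifts each line back to the left.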
If lines are delayed to the right and left-padded with black then some data is lost. How can the image be perfectly reconstructed during decryption? That is where the areas mentioned earlier are exploited. The TV signal did not use the full 576x704, it was padded with black borders to remain in the Title area. Hence what was inserted on the left was exactly what was lost on the right. Cryptimage[3] developer, Mannix, kindly provided more insight on the internals of Discret 11. The choice of the delay (0, 902 ns and 1804 ns) depends on the LFSR value assigned to the line and the current frame inside a sequence of 6 frames (every 6 frames the LFSR is reset to its initial seed value). The decoder also monitors the luminance value of 2 TV lines: 310 and 622. These lines can blink to "full black" or "full white", which allows the decoder to synchronize the decryption process, to select the correct level of audience (carried by TV line 622) and to initialize the seed of the LFSR. The decoder also uses a 16-bit code stored in its EEPROM chip in order to compute the correct seed value. An 8-bit microcontroller of the Intel MCS48 family (an Intel 8048) is used inside the decoder; it contains the main program. -- Mannix Wait, line 310 would flip to all white/black for synchronization? But that is in the middle of the screen isn't it? No it isn't. Each frame is actually made of two fields containing all even and then all odd lines. The electron gun refreshes first the even scanlines and then the odd scanlines. This is how a refresh rate of 50Hz is achieved with a 25Hz signal. Line 310 is actually at the bottom of the screen and not visible. What about the audio signal? Probably because it was much less of an issue if it was cracked, the audio signal received significantly less polish than the video. It is an occurrence of security via obscurity. A normal SECAM signal uses FM on a 6 MHz carrier. 
Discret 11 modulates the signal via AM using a carrier signal of 12.8 kHz (with a low-pass filter to avoid aliasing[4]). The idea is to separate the sound into two bands around 12.8 kHz and to transpose the high band down and the low band up. This is a fully reversible "hard-wired" process that requires no key, only some insight. Decryption With an encrypted SECAM signal flowing out of their towers, Canal+ engineers had to figure out an easy way to consume it on the subscribers' end. The solution was to ship to people a device called a "decodeur". Receiving the encrypted signal from the antenna as input, it had a SCART[5] output to be plugged into the TV set. To watch Canal+, customers did not set their TV to channel four but to SCART input. Anti-cheating system and the LEET key Mailed codes, courtesy of Mannix. Now comes the problem of preventing people from cheating the system. The elephant in the room is the system of "secret key". It would have eventually leaked so it was rotated every month. Users had to enter the new key via a pad on the top of the "decodeur". Key rotation was decided four months in advance and sent by mail. With an 11-bit key it would have made sense to let people enter a four-digit number. But that would have introduced two weaknesses in the scheme by allowing brute-force attacks and also letting customers cancel their membership and use a friend's key. Instead, the decision was made to provide codes that were not four but eight digits long. That number was to be fed to a chip and hashed along with the decoder serial number, hence avoiding both brute-force and key-sharing attacks. And there were more advantages as outlined by Mannix. The eight digits entered by the user actually result in not one but six keys. That is because the system had a (never used) audience feature made of levels. That was to allow subdividing memberships into Cinema, Sports, Documentaries and so on. 
The eight digits and the serial number in the EEPROM become a 16-bit key which in turn is used to generate six 11-bit keys, one for each level. To identify what level a show belonged to, it was encoded in the blink of line 622. There is also a 7th audience level, used at the end of the month (for 2 or 3 days, for the transition to the next month); it is a kind of "free mode" where all decoders can decrypt even if the user did not pay the subscription. The 7th audience level always uses this 11-bit key: 1337. -- Mannix Epilogue Despite its simplicity and efficacy, Discret 11 did not operate for long. "Canal" went live on November 4th, 1984. Two hours later, as the latest Belmondo movie was playing, it was discovered that 2% of TVs were incompatible with the system[6]. That was 180,000 very unhappy users. In December 1984, Radio Plans magazine almost printed the Discret 11 schematics but was legally barred from doing so by a court decision. The drawings still managed to leak and became widely photocopied. Eventually, under the dubious motive of allowing Belgium, Luxembourg, and Monaco citizens to access the content, "Le quotidien de Paris" published the plans anyway[7]. Piracy became rampant. Asking for "TBA 970" delay chips in electronic stores prompted employees to offer the full list required to build a "decodeur pirate". The encryption system was updated to Nagravision encryption in 1992 and Discret 11 was retired by 1995. These issues did not prevent the fourth channel from becoming immensely successful. It eventually launched CanalSatellite in 1996 and became a major satellite broadcaster in Europe[8]. References ^ [1] Does PAL have any resolution? ^ [2] Fizzlefade ^ [3] Cryptimage ^ [4] Décodage du son canal+ ^ [5] SCART ^ [6] In the Baba book by Pierre Lescure ^ [7] Discret 11 schematics by Radio plans ^ [8] Hight Above book -------------------------------------------------------------------------------- 25. 
GPT‑5.5 Bio Bug Bounty Source: https://openai.com/index/gpt-5-5-bio-bug-bounty/ Site: OpenAI Submitter: Murfalo (Hacker News) Submitted: 2026-04-25 14:17 UTC (Hacker News) HN activity: 136 points · 98 comments Length: 224 words (~1 min read) Language: en-US Invitation As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we’re introducing a Bio Bug Bounty for GPT‑5.5 and accepting applications. We’re inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge. Program overview Model in scope: GPT‑5.5 in Codex Desktop only. Challenge: Identify one universal jailbreaking prompt that successfully answers all five bio safety questions from a clean chat without triggering moderation. Rewards: $25,000 to the first true universal jailbreak to clear all five questions. Smaller awards may be granted for partial wins at our discretion. Timeline: Applications open April 23, 2026 with rolling acceptances, and close on June 22, 2026. Testing begins April 28, 2026 and ends on July 27, 2026. Access: Application and invites. We will extend invitations to a vetted list of trusted bio red-teamers, and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform. Disclosure: All prompts, completions, findings, and communications are covered by NDA. How to participate Submit a short application (name, affiliation, experience) by June 22, 2026. Accepted applicants and collaborators must have existing ChatGPT accounts to apply, and will sign an NDA. Apply now and help us make frontier AI safer. -------------------------------------------------------------------------------- 26. 
Colorado Adds Open-Source Exemption to Age-Verification Bill Source: https://fosstodon.org/@carlrichell/116460505717380644 Site: Fosstodon Submitter: terminalbraid (Hacker News) Submitted: 2026-04-25 22:41 UTC (Hacker News) HN activity: 67 points · 22 comments Language: en No extractable content. -------------------------------------------------------------------------------- 27. Show HN: Kloak, A secret manager that keeps K8s workload away from secrets Source: https://getkloak.io/ Site: getkloak.io Submitter: neo2006 (Hacker News) Submitted: 2026-04-25 19:03 UTC (Hacker News) HN activity: 45 points · 37 comments Length: 379 words (~2 min read) Language: en Agentless Kubernetes Security Secure Your Secrets, Agentless Kloak transparently intercepts HTTPS traffic in Kubernetes using pure eBPF, replacing hashed placeholders with real secrets at the network edge. Your applications never see the actual credentials, so a compromised process cannot leak what it never had. eBPF Powered Zero Code Changes K8s Native app # Your app sends this header: Authorization: kloak:MPZVR3GHWT4E6YBCA01JQXK5N8 # Kloak transforms it to: Authorization: Bearer sk-live-xyz123... ✓ Secret never exposed to application quick start # Install Kloak with Helm $ helm repo add kloak https://chart.getkloak.io $ helm repo update $ helm install kloak kloak/kloak \ -n kloak-system --create-namespace \ --set demo.enabled=true Features Everything You Need for Secure Secret Management Kloak provides enterprise-grade security without the complexity Secure by Design Secrets are replaced at the network edge. Your application code never sees real credentials, eliminating accidental exposure. Zero Latency Impact eBPF-powered traffic redirection happens in kernel space, adding negligible overhead to your requests. Kubernetes Native Works with standard Kubernetes Secrets. Add a label and Kloak handles the rest automatically. Host Restrictions Control which secrets can be used with which hosts. 
Prevent credential misuse with fine-grained access control. Zero Code Changes No SDK required. Works with any language or framework. Use the hash placeholder in your config. Pure eBPF Integration No bulky sidecars or complex CNI plugins. Kloak operates purely at the kernel level for maximum efficiency. Open Source Fully open source under the AGPL-3.0 License. Inspect the code, contribute, and build with confidence. How It Works Simple, Secure, Transparent Kloak operates at the network layer, making secret management invisible to your applications 01 Register Your Secrets Label your Kubernetes secrets with getkloak.io/enabled=true. Kloak generates a unique ULID placeholder for each secret value. labels: getkloak.io/enabled: "true" getkloak.io/hosts: "api.example.com" 02 Use Hash Placeholders Reference the generated hash in your application config instead of the actual secret. Your app never sees the real value. headers: Authorization: "kloak:MPZVR3GHWT4E6YBCA01JQXK5N8" 03 Automatic Transform When your app makes an HTTPS request, Kloak intercepts it and replaces the hash with the real secret before forwarding. # Request leaves your pod with real credentials Authorization: Bearer sk-live-xyz123... Architecture Built for Kubernetes A cloud-native solution using proven technologies Control Plane Controller Watches secrets & manages eBPF programs Data Plane Application Pod App eBPF Traffic Control & Secret Replacement -------------------------------------------------------------------------------- 28. Which one is more important: more parameters or more computation? (2021) Source: https://parl.ai/projects/params_vs_compute/ Site: parl.ai Submitter: jxmorris12 (Hacker News) Submitted: 2026-04-24 16:44 UTC (Hacker News) HN activity: 52 points · 11 comments Length: 1.1K words (~5 min read) Language: en Which one is more important: more parameters or more computation? 
When we talk about the power of a deep learning model, often the only metric we pay attention to is its size, measured by the number of parameters in that model. However, the amount of computation needed to run that model is an important metric too, one often overlooked because it is usually tied to the model size. Practitioners thus tend to think of those two metrics as a single thing. This is true most of the time, as each parameter participates in computation only once per input. So if a model has 1 million parameters, then it will take roughly 1 million floating point operations to process an input. This applies to feedforward models, recurrent models, and even Transformers. We are announcing the publication of two new methods that together help study this important question further -- and show that the computation of a model should be considered separately from the model size. Firstly, we can increase the model size without using more computation and improve its performance. The first paper achieves that with a simple, elegant method: hash layers. The second paper shows that the opposite is also true. We can increase the amount of computation without adding any new parameters to the model, which can improve performance significantly. A new family of staircase attention models is proposed that achieves this feat. Taken together, we believe these results open up a new way of thinking about deep learning models, requiring us to disentangle the concepts of parameters and computation. Thinking in this way, we believe we can arrive at more powerful models that are architected with regard to the resources available. Hash Layers In recent years, a trend emerged of making Transformer models bigger and bigger as a way of achieving impressive results on language tasks. The number of parameters in those models extends to billions, and even a trillion. 
While this shows the potential of deep learning, the bigger models require more computation, which makes them less practical. One way to make big models use less computation is a sparse mixture-of-experts (MoE) approach. Each expert has its own parameters, which are only used for a small part of the input. Each input is routed to only some of the experts, meaning only some of the parameters need to be used, resulting in less computation. Indeed, recent works showed that Transformers can be made bigger efficiently this way. The key element of MoE is a router that decides which expert to use on which data. In our paper, we propose a routing mechanism based on hashing of input tokens. Unlike previous works, the hashing MoE is much simpler as it does not require any learning or change in objective function. Each word in the dictionary is simply assigned to a fixed expert, which is either chosen at random or assigned such that the distribution is balanced. Despite its simplicity, the method works well on a number of challenging tasks in language and dialogue. On the pushshift.io Reddit language modeling task, our hashing mechanism outperforms the learning-based Switch baseline, especially when there are more experts. The largest models here have 1.28 billion parameters, but only 17% of them are used for any particular input. We go further by training 4.5 billion parameter models on larger data, where we see that hashing outperforms another competitive sparse MoE model, BASE. The natural balancing of the expert assignment also means that training is efficient and scalable across a cluster, compared to those existing approaches. In our experiments this gives an improvement of about 11% in updates-per-second compared to BASE, and as the number of expert layers increases, we expect this difference to become more pronounced. 
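A fixed hash router of this kind is tiny to implement. The sketch below is a toy illustration, not the paper's code: the names (`NUM_EXPERTS`, `route`) and the choice of MD5 are assumptions; what it demonstrates is the core idea of a learning-free, deterministic token-to-expert assignment.

```python
import hashlib

NUM_EXPERTS = 16  # illustrative; real MoE layers may use far more experts

def route(token: str) -> int:
    """Map a vocabulary token to a fixed expert via a hash.

    No learned router and no change to the objective function: the
    assignment is a pure function of the token, so it never shifts
    during training.
    """
    digest = hashlib.md5(token.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_EXPERTS

# Every occurrence of a word activates the same expert's parameters,
# so only roughly 1/NUM_EXPERTS of the MoE layer's weights run per token.
assert route("the") == route("the")
assert 0 <= route("hello") < NUM_EXPERTS
```

A random hash balances the experts only in expectation; the balanced variant mentioned above would instead assign words to experts so that token frequencies are evened out across them.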
Staircase Attention While adding more parameters to Transformers for better performance is a popular topic of study, increasing their computation is underexplored. One reason is that the standard Transformer interlocks computation and parameters with the architecture choice, making it impossible to scale one without the other. In our paper, we introduce an alternative family of architectures which detaches these concepts, and show that adding more computation is an alternate route to improving performance. In particular, we propose a family of models with recurrent applications of Transformers, called Staircase and Ladder models. The Ladder model simply stacks the same Transformer multiple times. This means a parameter in the Transformer will participate in the computation multiple times, increasing the amount of computation while keeping the model size fixed. This straightforward modification brings a significant performance improvement to real-world tasks such as language modeling and dialogue. Furthermore, it indicates that increasing computation -- thus adding more power per parameter -- is a compelling research direction for better performance. The Staircase model stacks Transformers, like Ladder, but shifts each Transformer multiple time steps forward. This change makes it possible to continue stacking Transformers as long as inputs continue, forming a model shaped like a staircase. Unlike Transformers, this continuation makes Staircase recurrent in time, which is crucial for maintaining an internal state for tracking changes. On simple constructed tasks where the model just needs to maintain an internal state and update it with incoming information, feedforward models like Transformer and Ladder struggle, but Staircase can solve them with ease. Staircase models also enjoy the same performance boost as Ladder models on language modeling tasks because they have more compute per parameter. Why not both? 
A natural question after introducing these two methods is -- can we combine them? The answer is -- yes! The improvements from the two approaches appear to be orthogonal, and we observe significant gains from a Hash Layer + Ladder model compared to either alone. Taken together, the two methods give fine-grained control over parameter size and computation size. In summary, our work has examined the issue of computation vs. parameter size, and shown that these two concepts should be treated quite differently when designing new methods -- rather than tied together as in many standard machine learning models. In particular, we present two new types of architecture that explore these tradeoffs -- one increasing the parameter size, the other the amount of computation -- and show how their ideas can be combined. We believe this way of thinking, and the use of our new methods in particular, can be a fruitful way forward for machine learning research. To get more into the details read the Hash Layers and Staircase Attention papers. Code is available here.

--------------------------------------------------------------------------------

29. Can you stop beans from making you gassy?
Source: https://www.seriouseats.com/how-to-reduce-bean-gas-tested-11883862
Site: Serious Eats
Author: Dave Arnold
Submitted: 2026-04-25 20:17 UTC (Hacker News)
HN activity: 117 points · 89 comments
Length: 3.7K words (~17 min read)
Language: en

For the past year, I’ve been tinkering with an idea—an idea to make a fart-free bean. Get it out of your system now; this article is about farting. More specifically, it’s about beans, the uncomfortable bubbles of gas they create in our digestive tracts, and whether we can do anything about it. It's not a new question,* as the countless articles that have been churned out over the years attempting to answer it make clear.
But those articles don't do much more than repeat the same old advice, much of the time uncritically and with little to no evidentiary basis. Unwilling to pass along the same folk medicine that so many other publications seem to think is sufficient, we decided it was time to do our own tests. Is a fart-free bean possible? Science to the rect...um...um—to the rescue! Serious Eats / Michelle Kondrich *It's also not a frivolous one: Harold McGee, the foremost author on the science of food in the kitchen, ended up writing his seminal work, On Food and Cooking, because someone asked him why beans make you fart. If that’s not enough for you, also note that St. Augustine (who knew of people who could discharge their farts in odorless melodies), Montaigne, and Ben Franklin all expounded on gas (primarily to tell you not to worry about it, advice we are ignoring). And it is said that philosopher-mathematician Pythagoras (who, PS, did not invent the Pythagorean theorem, but that’s a tale for another day) was afraid of both beans and their magical toots. Wanting to do a scientific study on beans and intestinal gas is one thing; actually doing it is another. I didn’t have a way to conduct an extensive flatus survey, and I didn’t have access to the fancy laboratory equipment needed to analyze bean samples objectively. I wanted to test a whole passel of gas-reduction theories, which made my problem even more difficult. I was about to give up on the project entirely when a way forward revealed itself. Harold McGee and I have been lecturing together at Harvard’s Science of Cooking class every year for the past decade. At this year’s session, I loudly moaned about my inability to tackle the fartless bean problem scientifically. To my surprise and delight, professors Pia Sörensen and Dave Weitz, who run the class, agreed to measure bean fart potential as part of a class project.
They had access to all the necessary equipment—centrifuges, liquid chromatographs, mass spectrometers, and freeze dryers. Even better, they were willing to recruit student volunteers to assemble what we couldn't help but dub the "Harvard Fart Squad," a group of the nation’s best and brightest young minds who would put their GI systems on the line for self-reported, in-vivo science. From left to right: Vivian Nha Nguyen (a member of the Harvard Fart Squad) and Nancy Lin hand out bean puree samples for students Etai Clyde and Juan Valdez to eat. Courtesy of Eliza Grinnell/Harvard SEAS The question was, what to test? What Causes Bean-Induced Flatulence? Before deciding what to test, we needed to know what we were looking for. Most toot-inducing foods contain ingredients called FODMAPs (an acronym for Fermentable Oligosaccharides, Disaccharides, Monosaccharides, and Polyols—quite a mouthful). Humans cannot digest FODMAPs, but bacteria in our gut can. The byproduct of that bacterial digestion is gas. The FODMAPs that make beans the musical fruit are a small group of oligosaccharides (complex sugars). In theory, if you can reduce the amount of those oligosaccharides in your beans, you can reduce the amount of gas they produce.* We settled on testing for the three sugars most discussed in scientific bean literature: raffinose, stachyose, and verbascose. (We learned in our tests that the pinto beans we were working with contain relatively little raffinose and verbascose but a lot of stachyose, so our analysis focused solely on that.) *Here I feel the need for an aside. Many, many people will tell you that the key to reducing bean gas is to eat more beans. Eating more beans, they argue, works because it allows our digestive systems, and the microbiome in them, to acclimate to the beans. Over time, they say, the gassiness will go down. This makes no sense to me.
If these oligosaccharides are food for bacteria in our gut, common sense would say that feeding that bacteria more food would, if anything, do the opposite by supporting their population growth while giving them plenty of raw material to digest. It wasn't within the scope of this project to test (and, I suspect, disprove) this theory, but count me as highly doubtful. If anything, I have to imagine that eating more beans more often just makes people more used to being gassy, and that, in turn, makes them notice it less. (Their significant others might have a very different take…) What We Tested Interview a hundred bean-eaters about how to minimize bean gas and you're likely to get two hundred suggestions. Use canned beans instead of dried! Rinse the canned beans! Soak dried beans! Discard the soaking water! Rinse your cooked beans! Throw some bay leaves in the pot! No, add a piece of kombu! It goes on and on. We decided to test several of these common recommendations, omitting some (e.g., turmeric, ginger) that fundamentally alter the flavor and/or appearance of the cooked beans in a way that may not be practical across a range of bean recipes. I wanted to test alkalinity—some people say baking soda makes for less gassy beans, but testing it properly would have added a slew of additional samples that we didn't have the bandwidth for (either way, baking soda can improve bean texture and reduce cooking times, so there's plenty of other reasons to add it; if it also reduces farts, I'd call that a bonus). On top of all these dubious folk remedies, there's one that studies have shown actually works: Science has discovered an enzyme—alpha-galactosidase—that can break down the offending sugars into harmless bits. You can buy it at the pharmacy under the brand name Beano. According to package directions, one is meant to ingest a Beano tablet before eating beans. Problem solved. Not for me, though. 
I want a method to knock the wind out of beans that doesn't require me and my guests to pop a pill first. It's impractical. But, I wondered, could I move the alpha-galactosidase solution out of the medicine cabinet and into the pantry by using it not as a pill but as an ingredient when cooking the beans? View of the Harvard lab where bean testing took place. Serious Eats / Pia Sorensen The Beano company flatly states that my idea won’t work: Cooking will denature the alpha-galactosidase enzyme before it does its job. I figured I could get around this blockade—that is pretty much what I do—by carefully adding the enzyme to the beans only when temperatures were low enough not to denature it. In the end, we lab-tested 51 bean samples, measuring 17 different variables executed in triplicate. We opted for pinto beans because they’re widely available, taste great, and are anecdotally fairly farty. To see if this lab data made a difference to derrières, the "Harvard Fart Squad" students then prepared a double-blind test of farty vs. non-farty beans (as determined by our first round of testing), handing out 45 cups of bean dip to their classmates. Here’s what we discovered. How The Tests Were Run Bean samples during lyophilization. Serious Eats / Pia Sorensen I prepared all the necessary bean samples, blended them (since the samples need to be homogeneous for the tests), vacuum bagged and froze them, and mailed them to Pia Sörensen and Kelly Chatman at Harvard’s "Small Molecule Mass Spectrometer" facility where they ran the quantitative tests with help from students Melody Cao and Fatema Abdulla. The description that follows is some real CSI-level stuff. The words are big, but the concepts aren’t complicated, so stick with me. First, the oligosaccharides needed to be extracted from the bean puree by dissolving them in ultra-pure water.
After that, centrifugation followed to remove the solids, and then the samples were run through a liquid chromatography/mass spectrometry unit. Liquid chromatography (LC for short) separates molecules based on how quickly they wash through a long column packed with solids resisting the flow. First, you run pure samples of the material—in this case, stachyose—through the machine to see how long it takes them to travel through the column. Next, you put in your unknown sample and wait that same amount of time for the stachyose to appear. Then, the mass spectrometer (Mass Spec or MS) takes over, vaporizing the stachyose into charged ions and using a voltage to accelerate those ions to high speed. The heavier those molecules are, the slower they travel, so by measuring their speed, you can determine their weight, thereby making sure you are only measuring stachyose. You are literally weighing individual molecules—pretty cool. The Mass Spec gave us a reading proportional to how much of the target molecule (in this case, stachyose) was present in a sample. After that, the crew at Harvard lyophilized (fancy word for freeze-dried) the beans to see how much water they contained. The Results Do Any Traditional (i.e., Non–Beano) Cooking Techniques Reduce Bean-Induced Fartiness? Categorically no. You can soak your beans for anywhere between 8 and 24 hours, you can throw away that soaking water or keep it, you can cook direct from dry, you can pressure cook, you can cook with bay leaves or kombu, you can even parboil soaked beans for a minute in boiling water and throw that water away. None of these strategies makes an appreciable difference in the fart levels of beans cooked from scratch. Strangely, the beans cooked with bay leaves had the highest fart potential of any of the samples tested—almost as though the bay conserved the farts rather than dispelling them. But we would need more tests to be sure. These tests blew me away. 
At the very least, I expected that presoaking the beans and pitching the water would reduce fartiness. After all, the sugars are water soluble, so they should leach into the soaking water and get discarded. As it turns out, not so much. I still don’t understand this result. Serious Eats / Pia Sorensen Do Canned Beans Fart Less Than Those Cooked from Scratch? Yes, marginally. We tested nine cans of Goya pinto beans from three different lots, and they consistently had about twenty percent less fartiness than the beans cooked from dry. Note, though, that this is not a statement we can make with certainty because the canned beans were from Goya and the dried beans were from the Jack Rabbit company. It is entirely plausible that the Goya corporation has a source of pinto beans with less fartiness than the folks at Jack Rabbit. Does Rinsing Canned Beans Reduce Gas? A resounding yes. Gram per gram, rinsed beans are over twenty percent less farty than unrinsed, and the liquid you throw away is thirty percent fartier than the beans are themselves. The liquid in the bean can has many more farts per gram than the beans themselves do, but pitching this liquid comes at a high cost: flavor. Compared to beans with their cooking liquid, rinsed beans are tasteless. Here is why: Liquid separated from beans and weighed. Serious Eats / Alex Touceda A 15-ounce can of pinto beans contains roughly 447 grams of beans and liquid. Drained and rinsed, you end up with 267 grams of beans. Getting rid of the liquid means you discard over a third of the can contents. Furthermore, the bean liquid contains about fifteen percent of the solids that the raw beans started with. But wait—it gets worse. The bean solids that are in the liquid are the ones that dissolve in water, which are also the solids with flavor. Think about making a meat broth: Simmering meat for a long time in water shifts the meat’s flavor into the liquid. Same with beans.
Rinsing beans is like throwing away beef stock and eating only the flavorless, overcooked beef. The amount of farts in your bean/water system is relatively constant. If you reduce the water, the farts per gram in your system go up. Conversely, if you dilute (as you would in a soup), the farts per gram decrease. Looking at the bean liquid this way may indicate why home-cooked (from dry) beans may seem fartier to you: When you cook beans from dry, you may end up with less liquid per bean than you have in canned beans. As we’ve seen, a can of beans is over one-third liquid, which you will typically toss or use. If you are cooking dry beans to use as a side dish, there's a chance you will reduce the liquid even more during cooking, and reducing the liquid increases the farts per gram. Does Beano Work As a Gas-Reducing Ingredient in the Bean Pot? And Should You Use It? Yes and no. And maybe. In our tests of pureed beans—think bean dip—Beano did a fantastic job. With whole beans, though, it appears that only the liquid gets effectively de-farted. Pureed beans treated with Beano had almost two and a half times less fart potential than pureed canned beans and nearly three times less fartiness than beans cooked from scratch. A big win! Serious Eats / Alex Touceda The technique is simple: Use one Beano 800 (800 is a measure of how much enzyme the Beano contains) per pound of cooked beans. So a 15-ounce can of beans takes one Beano (I use two, cause hey—better safe than farting). If you cook a pound of dried beans, you should end up with about four pounds of cooked beans and broth, so you’ll use four (or, in my case, eight, because, as I said, I'd rather not be farting). You pull apart the Beano gel capsules to harvest the powder inside. 
After you have cooked your beans from dry, wait for them to cool down to 104°F/40°C and add the powder; the exact temperature isn’t essential as long as you don't add the enzyme when the water is hot enough to denature it (I don't actually know what that upper limit is, but 104°F is safe). If you are using canned beans, add the powder before you heat them. Puree the beans with the Beano in a blender and wait about an hour for the enzyme to do its thing. Then heat or eat. Beano didn't do as well in our quantitative lab tests of whole beans. We added Beano to canned beans without draining the liquid and warmed the mix up to 104°F (40°C) for an hour to let the enzyme work. Then we heated the beans with their liquid to boiling to destroy the enzyme and separated the liquid from the beans for testing. The liquid was virtually fart-free, but the whole beans had only slightly less fartiness than canned beans that had been drained and rinsed. The upshot? When you use the Beano technique on whole (as opposed to pureed) beans, you are only treating the liquid. Essentially you are turning the unrinsed beans into rinsed beans while preserving the flavor. Worth doing if bean fartiness is an issue for you or your guests, but not enough to eliminate bean farts entirely. The "Harvard Fart Squad" In-Vivo Data With this quantitative lab data in hand, Ada Vazzana, Micaela Rosen, and Vivian Nha Nguyen, the students running the Fart Squad, went to work in the field. After all, it's one thing to see the oligosaccharide content of the beans as measured by lab equipment, but it's another to be able to say whether any observed differences in the lab have a measurable impact on real, live digestive systems. Given that we saw the most significant effect on oligosaccharide reduction by treating pureed beans with Beano, we decided that was the variable to put to the test with our live subjects. The students prepared a bean dip using canned pintos, olive oil, lemon juice, and salt.
Half the dip was treated with Beano following the protocol described above; half was not. They then handed the samples out randomly to 45 fellow students, who did not know whether they were receiving the Beano sample or the control. The students also received a questionnaire at class time, which was before lunch. “The flatulence hit almost immediately after eating the dip.” Volunteers rated themselves on how farty they believed themselves to be usually versus how farty they felt six hours after eating the bean dip. As of this writing, we have half of the surveys back, and even with this smaller-than-hoped-for sample, the Beano group reported, on average, slightly less gas after eating the beans than they usually have on any given day. In contrast, the non-Beano control group scored an average of one point fartier on a 10-point scale. One point doesn’t seem like much until you dig a bit deeper: Only one person in the Beano group felt two points gassier than usual. In contrast, one-third of the control group scored three or four points gassier, with one respondent saying, “The flatulence hit almost immediately after eating the dip.” For bean dips and purees, Beano works. For everything else, fartgeddaboutit! Addendum: The Cooking Procedures Cook from dry with no soaking (the control): Cook 100 grams of beans in 400 grams of water with 2.6 grams of salt for roughly two hours, adding water as necessary so that the final batch weighs 388 grams. Why did I choose these seemingly random numbers? Well, as it happens, that is approximately the same bean-to-water and bean-to-salt ratio I measured in my canned bean samples. I wanted the dry bean tests to be apples-to-apples with the canned beans. In cooking for myself, I would further reduce the final amount of liquid in the beans. Soak for 8 hours and then cook: Using the same ratios as the control above, soak the beans in the water and the salt for eight hours before cooking. 
The cooking, in this case, only took about an hour. I added salt to the soaking water because salt helps beans absorb more water while they are soaking. Rationale: some people believe that the beans—still alive when they are dry—will start to break down the complex sugars as they soak. Eight hours was the minimum amount of soaking time we thought might have an effect. As an aside, many bean people believe that presoaking makes for better beans. I don’t wish to get into this argument with you, but I’ll say this: Presoaking larger and tougher skinned beans makes beans cook much more quickly on the stove and helps keep the beans whole as they cook. Soak for 24 hours and then cook: Same as above, but soak the beans for 24 hours—the maximum time we thought was reasonable. Soak for 8 hours, throw away the soaking water, and cook: Same as number two, but I threw away the soaking water and added fresh water and salt before cooking. Rationale: since the offending complex sugars are water soluble, they should leach into the water and go down the drain. Many bean purists hate throwing away soaking water because it also discards flavor. I tasted all the water I threw out—it didn’t taste like much. Significant color leached into the water, though, and this would be a massive deal in dark beans like red kidney or black beans. Soak for 24 hours, throw away the soaking water, and cook: Same as above, but soak for 24 hours. Soak for 8 hours, throw away the soaking water and then blanch in boiling water for 60 seconds; discard blanch water, and cook: Same as number four, but add a blanching step before cooking. Rationale: Canned beans are most often blanched in near-boiling water before being canned to get rid of any air inside the bean that would cause problems during canning. We theorized this quick blanch would remove toots. Interestingly, these beans tasted blander than the other beans cooked from scratch (I tasted every sample), with a flavor closer to the canned beans. 
Soak for 8 hours with bay leaves and then cook: Same as number two, but two bay leaves were added to the soaking water and allowed to remain during cooking. Rationale: Some people believe that bay leaves mellow beans. I told Daniel that I regularly use a ton of bay leaves because the ones I buy are low quality and lack flavor. He told me to get decent ones and use an average amount instead. The ones I purchased were from La Boite Epice and were, in fact, revelatory. I was shocked at how much better they were than my supermarket dross. Two leaves in the standard recipe made the entire batch redolent of bay. I am now firmly sold on only using good-quality bay leaves. Soak for 8 hours with kombu and then cook: Same as above, but using kombu instead of bay. Kombu is the umami-rich seaweed used to make the Japanese stock dashi. Many people believe kombu helps lessen the windy burden of the bean. The kombu I used was the Wellpac brand from Korea, and I used a 120x115 mm square (6.55 grams) in each batch. Surprisingly, the beans did not taste that much of kombu, although they were perhaps a bit richer. Pressure cook without soaking: Rationale: the high heat in pressure cooking (259°F as opposed to 212°F) might break the sugars down into harmless bits. All tests were cooked in a Kuhn Rikon pressure cooker at second ring (15psi) for 45 minutes and allowed to cool naturally before I opened the cooker. Soak for 8 hours and then cook, cool, and add Beano: Last, my pièce de résistance, my secret weapon in the bean wars, and the main reason I wanted to run this study: Beano. Cook as in number two. After cooking, wait for the beans to cool to 104°F and stir in two capsules of Beano powder. Cover illustration by Michelle Kondrich December 4th, 2022

--------------------------------------------------------------------------------

30.
A web-based RDP client built with Go WebAssembly and grdp
Source: https://github.com/nakagami/grdpwasm
Site: GitHub
Submitter: mariuz (Hacker News)
Submitted: 2026-04-25 10:59 UTC (Hacker News)
HN activity: 117 points · 45 comments
Length: 384 words (~2 min read)
Language: en

A web-based RDP client built with Go WebAssembly and grdp. Connect to a Windows Remote Desktop server directly from your browser — no plugins required.

Architecture

    Browser (WASM) ──WebSocket──► proxy (Go) ──TCP──► RDP Server

Because browsers cannot open raw TCP sockets, a lightweight Go proxy server bridges WebSocket connections from the browser to the RDP server's TCP port.

Requirements

- Go 1.24 or later
- A reachable RDP server (Windows or any RDP-compatible host)

Build

    git clone https://github.com/nakagami/grdpwasm.git
    cd grdpwasm
    make all

make all produces:

    Output               Description
    static/main.wasm     Go WASM binary (runs in the browser)
    static/wasm_exec.js  Go runtime JS support file
    proxy/proxy          WebSocket-to-TCP proxy + static file server

Run

    make serve
    # or equivalently:
    ./proxy/proxy -listen :8080 -static static

Then open http://localhost:8080 in your browser.

Proxy options

    Flag     Default  Description
    -listen  :8080    Address and port to listen on
    -static  static   Directory to serve static files from

Usage

1. Open http://localhost:8080 in a browser.
2. Fill in the connection form:
   - Host — hostname or IP address of the RDP server
   - Port — RDP port (default 3389)
   - Domain — Windows domain (leave blank for local accounts)
   - User — username
   - Password — password
   - Width / Height — initial desktop resolution
3. Click Connect. The remote desktop appears in the canvas.
4. Click the canvas to capture keyboard focus.
5. Click Disconnect to end the session.

Keyboard & Mouse

All standard keyboard input is forwarded to the remote desktop via RDP scan codes. Mouse move, button clicks, and scroll wheel are fully supported. Note: The browser tab must have focus for keyboard events to be forwarded. Click inside the canvas area if keys stop responding.

Audio

Remote audio is streamed via RDPSND and played through the browser's Web Audio API (PCM 44100 Hz, stereo, 16-bit signed little-endian).

Security notes

- The proxy accepts connections from any origin. Run it only on a trusted network or add authentication before exposing it to the internet.
- Credentials are transmitted from the browser to the proxy over WebSocket. Use HTTPS/WSS (put the proxy behind a TLS-terminating reverse proxy such as nginx or Caddy) when accessing it over an untrusted network.

Development

    make wasm       # rebuild only the WASM binary
    make proxy      # rebuild only the proxy server
    make wasm_exec  # refresh wasm_exec.js from the local Go toolchain
    make clean      # remove all build artifacts

License

GPLv3 — see grdp LICENSE.