- Can Small Language Models Use What They Retrieve? An Empirical Study of Retrieval Utilization Across Model Scale - Sanchit Pandey
- Can Small Language Models With Retrieval-Augmented Generation Replace Large Language Models When Learning Computer Science? - Suqing Liu, Zezhu Yu, Feiran Huang, Yousef Bulbulia, Andreas Bergen, Michael Liut
- Quicksort inventor Tony Hoare reaches the base case at 92
- The Emperor's Old Clothes - C. A. R. Hoare
- Software Design: A Parable - C. A. R. Hoare
- Programming as Theory Building - Peter Naur
- Going beyond open data – increasing transparency and trust in language models with OLMoTrace - Jiacheng Liu et al.
- The Man Who Killed Google Search - Ed Zitron
- Requiem for Raghavan - Ed Zitron
- Legally Significant Changes - gnu.org
- Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager - Adnan Khan
Cline is an open-source AI coding tool that integrates with developer IDEs such as VSCode and its many forks. Users can download Cline through the VS Code Marketplace or OpenVSX. Since Cline is an open-source project the team uses a GitHub for development. On December 21st, 2025, Cline maintainers added an AI agent to triage issues created on the repository. This AI agent ran within a GitHub Actions workflow and ran with broad privileges. You might be able to guess where this is heading…
Between Dec 21st, 2025 and Feb 9th, 2026 a prompt injection vulnerability in Cline’s (now removed) Claude Issue Triage workflow allowed any attacker with a GitHub account to compromise production Cline releases on both the Visual Studio Code Marketplace and OpenVSX and publish malware to millions of developers!
- The Real World of Technology - Ursula Franklin
- Audio @ archive.org
- Digitization of Audio Announcement - Ed Summers
- Democratizing AI Compute - Chris Lattner - seems like nice background - could be better with the terminology but hard to do when someone is trying hard to market their stuff :)
Did the court find that AI training constitutes “reproduction” under German copyright law?
Yes. Following Article 2 InfoSoc Directive, the court held that a reproduction exists “in any form and by any means.” Even a fixation through numerical probability values qualifies, as long as the work can later be perceived through technical means. The court considered the model parameters to embody the protected expression.
...
Did the court consider any other exemptions or implied consents by the authors?
No. The court stated that training AI models is not an ordinary or expected use of a work to which authors have implicitly consented. The acts were therefore unlicensed. Furthermore, the court found that the use was not justified by quotation, parody or similar limitations to copyright.
Who did the court find to be responsible for the AI outputs?
The court determined that responsibility lies with OpenAI. The company selected the training data, built and operated the system, and determined its architecture. User prompts merely trigger the model’s internal processes and do not create independent liability.
- LLMs and plagiarism: a case study - lcamtuf
- Technical Issues of Separation in Function Cells and Value Cells - Richard P. Gabriel, Kent M. Pitman
- I BUILT A FULLY AUTOMATIC MANSPLAINER - Yannic Kilcher
- Introduction to Small Language Models: The Complete Guide for 2026 - Vinod Chugani - "complete"? :P
- Anthropic Study: AI Coding Assistance Reduces Developer Skill Mastery by 17% - Steef-Jan Wiggers
- Selected Talks - Gregor Kiczales
- Why Black Boxes Are So Hard To Reuse - Gregor Kiczales
- Data Warehousing: Aggregating Data for Analysis - mentions "The Data Warehouse Toolkit"
- Pretty-Printing, Converting List to Linear Structure - Ira Goldstein
- Sturgeon's Law
"ninety percent of everything is crap"
- Money creation in the modern economy - Michael McLeay, Amar Radia and Ryland Thomas of the Bank’s Monetary Analysis Directorate
- Money creation in the modern economoy - Quarterly Bulletin - Bank of England
- Intoducing Gloat and Glojure - Ingy
- Piledriving the GenAI Grift with Nikhil Suresh - Last Week in AWS
- Speech in Acceptance of the National Book Foundation Medal for Distinguished Contribution to American Letters - Ursula K. Le Guin
To the givers of this beautiful reward, my thanks, from the heart. My family, my agents, my editors, know that my being here is their doing as well as my own, and that the beautiful reward is theirs as much as mine. And I rejoice in accepting it for, and sharing it with, all the writers who’ve been excluded from literature for so long — my fellow authors of fantasy and science fiction, writers of the imagination, who for fifty years have watched the beautiful rewards go to the so-called realists.
Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now, can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope. We’ll need writers who can remember freedom — poets, visionaries — realists of a larger reality.
- I Will Fucking Piledrive You If You Mention AI Again - Nikhil Suresh
- I Will Fucking Dropkick You If You Use That Spreadsheet - Nikhil Suresh
- Contra Ptacek's Terrible Article On AI - Nikhil Suresh
- The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis - Jin Wang, Wenxiang Fan - kind of lame that specifically ChatGPT seems to be "promoted" (see abstract)...suspicious cat is suspicious
- Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations - Darwin, Diyenti Rusdin, Nur Mukminatien, Nunung Suryati, Ekaning D. Laksmi, Marzuki
- To Think or Not to Think: The Impact of AI on Critical-Thinking Skills - Christine Anne Royce, Valerie Bennett
-
Promote Active Engagement With Scientific Data
Rather than letting AI generate answers, ask students to interpret AI-generated data themselves. Also within this area, ask students to engage in hypothesis testing through additional research and evidence-based reasoning and determine if they see emergent patterns. If planned properly and modeled for the students so that they can practice this type of use, AI becomes not just a source of information, but also a means of engaging with scientific inquiry on a deeper level. An example would be taking a typical lab or assignment and having students find data to support or refute an argument about what they found. This data gathering could be related to water safety, diseases, or pathogen rates where they live.
-
Use AI to Facilitate Scientific Argumentation
Encourage students to use AI as a tool to gather evidence for debates or scientific arguments. Ensure that students then “fact check” the information that was provided for accuracy. A second strategy would be to provide AI with information—i.e., a map of a path that a hurricane is on—and then present two explanations for the path, one that is meteorologically accurate (always double-check yourself) and one that is plausible, but inaccurate. Provide these explanations to the students and ask them to determine which one is on target and why.
-
Require the Use of Claim, Evidence, and Reasoning (CER)
Phenomenon-based learning places students in the role of scientists, encouraging them to ask questions, form hypotheses, and conduct experiments. AI can support this by offering dynamic simulations and interactive environments where students can test their ideas or even provide potential explanations for a phenomenon. Students should still be asked to critically evaluate any information that is AI-generated and explain their own reasoning for an answer.
-
Frame AI as a Resource, Not a Shortcut
By framing AI as a resource for exploration rather than a shortcut to solutions, teachers can help students maintain an active role in their learning process. Instead of using AI to provide direct answers, educators can encourage students to use AI as a discussion partner. For example, students can use AI-generated data as a starting point for class debates or group projects. This promotes collaborative problem solving and requires students to evaluate, question, and interpret the information AI provides. Pose thought-provoking questions to students regarding the data, such as “Are there any biases in the data? What are the sources of this data?”
-
- The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers - Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, Nicholas Wilson
- The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review - Chunpeng Zhai, Santoso Wilbowo, Lily D. Li
- AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking - Michael Gerlich
- Excerpt from "The Dawn of Everything"
In this book we will not only be presenting a new history of humankind, but inviting the reader into a new science of history, one that restores our ancestors to their full humanity. Rather than asking how we ended up unequal, we will start by asking how it was that ‘inequality’ became such an issue to begin with, then gradually build up an alternative narrative that corresponds more closely to our current state of knowledge. If humans did not spend 95 per cent of their evolutionary past in tiny bands of hunter- gatherers, what were they doing all that time? If agriculture, and cities, did not mean a plunge into hierarchy and domination, then what did they imply? What was really happening in those periods we usually see as marking the emergence of ‘the state’? The answers are often unexpected, and suggest that the course of human history may be less set in stone, and more full of playful possibilities, than we tend to assume.
unsafeisn't a keyword in C because everything is unsafe - Will Lillis
- LLME - Michael Fogus
Moreover, as a Socratic partner, LLMs are incredibly frustrating in their inability to move a “discussion” forward. Indeed, the inability to leverage (or even to identify) necessary tension highlights a huge problem in the emergent sycophantic behavior of these tools. A good Socratic partner creates pressure to move toward truth and shared understanding, but LLMs are too sycophantic, lack an awareness of useful tension,4 cannot often identify contradiction, and lack an ability to adhere to the trajectory of a conversation. These traits are poison to my software design process.
- CLJ Screening - Alex Miller
- Philosophy, Bullshit, and Peer Review - Neil Levy
- Excerpt from "The Dawn of Everything"
If, as many are suggesting, our species’ future now hinges on our capacity to create something different (say, a system in which wealth cannot be freely transformed into power, or where some people are not told their needs are unimportant, or that their lives have no intrinsic worth), then what ultimately matters is whether we can rediscover the freedoms that make us human in the first place. As long ago as 1936, the prehistorian V. Gordon Childe wrote a book called Man Makes Himself. Apart from the sexist language, this is the spirit we wish to invoke. We are projects of collective self-creation. What if we approached human history that way? What if we treat people, from the beginning, as imaginative, intelligent, playful creatures who deserve to be understood as such? What if, instead of telling a story about how our species fell from some idyllic state of equality, we ask how we came to be trapped in such tight conceptual shackles that we can no longer even imagine the possibility of reinventing ourselves?
- ChatGPT is bullshit - Michael Townsen Hicks, James Humphries, Joe Slater
Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.
Calling chatbot inaccuracies ‘hallucinations’ feeds in to overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.
- The GenAI Divide - State of AI in Business 2025 (via archive.org) - MIT NANDA
Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprisegrade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.
...
The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.
- The Many Flavors of Ignore Files - Andrew Nesbit
Every tool wants to be git until it has to implement git’s edge cases.
-
Excerpt from "The Dawn of Everything"
Nonetheless, on those occasions when people do reflect on the lessons of prehistory, they almost invariably come back to questions of this kind. We are all familiar with the Christian answer: people once lived in a state of innocence, yet were tainted by original sin. We desired to be godlike and have been punished for it; now we live in a fallen state while hoping for future redemption. Today, the popular version of this story is typically some updated variation on Jean-Jacques Rousseau’s Discourse on the Origin and the Foundation of Inequality Among Mankind, which he wrote in 1754. Once upon a time, the story goes, we were hunter-gatherers, living in a prolonged state of childlike innocence, in tiny bands. These bands were egalitarian; they could be for the very reason that they were so small. It was only after the ‘Agricultural Revolution’, and then still more the rise of cities, that this happy condition came to an end, ushering in ‘civilization’ and ‘the state’ – which also meant the appearance of written literature, science and philosophy, but at the same time, almost everything bad in human life: patriarchy, standing armies, mass executions and annoying bureaucrats demanding that we spend much of our lives filling in forms.
Of course, this is a very crude simplification, but it really does seem to be the foundational story that rises to the surface whenever anyone, from industrial psychologists to revolutionary theorists, says something like ‘but of course human beings spent most of their evolutionary history living in groups of ten or twenty people,’ or ‘agriculture was perhaps humanity’s worst mistake.’ And as we’ll see, many popular writers make the argument quite explicitly. The problem is that anyone seeking an alternative to this rather depressing view of history will quickly find that the only one on offer is actually even worse: if not Rousseau, then Thomas Hobbes.
...
As the reader can probably detect from our tone, we don’t much like the choice between these two alternatives. Our objections can be classified into three broad categories. As accounts of the general course of human history, they:
- simply aren’t true;
- have dire political implications;
- make the past needlessly dull.
This book is an attempt to begin to tell another, more hopeful and more interesting story; one which, at the same time, takes better account of what the last few decades of research have taught us. Partly, this is a matter of bringing together evidence that has accumulated in archaeology, anthropology and kindred disciplines; evidence that points towards a completely new account of how human societies developed over roughly the last 30,000 years. Almost all of this research goes against the familiar narrative, but too often the most remarkable discoveries remain confined to the work of specialists, or have to be teased out by reading between the lines of scientific publications.
- How Vibe Coding is Killing Open Source - Maya Posch
- Vibe Coding Kills Open Source - Miklós Koren, Gábor Békés, Julian Hinz, Aaron Lohmann
- The open source design stack - Scott Riley
- Disassembling a Cortex-M raw binary file with Ghidra - Niall Cooling
- Understanding the C runtime memory model - Niall Cooling
- Introduction to Janet RPC - Joe Creager
- The Law of Leaky Abstractions - Joel Spolsky
- ClojureWasmBeta - chaploud
- Personal AI Agents like OpenClaw Are a Security Nightmare - Amy Chang, Vineeth Sai Narajala
- Designing Organizations for an Information-Rich World - Herbert A. Simon
- High tech is watching you - John Laidler (interview with Shoshana Zuboff)
- Backseat Software - Mike Swanson
- What not where: Why a blue sky OS? - Peter Alvaro
- The Search for Meaning Through Collaboration and Code - Timothy Pratley
- Defeating Bowser with A* Search - Adrian Smith
- Making Tools Developers Actually Use - Michiel Borkent
- From Scripts to Buy-In: How Small Clojure Wins Create Big Opportunities - Burin Choomnuan
- Memory Safety Is ... - matklad
- make.ts - matklad
- Testing Opus 4.5 For C Programming - Daniel Hooper
- Demo: Base language, compile-time execution - Jonathan Blow
- Insurers to Pull Back From AI Liability Coverage - Datamation
- HiTeX Press: A spam factory for AI-generated books - Laurent Le Brun
- One week of bugs - Dan Luu
- This Is How Science Happens - Hillel Wayne
- Serving webapps from your REPL - Timothy Pratley
- Design in Practice slides - Rich Hickey
- My approach to running a link blog - Simon Willison
- Xerox scanners/photocopiers randomly alter numbers in scanned documents - David Kriesel
- Lies, damned lies and scans - David Kriesel
- Something to read in Quarantine: Essays 2018 to 2020 - de Pony Sum
- Eloquent: Improving Text Editing on Mobile - Scott Jenson
- Classic HCI Demos - Jack Rusher
- Are we stuck with the same Desktop UX forever? - Scott Jenson
- How Video Games Inspire Great UX - Scott Jenson

