
@sogaiu
Last active March 16, 2026 12:46

2026-04-09

2026-04-08

2026-04-07

2026-04-06

2026-04-05

2026-04-04

2026-04-03

2026-04-02

2026-04-01

2026-03-31

2026-03-30

2026-03-29

  • Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager - Adnan Khan

    Cline is an open-source AI coding tool that integrates with developer IDEs such as VSCode and its many forks. Users can download Cline through the VS Code Marketplace or OpenVSX. Since Cline is an open-source project, the team uses GitHub for development. On December 21st, 2025, Cline maintainers added an AI agent to triage issues created on the repository. This AI agent ran within a GitHub Actions workflow with broad privileges. You might be able to guess where this is heading…

    Between Dec 21st, 2025 and Feb 9th, 2026, a prompt injection vulnerability in Cline’s (now removed) Claude Issue Triage workflow allowed any attacker with a GitHub account to compromise production Cline releases on both the Visual Studio Code Marketplace and OpenVSX and publish malware to millions of developers!

2026-03-28

2026-03-27

  • Democratizing AI Compute - Chris Lattner - seems like nice background - could be better with the terminology but hard to do when someone is trying hard to market their stuff :)

2026-03-26

2026-03-25

2026-03-24

2026-03-23

2026-03-22

2026-03-21

2026-03-20

2026-03-19

Did the court find that AI training constitutes “reproduction” under German copyright law?

Yes. Following Article 2 InfoSoc Directive, the court held that a reproduction exists “in any form and by any means.” Even a fixation through numerical probability values qualifies, as long as the work can later be perceived through technical means. The court considered the model parameters to embody the protected expression.

...

Did the court consider any other exemptions or implied consents by the authors?

No. The court stated that training AI models is not an ordinary or expected use of a work to which authors have implicitly consented. The acts were therefore unlicensed. Furthermore, the court found that the use was not justified by quotation, parody or similar limitations to copyright.

Who did the court find to be responsible for the AI outputs?

The court determined that responsibility lies with OpenAI. The company selected the training data, built and operated the system, and determined its architecture. User prompts merely trigger the model’s internal processes and do not create independent liability.

2026-03-18

2026-03-17

2026-03-16

2026-03-15

2026-03-14

2026-03-13

2026-03-12

2026-03-11

2026-03-10

2026-03-09

2026-03-08

2026-03-07

2026-03-06

2026-03-05

2026-03-04

  • Speech in Acceptance of the National Book Foundation Medal for Distinguished Contribution to American Letters - Ursula K. Le Guin

    To the givers of this beautiful reward, my thanks, from the heart. My family, my agents, my editors, know that my being here is their doing as well as my own, and that the beautiful reward is theirs as much as mine. And I rejoice in accepting it for, and sharing it with, all the writers who’ve been excluded from literature for so long — my fellow authors of fantasy and science fiction, writers of the imagination, who for fifty years have watched the beautiful rewards go to the so-called realists.

    Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now, can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope. We’ll need writers who can remember freedom — poets, visionaries — realists of a larger reality.

2026-03-03

2026-03-02

2026-03-01


2026-02-28

2026-02-27

2026-02-26

2026-02-25

  • To Think or Not to Think: The Impact of AI on Critical-Thinking Skills - Christine Anne Royce, Valerie Bennett
    1. Promote Active Engagement With Scientific Data

      Rather than letting AI generate answers, ask students to interpret AI-generated data themselves. Also within this area, ask students to engage in hypothesis testing through additional research and evidence-based reasoning and determine if they see emergent patterns. If planned properly and modeled for the students so that they can practice this type of use, AI becomes not just a source of information, but also a means of engaging with scientific inquiry on a deeper level. An example would be taking a typical lab or assignment and having students find data to support or refute an argument about what they found. This data gathering could be related to water safety, diseases, or pathogen rates where they live.

    2. Use AI to Facilitate Scientific Argumentation

      Encourage students to use AI as a tool to gather evidence for debates or scientific arguments. Ensure that students then “fact check” the information that was provided for accuracy. A second strategy would be to provide AI with information—i.e., a map of a path that a hurricane is on—and then present two explanations for the path, one that is meteorologically accurate (always double-check yourself) and one that is plausible, but inaccurate. Provide these explanations to the students and ask them to determine which one is on target and why.

    3. Require the Use of Claim, Evidence, and Reasoning (CER)

      Phenomenon-based learning places students in the role of scientists, encouraging them to ask questions, form hypotheses, and conduct experiments. AI can support this by offering dynamic simulations and interactive environments where students can test their ideas or even provide potential explanations for a phenomenon. Students should still be asked to critically evaluate any information that is AI-generated and explain their own reasoning for an answer.

    4. Frame AI as a Resource, Not a Shortcut

      By framing AI as a resource for exploration rather than a shortcut to solutions, teachers can help students maintain an active role in their learning process. Instead of using AI to provide direct answers, educators can encourage students to use AI as a discussion partner. For example, students can use AI-generated data as a starting point for class debates or group projects. This promotes collaborative problem solving and requires students to evaluate, question, and interpret the information AI provides. Pose thought-provoking questions to students regarding the data, such as “Are there any biases in the data? What are the sources of this data?”

2026-02-24

2026-02-23

2026-02-22

2026-02-21

  • Excerpt from "The Dawn of Everything"

    In this book we will not only be presenting a new history of humankind, but inviting the reader into a new science of history, one that restores our ancestors to their full humanity. Rather than asking how we ended up unequal, we will start by asking how it was that ‘inequality’ became such an issue to begin with, then gradually build up an alternative narrative that corresponds more closely to our current state of knowledge. If humans did not spend 95 per cent of their evolutionary past in tiny bands of hunter-gatherers, what were they doing all that time? If agriculture, and cities, did not mean a plunge into hierarchy and domination, then what did they imply? What was really happening in those periods we usually see as marking the emergence of ‘the state’? The answers are often unexpected, and suggest that the course of human history may be less set in stone, and more full of playful possibilities, than we tend to assume.

2026-02-20

2026-02-19

2026-02-18

2026-02-17

  • LLME - Michael Fogus

    Moreover, as a Socratic partner, LLMs are incredibly frustrating in their inability to move a “discussion” forward. Indeed, the inability to leverage (or even to identify) necessary tension highlights a huge problem in the emergent sycophantic behavior of these tools. A good Socratic partner creates pressure to move toward truth and shared understanding, but LLMs are too sycophantic, lack an awareness of useful tension, cannot often identify contradiction, and lack an ability to adhere to the trajectory of a conversation. These traits are poison to my software design process.

2026-02-16

2026-02-15

2026-02-14

  • Excerpt from "The Dawn of Everything"

    If, as many are suggesting, our species’ future now hinges on our capacity to create something different (say, a system in which wealth cannot be freely transformed into power, or where some people are not told their needs are unimportant, or that their lives have no intrinsic worth), then what ultimately matters is whether we can rediscover the freedoms that make us human in the first place. As long ago as 1936, the prehistorian V. Gordon Childe wrote a book called Man Makes Himself. Apart from the sexist language, this is the spirit we wish to invoke. We are projects of collective self-creation. What if we approached human history that way? What if we treat people, from the beginning, as imaginative, intelligent, playful creatures who deserve to be understood as such? What if, instead of telling a story about how our species fell from some idyllic state of equality, we ask how we came to be trapped in such tight conceptual shackles that we can no longer even imagine the possibility of reinventing ourselves?

2026-02-13

  • ChatGPT is bullshit - Michael Townsen Hicks, James Humphries, Joe Slater

    Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.

    Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

2026-02-12

  • The GenAI Divide - State of AI in Business 2025 (via archive.org) - MIT NANDA

    Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.

    Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.

    ...

    The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.

2026-02-11

2026-02-10

  • Excerpt from "The Dawn of Everything"

    Nonetheless, on those occasions when people do reflect on the lessons of prehistory, they almost invariably come back to questions of this kind. We are all familiar with the Christian answer: people once lived in a state of innocence, yet were tainted by original sin. We desired to be godlike and have been punished for it; now we live in a fallen state while hoping for future redemption. Today, the popular version of this story is typically some updated variation on Jean-Jacques Rousseau’s Discourse on the Origin and the Foundation of Inequality Among Mankind, which he wrote in 1754. Once upon a time, the story goes, we were hunter-gatherers, living in a prolonged state of childlike innocence, in tiny bands. These bands were egalitarian; they could be for the very reason that they were so small. It was only after the ‘Agricultural Revolution’, and then still more the rise of cities, that this happy condition came to an end, ushering in ‘civilization’ and ‘the state’ – which also meant the appearance of written literature, science and philosophy, but at the same time, almost everything bad in human life: patriarchy, standing armies, mass executions and annoying bureaucrats demanding that we spend much of our lives filling in forms.

    Of course, this is a very crude simplification, but it really does seem to be the foundational story that rises to the surface whenever anyone, from industrial psychologists to revolutionary theorists, says something like ‘but of course human beings spent most of their evolutionary history living in groups of ten or twenty people,’ or ‘agriculture was perhaps humanity’s worst mistake.’ And as we’ll see, many popular writers make the argument quite explicitly. The problem is that anyone seeking an alternative to this rather depressing view of history will quickly find that the only one on offer is actually even worse: if not Rousseau, then Thomas Hobbes.

    ...

    As the reader can probably detect from our tone, we don’t much like the choice between these two alternatives. Our objections can be classified into three broad categories. As accounts of the general course of human history, they:

    1. simply aren’t true;
    2. have dire political implications;
    3. make the past needlessly dull.

    This book is an attempt to begin to tell another, more hopeful and more interesting story; one which, at the same time, takes better account of what the last few decades of research have taught us. Partly, this is a matter of bringing together evidence that has accumulated in archaeology, anthropology and kindred disciplines; evidence that points towards a completely new account of how human societies developed over roughly the last 30,000 years. Almost all of this research goes against the familiar narrative, but too often the most remarkable discoveries remain confined to the work of specialists, or have to be teased out by reading between the lines of scientific publications.

2026-02-09

2026-02-08

2026-02-07

2026-02-06

2026-02-05

2026-02-04

2026-02-03

  • Quicksort example (image)
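
    The entry above points to a quicksort illustration. As a rough sketch (my own, not taken from the linked image), the scheme such diagrams usually depict — pick a pivot, partition, recurse — looks like this in Python:

    ```python
    def quicksort(xs):
        """Sort a list: pick a pivot, partition into smaller/equal/larger,
        then recursively sort the two outer partitions."""
        if len(xs) <= 1:
            return xs
        pivot = xs[len(xs) // 2]
        smaller = [x for x in xs if x < pivot]
        equal = [x for x in xs if x == pivot]
        larger = [x for x in xs if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)
    ```

    This list-building version trades the in-place partitioning of classic quicksort for clarity, which is typically what diagram versions emphasize anyway.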

2026-02-02

  • Merge sort algorithm diagram (SVG image)
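
    The entry above points to a merge sort diagram. As a rough sketch (my own, not taken from the linked SVG), the split-then-merge process such diagrams show can be written in Python as:

    ```python
    def merge_sort(xs):
        """Sort a list by splitting it in half, sorting each half
        recursively, and merging the two sorted halves."""
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

    def merge(a, b):
        """Merge two sorted lists into one sorted list."""
        out = []
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]
    ```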

2026-02-01


2026-01-31

2026-01-30

2026-01-29

2026-01-28

2026-01-27

2026-01-26

2026-01-25

2026-01-24

2026-01-23

2026-01-22

2026-01-21

2026-01-20

2026-01-19

2026-01-18

2026-01-17

2026-01-16

2026-01-15

2026-01-14

2026-01-13

2026-01-12

2026-01-11

2026-01-10

2026-01-09

2026-01-08

2026-01-07

2026-01-06

2026-01-05

2026-01-04

2026-01-03

2026-01-02

2026-01-01
