@xthezealot
xthezealot / lyra.txt
Last active December 15, 2025 19:28
Lyra - AI Prompt Optimization Specialist
You are Lyra, a master-level AI prompt optimization specialist. Your mission: transform any user input into
precision-crafted prompts that unlock AI's full potential across all platforms.
## THE 4-D METHODOLOGY
### 1. DECONSTRUCT
- Extract core intent, key entities, and context
- Identify output requirements and constraints
- Map what's provided vs. what's missing
@sgoedecke
sgoedecke / _CODEX_GITHUB_MODELS.md
Last active October 22, 2025 09:32
Drop-in Codex AI agent with GitHub Models

This is a drop-in, zero-config Actions harness for OpenAI's Codex agent. It uses GitHub Models for inference, so you don't need to set up any secrets; copy-pasting the action into your repo should work as-is.

You may need to go into your repository settings and check the "Allow GitHub Actions to create and approve pull requests" checkbox.

To use it, open an issue in your repo with [codex] in the issue title.

Note: I've updated this to work with the latest version of Codex (the Rust one). If you're using the Python one, you'll have to go back to a previous version of this gist.
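As a concrete example (assuming the gh CLI is installed and authenticated; the issue text here is made up), triggering the agent from the command line might look like:

  # An issue whose title contains [codex] kicks off the action
  gh issue create --title "[codex] Refactor the config loader" --body "Please split config parsing into its own module."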

@rcalixte
rcalixte / libqt6.md
Created March 7, 2025 13:24
Qt 6 for C & Zig

Hi all,

As the title suggests, I've been working on Qt 6 bindings and wrappers for C and Zig. These can be thought of as a fork of the recently released Qt bindings for Go. Not to bury the lede: currently only 64-bit variants of Linux and FreeBSD are supported, until interested folks on other 64-bit platforms can test and validate. In theory, any platform natively supported by both Qt and Zig's build system could be supported by these libraries. I'll try to keep this brief (and fail), because there is a lot to unpack here. This list can be considered an order of preference for how I'm asking folks to interact with the projects in the near term:

  1. Consumption: Use the libraries! Head to the library repository for whichever target language you prefer and skip to the Building section. Install the dependencies, look over the build options, and then head to the examples repository. Clone the examples repository and kick off the build. While the build is running (and your comput…
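For the consumption path, the workflow is roughly the following sketch (the repository URL and directory name are placeholders, not the projects' real ones; substitute the examples repository for your target language):

  # Placeholder URL - use the actual examples repository
  git clone https://github.com/example/qt6-zig-examples
  cd qt6-zig-examples
  # Zig's build system drives everything; see the Building section for real options
  zig build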
@Thomascountz
Thomascountz / ijq.sh
Last active November 12, 2025 18:02
(Yet another) interactive jq, but it's a bash script using fzf
#!/usr/bin/env bash
set -euo pipefail
if [ "${1:-}" = "--help" ]; then
  cat << EOF
Usage: ijq [filename]
A wrapper around jq that uses fzf to interactively build jq filters.
EOF
  exit 0
fi
# Core of the script (a sketch, not the author's full code): use the fzf query
# as the jq filter and live-preview the result against the input file
echo '' | fzf --disabled --print-query --preview "jq --color-output {q} ${1:-}"
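A typical invocation (assuming the script is saved on your PATH as ijq) might look like:

  curl -s https://api.github.com/meta > meta.json
  ijq meta.json

In the sketch above, typing a filter such as .hooks into the fzf prompt previews the result live, and --print-query echoes the final filter on exit so it can be reused with plain jq.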
@max-itzpapalotl
max-itzpapalotl / overview.md
Last active March 17, 2024 22:03
"Rust = Future<C++>" overview

In this channel I introduce Rust for people who already know C++. The videos are meant to be short and cover only a single topic. Each video comes with a GitHub gist, which contains all the code and commands for copy/paste, so that you can easily try things out. Furthermore, there are links to the excellent Rust documentation.

This is still a work in progress, as you can see in the table below.

Caution

This guide is out of date; follow the new guide here: https://flipper.wiki/mifareclassic/

MIFARE Classic

Here are the steps to follow in order to read your cards. Your goal is to find as many keys as possible. The keys unlock sectors of your card so the Flipper can read them - you must have the physical card. Once you read enough sectors, you can use an emulated or cloned card at the original card reader to unlock it (sometimes even without finding all of the keys!).

Important

A major update is coming in the first release following OFW 1.0.0 (ETA: mid to late September), which overhauls and simplifies this process: Status

@rain-1
rain-1 / llama-home.md
Last active June 24, 2025 11:12
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2, and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna; I think those versions are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which allows you to pick an arbitrary number of the transformer layers to be run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d. (A build-and-run sketch follows below.)
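Putting the steps together, here is a sketch of the full build and run (the model filename and -ngl value are my assumptions, not the gist's; tune the layer count to what fits in 6 GB of VRAM):

  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d
  # Build with cuBLAS support so layers can be offloaded to the GPU
  make LLAMA_CUBLAS=1
  # -ngl (--n-gpu-layers) controls how many transformer layers run on the GPU;
  # start low and raise it until VRAM is nearly full (model path is hypothetical)
  ./main -m models/llama-13b-q4_0.bin -ngl 18 -p "Hello"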
@kconner
kconner / macOS Internals.md
Last active November 6, 2025 09:43
macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@jordangarrison
jordangarrison / example-sshrc-dotfile.sh
Last active October 3, 2024 20:23
Bring your dotfiles with you over ssh with sshrc
# Save this file as $HOME/.sshrc
# This makes vim use the .vimrc shipped in your .sshrc.d, and
# appends any scripts in your .sshrc.d to your PATH on login
echo "Hi $USER!"
echo "You are on host $(hostname -f)"
# use sshrc .vimrc instead of system
export VIMINIT="let \$MYVIMRC='$SSHHOME/.sshrc.d/.vimrc' | source \$MYVIMRC"
# Path edits
# Add the $SSHHOME/.sshrc.d scripts to the PATH
export PATH="$SSHHOME/.sshrc.d:$PATH"  # assumed completion of the truncated gist
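Once the file is in place, connect with sshrc instead of ssh; it copies your ~/.sshrc (and ~/.sshrc.d) to the remote host for the duration of the session:

  sshrc user@remote-host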