@ochafik's gists
Last updated: 2025-12-01 00:35 UTC | Total: 273 gists | (61 public, 212 private)
So yeah, the last stable build of OpenSCAD is over 4 years old as of this writing.
Nightly (Dev) builds of OpenSCAD have seen 100x speed improvements since that release, when enabling the Manifold backend (see this presentation).
If you can install a Dev build instead of the stable version, you should! (Then pass --backend=manifold if using the command line, or switch the renderer from the slow CGAL backend to Manifold in the UI settings if using the UI; see instructions here.)
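For example, a command-line render with the fast backend looks like this (the file names are just placeholders):

openscad --backend=manifold -o model.stl model.scad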
If you can't install the Dev build, well, you kinda can (for command-line use only, and if you have Docker): put the openscad script below in your PATH, and you're good to go (make sure it's executable w/ chmod +x path/to/openscad).
Oh, and on macOS, some projects that use OpenSCAD assume its binary is at /Applications/OpenSCAD.app/Contents/MacOS/OpenSCAD.
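The wrapper script itself isn't reproduced in this capture; here is a minimal sketch of what such a wrapper can look like, assuming the openscad/openscad:dev Docker image (the image name/tag is an assumption; point it at whichever nightly image you actually use):

#!/usr/bin/env bash
# Hypothetical wrapper: run a nightly OpenSCAD build via Docker (image tag is an assumption).
# Mounts the current directory so relative .scad / .stl paths keep working.
exec docker run --rm -i -v "$PWD:/work" -w /work \
  openscad/openscad:dev openscad --backend=manifold "$@"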
#!/usr/bin/env node
/*
  Gets the file under $OLLAMA_HOME/models/blobs/ for the application/vnd.ollama.image.model key in the manifest
  - Note that metadata of modelId:modelTag is stored under $OLLAMA_HOME/models/manifests/registry.ollama.ai/library/${modelId}/${modelTag}
  - You'll need to get the Jinja template from the original model using llama.cpp's scripts/get_chat_template.py script:
      ollama pull qwen2.5-coder:7b
      llama-server -m $( ./get_ollama_gguf.js qwen2.5-coder:7b ) -fa --jinja --chat-template-file <( ./scripts/get_chat_template.py Qwen/Qwen2.5-Coder-7B-Instruct-GGUF tool_use )
  Initially shared here: https://github.com/ggml-org/llama.cpp/pull/9639#issuecomment-2704208342
*/
const fs = require('fs');
// Body reconstructed from the notes above; $OLLAMA_HOME is assumed to default to ~/.ollama.
const [modelId, modelTag = 'latest'] = process.argv[2].split(':');
const home = process.env.OLLAMA_HOME ?? `${process.env.HOME}/.ollama`;
const manifest = JSON.parse(fs.readFileSync(`${home}/models/manifests/registry.ollama.ai/library/${modelId}/${modelTag}`, 'utf8'));
// Manifest digests look like sha256:<hex>; the blobs on disk are named sha256-<hex>.
const layer = manifest.layers.find(l => l.mediaType === 'application/vnd.ollama.image.model');
console.log(`${home}/models/blobs/${layer.digest.replace(':', '-')}`);
Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches.
Break down the solution into clear steps within <step> tags. Start with a 20-step budget, requesting more for complex problems if needed.
Use <count> tags after each step to show the remaining budget. Stop when reaching 0.
Continuously adjust your reasoning based on intermediate results and reflections, adapting your strategy as you progress.
Regularly evaluate progress using <reflection> tags. Be critical and honest about your reasoning process.
Assign a quality score between 0.0 and 1.0 using <reward> tags after each reflection. Use this to guide your approach:
  0.8+: Continue current approach
  0.5-0.7: Consider minor adjustments
  Below 0.5: Seriously consider backtracking and trying a different approach
Good question! I am collecting human data on how quantization affects outputs. See here for more information: ggml-org/llama.cpp#5962
In the meantime, use the largest quantization that fully fits in your GPU. If you can comfortably fit Q4_K_S, try using a model with more parameters.
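A rough way to check "fits": compare the GGUF file size (plus some headroom for the KV cache and compute buffers) against your free VRAM. The file name below is just a placeholder:

ls -lh qwen2.5-coder-7b-q4_k_s.gguf
nvidia-smi --query-gpu=memory.total,memory.used --format=csv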
See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix
#!/usr/bin/env python3
import subprocess
import json
import os
from pathlib import Path
import requests
from requests.compat import urljoin
#!/usr/bin/env bash
# --slave /usr/bin/$1 $1 /usr/bin/$1-\${version} \\
function register_clang_version {
    local version=$1
    local priority=$2
    # The original list of alternatives is truncated here; clang is shown as one example --slave entry.
    update-alternatives \
        --install /usr/bin/llvm-config llvm-config /usr/bin/llvm-config-${version} ${priority} \
        --slave /usr/bin/clang clang /usr/bin/clang-${version}
}

# Usage, e.g.: register_clang_version 17 100
# make sure to replace `<hash>` with your gist's hash
git clone https://gist.github.com/<hash>.git   # with https
git clone git@gist.github.com:<hash>.git       # or with ssh

Any resemblance to the "passive soliciting" affair would be purely coincidental.
Let's not kid ourselves: in public policy, it's not the intention that counts, it's the results.
Well then, what about the intentions at play in the burkini affair, and the likely results?
Roughly speaking, I see 3 major profiles among our well-intentioned anti-burkini friends:
// Run with dart:
//   dart async_example.dart
import 'dart:async';

f(g) async {
  try {
    final value = await g();
    // Use interpolation: 'Result: ' + (value + 1) would fail at runtime (String + int).
    return 'Result: ${value + 1}';
  } catch (e) {
    if (e == 'rethrow me') rethrow;
    return 'Caught: $e';
  }
}

// The gist is truncated here; a minimal main exercising both paths:
main() async {
  print(await f(() async => 1));             // Result: 2
  print(await f(() async => throw 'oops'));  // Caught: oops
}