Goals: add links that give clear, reasonable explanations of how things work. No hype and, if possible, no vendor content. Practical first-hand accounts of running models in prod are eagerly sought.
```bash
#!/bin/bash
##############################################################################################################################
# USAGE:   sbatch myscript.sh <RUN_NAME> python <script.py> [args...]
# EXAMPLE: sbatch myscript.sh my_experiment_v1 python train.py --lr 0.01
##############################################################################################################################
#SBATCH --job-name=Likelihoods
#SBATCH --cpus-per-task=8
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --hint=nomultithread
```
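Only the resource-request header is shown above. A minimal sketch of the body implied by the USAGE comment (the run-name handling is an assumption, not copied from the original script) might look like:

```bash
# Sketch only: the first argument is a run name for bookkeeping, the rest is
# the command to launch (e.g. "python train.py --lr 0.01").
RUN_NAME="$1"
shift

echo "Starting run: ${RUN_NAME}"
srun "$@"
```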
```ini
[Unit]
Description=Set NVIDIA power limit above default

[Service]
Type=oneshot
ExecStartPre=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -pl 275
```
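This excerpt has no `[Install]` section; presumably the full unit adds one (for example `WantedBy=multi-user.target`) so it can be enabled at boot. Assuming it is saved under a hypothetical name such as `/etc/systemd/system/nvidia-power-limit.service`, applying and checking it by hand would look roughly like:

```bash
# Hypothetical unit name; adjust to match the actual file on disk.
sudo systemctl daemon-reload
sudo systemctl start nvidia-power-limit.service

# Confirm the new limit is active.
nvidia-smi -q -d POWER | grep -i 'power limit'
```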
```awk
#!/usr/bin/awk -f
# This program is a copy of guff, a plot device. https://github.com/silentbicycle/guff
# My copy here is written in awk instead of C and has no compelling benefit.
# Public domain. @thingskatedid
# Run as awk -v x=xyz ... or with environment variables for configuration?
# Assumptions: the data is evenly spaced along the x-axis
# TODO: moving average
```
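Only the header comments are shown. Assuming the script reads a single column of y-values on stdin (a guess based on the evenly-spaced-x note above, not confirmed by the source), an invocation might look like:

```bash
# Hypothetical usage: plot roughly one period of a sine wave.
# y-values arrive one per line; the x spacing is implied by line order.
seq 0 63 | awk '{ print sin($1 / 10) }' | awk -f guff.awk
```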
This is a short post that explains how to write a high-performance matrix multiplication program on modern processors. In this tutorial I will use a single core of the Skylake-client CPU with AVX2, but the principles in this post also apply to other processors with different instruction sets (such as AVX512).
Matrix multiplication is a mathematical operation that defines the product of two matrices.
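For reference, the entry-wise definition: if $A$ is $m \times k$ and $B$ is $k \times n$, then $C = AB$ is the $m \times n$ matrix with

$$C_{ij} = \sum_{p=1}^{k} A_{ip} B_{pj}.$$

The linked tutorial is about computing exactly this sum quickly with vector instructions such as AVX2.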
These are some simple bash functions and scripts for making CSV/TSV files prettier on the command line; see http://stefaanlippens.net/pretty-csv.html for more information.
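A minimal sketch of the idea, assuming `column` and `less` are available (the function names and flags are illustrative, not copied from the linked post):

```bash
# Pretty-print a CSV on the terminal: align columns, chop long lines instead
# of wrapping, and exit immediately if everything fits on one screen.
pretty_csv() {
    column -t -s ',' "$@" | less -F -S -X
}

# TSV variant: same pipeline with a tab separator.
pretty_tsv() {
    column -t -s $'\t' "$@" | less -F -S -X
}
```

Usage: `pretty_csv data.csv`, or pipe into it (`some-command | pretty_csv`).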