This can save some disk space, especially if you have many large git repos on your hard drive. It's meant to be run from a parent directory where all your code projects are kept, and it skips directories that aren't git repos.
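A minimal sketch of the idea (the actual script isn't shown here, so the function name and output format are hypothetical):

```sh
# hypothetical sketch of the script described above: run `git gc` in each
# immediate child directory, skipping anything that isn't a git repo
gc_repos() {
    for d in */; do
        if [ -d "$d/.git" ]; then
            echo "gc: $d"
            (cd "$d" && git gc --quiet)
        else
            echo "skip: $d"
        fi
    done
}
```

Run `gc_repos` from the parent directory that holds all your checkouts.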
Testing the helm-invenio chart locally using Minikube.
The values file uses the Front Matter starter image `ghcr.io/front-matter/invenio-rdm-starter:v12.1.0.0`, but that image uses gunicorn as its web server instead of uwsgi, so the web pod crashes. The Invenio demo images are incompatible with Minikube on Apple Silicon. The most complete test would use a locally built Invenio image, or our images in Artifact Registry with an image pull secret.
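A rough sketch of one local test loop; the chart path (`./invenio`), release name, and values file name here are assumptions, not taken from the repo:

```sh
minikube start
# lint the chart with chart-testing, then install it into the local cluster
ct lint --charts ./invenio
helm install invenio ./invenio --values values-overrides.yaml
# watch for the web pod to come up (or crash-loop, per the notes above)
kubectl get pods --watch
```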
```sh
brew install minikube kubectl helm chart-testing
```

```js
courses = require('./course_section_data_AP_Spring_2025.json')
// the course codes are not consistent so we can't use those
// some say senior project, senior thesis, senior game project, thesis project, and "fashion studio 5"
srprojects = courses.filter(c => c.status != "Preliminary")
    .filter(c => c.section_title.toLowerCase().match(/senior|thesis/))
    .filter(c => {
        // some section codes may not match the pattern; fall back to NaN instead of throwing
        let level = parseInt((c.section_code.match(/-(\d{4})-/) || [])[1])
        return level < 5000 && level >= 4000
    })
```
On the Manage Activities admin page there's a count of how many courses use a particular activity, and you can click the count to run a search that retrieves all those courses. There's a button to show all the courses on one page, but no useful way to filter further or extract data. The JS above can be pasted into your browser's JavaScript console to create a CSV of the courses listed on the module list page.
We use parentheticals for the semester in our course titles, like "20th Century Fashion (2022SP)", so this attempts to extract those, though the regex is primitive and will misbehave on courses with a second set of parentheses. That's usually easy to clean up manually.
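The same extraction can be sketched as a shell function (hypothetical, not the code used above). Because the leading `.*(` is greedy, a title with two parentheticals keeps only the last one, which mirrors the caveat above:

```sh
# hedged sketch: pull the "(2022SP)" semester parenthetical out of a course title
extract_semester() {
    echo "$1" | sed -n 's/.*(\([^)]*\)).*/\1/p'
}
extract_semester "20th Century Fashion (2022SP)"   # prints 2022SP
```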
```bash
#!/usr/bin/env bash
# create a csv of path, file size, last access time for all files in a directory tree
# usage: file-tree-csv.sh /path/to/directory > output.csv
FILEDIR="${1:-/opt/moodledata/filedir}"
cd "$FILEDIR" || { echo "Moodle filedir not found, exiting" >&2; exit 1; }
echo "path,size,atime"
# shellcheck disable=SC2044
# all Moodle file names & paths only contain hexadecimal chars
for file in $(find . -type f); do
    size=$(stat -c %s "$file")
    atime=$(stat -c %X "$file")
    echo "$file,$size,$atime"
done
```
```python
#!/usr/bin/env python
# run from root of cca/vault_migration project over _all_ VAULT metadata JSON like
# `poetry run python missing-names.py vm/*.json`
import csv
import json
import os
import sys

import xmltodict
```
```fish
# backup the mdl_data_records table before we modify it
gcloud sql export sql mysql-prod-1 gs://cca-manual-db-dumps/(dt)-mdl_data_records.sql -d m_prod1 -t mdl_data_records
```
```fish
#!/usr/bin/env fish
# download ALL live vault items to item.json and metadata.xml files
# 47283 total items, we can download 50 at a time
set total (eq search -l 1 | jq '.available')
set length 50
# quote the expression so floor() applies to the quotient, not just $total
set pages (math "floor($total / $length)")
for i in (seq 0 $pages)
    set start (math $i \* $length)
    echo "Downloading items $start to" (math $start + $length)
    eq search -l $length --info metadata --start $start > .tmp/$i.json
end
```
```js
// check if these sections appear on EQUELLA search results page
const sections = [
    'GELCT-6700-2',
    'LITPA-2000-10',
    'WRITE-6000-2',
]
console.log(`Checking for ${sections.length} section codes`)
// return list of missing sections
const missing = sections.filter(s => {
    // hypothetical completion: look for the code anywhere in the page text
    return !document.body.innerText.includes(s)
})
console.log(missing)
```
```fish
#!/usr/bin/env fish
# used for Art Practical site
# fill in credentials
set USER username
set PASS password
set COLLECTION 15633
# destination files
set JSONFILE data.json
```