First, get the virtual disk (VHDX) from the Hyper-V platform.
Use a Linux machine with libvirt installed.
Do not forget to install the UEFI firmware (OVMF):
sudo apt install ovmf
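With OVMF in place, the disk can be converted and the guest imported. A minimal sketch, assuming the exported disk is named disk.vhdx and using placeholder VM sizing (none of these names or values come from the original setup):

# Convert the Hyper-V disk to qcow2 so libvirt/QEMU can manage it natively
qemu-img convert -p -f vhdx -O qcow2 disk.vhdx disk.qcow2

# Import the converted disk as a new UEFI guest (uses the OVMF firmware installed above)
virt-install \
  --name migrated-vm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=disk.qcow2,format=qcow2,bus=sata \
  --import \
  --boot uefi \
  --osinfo detect=on,require=off \
  --network network=default

The SATA bus keeps the guest bootable even if it has no virtio drivers yet; older virt-install releases use --os-variant instead of --osinfo.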
#!/bin/bash

# Function to display usage information
usage() {
  echo "Usage: $0 /path/to/input.mp4 [ /path/to/output_directory ]"
  exit 1
}

# Check if at least one argument (input file) is provided
if [ $# -lt 1 ]; then
  usage
fi
#!/bin/sh
# Source: http://kubernetes.io/docs/getting-started-guides/kubeadm
set -e

# Use "." rather than "source" so the script stays POSIX-sh compatible
. /etc/lsb-release

if [ "$DISTRIB_RELEASE" != "20.04" ]; then
    echo "#################################"
    echo "############ WARNING ############"
    echo "#################################"
    echo "This script has only been tested on Ubuntu 20.04."
fi
Windows Registry Editor Version 5.00

; MaxOutstandingConnections = 0xbb8 (3000 decimal) raises the limit on pending Terminal Server (RDP) connections
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
"MaxOutstandingConnections"=dword:00000bb8
# Reference: https://www.exclamationlabs.com/blog/continuous-deployment-to-npm-using-gitlab-ci/

# GitLab CI runs jobs in Docker containers, so we need to specify the
# image version. This is useful because we're free to use multiple
# Node versions; the images come from the Docker registry.

# Uses Node.js v9.4.0
image: node:9.4.0

# And cache the installed dependencies as well.
cache:
  paths:
    - node_modules/
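A hypothetical deploy job in the spirit of the referenced post (NPM_TOKEN is assumed to be a protected CI/CD variable; the job name and tag-only trigger are placeholders):

deploy:
  stage: deploy
  script:
    # Authenticate against the npm registry with the CI variable, then publish
    - echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc
    - npm publish
  only:
    - tags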
# Drain and delete the nodes (repeat for each node in the cluster)
kubectl drain kubenode1 --delete-local-data --force --ignore-daemonsets
kubectl delete node kubenode1

# Reset the deployment
sudo kubeadm reset

# On each node
## Reset the nodes and weave
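# A minimal sketch of the per-node cleanup, assuming the weave CLI script is
# installed on the node (the CNI path below is the usual default, not taken
# from the original):
sudo kubeadm reset
sudo weave reset
sudo rm -rf /etc/cni/net.d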
# Note: field numbers below assume nginx's default "combined" log format.
# get total requests by status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
# get top requesters by IP (with a reverse DNS lookup of each address)
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head | awk -v OFS='\t' '{"host " $2 | getline ip; print $0, ip}'
# get top requesters by user agent
awk -F'"' '{print $6}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
# get top requests by URL
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
We have a setup that I assume is quite common: a publicly accessible Nimbus running Storm UI. The worker
nodes can only be accessed from the Nimbus (via the LAN). All the nodes have internal DNS names (e.g.
node.lan.example.com), which are set in the configuration files; they use these DNS names to reach each
other. The Nimbus has an external DNS name (storm.example.com) for public access. The Nimbus's UI is
behind an Nginx proxy, which provides HTTP auth and HTTPS.
Because of this setup, the logviewer links in the UI do not work. To fix this, we employ an elaborate hack shown in the conf file below. It uses ngx_http_substitutions_filter_module to rewrite content returned by the Storm UI and some complicated URL rewrite tricks to proxy the workers' logviewers through the Nimbus.
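A minimal sketch of that idea, with assumed hostnames and ports (Storm UI on 127.0.0.1:8080, logviewers on port 8000, worker names like worker1.lan.example.com; the real conf is more involved than this):

# Rewrite logviewer links in the UI's responses so they point back at this proxy
location / {
    proxy_pass http://127.0.0.1:8080;
    subs_filter_types text/html application/json;
    subs_filter 'http://([a-z0-9-]+)\.lan\.example\.com:8000/' '/logproxy/$1/' ir;
}

# Proxy /logproxy/<node>/... on to that worker's logviewer over the LAN
location ~ ^/logproxy/(?<node>[a-z0-9-]+)/(?<rest>.*)$ {
    resolver 127.0.0.53;    # proxy_pass with a variable needs a resolver; adjust to your DNS
    proxy_pass http://$node.lan.example.com:8000/$rest$is_args$args;
}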
# === Optimized my.cnf configuration for MySQL/MariaDB (on cPanel/WHM servers) ===
#
# by Fotis Evangelou, developer of Engintron (engintron.com)
#
# ~ Updated September 2024 ~
#
#
# The settings provided below are a starting point for an 8-16 GB RAM server with 4-8 CPU cores.
# If you have different resources available, adjust the values accordingly to keep CPU, RAM & disk I/O usage in check.
#
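The tuned values themselves are outside this excerpt; purely as an illustration of the kind of knobs the comment refers to (the numbers below are assumptions for a dedicated 16 GB box, not the author's settings):

[mysqld]
# Give InnoDB a large share of RAM on a dedicated database server
innodb_buffer_pool_size = 8G
innodb_log_file_size    = 1G
innodb_flush_method     = O_DIRECT
# Cap concurrent connections to what the CPU and RAM can realistically serve
max_connections         = 150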