A simple script supporting Cloud Composer administration
I moved this script to repo: https://github.com/PolideaInternal/airflow-pianka-sh
Hi.
In order to make this script work with a Composer environment behind a private IP, you have to:
1. Make sure that it is possible to use IAP for SSH tunneling in your network (you need to set up a firewall rule, see here; a sketch of such a rule follows below). Your account also needs the `iap.tunnelInstances.accessViaIAP` permission.
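    For reference, a minimal firewall rule could look like the following (a sketch, not from the original gist; `allow-iap-ssh` and `my-composer-network` are hypothetical names, while `35.235.240.0/20` is the documented IAP TCP forwarding range):

    ```bash
    # Allow SSH (tcp:22) from IAP's TCP forwarding range into the
    # network that hosts the GKE nodes.
    gcloud compute firewall-rules create allow-iap-ssh \
        --network=my-composer-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:22 \
        --source-ranges=35.235.240.0/20
    ```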
2. Collect credentials using the private master endpoint:

    ```bash
    gcloud container clusters get-credentials "${COMPOSER_GKE_CLUSTER_NAME}" --zone "${COMPOSER_GKE_CLUSTER_ZONE}" --internal-ip &>/dev/null
    ```

3. Start a process in the background that will open dynamic port forwarding to a node of your cluster:
    ```bash
    gcloud compute ssh "${COMPOSER_GKE_NODE_NAME}" --zone "${COMPOSER_GKE_CLUSTER_ZONE}" --quiet --ssh-flag=-N --ssh-flag=-vvv --ssh-flag="-D 127.0.0.1:${SOCKS_PORT}"
    ```

    Where:

    * `${COMPOSER_GKE_NODE_NAME}` is any node belonging to `${COMPOSER_GKE_CLUSTER_NAME}`
    * `${SOCKS_PORT}` is any free port on your local machine

    By default, this command will use the SSH key called `google_compute_engine` located at `~/.ssh/`, and will generate one if it does not exist. If you already have this key, make sure it has no passphrase, otherwise you will have to specify it every time you connect. You can also specify a path to another key with the `--ssh-key-file` flag. Your account will need the `compute.instances.get` and `compute.instances.setMetadata` permissions, as well as the `Service Account User` role on Composer's service account.

    Since you do not have access to `kubectl` yet, in order to find a node you will need to collect the instance group from the node pool and then the node from the instance group (`compute.instanceGroups.list` permission needed), for example as sketched below:
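    (A sketch, not from the original gist; it assumes a zonal cluster whose instance groups live in `${COMPOSER_GKE_CLUSTER_ZONE}`.)

    ```bash
    # URL of the first instance group backing the cluster's node pool.
    INSTANCE_GROUP_URL=$(gcloud container clusters describe "${COMPOSER_GKE_CLUSTER_NAME}" \
        --zone "${COMPOSER_GKE_CLUSTER_ZONE}" \
        --format="value(instanceGroupUrls[0])")

    # First instance (node) in that group; strip the URLs down to bare names.
    NODE_URL=$(gcloud compute instance-groups list-instances "${INSTANCE_GROUP_URL##*/}" \
        --zone "${COMPOSER_GKE_CLUSTER_ZONE}" \
        --format="value(instance)" | head -n 1)
    COMPOSER_GKE_NODE_NAME="${NODE_URL##*/}"
    ```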
At this point you should be able to execute some `kubectl` commands by specifying the `https_proxy` variable, for example:

```bash
https_proxy=socks5://localhost:${SOCKS_PORT} kubectl get pod -A
```

However, you will not be able to execute `kubectl exec` commands, as they do not support SOCKS proxies (see this PR: kubernetes/kubernetes#84205). To make it work, you will need to use an HTTP proxy instead, or some way of converting HTTP requests to SOCKS requests (see this superuser thread for possible solutions: https://superuser.com/questions/280129/http-proxy-over-ssh-not-socks).
For example, you can use the `http-proxy-to-socks` nodejs tool (https://github.com/oyyd/http-proxy-to-socks). After installing it, simply run another background process:

```bash
hpts -s 127.0.0.1:${SOCKS_PORT} -p ${HTTP_PORT}
```

Where `${HTTP_PORT}` is another free port on your local machine.
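(If you do not have `hpts` yet: assuming the package name from the linked repository, it can be installed globally via npm.)

```bash
npm install -g http-proxy-to-socks
```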
Then execute `kubectl` commands with the following `https_proxy` variable set, for example:

```bash
https_proxy=http://localhost:${HTTP_PORT} kubectl exec --namespace="${COMPOSER_GKE_NAMESPACE_NAME}" -t "${COMPOSER_GKE_WORKER_NAME}" --container airflow-worker -- echo "hello"
```
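For convenience, here is a rough end-to-end sketch (not from the original gist) that ties the two background processes and the final command together with ordinary shell job control; it assumes all the variables above are already set:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Dynamic (SOCKS) port forwarding to a cluster node via IAP.
gcloud compute ssh "${COMPOSER_GKE_NODE_NAME}" --zone "${COMPOSER_GKE_CLUSTER_ZONE}" \
    --quiet --ssh-flag=-N --ssh-flag="-D 127.0.0.1:${SOCKS_PORT}" &
SSH_PID=$!

# HTTP-to-SOCKS bridge so that "kubectl exec" works too.
hpts -s "127.0.0.1:${SOCKS_PORT}" -p "${HTTP_PORT}" &
HPTS_PID=$!

# Give both tunnels a moment to come up.
sleep 5

https_proxy="http://localhost:${HTTP_PORT}" kubectl exec \
    --namespace="${COMPOSER_GKE_NAMESPACE_NAME}" -t "${COMPOSER_GKE_WORKER_NAME}" \
    --container airflow-worker -- echo "hello"

# Tear the tunnels down when done.
kill "${HPTS_PID}" "${SSH_PID}"
```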