@anniethiessen
Last active April 2, 2024 22:30
Script file to build and publish an AWS Lambda layer.
#!/usr/bin/env bash
: '
Script file to build and publish an AWS Lambda layer.
Ensure the AWS CLI is installed and configured and the Docker engine is running,
define the script variables, then execute the script.
Accepts -v (verbose output) and -q (quiet output) arguments.
Prerequisites:
- aws-cli 1.29.62 -> https://pypi.org/project/awscli/
- docker
Script Variables:
- CONDA_ENV: The conda environment that has the required packages installed.
Optional if the current environment already has them.
- ASSET_FILE_NAME: Name of the asset file or directory (must be unique among the asset file names of other layers).
Python files will be copied to the layer build. Required, no default.
- REQUIREMENT_FILE_NAME: Name of the requirement file. It will be copied to the layer package,
and its requirements will be installed into the layer build.
Optional, defaults to "requirements.txt".
- README_FILE_NAME: Name of README file. Will be copied to layer package.
Optional, defaults to "README.rst".
- LICENSE_FILE_NAME: Name of license file. Will be copied to layer package
and text will be passed to layer license.
Optional, defaults to "LICENSE".
- DOCKER_IMAGE: Docker image.
The "build" function builds the layer using this image.
Optional, defaults to "lambci/lambda:build-python3.6".
- DOCKER_RUNTIME: Docker runtime; must match the runtime of "DOCKER_IMAGE"
and be one of "LAYER_COMPATIBLE_RUNTIMES".
The "build" function builds the layer using this runtime.
Optional, defaults to "python3.6".
- ARCHIVE_S3_BUCKET: S3 bucket to which archive files will be uploaded.
The "check" function checks whether it exists and offers to create it if necessary.
Required, no default.
- ARCHIVE_S3_PREFIX: S3 bucket prefix to which the archive file should be uploaded.
Optional, defaults to "layers".
- ARCHIVE_FILE_NAME: File name the archive file should be given (should have a zip extension).
Optional, defaults to "layer.zip".
- LAYER_NAME: Name of published layer. Required, no default.
- LAYER_DESCRIPTION: Description of published layer.
Optional, no default.
- LAYER_LICENSE: Short version of license of published layer.
Optional, no default.
- LAYER_COMPATIBLE_RUNTIMES: Compatible runtimes of the published layer.
Only a single runtime is supported by this script at this time.
Optional, defaults to "python3.6".
- LAYER_COMPATIBLE_ARCHITECTURES: Compatible architectures of the published layer.
Only a single architecture is supported by this script at this time.
Optional, defaults to "x86_64".
- PERFORM_PREPARE: Whether to perform the prepare step.
The "prepare" function replaces any previous work directory with a new one
and copies the "REQUIREMENT_FILE_NAME", "LICENSE_FILE_NAME",
"README_FILE_NAME", and "ASSET_FILE_NAME" files to it.
Optional, defaults to "true".
- PERFORM_BUILD: Whether to perform the build step.
The "build" function uses Docker to build and compress the layer image.
Optional, defaults to "true".
- PERFORM_UPLOAD: Whether to perform the upload step.
The "upload" function uploads the compressed layer image to S3.
Optional, defaults to "true".
- PERFORM_PUBLISH: Whether to perform the publish step.
The "publish" function publishes the layer to AWS Lambda.
Optional, defaults to "true".
- PERFORM_CLEAN: Whether to perform the clean-up step.
The "clean" function removes the working directory.
Optional, defaults to "true".
Usage Example:
CONDA_ENV=xxx
ARCHIVE_S3_BUCKET=xxx
wget -cO - "<this_file_url>" > "temp.sh"
chmod +x "temp.sh"
source "temp.sh" "$@"
rm -rf "temp.sh"
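Skip-Step Example (hypothetical): to republish an archive that is already in S3,
set the following before running the Usage Example steps above:
PERFORM_PREPARE=false
PERFORM_BUILD=false
PERFORM_UPLOAD=false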
'
#--------------------------------------------
#------------------ CONSTANTS ---------------
#--------------------------------------------
CONDA_ENV="${CONDA_ENV}"
ASSET_FILE_NAME="${ASSET_FILE_NAME}"
REQUIREMENT_FILE_NAME="${REQUIREMENT_FILE_NAME:-requirements.txt}"
README_FILE_NAME="${README_FILE_NAME:-README.rst}"
LICENSE_FILE_NAME="${LICENSE_FILE_NAME:-LICENSE}"
DOCKER_RUNTIME="${DOCKER_RUNTIME:-python3.6}"
DOCKER_IMAGE="${DOCKER_IMAGE:-lambci/lambda:build-python3.6}"
ARCHIVE_S3_BUCKET="${ARCHIVE_S3_BUCKET}"
ARCHIVE_S3_PREFIX="${ARCHIVE_S3_PREFIX:-layers}"
ARCHIVE_FILE_NAME="${ARCHIVE_FILE_NAME:-layer.zip}"
LAYER_NAME="${LAYER_NAME}"
LAYER_DESCRIPTION="${LAYER_DESCRIPTION}"
LAYER_LICENSE="${LAYER_LICENSE}"
LAYER_COMPATIBLE_RUNTIMES="${LAYER_COMPATIBLE_RUNTIMES:-python3.6}"
LAYER_COMPATIBLE_ARCHITECTURES="${LAYER_COMPATIBLE_ARCHITECTURES:-x86_64}"
PERFORM_PREPARE=${PERFORM_PREPARE:-true}
PERFORM_BUILD=${PERFORM_BUILD:-true}
PERFORM_UPLOAD=${PERFORM_UPLOAD:-true}
PERFORM_PUBLISH=${PERFORM_PUBLISH:-true}
PERFORM_CLEAN=${PERFORM_CLEAN:-true}
#------- DO NOT EDIT BELOW THIS LINE --------
FORMAT_SCRIPT_URL="https://gist.githubusercontent.com/anniethiessen/efb6bc0e52ccfc8b330aa41364b53e97/raw/0012edc1f009a36d196f03f09fda68e70691860b/shell_script_essentials.sh"
FORMAT_SCRIPT_NAME="shell_script_essentials.sh"
WORK_DIR="temp"
BUILD_DIR="python"
#--------------------------------------------
#--------------- FUNCTIONS ------------------
#--------------------------------------------
function run_format_script {
wget -cO - "${FORMAT_SCRIPT_URL}" > "${FORMAT_SCRIPT_NAME}"
chmod +x "${FORMAT_SCRIPT_NAME}"
source "${FORMAT_SCRIPT_NAME}" "$@"
rm -rf "${FORMAT_SCRIPT_NAME}"
}
function prepare () {
rm \
-rf "${WORK_DIR}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Previous work directory removed."
else
output_error_message "Previous work directory removal error." ${PROMPT_VERBOSE}
exit_script
fi
mkdir \
-p "${WORK_DIR}/${BUILD_DIR}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Work directory created."
else
output_error_message "Work directory creation error." ${PROMPT_VERBOSE}
exit_script
fi
cp "${REQUIREMENT_FILE_NAME}" "${WORK_DIR}/${REQUIREMENT_FILE_NAME}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Requirement file copied to work directory."
else
output_error_message "Requirement file copy error." ${PROMPT_VERBOSE}
exit_script
fi
cp "${LICENSE_FILE_NAME}" "${WORK_DIR}/${LICENSE_FILE_NAME}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "License file copied to work directory."
else
output_error_message "License file copy error." ${PROMPT_VERBOSE}
exit_script
fi
cp "${README_FILE_NAME}" "${WORK_DIR}/${README_FILE_NAME}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "README file copied to work directory."
else
output_error_message "README file copy error." ${PROMPT_VERBOSE}
exit_script
fi
if [[ -d ${ASSET_FILE_NAME} ]]; then
mkdir \
-p "${WORK_DIR}/${BUILD_DIR}/${ASSET_FILE_NAME}" \
&> ${OUTPUT}
for asset_module in "${ASSET_FILE_NAME}"/*.py; do
cp "${asset_module}" "${WORK_DIR}/${BUILD_DIR}/${ASSET_FILE_NAME}/$(basename "${asset_module}")" \
&> ${OUTPUT}
done
elif [[ -f ${ASSET_FILE_NAME} ]]; then
cp "${ASSET_FILE_NAME}" "${WORK_DIR}/${BUILD_DIR}/${ASSET_FILE_NAME}" \
&> ${OUTPUT}
else
output_error_message "Asset file/package does not exist." ${PROMPT_VERBOSE}
exit_script
fi
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Asset file/package copied to build directory."
else
output_error_message "Assets file/package copy error." ${PROMPT_VERBOSE}
exit_script
fi
}
function build () {
cd "${WORK_DIR}" || exit_script
docker run \
--rm \
-v "$(pwd):/var/task:z" "${DOCKER_IMAGE}" "${DOCKER_RUNTIME}" \
-m pip --isolated install \
-t "${BUILD_DIR}" \
-r "${REQUIREMENT_FILE_NAME}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Archive built."
else
output_error_message "Archive build error." ${PROMPT_VERBOSE}
exit_script
fi
zip -r "${ARCHIVE_FILE_NAME}" . \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Archive compressed."
else
output_error_message "Archive compression error." ${PROMPT_VERBOSE}
exit_script
fi
cd ..
}
function upload () {
aws s3 cp \
"${WORK_DIR}/${ARCHIVE_FILE_NAME}" \
"s3://${ARCHIVE_S3_BUCKET}/${ARCHIVE_S3_PREFIX}/${ARCHIVE_FILE_NAME}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Archive uploaded to S3."
else
output_error_message "Archive upload to S3 error." ${PROMPT_VERBOSE}
fi
}
function publish () {
local compatible_runtimes compatible_architectures
read -r -a compatible_runtimes <<< "${LAYER_COMPATIBLE_RUNTIMES}"
read -r -a compatible_architectures <<< "${LAYER_COMPATIBLE_ARCHITECTURES}"
aws lambda publish-layer-version \
--layer-name "${LAYER_NAME}" \
--description "${LAYER_DESCRIPTION}" \
--license-info "${LAYER_LICENSE}" \
--compatible-runtimes "${compatible_runtimes[@]}" \
--compatible-architectures "${compatible_architectures[@]}" \
--content "S3Bucket=${ARCHIVE_S3_BUCKET},S3Key=${ARCHIVE_S3_PREFIX}/${ARCHIVE_FILE_NAME}" \
&> ${VERBOSE_OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Build published to Lambda."
else
output_error_message "Build publish to Lambda error." ${PROMPT_VERBOSE}
fi
}
function clean () {
rm -rf "${WORK_DIR}" \
&> ${OUTPUT}
local retval=$?
if [[ ${retval} -eq 0 ]]; then
output_success_message "Work directory removed."
else
output_error_message "Work directory removal error." ${PROMPT_VERBOSE}
fi
}
#--------------------------------------------
#------------------ MAIN --------------------
#--------------------------------------------
run_format_script "$@"
if [ -n "${CONDA_ENV}" ]; then
eval "$(conda shell.bash hook)"
conda activate "${CONDA_ENV}"
output_success_message "Conda environment activated."
PYTHON_VERSION=$( python --version 2>&1 )
AWSCLI_VERSION=$( aws --version 2>&1 )
DOCKER_VERSION=$( docker --version 2>&1 )
output_info_message "Using ${PYTHON_VERSION}, ${AWSCLI_VERSION}, and ${DOCKER_VERSION}"
fi
output_header_message "----------------------------------------"
output_header_message "[1/5] PREPARE"
output_header_message "preparing files for archive ..."
output_header_message "----------------------------------------"
if [ "${PERFORM_PREPARE}" = true ] ; then
prepare
else output_warning_message "Preparation skipped."; fi
output_header_message "----------------------------------------"
output_header_message "[2/5] BUILD"
output_header_message "building archive with Docker ..."
output_header_message "----------------------------------------"
if [ "${PERFORM_BUILD}" = true ] ; then
build
else output_warning_message "Build skipped."; fi
output_header_message "----------------------------------------"
output_header_message "[3/5] UPLOAD"
output_header_message "uploading archive to S3 ..."
output_header_message "----------------------------------------"
if [ "${PERFORM_UPLOAD}" = true ] ; then
upload
else output_warning_message "Upload skipped."; fi
output_header_message "----------------------------------------"
output_header_message "[4/5] PUBLISH"
output_header_message "publishing layer version to Lambda ..."
output_header_message "----------------------------------------"
if [ "${PERFORM_PUBLISH}" = true ] ; then
publish
else output_warning_message "Publish skipped."; fi
output_header_message "----------------------------------------"
output_header_message "[5/5] CLEAN"
output_header_message "cleaning local artifacts ..."
output_header_message "----------------------------------------"
if [ "${PERFORM_CLEAN}" = true ] ; then
clean
else output_warning_message "Clean-up skipped."; fi
anniethiessen commented Sep 13, 2023

Known Issues:

v1:

  • no check function as described in docstring
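
A minimal sketch of what that "check" function might look like (hypothetical, not part of the script; it assumes `aws s3api head-bucket` / `create-bucket` and uses plain `echo` instead of the `output_*` helpers from `shell_script_essentials.sh`):

```shell
# Hypothetical "check" sketch: verify ARCHIVE_S3_BUCKET exists and
# offer to create it, as the docstring describes.
check () {
    if aws s3api head-bucket --bucket "${ARCHIVE_S3_BUCKET}" &> /dev/null; then
        echo "Archive bucket exists."
    else
        read -r -p "Bucket not found. Create ${ARCHIVE_S3_BUCKET}? [y/N] " answer
        if [[ "${answer}" =~ ^[Yy]$ ]]; then
            # create-bucket needs a LocationConstraint outside us-east-1;
            # omitted here to keep the sketch minimal.
            aws s3api create-bucket --bucket "${ARCHIVE_S3_BUCKET}" &> /dev/null \
                && echo "Archive bucket created." \
                || exit 1
        else
            exit 1
        fi
    fi
}
```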

anniethiessen commented Oct 15, 2023

TODO:

anniethiessen commented Oct 26, 2023

Change Log:

v1: Initial

v2:
-Added CONDA_ENV variable: Conda environment is activated if defined

v3:
-Replaced DOCKER_BUILD with DOCKER_IMAGE and DOCKER_RUNTIME
-Runtime default updated from python3.7 to python3.6
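
Since v3 split the build image and runtime, overriding the python3.6 defaults takes three coordinated variables; a hedged example (the image name here is an assumption — use whichever build image matches your target runtime):

```shell
# Hypothetical override for a newer runtime. Per the docstring,
# DOCKER_RUNTIME must match the image and be one of the compatible runtimes.
DOCKER_IMAGE="public.ecr.aws/sam/build-python3.9"
DOCKER_RUNTIME="python3.9"
LAYER_COMPATIBLE_RUNTIMES="python3.9"
```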
