#!/bin/bash
# Simple MySQL Shell Backup Script - Direct equivalent to mysqldump version
# Usage: ./backup_mysqlsh_simple.sh [password]
BACKUP_DIR="/path/to/backup" # Change as needed
mkdir -p "$BACKUP_DIR" || exit 1
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_SUBDIR="${BACKUP_DIR}/backup_${DATE}"
# Get non-system databases as JavaScript array
DB_ARRAY=$(mysql -u root -p"$1" -NBe "
SELECT CONCAT('[\"', GROUP_CONCAT(schema_name SEPARATOR '\",\"'), '\"]')
FROM information_schema.schemata
WHERE schema_name NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');" 2>/dev/null) || {
echo "Failed to list databases";
exit 1;
}
echo "Backing up databases to: $BACKUP_SUBDIR"
# Execute MySQL Shell dump
mysqlsh --user=root --password="$1" --no-wizard -e "
util.dumpSchemas($DB_ARRAY, '$BACKUP_SUBDIR', {
  consistent: true,    // equivalent to --single-transaction
  triggers: true,      // equivalent to --triggers
  routines: true,      // equivalent to --routines
  events: true,        // equivalent to --events
  compression: 'gzip', // built-in compression
  showProgress: true
})" || { echo "Backup failed"; exit 1; }
# Create tarball (MySQL Shell creates a directory, not a single file)
tar -czf "${BACKUP_SUBDIR}.tar.gz" -C "$BACKUP_DIR" "$(basename "$BACKUP_SUBDIR")"
rm -rf "$BACKUP_SUBDIR"
echo "Backup completed: ${BACKUP_SUBDIR}.tar.gz"
echo "Size: $(du -h ${BACKUP_SUBDIR}.tar.gz | cut -f1)"
# To restore:
# tar -xzf ${BACKUP_SUBDIR}.tar.gz
# mysqlsh --user=root --password -e "util.loadDump('backup_${DATE}')"
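
Note: util.loadDump() loads data with LOAD DATA LOCAL INFILE, so the target server generally needs local_infile enabled before the restore step above will work. A minimal one-liner to turn it on, assuming you can connect as root on the target:

# Enable LOCAL INFILE loading on the target server (needed by util.loadDump)
mysql -u root -p -e "SET GLOBAL local_infile = ON;"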

Complete MySQL Export Guide - 100% Database Replication

Overview

This guide provides instructions for creating a complete 1:1 MySQL export that includes all databases, users with passwords, triggers, events, and procedures for testing environments.

Quick Start - Simple Complete Backup

The Essential Command

For a complete 1:1 copy of your MySQL server, this single command is sufficient:

mysqldump -u root -p \
  --all-databases \
  --routines \
  --triggers \
  --events \
  --single-transaction \
  --master-data=2 \
  > full_backup.sql

That's it! This command creates a complete backup including:

  • ✅ All databases
  • ✅ All users and their passwords (stored in the mysql database)
  • ✅ All privileges and grants
  • ✅ All stored procedures and functions
  • ✅ All triggers
  • ✅ All scheduled events
  • ✅ Binary log position (as a comment) for replication or point-in-time recovery

Quick Restore

To restore on your test server:

mysql -u root -p < full_backup.sql

What Gets Exported

Understanding --all-databases

The --all-databases flag exports every database on your server, including:

  • All user databases - Your application databases
  • The mysql database - Contains all user accounts, password hashes, and privileges
  • Other system databases - Like sys (if present)

Important: The mysql database contains these critical tables (a quick query to preview the stored accounts follows this list):

  • user - User accounts and password hashes
  • db - Database-level privileges
  • tables_priv - Table-level privileges
  • columns_priv - Column-level privileges
  • procs_priv - Procedure/function privileges
  • proxies_priv - Proxy privileges
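
A quick, read-only way to preview the accounts that will travel with the dump (any user and host values shown here should reappear on the restored server):

mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user ORDER BY user, host;"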

Key Options Explained

| Option | Purpose | Required? |
|---|---|---|
| --all-databases | Exports all databases including mysql (users/passwords) | Yes |
| --routines | Includes stored procedures and functions | Yes |
| --triggers | Includes all triggers | Yes |
| --events | Includes scheduled events | Yes |
| --single-transaction | Ensures consistent backup for InnoDB tables | Recommended |
| --master-data=2 | Adds binary log position as comment for replication/recovery | Recommended |

About --master-data=2

The --master-data=2 option adds the binary log filename and position as a comment in your dump file:

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154;

Why include it:

  • Replication setup - Know exactly where to start replication if you later make this test server a replica
  • Point-in-time recovery - Combined with binary logs, restore to any specific moment after the backup (sketched after the note below)
  • Documentation - Records the exact binary log position when the backup was taken
  • No downside - As a comment, it doesn't affect the restore process

Note: Using --master-data=2 (with =2) adds it as a comment only. Using --master-data=1 would execute the CHANGE MASTER command on restore, which you typically don't want for a test copy. On MySQL 8.0.26 and later, --master-data is deprecated in favor of the equivalent --source-data option; =2 behaves the same way there.
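
As a sketch of how that recorded position is used for point-in-time recovery, assuming placeholder values for the binary log name, start position, and stop time (take the real ones from the dump comment and your incident timeline):

# 1. Restore the full dump first
mysql -u root -p < full_backup.sql

# 2. Replay binary logs from the recorded position, stopping just before the
#    event you want to exclude (file, position, and datetime are placeholders)
mysqlbinlog --start-position=154 --stop-datetime="2024-01-15 10:00:00" \
  mysql-bin.000003 | mysql -u root -p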

Restoration Process

Basic Full Restore

# 1. Stop applications using the database (optional but recommended)
systemctl stop your-application

# 2. Restore the complete backup
mysql -u root -p < full_backup.sql

# 3. Restart applications
systemctl start your-application

Restore from Compressed Backup

# If you compressed your backup
gunzip -c full_backup.sql.gz | mysql -u root -p

Verification

Before Backup - Count Objects

mysql -u root -p -e "
SELECT 
    (SELECT COUNT(*) FROM information_schema.SCHEMATA) as 'Databases',
    (SELECT COUNT(*) FROM information_schema.TABLES) as 'Tables',
    (SELECT COUNT(*) FROM information_schema.ROUTINES) as 'Routines',
    (SELECT COUNT(*) FROM information_schema.TRIGGERS) as 'Triggers',
    (SELECT COUNT(*) FROM information_schema.EVENTS) as 'Events',
    (SELECT COUNT(*) FROM mysql.user) as 'Users';"

After Restore - Verify Counts Match

Run the same query on the restored database and compare the numbers.
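
One way to automate the comparison is to capture the counts from both servers and diff them. A minimal sketch, where source.example.com and test.example.com are placeholder hostnames:

QUERY="SELECT
    (SELECT COUNT(*) FROM information_schema.SCHEMATA) as 'Databases',
    (SELECT COUNT(*) FROM information_schema.TABLES) as 'Tables',
    (SELECT COUNT(*) FROM information_schema.ROUTINES) as 'Routines',
    (SELECT COUNT(*) FROM information_schema.TRIGGERS) as 'Triggers',
    (SELECT COUNT(*) FROM information_schema.EVENTS) as 'Events',
    (SELECT COUNT(*) FROM mysql.user) as 'Users';"

mysql -h source.example.com -u root -p -e "$QUERY" > counts_source.txt
mysql -h test.example.com -u root -p -e "$QUERY" > counts_test.txt
diff counts_source.txt counts_test.txt && echo "Object counts match"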

Test User Login

# Test that a specific user can still login
mysql -u someuser -p -e "SELECT 'User login successful';"

Advanced Options

Additional Useful Flags

For specific requirements, you might want to add these options:

mysqldump -u root -p \
  --all-databases \
  --routines \
  --triggers \
  --events \
  --single-transaction \
  --master-data=2 \
  --flush-privileges \
  --hex-blob \
  --add-drop-database \
  --add-drop-table \
  --order-by-primary \
  --tz-utc \
  > full_backup.sql

Note: bash does not allow a comment after a line-continuation backslash, so the extra flags are explained below rather than inline:

  • --flush-privileges - Adds FLUSH PRIVILEGES to the dump
  • --hex-blob - Safer handling of binary data
  • --add-drop-database - Adds DROP DATABASE before each CREATE DATABASE
  • --add-drop-table - Adds DROP TABLE before each CREATE TABLE
  • --order-by-primary - Dumps rows in primary-key order for consistent output
  • --tz-utc - Dumps TIMESTAMP values in UTC for consistent timezone handling

MySQL Shell Method (MySQL 8.0+)

For MySQL 8.0 and later, MySQL Shell provides a modern alternative:

# Start MySQL Shell
mysqlsh -u root -p

# In MySQL Shell, run:
util.dumpInstance("/path/to/backup/directory", {
    includeUsers: true,       # This is default true
    includeRoutines: true,
    includeEvents: true,
    includeTriggers: true,
    showProgress: true
})

# To restore:
util.loadDump("/path/to/backup/directory")
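
For unattended runs, the same dump can be driven non-interactively with -e instead of the interactive shell. A sketch, where /backup/instance_dump, the MYSQL_PASS variable, and the thread count are assumptions to adapt:

mysqlsh --user=root --password="$MYSQL_PASS" --no-wizard -e "
util.dumpInstance('/backup/instance_dump', {
    threads: 8,            // parallel dump workers
    compression: 'zstd',   // default; 'gzip' also available
    showProgress: false
})"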

Advantages of MySQL Shell:

  • Parallel processing (faster for large databases)
  • Better handling of modern MySQL features
  • Progress indication
  • More granular control

Key Differences Between mysqldump and MySQL Shell

| Feature/Option | mysqldump | MySQL Shell |
|---|---|---|
| Output format | Single SQL file | Directory with multiple files |
| Consistent backup | --single-transaction | consistent: true |
| Include triggers | --triggers | triggers: true |
| Include routines | --routines | routines: true |
| Include events | --events | events: true |
| Binary log position | --master-data=2 | Automatically captured in metadata |
| Compression | Pipe to gzip externally | compression: 'gzip' (built-in) |
| Data format | Text-based SQL statements | Compressed TSV data plus SQL DDL and JSON metadata |
| Restore speed | Slower (SQL replayed statement by statement) | Faster (parallel LOAD DATA) |
| Parallel processing | Single-threaded | Multi-threaded (threads: N) |
| Progress indication | No built-in progress | showProgress: true |
| Chunking large tables | Not available | Automatic chunking |
| Include users | --all-databases includes the mysql DB | users: true (dumpInstance, default) |
| Exclude schemas | No direct option (pass an explicit --databases list) | excludeSchemas: ['name1', 'name2'] |
| Character set | --default-character-set | defaultCharacterSet: 'utf8mb4' |
| File size | Larger (SQL text) | Smaller (better built-in compression) |
| Restore command | mysql -u root -p < dump.sql | util.loadDump('directory') |
| Version compatibility | Manual handling | compatibility: ['strip_restricted_grants'] |
| Memory usage | --quick for low memory | Handled automatically |
| Lock tables | --lock-tables=false | Handled automatically |

When to Export Users Separately

You might need to export users separately in these specific cases:

  1. Cross-version migrations - When MySQL versions differ significantly
  2. Selective restoration - When you want data but NOT users
  3. GTID conflicts - In replication setups with GTID enabled
  4. System table incompatibilities - Between major MySQL versions

If you need separate user export for these cases:

# Export just users and grants as SQL statements
mysql -u root -p -NBe "
SELECT CONCAT('CREATE USER IF NOT EXISTS \'', user, '\'@\'', host, '\' 
IDENTIFIED WITH \'', plugin, '\' AS \'', authentication_string, '\';') 
FROM mysql.user 
WHERE user NOT IN ('mysql.sys', 'mysql.session', 'mysql.infoschema');" > users.sql

mysql -u root -p -NBe "
SELECT CONCAT('SHOW GRANTS FOR \'', user, '\'@\'', host, '\';') 
FROM mysql.user 
WHERE user NOT IN ('mysql.sys', 'mysql.session', 'mysql.infoschema');" | \
mysql -u root -p -N | sed 's/$/;/g' > grants.sql
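
Applying those two files on the target is then just a matter of feeding them to mysql in order (users first, then grants):

mysql -u root -p < users.sql
mysql -u root -p < grants.sql
mysql -u root -p -e "FLUSH PRIVILEGES;"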

Compression for Large Databases

# Compress on the fly
mysqldump -u root -p \
  --all-databases \
  --routines \
  --triggers \
  --events \
  --single-transaction \
  --master-data=2 \
  | gzip > full_backup.sql.gz

# Restore compressed backup
gunzip -c full_backup.sql.gz | mysql -u root -p

Automated Backup Script

#!/bin/bash

# Simple automated backup script
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/mysql_backup_${DATE}.sql"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Create the backup (-p prompts for a password; for unattended runs, store credentials in ~/.my.cnf and drop -p)
mysqldump -u root -p \
  --all-databases \
  --routines \
  --triggers \
  --events \
  --single-transaction \
  --master-data=2 \
  > "$BACKUP_FILE"

# Compress it
gzip "$BACKUP_FILE"

echo "Backup completed: ${BACKUP_FILE}.gz"
echo "Size: $(du -h ${BACKUP_FILE}.gz | cut -f1)"

# Optional: Remove backups older than 7 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +7 -delete

Best Practices

Security

  1. Use .my.cnf for passwords instead of passing them on the command line:

    # ~/.my.cnf
    [client]
    user=root
    password=your_password

    Then lock the file down:

    chmod 600 ~/.my.cnf
  2. Encrypt sensitive backups (the matching restore pipeline is shown after this list):

    mysqldump [options] | gzip | gpg --symmetric --cipher-algo AES256 > backup.sql.gz.gpg
  3. Restrict file permissions:

    chmod 600 full_backup.sql
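
To restore a backup encrypted as in item 2, reverse the pipeline (gpg prompts for the passphrase):

gpg --decrypt backup.sql.gz.gpg | gunzip | mysql -u root -p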

Performance

  1. Use --single-transaction for InnoDB tables to avoid locking
  2. Use compression for large databases to save space
  3. Schedule backups during low-traffic periods
  4. The --quick flag (row-by-row retrieval, useful for very large tables) is already enabled by default as part of --opt

Reliability

  1. Always test restore on a separate test instance
  2. Verify backup completeness before relying on it
  3. Keep multiple backup generations
  4. Document MySQL versions for compatibility:
    mysql --version > mysql_version.txt

Troubleshooting

Common Issues

"Access denied" errors

Ensure the backup user has sufficient privileges. GRANT ALL covers everything; a narrower account needs at least read access plus the dump-related privileges on the second line:

GRANT ALL PRIVILEGES ON *.* TO 'backup_user'@'localhost';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;

Note: The --master-data=2 option requires RELOAD and REPLICATION CLIENT privileges.

"Packet too large" during restore

mysql -u root -p --max_allowed_packet=1024M < backup.sql
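
If the error persists, the server-side limit usually needs raising as well. A temporary, non-persistent bump (assuming a 1 GB ceiling is acceptable; new connections pick it up):

mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"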

"Unknown table 'COLUMN_STATISTICS'" (MySQL 8.0 → 5.7)

mysqldump --column-statistics=0 [other options]

GTID errors during restore

If you encounter GTID-related errors, you may need to reset GTID state:

RESET MASTER;  -- also clears binary logs and GTID history, so only run this on the test server
-- Then restore your backup
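
You can also sidestep GTID conflicts at dump time with mysqldump's --set-gtid-purged option, which controls whether SET @@GLOBAL.gtid_purged is written into the dump:

# Omit GTID statements from the dump entirely (often what you want for a test copy)
mysqldump -u root -p --all-databases --routines --triggers --events \
  --single-transaction --set-gtid-purged=OFF > full_backup_no_gtid.sql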

Partial Backups

If you need specific databases only:

# Specific databases
mysqldump -u root -p --databases db1 db2 db3 --routines --triggers --events --master-data=2 > partial_backup.sql

# Exclude certain databases (MySQL's mysqldump has no --ignore-database option,
# so build the schema list yourself and pass it to --databases)
DBS=$(mysql -u root -p -NBe "SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema', 'performance_schema', 'sys', 'test', 'temp');")
mysqldump -u root -p --databases $DBS --routines --triggers --events --master-data=2 > backup_without_test.sql

Quick Reference

Minimal Complete Backup

mysqldump -u root -p --all-databases --routines --triggers --events --single-transaction --master-data=2 > backup.sql

Restore

mysql -u root -p < backup.sql

Compressed Backup

mysqldump -u root -p --all-databases --routines --triggers --events --single-transaction --master-data=2 | gzip > backup.sql.gz

Compressed Restore

gunzip -c backup.sql.gz | mysql -u root -p

Summary

For a complete 1:1 MySQL copy for testing:

  1. Use the simple command - It includes everything you need
  2. The --all-databases flag includes users - No separate user export needed
  3. Include --master-data=2 - Captures binary log position for future flexibility
  4. Test your restore - Always verify on a test instance
  5. Use compression - For large databases
  6. Keep it simple - Don't overcomplicate unless you have specific requirements

The beauty of mysqldump --all-databases is that it truly captures everything, making it perfect for creating test environments that exactly mirror production. The addition of --master-data=2 ensures you have the binary log position recorded for potential future use in replication or point-in-time recovery scenarios.


Last updated: 2024
MySQL versions covered: 5.5, 5.6, 5.7, 8.0+

#!/bin/bash
# Usage: ./complete_mysql_backup.sh yourpassword
BACKUP_DIR="/path/to/backup" # Change as needed
mkdir -p "$BACKUP_DIR" || exit 1
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/complete_backup_${DATE}.sql.gz"
# Get non-system databases
DB_LIST=$(mysql -u root -p"$1" -NBe "SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');" 2>/dev/null) || { echo "Failed to list databases"; exit 1; }
# Start the backup file (compress on-the-fly)
{
echo "-- Complete MySQL Backup Generated on $(date)"
echo "-- "
echo ""
# Export users and grants
echo "-- Users and Privileges"
mysql -u root -p$1 -NBe "SELECT CONCAT('CREATE USER IF NOT EXISTS \'', user, '\'@\'', host, '\' IDENTIFIED WITH \'', plugin, '\' AS \'', authentication_string, '\';') FROM mysql.user WHERE user NOT IN ('mysql.sys', 'mysql.session', 'mysql.infoschema');" 2>/dev/null || { echo "Failed to export users"; exit 1; }
echo ""
mysql -u root -p$1 -NBe "SELECT CONCAT('SHOW GRANTS FOR \'', user, '\'@\'', host, '\';') FROM mysql.user WHERE user NOT IN ('mysql.sys', 'mysql.session', 'mysql.infoschema');" | mysql -u root -p$1 -N 2>/dev/null | grep '^GRANT' || { echo "Failed to export grants"; exit 1; }
echo ""
echo "FLUSH PRIVILEGES;"
echo ""
# Export all non-system databases with objects
echo "-- All User Databases, Routines, Triggers, and Events"
mysqldump -u root -p"$1" --databases $DB_LIST --triggers --routines --events --single-transaction --master-data=2 2>/dev/null || { echo "Failed to dump databases"; exit 1; }
} | gzip > "$BACKUP_FILE"
echo "Backup completed: $BACKUP_FILE"
#!/bin/bash
# Complete MySQL Backup Automation Script
# Save as: mysql_backup.sh

# Configuration
BACKUP_DIR="/backup/mysql"
MYSQL_USER="root"
MYSQL_PASS="password"            # Better: read credentials from ~/.my.cnf instead of hardcoding them here
RETENTION_DAYS=7
EMAIL="[email protected]"
LOG_FILE="/var/log/mysql_backup.log"

# Functions
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

send_notification() {
    echo "$1" | mail -s "MySQL Backup Notification" "$EMAIL"
}

# Main backup process
main() {
    log_message "Starting MySQL backup"

    # Create backup
    mkdir -p "$BACKUP_DIR"
    BACKUP_FILE="${BACKUP_DIR}/mysql_$(date +%Y%m%d_%H%M%S).sql"

    if mysqldump -u "$MYSQL_USER" -p"$MYSQL_PASS" \
        --all-databases \
        --routines \
        --triggers \
        --events \
        --single-transaction \
        > "$BACKUP_FILE" 2>> "$LOG_FILE"; then

        # Compress backup
        gzip "$BACKUP_FILE"
        log_message "Backup completed: ${BACKUP_FILE}.gz"

        # Remove old backups
        find "$BACKUP_DIR" -name "*.sql.gz" -mtime +"$RETENTION_DAYS" -delete
        log_message "Old backups cleaned up"

        send_notification "MySQL backup completed successfully"
    else
        log_message "Backup failed!"
        send_notification "MySQL backup FAILED - check logs"
        exit 1
    fi
}

# Run main function
main