The following steps show how to upgrade a Cloud SQL for PostgreSQL instance to a newer major version using Google Cloud Database Migration Service (DMS). A few observations worth calling out:
- You can write to the destination database during the migration, but the cloudsqlexternalsync user owns all migrated objects. You must create the users/roles yourself and manually run GRANT statements to match the source (see the inspection queries after the reference list below).
- During promotion, the target instance restarts.
- The drop/create/grant statements run during promotion are listed in [1].
- The grants after promotion [2] do not match the source instance. This blog post discusses the issue in detail [3].
- With optimal settings, DMS creates two subscriptions per database (up to a maximum of 10 subscriptions). The more subscriptions there are, the longer cleanup can take during promotion.
- As shown in [4], I was able to insert data into the destination database after applying the appropriate grants while the migration job was in the CDC phase.
[1] (screenshot: statements run during promotion)
[2] (screenshot: grants after promotion)
[3] https://medium.com/google-cloud/migrate-grants-and-reassign-owner-in-alloydb-for-postgresql-post-dms-610393731024
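To see the ownership issue for yourself, the queries below can be run on the destination while the job is replicating or after promotion. This is only a sketch: it assumes you connect as the postgres user from the bastion VM and it reuses the variables defined in the script further down; pglogical.subscription is where the DMS-created subscriptions show up.

gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U postgres $DB_NAME << EOF
-- who owns the migrated tables (expected: cloudsqlexternalsync)
SELECT schemaname, tablename, tableowner
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
-- the pglogical subscriptions DMS created for this database
SELECT sub_name, sub_enabled FROM pglogical.subscription;
EOF"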
Prerequisites:
- A Google Cloud project exists with the network set up as desired.
- All the necessary privileges have been granted to create Cloud SQL instances and DMS jobs.
- A VM exists to use as a bastion host for connecting to the Cloud SQL instances over their private IP addresses.
References:
- https://cloud.google.com/sdk/gcloud/reference/database-migration
- https://cloud.google.com/sql/docs/postgres/connect-instance-private-ip
- https://cloud.google.com/database-migration/docs/postgres/diagnose-issues#manage-users-and-roles
- https://medium.com/google-cloud/migrate-grants-and-reassign-owner-in-alloydb-for-postgresql-post-dms-610393731024
# Get current project, network info and make sure that APIs are enabled.
export PROJECT_ID=$(gcloud config get-value project)
export REGION=us-central1
export GCP_ZONE="us-central1-a"
export NETWORK_NAME=default
# enable apis
gcloud services enable compute.googleapis.com \
servicenetworking.googleapis.com \
sqladmin.googleapis.com \
datamigration.googleapis.com \
iap.googleapis.com
#############################################################
# Change the following as desired.
#############################################################
export DB_SRC_INSTANCE_NAME=dms-test-96
export DB_SRC_VERSION=POSTGRES_9_6
export DB_TRG_INSTANCE_NAME=dms-test-17
export DB_TRG_VERSION=POSTGRES_17
export DB_CPU="2"
export DB_MEMORY="4GB"
export REP_USER=replication_user
export REP_USER_PASSWORD=$(openssl rand -base64 32) # random
export SCHEMA_USER=dms
export SCHEMA_USER_PASSWORD=$(openssl rand -base64 32) # random
export DB_NAME=dms-test-db
export MIG_JOB_NAME=dms-test-write
export VM_NAME=reverse-proxy
export DB_PORT=5432
#############################################################
# CREATE SOURCE CLOUD SQL DATABASE (PostgreSQL)
#############################################################
gcloud sql instances create $DB_SRC_INSTANCE_NAME \
--cpu=$DB_CPU \
--memory=$DB_MEMORY \
--region=$REGION \
--no-assign-ip \
--enable-google-private-path \
--network=$NETWORK_NAME \
--database-version=$DB_SRC_VERSION \
--no-deletion-protection \
--edition=enterprise \
--database-flags=cloudsql.logical_decoding=on,cloudsql.enable_pglogical=on,log_statement=all,log_min_duration_statement=100
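# Optionally confirm the flags were applied (a quick sanity check; the json()
# projection just prints the databaseFlags portion of the instance settings).
gcloud sql instances describe $DB_SRC_INSTANCE_NAME \
--format='json(settings.databaseFlags)'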
# lock down default (postgres) user [manually input at prompt]
gcloud sql users set-password postgres \
--instance=$DB_SRC_INSTANCE_NAME \
--prompt-for-password
# create replication user
gcloud sql users create $REP_USER \
--instance=$DB_SRC_INSTANCE_NAME \
--password $REP_USER_PASSWORD
# create schema user
gcloud sql users create $SCHEMA_USER \
--instance=$DB_SRC_INSTANCE_NAME \
--password $SCHEMA_USER_PASSWORD
# create a database in source instance
gcloud sql databases create $DB_NAME \
--instance=$DB_SRC_INSTANCE_NAME
# get source instance private IP address
export SRC_DB_ADDR=$(gcloud sql instances describe $DB_SRC_INSTANCE_NAME --format='value(ipAddresses.ipAddress)')
# update packages and install netcat and the PostgreSQL client on the pre-existing VM
gcloud compute ssh $VM_NAME \
-- "sudo apt-get update && sudo apt-get install -y netcat-traditional postgresql-client"
# test to make sure VM can connect to Cloud SQL instance
gcloud compute ssh $VM_NAME \
-- "nc -zv $SRC_DB_ADDR $DB_PORT"
# create test table and insert some data
gcloud compute ssh $VM_NAME \
-- "psql -h $SRC_DB_ADDR -U $SCHEMA_USER $DB_NAME << EOF
SELECT current_database();
-- create a test table
CREATE TABLE t1 (
col1 INT PRIMARY KEY,
col2 VARCHAR(100)
);
insert into t1 values (1, 'ONE FROM SOURCE');
insert into t1 values (2, 'TWO FROM SOURCE');
grant select on t1 to $REP_USER;
EOF"
###############################################################################
# run the following for each database to grant $REP_USER the required permissions
# (a loop sketch for applying this across many databases follows the two examples below)
###############################################################################
# for postgres database
gcloud compute ssh $VM_NAME \
-- "psql -h $SRC_DB_ADDR -U $REP_USER postgres << EOF
ALTER ROLE $REP_USER WITH REPLICATION;
CREATE EXTENSION pglogical;
GRANT USAGE ON SCHEMA pglogical TO $REP_USER;
GRANT ALL ON SCHEMA pglogical TO $REP_USER;
GRANT SELECT ON pglogical.tables TO $REP_USER;
GRANT SELECT ON pglogical.depend TO $REP_USER;
GRANT SELECT ON pglogical.local_node TO $REP_USER;
GRANT SELECT ON pglogical.local_sync_status TO $REP_USER;
GRANT SELECT ON pglogical.node TO $REP_USER;
GRANT SELECT ON pglogical.node_interface TO $REP_USER;
GRANT SELECT ON pglogical.queue TO $REP_USER;
GRANT SELECT ON pglogical.replication_set TO $REP_USER;
GRANT SELECT ON pglogical.replication_set_seq TO $REP_USER;
GRANT SELECT ON pglogical.replication_set_table TO $REP_USER;
GRANT SELECT ON pglogical.sequence_state TO $REP_USER;
GRANT SELECT ON pglogical.subscription TO $REP_USER;
--public schema
GRANT ALL ON SCHEMA public TO $REP_USER;
grant select on all tables in schema public to $REP_USER;
EOF"
# for $DB_NAME database
gcloud compute ssh $VM_NAME \
-- "psql -h $SRC_DB_ADDR -U $REP_USER $DB_NAME << EOF
ALTER ROLE $REP_USER WITH REPLICATION;
CREATE EXTENSION pglogical;
GRANT USAGE ON SCHEMA pglogical TO $REP_USER;
GRANT ALL ON SCHEMA pglogical TO $REP_USER;
GRANT SELECT ON pglogical.tables TO $REP_USER;
GRANT SELECT ON pglogical.depend TO $REP_USER;
GRANT SELECT ON pglogical.local_node TO $REP_USER;
GRANT SELECT ON pglogical.local_sync_status TO $REP_USER;
GRANT SELECT ON pglogical.node TO $REP_USER;
GRANT SELECT ON pglogical.node_interface TO $REP_USER;
GRANT SELECT ON pglogical.queue TO $REP_USER;
GRANT SELECT ON pglogical.replication_set TO $REP_USER;
GRANT SELECT ON pglogical.replication_set_seq TO $REP_USER;
GRANT SELECT ON pglogical.replication_set_table TO $REP_USER;
GRANT SELECT ON pglogical.sequence_state TO $REP_USER;
GRANT SELECT ON pglogical.subscription TO $REP_USER;
--public schema
GRANT ALL ON SCHEMA public TO $REP_USER;
grant select on all tables in schema public to $REP_USER;
EOF"
#############################################################
# CREATE TARGET CLOUD SQL DATABASE (PostgreSQL)
#############################################################
gcloud sql instances create $DB_TRG_INSTANCE_NAME \
--cpu=$DB_CPU \
--memory=$DB_MEMORY \
--region=$REGION \
--no-assign-ip \
--enable-google-private-path \
--network=$NETWORK_NAME \
--database-version=$DB_TRG_VERSION \
--no-deletion-protection \
--edition=enterprise \
--database-flags=log_statement=all,log_min_duration_statement=100
# lock down default (postgres) user [manually input at prompt]
gcloud sql users set-password postgres \
--instance=$DB_TRG_INSTANCE_NAME \
--prompt-for-password
# create a replication user in target instance
gcloud sql users create $REP_USER \
--instance=$DB_TRG_INSTANCE_NAME \
--password $REP_USER_PASSWORD
# get DB private IP address
export TRG_DB_ADDR=$(gcloud sql instances describe $DB_TRG_INSTANCE_NAME --format='value(ipAddresses.ipAddress)')
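# Optionally repeat the connectivity check against the target instance
# (same nc test as for the source; a quick sanity check before creating the DMS job).
gcloud compute ssh $VM_NAME \
-- "nc -zv $TRG_DB_ADDR $DB_PORT"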
#############################################################
# CREATE DMS JOB
#############################################################
# create a DMS source connection profile
gcloud database-migration connection-profiles create postgresql src-$DB_SRC_INSTANCE_NAME \
--region=$REGION \
--role=SOURCE \
--password=$REP_USER_PASSWORD \
--username=$REP_USER \
--host=$SRC_DB_ADDR \
--port=5432 \
--cloudsql-instance=$DB_SRC_INSTANCE_NAME
# create a DMS destination connection profile
gcloud database-migration connection-profiles create postgresql trg-$DB_TRG_INSTANCE_NAME \
--region=$REGION \
--role=DESTINATION \
--password=$REP_USER_PASSWORD \
--username=$REP_USER \
--host=$TRG_DB_ADDR \
--port=5432 \
--cloudsql-instance=$DB_TRG_INSTANCE_NAME
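# Optionally list the connection profiles to confirm both were created.
gcloud database-migration connection-profiles list --region=$REGION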
# create a DMS migration job
gcloud database-migration migration-jobs create $MIG_JOB_NAME \
--region=$REGION \
--type=CONTINUOUS \
--source=src-$DB_SRC_INSTANCE_NAME \
--destination=trg-$DB_TRG_INSTANCE_NAME \
--peer-vpc=projects/$PROJECT_ID/global/networks/$NETWORK_NAME
# demote the destination instance. This can take a few minutes.
gcloud database-migration migration-jobs demote-destination $MIG_JOB_NAME \
--region=$REGION
# verify the migration job
gcloud database-migration migration-jobs verify $MIG_JOB_NAME --region=$REGION
# start the migration job
gcloud database-migration migration-jobs start $MIG_JOB_NAME --region=$REGION
# describe the migration job to check whether it has reached the CDC phase
gcloud database-migration migration-jobs describe $MIG_JOB_NAME --region=$REGION
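# A narrower check: print just the job state and phase. This is a sketch and
# assumes the describe output exposes state (e.g. RUNNING) and phase (e.g. CDC)
# fields, which can be projected with --format.
gcloud database-migration migration-jobs describe $MIG_JOB_NAME \
--region=$REGION --format='value(state,phase)'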
###############################################################################
# once the DMS job is in the CDC phase, insert some data at the destination
###############################################################################
# create schema user
gcloud sql users create $SCHEMA_USER \
--instance=$DB_TRG_INSTANCE_NAME \
--password $SCHEMA_USER_PASSWORD
# need to grant permissions first.
gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U postgres $DB_NAME << EOF
select *
FROM information_schema.role_table_grants
WHERE table_name='t1';
grant select,insert,update,delete on t1 to $SCHEMA_USER;
select *
FROM information_schema.role_table_grants
WHERE table_name='t1';
EOF"Output
Password for user postgres:
grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy
----------------------+----------------------+---------------+--------------+------------+----------------+--------------+----------------
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | INSERT | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | SELECT | YES | YES
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | UPDATE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | DELETE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | TRUNCATE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | REFERENCES | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | TRIGGER | YES | NO
(7 rows)
GRANT
grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy
----------------------+----------------------+---------------+--------------+------------+----------------+--------------+----------------
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | INSERT | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | SELECT | YES | YES
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | UPDATE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | DELETE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | TRUNCATE | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | REFERENCES | YES | NO
cloudsqlexternalsync | cloudsqlexternalsync | dms-test-db | public | t1 | TRIGGER | YES | NO
cloudsqlexternalsync | dms | dms-test-db | public | t1 | INSERT | NO | NO
cloudsqlexternalsync | dms | dms-test-db | public | t1 | SELECT | NO | YES
cloudsqlexternalsync | dms | dms-test-db | public | t1 | UPDATE | NO | NO
cloudsqlexternalsync | dms | dms-test-db | public | t1 | DELETE | NO | NO
(11 rows)
# insert some data
gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U $SCHEMA_USER $DB_NAME << EOF
select * from t1;
insert into t1 values (3, 'THREE FROM DEST');
insert into t1 values (4, 'FOUR FROM DEST');
select * from t1;
EOF"Output
col1 | col2
------+-----------------
1 | ONE FROM SOURCE
2 | TWO FROM SOURCE
(2 rows)
INSERT 0 1
INSERT 0 1
col1 | col2
------+-----------------
1 | ONE FROM SOURCE
2 | TWO FROM SOURCE
3 | THREE FROM DEST
4 | FOUR FROM DEST
(4 rows)
###############################################################################
# During cutover, promote the destination instance
###############################################################################
gcloud database-migration migration-jobs promote $MIG_JOB_NAME --region=$REGION
# verify the grants
gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U postgres $DB_NAME << EOF
select *
FROM information_schema.role_table_grants
WHERE table_name='t1';
EOF"
# TODO: Copy the users/roles/grants from source database to target database.
# Change the objects owner from cloudsqlexternalsync to appropriate owners.
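# A sketch of the post-promotion fix-up for the test table. The role name
# app_user and its password are placeholders; the membership grants are there
# because changing ownership requires the connected role to be a member of
# both the old and the new owning role.
gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U postgres $DB_NAME << EOF
-- create the application role (placeholder name/password) and grant memberships
CREATE ROLE app_user LOGIN PASSWORD 'change-me';
GRANT cloudsqlexternalsync TO postgres;
GRANT app_user TO postgres;
-- move ownership of the test table off cloudsqlexternalsync
ALTER TABLE t1 OWNER TO app_user;
-- re-apply the grants that existed on the source
GRANT SELECT, INSERT, UPDATE, DELETE ON t1 TO $SCHEMA_USER;
-- generate ALTER statements for every table still owned by cloudsqlexternalsync
SELECT format('ALTER TABLE %I.%I OWNER TO app_user;', schemaname, tablename)
FROM pg_tables
WHERE tableowner = 'cloudsqlexternalsync';
EOF"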
###############################################################################
# CLEAN UP
###############################################################################
gcloud database-migration migration-jobs delete $MIG_JOB_NAME --region=$REGION
gcloud database-migration connection-profiles delete src-$DB_SRC_INSTANCE_NAME --region=$REGION
gcloud database-migration connection-profiles delete trg-$DB_TRG_INSTANCE_NAME --region=$REGION
# if you only want to remove the test database from the target (keeping the
# upgraded instance), drop it before deleting anything else
gcloud compute ssh $VM_NAME \
-- "psql -h $TRG_DB_ADDR -U postgres postgres << EOF
DROP DATABASE \"$DB_NAME\";
EOF"
# delete the source and target instances
gcloud sql instances delete $DB_SRC_INSTANCE_NAME --quiet
gcloud sql instances delete $DB_TRG_INSTANCE_NAME --quiet