Monday, August 13, 2012

Database (schema) backup to the cloud with PL/SQL

In my last blog post I described using the DBMS_DATAPUMP and DBMS_JOB packages to automate database schema backups from the database itself, rather than using batch files to do so. I also mentioned that "since the files are written to the database server itself, you need some kind of process to move the files to another server or storage location, so the backups don't disappear along with the database if the database server itself is somehow lost or corrupted. [One] option is to upload the backup file from the database to some online ("cloud") storage service, such as Amazon S3."

Since I have previously written a PL/SQL API for Amazon S3, uploading your backup files to S3 is not very difficult. The high-level steps are:


  1. Create an export (dump file) of your database schema using DBMS_DATAPUMP (or the DATAPUMP_UTIL_PKG wrapper described in my last blog post).
  2. Save the export file to disk on the server.
  3. Upload the export file to S3.
  4. (Optionally) Delete the export file from the server.
  5. (Optionally) Send an email notification indicating whether the upload was successful.
In the "demos" folder of the Alexandria Utility Library for PL/SQL you can find a package called DATAPUMP_CLOUD_PKG which does the work described above in a procedure called BACKUP_SCHEMA_TO_S3.

Here is how you can set up a call to DBMS_JOB to schedule regular backups of your database schema to Amazon S3 (note the doubled-up single quotes around the string values inside the job definition):

declare
  l_job number;
  l_what varchar2(4000);
begin

  -- the job action: export the schema with Data Pump and upload the dump file to S3
  -- (replace the placeholder credentials, bucket, folder and directory values)
  l_what := 'datapump_cloud_pkg.backup_schema_to_s3('
         || '''your_amazon_id'', ''your_amazon_password'', '
         || '''your_amazon_bucket'', ''your_amazon_folder'', ''YOUR_ORACLE_BACKUP_DIR'', '
         || '''your_backup_file_name_'' || to_char(sysdate, ''yyyymmddhh24miss''), '
         || 'p_email_failure => ''your_email_address@domain'', '
         || 'p_email_success => ''your_email_address@domain'', '
         || 'p_version => ''10.2'', p_gmt_offset => 0);';

  -- run the job at the end of each day (23:00)

  dbms_job.submit (job => l_job, what => l_what, next_date => sysdate, interval => 'trunc(sysdate + 1) + 23/24');

  commit;

end;
/
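
After the block above has been run, you can verify that the job is scheduled (and remove it again if you change your mind) using the standard DBMS_JOB views and procedures, for example:

-- check the job and its next run time
select job, what, next_date, next_sec
  from user_jobs;

-- remove the job if you no longer want the scheduled backups
-- (replace 123 with the job number returned by the query above)
begin
  dbms_job.remove(123);
  commit;
end;
/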

Security Tip: You may not want to expose your Amazon password in the job definition. What you can do to reduce the risk is to use the AWS Identity and Access Management (IAM) tools to define a separate backup user which only has upload ("put") permissions on your S3 folder, and no privileges to list, download or delete files. This will limit the damage that someone with a stolen password can do.
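
As an illustration (this is not part of the package itself, and the bucket and folder names below are placeholders), an IAM policy for such a restricted backup user could look roughly like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your_amazon_bucket/your_amazon_folder/*"
    }
  ]
}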

2 comments:

Jorge Rimblas said...

Those are some great concepts, thanks for the follow up post.
-Jorge

Jud Williford said...

This looks excellent and I intend to try it. However, I have an additional question. I was also researching RMAN backups to S3 and planned to use the OSB capability for that. The nice thing about that capability is that your AWS credentials get stored in an Oracle wallet. Perhaps as a future enhancement you could extend your package to utilize such a wallet entry?