
Use Data Migration Jenkins job


Recording of presentation:

This document provides instructions on how to use the Jenkins job for data migration. The job allows users to migrate data from one release version to another in the Rancher performance environment.

It supports various parameters and follows a series of stages to ensure a successful data migration process.


Introduction

The Data Migration Pipeline is designed to facilitate the migration process of data from older versions of modules to the latest versions. The pipeline primarily focuses on two main objectives: measuring the time it takes to migrate data for each module and ensuring the consistency of schemas in the database after migration.

The main purpose of the Data Migration Pipeline:

  • Time Measurement: The pipeline aims to measure the time taken for data migration from older module versions to the latest versions. It provides insights into the duration of migration for individual modules as well as the overall migration process. This information helps in identifying any performance bottlenecks, optimizing migration procedures, and setting expectations for future migrations.
  • Schema Comparison: Another key purpose of the pipeline is to compare the schemas of the migrated tenant with the installed schemas. The goal is to identify any discrepancies or differences in the database schemas after the migration. If there are any variations found, the pipeline triggers the creation of a Jira ticket, notifying the team responsible for managing the schemas. This proactive approach ensures that any schema inconsistencies are promptly addressed, leading to a more stable and consistent data environment.

Parameters

The following parameters can be configured when running the data migration job:

Parameter name      Mandatory   Description
folio_repository    true        Specifies the repository from which to fetch the versions of the modules.
folio_branch_src    true        Specifies the branch of the source repository for the migration.
folio_branch_dst    true        Specifies the branch of the destination repository for the migration.
backup_name         false       Sets the name of the RDS snapshot for the migration. Provide the name of the DB backup placed in the folio-postgresql-backups AWS S3 bucket.
slackChannel        true        Defines the Slack channel name to receive the migration report (without the '#' symbol).
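As an illustrative sketch (not part of the actual job), the parameters above can be assembled and validated before triggering the pipeline. The function and parameter handling here are assumptions for demonstration; only the parameter names come from the table above.

```python
# Illustrative helper: assemble and validate the build parameters for the
# data migration job. Parameter names match the table above; everything
# else (function name, validation rules) is hypothetical.

MANDATORY = ("folio_repository", "folio_branch_src", "folio_branch_dst", "slackChannel")

def build_params(folio_repository, folio_branch_src, folio_branch_dst,
                 slackChannel, backup_name=""):
    """Return the parameter dict for the job, checking mandatory fields."""
    params = {
        "folio_repository": folio_repository,
        "folio_branch_src": folio_branch_src,
        "folio_branch_dst": folio_branch_dst,
        "slackChannel": slackChannel.lstrip("#"),  # channel name without '#'
        "backup_name": backup_name,  # empty -> run without DB restoration
    }
    missing = [k for k in MANDATORY if not params[k]]
    if missing:
        raise ValueError(f"Missing mandatory parameters: {missing}")
    return params
```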

Data Migration pipeline modes

  1. Data Migration with Database Restoration (if backup_name is set): This mode restores the target database from a backup before initiating the data migration process. It is useful when the time needed for the migration must be measured, as it ensures a clean slate for the migration process.

  2. Data Migration without Database Restoration (if backup_name is NOT set): In this mode, the data migration is performed without restoring the target database from a backup. It is typically used for a quick check of schema differences.

Differences in the run between the two modes:

                        with Database Restoration                without Database Restoration
Parameter backup_name   name of a backup from the bucket         left empty
Cost                    more expensive (RDS deployed in AWS)     cheaper (all infrastructure runs in Rancher)
Speed of run            depends on dataset and modules           around 1 hour
                        (takes more time than without backup)

By providing two data migration modes, the system accommodates different scenarios and allows flexibility in selecting the most suitable approach for each migration task. 
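The mode decision described above can be sketched as a one-line rule: a non-empty backup_name selects restoration mode, an empty one selects the fresh Rancher deployment. The function name and return values below are illustrative, not the pipeline's actual code.

```python
# Sketch of the mode selection described above. backup_name set -> restore
# the DB from an RDS snapshot; empty -> deploy from scratch in Rancher.
# Names and return values are illustrative assumptions.

def migration_mode(backup_name: str) -> str:
    """Pick the pipeline mode from the backup_name parameter."""
    return "with-restoration" if backup_name.strip() else "without-restoration"
```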

Data Migration with Database Restoration 

Stages

The data migration job goes through the following stages in sequence:

  1. Init: Initializes the data migration process.
  2. Destroy data-migration project: Destroys the existing data migration project, if any.
  3. Restore data-migration project from backup: Restores the data migration project from the specified backup.
  4. Update with dst release versions: Updates the project with the destination versions.
  5. Generate Data Migration Time report: Generates a report on the data migration time.
  6. Create clean tenant: Creates a clean tenant for the data migration with the destination release versions.
  7. Get schemas difference: Retrieves the difference between the updated and clean schemas.
  8. Publish HTML Reports: Publishes HTML reports related to the data migration process.
  9. Create Jira tickets: Creates a Jira ticket for the development team if the pipeline found any differences in the schemas.
  10. Send Slack notification: Sends a notification to the specified Slack channel with the migration report.
  11. Backup DB state: Makes a backup of the fs09000000 and clean tenants. (This stage is currently in development.)
  12. Destroy data-migration project: Deletes the environment. If schema differences were found, the environment is destroyed after 6 hours; otherwise, it is destroyed immediately.
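The time-measurement idea behind the "Generate Data Migration Time report" stage can be sketched as timing each module upgrade and keeping per-module durations plus a total. This is a hedged illustration: measure_migration and upgrade_module are stand-ins, not the pipeline's real functions (the actual job upgrades modules through the platform's APIs).

```python
import time

# Illustrative sketch of per-module migration timing. upgrade_module is a
# stand-in for the real module upgrade call; the dict layout is assumed.

def measure_migration(modules, upgrade_module):
    """Run upgrade_module(name, src, dst) for each module, recording durations.

    modules: iterable of (name, src_version, dst_version) tuples.
    Returns {module_name: seconds, ..., "TOTAL": seconds}.
    """
    timings = {}
    for name, src, dst in modules:
        start = time.monotonic()
        upgrade_module(name, src, dst)
        timings[name] = time.monotonic() - start
    timings["TOTAL"] = sum(timings.values())
    return timings
```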

Data Migration without Database Restoration

Stages

The data migration job goes through the following stages in sequence:

  1. Init: Initializes the data migration process.
  2. Destroy data-migration project: Destroys the existing data migration project, if any.
  3. Create data-migration project: Creates the data migration project from scratch with the source versions.
  4. Update with dst release versions: Updates the project with the destination versions.
  5. Generate Data Migration Time report: Generates a report on the data migration time.
  6. Create clean tenant: Creates a clean tenant for the data migration with the destination release versions.
  7. Get schemas difference: Retrieves the difference between the updated and clean schemas.
  8. Publish HTML Reports: Publishes HTML reports related to the data migration process.
  9. Create Jira tickets: Creates a Jira ticket for the development team if the pipeline found any differences in the schemas.
  10. Send Slack notification: Sends a notification to the specified Slack channel with the migration report.
  11. Backup DB state: Makes a backup of the diku and clean tenants. (This stage is currently in development.)
  12. Destroy data-migration project: Deletes the environment. If schema differences were found, the environment is destroyed after 6 hours; otherwise, it is destroyed immediately.
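The teardown rule in stage 12 (keep the environment for 6 hours when schema differences were found, destroy immediately otherwise) can be written as a tiny decision function. This is a sketch of the rule as stated above, not the job's actual implementation.

```python
# Sketch of the stage-12 teardown rule: the environment is kept for 6 hours
# for investigation when schema differences were found, otherwise destroyed
# immediately. The function name is an illustrative assumption.

def destroy_delay_hours(schema_diff_found: bool) -> int:
    """Hours to wait before destroying the data-migration environment."""
    return 6 if schema_diff_found else 0
```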

Data Migration Time Report

After the pipeline finishes, the time report is available in the build of the current run:

The report is a table listing each module, its source and destination versions, and its migration time. The final row of the table shows the total migration time for all modules.

Example of the report:
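The table layout described above can be sketched as a small rendering helper: one row per module with its versions and migration time, and a total row at the end. Column widths and the function itself are illustrative assumptions, not the report's actual generator.

```python
# Illustrative rendering of the time report: module, source/destination
# versions, per-module migration time, and a final Total row. The layout
# is assumed for demonstration.

def render_time_report(rows):
    """rows: iterable of (module, src_version, dst_version, seconds)."""
    lines = [f"{'Module':<30}{'Source':<12}{'Destination':<14}{'Time':>8}"]
    total = 0.0
    for module, src, dst, secs in rows:
        total += secs
        lines.append(f"{module:<30}{src:<12}{dst:<14}{secs:>7.1f}s")
    lines.append(f"{'Total':<56}{total:>7.1f}s")
    return "\n".join(lines)
```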

Create Jira tickets

When performing data migration, it's possible to encounter changes in the database structure that require adjustments to the migration process. To address these changes, we have a simple process in place to notify and involve the development team responsible for the affected module.

If we discover any differences between the source and target database structures while migrating data, we automatically create a Jira ticket. This ticket contains information about the specific schema changes and assigns the task to the appropriate development team.

Example of ticket:

If the ticket already exists, a comment is added to it instead:
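The create-or-comment behaviour described above can be sketched as a decision step: look the module up among existing tickets and either comment on the found ticket or create a new one. The function and the existing_tickets mapping are stand-ins for a real Jira search, used here only to illustrate the logic.

```python
# Sketch of the create-or-comment decision for schema differences. The
# existing_tickets mapping stands in for a real Jira issue search; all
# names here are illustrative assumptions.

def jira_action(module, diff_summary, existing_tickets):
    """Return the action taken for a schema difference in `module`.

    existing_tickets: {module_name: ticket_key} of already-open tickets.
    """
    if module in existing_tickets:
        return ("comment", existing_tickets[module], diff_summary)
    return ("create", module, diff_summary)
```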

Schema difference

The report can be found after the build:

The "Get schemas difference" stage in the data migration Jenkins job is responsible for comparing the schemas between the fs09000000 tenant and the clean tenant created in the "Create clean tenant" stage.

This stage utilizes the Atlas tool to perform the schema comparison (a Docker container is created for executing the schema comparison process).

Analyze comparison results: The Atlas tool generates a detailed report highlighting the differences between the schemas. The report may include information such as added tables, modified columns, dropped indexes, and more. This analysis provides insights into the changes that occurred during the data migration process. When Atlas cannot process some changes, the report shows the message "Changes were found in this scheme, but cannot be processed." Such changes can be checked manually in the pgAdmin UI.

Example of report:
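Since the stage runs Atlas inside a Docker container, the invocation could look roughly like the sketch below: build a `docker run` command that calls `atlas schema diff` against the two tenants' schemas. The image tag, connection URLs, and helper function are assumptions for illustration; the job's actual container setup may differ.

```python
# Hedged sketch of invoking the Atlas schema comparison in Docker. The
# image name, network flag, and URL format are assumptions, not the job's
# actual configuration.

def atlas_diff_command(src_url: str, dst_url: str, image: str = "arigaio/atlas"):
    """Build the docker command list for `atlas schema diff`."""
    return [
        "docker", "run", "--rm", "--network", "host", image,
        "schema", "diff",
        "--from", src_url,   # migrated tenant schema
        "--to", dst_url,     # clean tenant schema
    ]
```

The resulting list could then be passed to `subprocess.run(...)` and the captured output parsed for differences.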

Send Slack notification

The "Send Slack notification" stage is responsible for sending a notification to the Slack channel specified during the job setup (using the slackChannel parameter). This notification provides an overview of the job's execution result, including important details and links for further analysis.

Example of message:

Let's break down the components of the notification message:

  • SUCCESS: Indicates the overall result of the data migration job. This could be customized based on the job outcome (e.g., "FAILURE" or "ABORTED" in case of errors).
  • Rancher/Data-migration(kd-test) #354: Specifies the name and number of the data migration job.
  • Please check: Data Migration takes 4 hours!: Provides a brief summary of how much time migration took.
  • List of modules with activation time bigger than 5 minutes: Lists the modules that took longer than 5 minutes to activate during the migration process. 
    • mod-source-record-storage takes 49 minutes: Provides specific details about a module that took a significant amount of time to activate. 
  • Modules with failed activation: Lists any modules that failed to activate during the data migration.
  • Detailed time report: Provides a link to a detailed time report, which contains more comprehensive information about the data migration execution time. 
  • Detailed Schemas Diff: Includes a link to a detailed schema difference report, which highlights the discrepancies between the source and destination schemas.
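The message composition above can be sketched as a small formatter: take the build result, job name, total duration, per-module activation times, and failed modules, then emit the sections in the order described. The 5-minute threshold and wording follow the example message; the function itself and its signature are illustrative (report links are omitted here).

```python
# Illustrative composition of the Slack notification described above.
# The function name and signature are assumptions; the section order and
# the 5-minute threshold follow the example message.

def compose_slack_message(result, job, build_no, total_hours, timings, failed):
    """timings: {module: minutes}; failed: list of module names."""
    slow = [f"{m} takes {int(t)} minutes" for m, t in timings.items() if t > 5]
    lines = [
        f"{result}: {job} #{build_no}",
        f"Please check: Data Migration takes {total_hours} hours!",
        "List of modules with activation time bigger than 5 minutes:",
        *slow,
        "Modules with failed activation:",
        *failed,
    ]
    return "\n".join(lines)
```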



Run the Jenkins Job with the Relevant Parameters



