stage: Systems
group: Geo
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: howto

Geo database replication (PREMIUM SELF)

This document describes the minimal required steps to replicate your primary GitLab database to a secondary node's database. You may have to change some values, based on attributes including your database's setup and size.

NOTE: If your GitLab installation uses external (not managed by Omnibus GitLab) PostgreSQL instances, the Omnibus roles cannot perform all necessary configuration steps. In this case, use the Geo with external PostgreSQL instances process instead.

NOTE: The stages of the setup process must be completed in the documented order. Before attempting the steps in any stage, complete all prior stages.

Ensure the secondary site is running the same version of GitLab Enterprise Edition as the primary site. Confirm you have added a Premium or higher tier license to your primary site.

Be sure to read and review all of these steps before you execute them in your testing or production environments.

Single instance database replication

Single-instance database replication is easier to set up and still provides the same Geo capabilities as a clusterized alternative. It's useful for setups running on a single machine, or for evaluating Geo ahead of a future clusterized installation.

A single instance can be expanded to a clusterized version using Patroni, which is recommended for a highly available architecture.

Follow the instructions below to set up PostgreSQL replication for a single-instance database. Alternatively, see the Multi-node database replication instructions to set up replication with a Patroni cluster.

PostgreSQL replication

The GitLab primary node where the write operations happen connects to the primary database server, and secondary nodes connect to their own database servers (which are read-only).

We recommend using PostgreSQL replication slots to ensure that the primary node retains all the data necessary for the secondary nodes to recover. See below for more details.
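
Once replication is running, you can confirm that a slot exists and is being consumed on the primary node. As an optional sketch (using the gitlab-psql wrapper that ships with Omnibus GitLab), query the pg_replication_slots view:

sudo gitlab-psql -c 'SELECT slot_name, slot_type, active FROM pg_replication_slots;'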

The following guide assumes that:

  • You are using Omnibus, and therefore PostgreSQL 12 or later, which includes the pg_basebackup tool.
  • You have a primary node already set up (the GitLab server you are replicating from), running Omnibus' PostgreSQL (or an equivalent version), and you have a new secondary server set up with the same versions of PostgreSQL, OS, and GitLab on all nodes.

WARNING: Geo works with streaming replication. Logical replication is not supported at this time. There is an issue where support is being discussed.

Step 1. Configure the primary server

  1. SSH into your GitLab primary server and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add a unique name for your site:

    ##
    ## The unique identifier for the Geo site. See
    ## https://docs.gitlab.com/ee/user/admin_area/geo_nodes.html#common-settings
    ##
    gitlab_rails['geo_node_name'] = '<site_name_here>'
    
  3. Reconfigure the primary node for the change to take effect:

    gitlab-ctl reconfigure
    
  4. Execute the command below to define the node as the primary node:

    gitlab-ctl set-geo-primary-node
    

    This command uses your defined external_url in /etc/gitlab/gitlab.rb.
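
    If you want to double-check which external_url the command picks up, a quick sketch (the grep pattern is only illustrative) is:

    grep '^external_url' /etc/gitlab/gitlab.rb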

  5. Define a password for the gitlab database user:

    Generate an MD5 hash of the desired password:

    gitlab-ctl pg-password-md5 gitlab
    # Enter password: <your_password_here>
    # Confirm password: <your_password_here>
    # fca0b89a972d69f00eb3ec98a5838484
    

    Edit /etc/gitlab/gitlab.rb:

    # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
    postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
    
    # Every node that runs Puma or Sidekiq needs to have the database
    # password specified as below. If you have a high-availability setup, this
    # must be present in all application nodes.
    gitlab_rails['db_password'] = '<your_password_here>'
    
  6. Define a password for the database replication user.

    We will use the username defined in /etc/gitlab/gitlab.rb under the postgresql['sql_replication_user'] setting. The default value is gitlab_replicator, but if you changed it to something else, adapt the instructions below.

    Generate an MD5 hash of the desired password:

    gitlab-ctl pg-password-md5 gitlab_replicator
    # Enter password: <your_password_here>
    # Confirm password: <your_password_here>
    # 950233c0dfc2f39c64cf30457c3b7f1e
    

    Edit /etc/gitlab/gitlab.rb:

    # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab_replicator`
    postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
    

    If you are using an external database not managed by Omnibus GitLab, you need to create the replicator user and define a password for it manually:

    --- Create a new user 'replicator'
    CREATE USER gitlab_replicator;
    
    --- Set/change a password and grants replication privilege
    ALTER USER gitlab_replicator WITH REPLICATION ENCRYPTED PASSWORD '<replication_password>';
    
  7. Configure PostgreSQL to listen on network interfaces:

    For security reasons, PostgreSQL does not listen on any network interfaces by default. However, Geo requires the secondary node to be able to connect to the primary node's database. For this reason, we need the address of each node.

    NOTE: For external PostgreSQL instances, see additional instructions.

    If you are using a cloud provider, you can look up the addresses for each Geo node through your cloud provider's management console.

    To look up the address of a Geo node, SSH into the Geo node and execute:

    ##
    ## Private address
    ##
    ip route get 255.255.255.255 | awk '{print "Private address:", $NF; exit}'
    
    ##
    ## Public address
    ##
    echo "External address: $(curl --silent "ipinfo.io/ip")"
    

    In most cases, the following addresses are used to configure GitLab Geo:

    • postgresql['listen_address']: the primary node's public or VPC private address.
    • postgresql['md5_auth_cidr_addresses']: the primary and secondary nodes' public or VPC private addresses.

    If you are using Google Cloud Platform, SoftLayer, or any other vendor that provides a virtual private cloud (VPC), you can use the primary and secondary nodes' private addresses (this corresponds to "internal address" for Google Cloud Platform) for postgresql['md5_auth_cidr_addresses'] and postgresql['listen_address'].

    The listen_address option opens PostgreSQL up to network connections with the interface corresponding to the given address. See the PostgreSQL documentation for more details.

    NOTE: If you need to use 0.0.0.0 or * as the listen_address, you also must add 127.0.0.1/32 to the postgresql['md5_auth_cidr_addresses'] setting, to allow Rails to connect through 127.0.0.1. For more information, see omnibus-5258.
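
    As a sketch of that particular case (the addresses are placeholders, not recommendations), the two settings would look like:

    postgresql['listen_address'] = '0.0.0.0'
    # 127.0.0.1/32 is required so Rails can connect locally when listening on all interfaces
    postgresql['md5_auth_cidr_addresses'] = ['127.0.0.1/32', '<primary_node_ip>/32', '<secondary_node_ip>/32']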

    Depending on your network configuration, the suggested addresses may not be correct. If your primary node and secondary nodes connect over a local area network, or over a virtual network connecting availability zones like Amazon's VPC or Google's VPC, you should use the secondary node's private address for postgresql['md5_auth_cidr_addresses'].

    Edit /etc/gitlab/gitlab.rb and add the following, replacing the IP addresses with addresses appropriate to your network configuration:

    ##
    ## Geo Primary role
    ## - Configures Postgres settings for replication
    ## - Prevents automatic upgrade of Postgres since it requires downtime of
    ##   streaming replication to Geo secondary sites
    ## - Enables standard single-node GitLab services like NGINX, Puma, Redis,
    ##   or Sidekiq. If you are segregating services, then you will need to
    ##   explicitly disable unwanted services.
    ##
    roles(['geo_primary_role'])
    
    ##
    ## Primary address
    ## - replace '<primary_node_ip>' with the public or VPC address of your Geo primary node
    ##
    postgresql['listen_address'] = '<primary_node_ip>'
    
    ##
    # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
    # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
    ##
    postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
    
    ##
    ## Replication settings
    ## - set this to be the number of Geo secondary nodes you have
    ##
    postgresql['max_replication_slots'] = 1
    # postgresql['max_wal_senders'] = 10
    # postgresql['wal_keep_segments'] = 10
    
    ##
    ## Disable automatic database migrations temporarily
    ## (until PostgreSQL is restarted and listening on the private address).
    ##
    gitlab_rails['auto_migrate'] = false
    
  8. Optional: If you want to add another secondary node, the relevant setting would look like:

    postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32', '<another_secondary_node_ip>/32']
    

    You may also want to edit the wal_keep_segments and max_wal_senders to match your database replication requirements. Consult the PostgreSQL - Replication documentation for more information.
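
    For example, a sketch with purely illustrative values (size these to your own replication requirements):

    postgresql['max_wal_senders'] = 16
    postgresql['wal_keep_segments'] = 100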

  9. Save the file and reconfigure GitLab for the database listen changes and the replication slot changes to be applied:

    gitlab-ctl reconfigure
    

    Restart PostgreSQL for its changes to take effect:

    gitlab-ctl restart postgresql
    
  10. Re-enable migrations now that PostgreSQL is restarted and listening on the private address.

    Edit /etc/gitlab/gitlab.rb and change the configuration to true:

    gitlab_rails['auto_migrate'] = true
    

    Save the file and reconfigure GitLab:

    gitlab-ctl reconfigure
    
  11. Now that the PostgreSQL server is set up to accept remote connections, run netstat -plnt | grep 5432 to make sure that PostgreSQL is listening on port 5432 on the primary server's private address.
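
    For example (ss is shown as an alternative in case netstat is not installed; the listed address should be the primary node's address, not 127.0.0.1):

    netstat -plnt | grep 5432
    # or
    ss -plnt | grep 5432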

  12. A certificate was automatically generated when GitLab was reconfigured. This is used automatically to protect your PostgreSQL traffic from eavesdroppers, but to protect against active ("man-in-the-middle") attackers, the secondary node needs a copy of the certificate. Make a copy of the PostgreSQL server.crt file on the primary node by running this command:

    cat ~gitlab-psql/data/server.crt
    

    Copy the output to your clipboard or to a local file. You need it when setting up the secondary node. The certificate is not sensitive data.

    However, this certificate is created with a generic PostgreSQL Common Name. For this reason, you must use the verify-ca mode when replicating the database; otherwise, the hostname mismatch causes errors.
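
    If you want to see the Common Name of the generated certificate, an optional sketch (assuming openssl is available on the node) is:

    openssl x509 -in ~gitlab-psql/data/server.crt -noout -subject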

  13. Optional. Generate your own SSL certificate and manually configure SSL for PostgreSQL, instead of using the generated certificate.

    You need at least the SSL certificate and key, and you must set the postgresql['ssl_cert_file'] and postgresql['ssl_key_file'] values to their full paths, as per the Database SSL docs.

    This allows you to use the verify-full SSL mode when replicating the database and get the extra benefit of verifying the full hostname in the CN.

    Going forward, you can use this certificate (which you have also set in postgresql['ssl_cert_file']) instead of the certificate from the previous step. This allows you to use verify-full without replication errors if the CN matches.
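
    A minimal sketch of the relevant settings in /etc/gitlab/gitlab.rb, assuming you have already placed your own certificate and key on the node (the paths are illustrative):

    postgresql['ssl_cert_file'] = '/etc/gitlab/ssl/db-server.crt'
    postgresql['ssl_key_file'] = '/etc/gitlab/ssl/db-server.key'

    As elsewhere in this guide, reconfigure GitLab and restart PostgreSQL after changing these values.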

Step 2. Configure the secondary server

  1. SSH into your GitLab secondary server and login as root:

    sudo -i
    
  2. Stop the application server and Sidekiq:

    gitlab-ctl stop puma
    gitlab-ctl stop sidekiq
    

    NOTE: This step is important so we don't try to execute anything before the node is fully configured.

  3. Check TCP connectivity to the primary node's PostgreSQL server:

    gitlab-rake gitlab:tcp_check[<primary_node_ip>,5432]
    

    NOTE: If this step fails, you may be using the wrong IP address, or a firewall may be preventing access to the server. Check the IP address, paying close attention to the difference between public and private addresses and ensure that, if a firewall is present, the secondary node is permitted to connect to the primary node on port 5432.

  4. Create a file server.crt on the secondary server, with the content you got in the last step of the primary node's setup:

    editor server.crt
    
  5. Set up PostgreSQL TLS verification on the secondary node:

    Install the server.crt file:

    install \
       -D \
       -o gitlab-psql \
       -g gitlab-psql \
       -m 0400 \
       -T server.crt ~gitlab-psql/.postgresql/root.crt
    

    PostgreSQL now only recognizes that exact certificate when verifying TLS connections. The certificate can only be replicated by someone with access to the private key, which is only present on the primary node.

  6. Test that the gitlab-psql user can connect to the primary node's database (the default Omnibus database name is gitlabhq_production):

    sudo \
       -u gitlab-psql /opt/gitlab/embedded/bin/psql \
       --list \
       -U gitlab_replicator \
       -d "dbname=gitlabhq_production sslmode=verify-ca" \
       -W \
       -h <primary_node_ip>
    

    NOTE: If you are using manually generated certificates and plan to use sslmode=verify-full to benefit from full hostname verification, make sure to replace verify-ca with verify-full when running the command.

    When prompted, enter the plaintext password you set in the first step for the gitlab_replicator user. If everything works correctly, you should see the list of the primary node's databases.

    A failure to connect here indicates that the TLS configuration is incorrect. Ensure that the contents of ~gitlab-psql/data/server.crt on the primary node match the contents of ~gitlab-psql/.postgresql/root.crt on the secondary node.
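
    One way to compare them is to checksum the file on each node and confirm the output matches, for example:

    # On the primary node
    sha256sum ~gitlab-psql/data/server.crt

    # On the secondary node
    sha256sum ~gitlab-psql/.postgresql/root.crt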

  7. Configure PostgreSQL:

    This step is similar to how we configured the primary instance. We must enable this, even if using a single node.

    Edit /etc/gitlab/gitlab.rb and add the following, replacing the IP addresses with addresses appropriate to your network configuration:

    ##
    ## Geo Secondary role
    ## - configure dependent flags automatically to enable Geo
    ##
    roles(['geo_secondary_role'])
    
    ##
    ## Secondary address
    ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
    ##
    postgresql['listen_address'] = '<secondary_node_ip>'
    postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
    
    ##
    ## Database credentials password (defined previously in primary node)
    ## - replicate same values here as defined in primary node
    ##
    postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
    postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
    gitlab_rails['db_password'] = '<your_password_here>'
    

    For external PostgreSQL instances, see additional instructions. If you bring a former primary node back online to serve as a secondary node, then you also must remove roles(['geo_primary_role']) or geo_primary_role['enable'] = true.

  8. Reconfigure GitLab for the changes to take effect:

    gitlab-ctl reconfigure
    
  9. Restart PostgreSQL for the IP change to take effect:

    gitlab-ctl restart postgresql
    

Step 3. Initiate the replication process

Below we provide a script that connects the database on the secondary node to the database on the primary node, replicates the database, and creates the needed files for streaming replication.

The directories used are the defaults that are set up in Omnibus. If you have changed any defaults, adjust the commands accordingly, replacing the directories and paths.

WARNING: Make sure to run this on the secondary server, as it removes all of PostgreSQL's data before running pg_basebackup.

  1. SSH into your GitLab secondary server and login as root:

    sudo -i
    
  2. Choose a database-friendly name for your secondary node to use as the replication slot name. For example, if your domain is secondary.geo.example.com, you may use secondary_example as the slot name, as shown in the commands below.

  3. Execute the command below to start a backup/restore and begin the replication

    WARNING: Each Geo secondary node must have its own unique replication slot name. Using the same slot name between two secondaries breaks PostgreSQL replication.

    gitlab-ctl replicate-geo-database \
       --slot-name=<secondary_node_name> \
       --host=<primary_node_ip> \
       --sslmode=verify-ca
    

    NOTE: Replication slot names must only contain lowercase letters, numbers, and the underscore character.

    When prompted, enter the plaintext password you set up for the gitlab_replicator user in the first step.

    NOTE: If you have generated custom PostgreSQL certificates, you will want to use --sslmode=verify-full (or omit the sslmode line entirely), to benefit from the extra validation of the full host name in the certificate CN / SAN for additional security. Otherwise, using the automatically created certificate with verify-full will fail, as it has a generic PostgreSQL CN which will not match the --host value in this command.

    This command also takes a number of additional options. You can use --help to list them all, but here are a couple of tips:

    • If PostgreSQL is listening on a non-standard port, add --port= as well.
    • If your database is too large to be transferred in 30 minutes, you need to increase the timeout, for example, --backup-timeout=3600 if you expect the initial replication to take under an hour.
    • Pass --sslmode=disable to skip PostgreSQL TLS authentication altogether (for example, you know the network path is secure, or you are using a site-to-site VPN). It is not safe over the public Internet!
    • You can read more details about each sslmode in the PostgreSQL documentation; the instructions above are carefully written to ensure protection against both passive eavesdroppers and active "man-in-the-middle" attackers.
    • Change the --slot-name to the name of the replication slot to be used on the primary database. The script attempts to create the replication slot automatically if it does not exist.
    • If you're repurposing an old server into a Geo secondary node, you must add --force to the command line.
    • When not on a production machine, you can skip the backup step by adding --skip-backup, if you are really sure this is what you want.

The replication process is now complete.
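
To verify that streaming replication is flowing, a quick sketch (using the gitlab-psql wrapper; the available columns vary slightly between PostgreSQL versions) is to check the replication views on each side:

# On the primary node: one row per connected secondary, state should be 'streaming'
sudo gitlab-psql -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'

# On the secondary node: the WAL receiver status should be 'streaming'
sudo gitlab-psql -c 'SELECT status, sender_host FROM pg_stat_wal_receiver;'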

PgBouncer support (optional)

PgBouncer may be used with GitLab Geo to pool PostgreSQL connections, which can improve performance even in a single-instance installation.

We recommend using PgBouncer if you use GitLab in a highly available configuration with a cluster of nodes supporting a Geo primary site and two other clusters of nodes supporting a Geo secondary site: one for the main database and the other for the tracking database. For more information, see High Availability with Omnibus GitLab.

Changing the replication password

To change the password for the replication user when using Omnibus-managed PostgreSQL instances:

On the GitLab Geo primary server:

  1. The default value for the replication user is gitlab_replicator, but if you've set a custom replication user in your /etc/gitlab/gitlab.rb under the postgresql['sql_replication_user'] setting, make sure to adapt the following instructions for your own user.

    Generate an MD5 hash of the desired password:

    sudo gitlab-ctl pg-password-md5 gitlab_replicator
    # Enter password: <your_password_here>
    # Confirm password: <your_password_here>
    # 950233c0dfc2f39c64cf30457c3b7f1e
    

    Edit /etc/gitlab/gitlab.rb:

    # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab_replicator`
    postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
    
  2. Save the file and reconfigure GitLab to change the replication user's password in PostgreSQL:

    sudo gitlab-ctl reconfigure
    
  3. Restart PostgreSQL for the replication password change to take effect:

    sudo gitlab-ctl restart postgresql
    

Until the password is updated on any secondary servers, the PostgreSQL log on the secondaries will report the following error message:

FATAL:  could not connect to the primary server: FATAL:  password authentication failed for user "gitlab_replicator"

On all GitLab Geo secondary servers:

  1. The first step isn't necessary from a configuration perspective, because the hashed 'sql_replication_password' is not used on the GitLab Geo secondary. However, in the event that the secondary needs to be promoted to the GitLab Geo primary, make sure to match the 'sql_replication_password' in the secondary server configuration.

    Edit /etc/gitlab/gitlab.rb:

    # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab_replicator` on the Geo primary
    postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
    
  2. During the initial replication setup, the gitlab-ctl replicate-geo-database command writes the plaintext password for the replication user account to two locations:

    • gitlab-geo.conf: Used by the PostgreSQL replication process, written to the PostgreSQL data directory, by default at /var/opt/gitlab/postgresql/data/gitlab-geo.conf.
    • .pgpass: Used by the gitlab-psql user, located by default at /var/opt/gitlab/postgresql/.pgpass.
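
    For reference, .pgpass follows the standard PostgreSQL host:port:database:user:password format, so the entry to update looks roughly like the following (the host and database fields here are illustrative; edit whichever gitlab_replicator entry your file contains):

    <primary_node_ip>:5432:*:gitlab_replicator:<new_plaintext_password>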

    Update the plaintext password in both of these files, and restart PostgreSQL:

    sudo gitlab-ctl restart postgresql
    

Multi-node database replication

In GitLab 14.0, Patroni replaced repmgr as the supported highly available PostgreSQL solution.

NOTE: If you still haven't migrated from repmgr to Patroni, you're highly advised to do so.

Patroni support

Patroni is the official replication management solution for Geo. It can be used to build a highly available cluster on the primary and a secondary Geo site. Using Patroni on a secondary site is optional, and you don't have to use the same number of nodes on each Geo site.

For instructions about how to set up Patroni on the primary site, see the PostgreSQL replication and failover with Omnibus GitLab page.

Configuring Patroni cluster for a Geo secondary site

In a Geo secondary site, the main PostgreSQL database is a read-only replica of the primary site's PostgreSQL database.

If you are currently using repmgr on your Geo primary site, see these instructions for migrating from repmgr to Patroni.

A production-ready and secure setup requires at least:

  • 3 Consul nodes (primary and secondary sites)
  • 2 Patroni nodes (primary and secondary sites)
  • 1 PgBouncer node (primary and secondary sites)
  • 1 internal load-balancer (primary site only)

The internal load balancer provides a single endpoint for connecting to the Patroni cluster's leader whenever a new leader is elected, and it is required for enabling cascading replication from the secondary sites.

Be sure to use password credentials and other database best practices.

Step 1. Configure Patroni permanent replication slot on the primary site

To set up database replication with Patroni on a secondary node, we must configure a permanent replication slot on the primary node's Patroni cluster, and ensure password authentication is used.

For each Patroni instance on the primary site, starting with the Patroni Leader instance:

  1. SSH into your Patroni instance and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following:

    roles(['patroni_role'])
    
    consul['services'] = %w(postgresql)
    consul['configuration'] = {
      retry_join: %w[CONSUL_PRIMARY1_IP CONSUL_PRIMARY2_IP CONSUL_PRIMARY3_IP]
    }
    
    # You need one entry for each secondary, with a unique name following PostgreSQL slot_name constraints:
    #
    # Configuration syntax is: 'unique_slotname' => { 'type' => 'physical' },
    # We don't support setting a permanent replication slot for logical replication type
    patroni['replication_slots'] = {
      'geo_secondary' => { 'type' => 'physical' }
    }
    
    patroni['use_pg_rewind'] = true
    patroni['postgresql']['max_wal_senders'] = 8 # Use double the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
    patroni['postgresql']['max_replication_slots'] = 8 # Use double the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
    patroni['username'] = 'PATRONI_API_USERNAME'
    patroni['password'] = 'PATRONI_API_PASSWORD'
    patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
    
    # Add all patroni nodes to the allowlist
    patroni['allowlist'] = %w[
      127.0.0.1/32
      PATRONI_PRIMARY1_IP/32 PATRONI_PRIMARY2_IP/32 PATRONI_PRIMARY3_IP/32
      PATRONI_SECONDARY1_IP/32 PATRONI_SECONDARY2_IP/32 PATRONI_SECONDARY3_IP/32
    ]
    
    # We list all secondary instances as they can all become a Standby Leader
    postgresql['md5_auth_cidr_addresses'] = %w[
      PATRONI_PRIMARY1_IP/32 PATRONI_PRIMARY2_IP/32 PATRONI_PRIMARY3_IP/32 PATRONI_PRIMARY_PGBOUNCER/32
      PATRONI_SECONDARY1_IP/32 PATRONI_SECONDARY2_IP/32 PATRONI_SECONDARY3_IP/32 PATRONI_SECONDARY_PGBOUNCER/32
    ]
    
    postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
    postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
    postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
    postgresql['listen_address'] = '0.0.0.0' # You can use a public or VPC address here instead
    
  3. Reconfigure GitLab for the changes to take effect:

    gitlab-ctl reconfigure
    
Step 2. Configure the internal load balancer on the primary site

To avoid reconfiguring the Standby Leader on the secondary site whenever a new Leader is elected on the primary site, we must set up a TCP internal load balancer which gives a single endpoint for connecting to the Patroni cluster's Leader.

The Omnibus GitLab packages do not include a Load Balancer. Here's how you could do it with HAProxy.

The following IPs and names are used as an example:

  • 10.6.0.21: Patroni 1 (patroni1.internal)
  • 10.6.0.22: Patroni 2 (patroni2.internal)
  • 10.6.0.23: Patroni 3 (patroni3.internal)
global
    log /dev/log local0
    log localhost local1 notice
    log stdout format raw local0

defaults
    log global
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions

frontend internal-postgresql-tcp-in
    bind *:5000
    mode tcp
    option tcplog

    default_backend postgresql

backend postgresql
    option httpchk
    http-check expect status 200

    server patroni1.internal 10.6.0.21:5432 maxconn 100 check port 8008
    server patroni2.internal 10.6.0.22:5432 maxconn 100 check port 8008
    server patroni3.internal 10.6.0.23:5432 maxconn 100 check port 8008

Refer to your preferred Load Balancer's documentation for further guidance.

Step 3. Configure a PgBouncer node on the secondary site

A production-ready and highly available configuration requires at least three Consul nodes and a minimum of one PgBouncer node, though it's recommended to have one PgBouncer node per database node. An internal load balancer (TCP) is required when there is more than one PgBouncer service node. The internal load balancer provides a single endpoint for connecting to the PgBouncer cluster. For more information, see High Availability with Omnibus GitLab.

Follow the minimal configuration for the PgBouncer node:

  1. SSH into your PgBouncer node and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following:

    # Disable all components except Pgbouncer and Consul agent
    roles(['pgbouncer_role'])
    
    # PgBouncer configuration
    pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
    pgbouncer['users'] = {
      'gitlab-consul': {
        # Generate it with: `gitlab-ctl pg-password-md5 gitlab-consul`
        password: 'GITLAB_CONSUL_PASSWORD_HASH'
      },
      'pgbouncer': {
        # Generate it with: `gitlab-ctl pg-password-md5 pgbouncer`
        password: 'PGBOUNCER_PASSWORD_HASH'
      }
    }
    
    # Consul configuration
    consul['watchers'] = %w(postgresql)
    consul['configuration'] = {
      retry_join: %w[CONSUL_SECONDARY1_IP CONSUL_SECONDARY2_IP CONSUL_SECONDARY3_IP]
    }
    consul['monitoring_service_discovery'] =  true
    
  3. Reconfigure GitLab for the changes to take effect:

    gitlab-ctl reconfigure
    
  4. Create a .pgpass file so Consul is able to reload PgBouncer. Enter the PLAIN_TEXT_PGBOUNCER_PASSWORD twice when asked:

    gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
    
  5. Reload the PgBouncer service:

    gitlab-ctl hup pgbouncer
    
Step 4. Configure a Standby cluster on the secondary site

NOTE: If you are converting a secondary site to a Patroni Cluster, you must start on the PostgreSQL instance. It becomes the Patroni Standby Leader instance, and then you can switch over to another replica if you need to.

For each Patroni instance on the secondary site:

  1. SSH into your Patroni node and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following:

    roles(['consul_role', 'patroni_role'])
    
    consul['enable'] = true
    consul['configuration'] = {
      retry_join: %w[CONSUL_SECONDARY1_IP CONSUL_SECONDARY2_IP CONSUL_SECONDARY3_IP]
    }
    
    postgresql['md5_auth_cidr_addresses'] = [
      'PATRONI_SECONDARY1_IP/32', 'PATRONI_SECONDARY2_IP/32', 'PATRONI_SECONDARY3_IP/32', 'PATRONI_SECONDARY_PGBOUNCER/32',
      # Any other instance that needs access to the database as per documentation
    ]
    
    
    # Add patroni nodes to the allowlist
    patroni['allowlist'] = %w[
      127.0.0.1/32
      PATRONI_SECONDARY1_IP/32 PATRONI_SECONDARY2_IP/32 PATRONI_SECONDARY3_IP/32
    ]
    
    patroni['standby_cluster']['enable'] = true
    patroni['standby_cluster']['host'] = 'INTERNAL_LOAD_BALANCER_PRIMARY_IP'
    patroni['standby_cluster']['port'] = INTERNAL_LOAD_BALANCER_PRIMARY_PORT
    patroni['standby_cluster']['primary_slot_name'] = 'geo_secondary' # Or the unique replication slot name you set up before
    patroni['username'] = 'PATRONI_API_USERNAME'
    patroni['password'] = 'PATRONI_API_PASSWORD'
    patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
    patroni['use_pg_rewind'] = true
    patroni['postgresql']['max_wal_senders'] = 5 # A minimum of three for one replica, plus two for each additional replica
    patroni['postgresql']['max_replication_slots'] = 5 # A minimum of three for one replica, plus two for each additional replica
    
    postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
    postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
    postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
    postgresql['listen_address'] = '0.0.0.0' # You can use a public or VPC address here instead
    
    gitlab_rails['db_password'] = 'POSTGRESQL_PASSWORD'
    gitlab_rails['enable'] = true
    gitlab_rails['auto_migrate'] = false
    
  3. Reconfigure GitLab for the changes to take effect. This is required to bootstrap PostgreSQL users and settings.

    • If this is a fresh installation of Patroni:

      gitlab-ctl reconfigure
      
    • If you are configuring a Patroni standby cluster on a site that previously had a working Patroni cluster:

      gitlab-ctl stop patroni
      rm -rf /var/opt/gitlab/postgresql/data
      /opt/gitlab/embedded/bin/patronictl -c /var/opt/gitlab/patroni/patroni.yaml remove postgresql-ha
      gitlab-ctl reconfigure
      gitlab-ctl start patroni
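
    In either case, once the reconfigure completes, you can check the state of the cluster members. As a sketch, using the same patronictl path referenced above (the Standby Leader and its replicas should be listed as running):

      /opt/gitlab/embedded/bin/patronictl -c /var/opt/gitlab/patroni/patroni.yaml list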
      

Migrating from repmgr to Patroni

  1. Before migrating, we recommend that there is no replication lag between the primary and secondary sites and that replication is paused. In GitLab 13.2 and later, you can pause and resume replication with gitlab-ctl geo-replication-pause and gitlab-ctl geo-replication-resume on a Geo secondary database node.
  2. Follow the instructions to migrate repmgr to Patroni. When configuring Patroni on each primary site database node, add patroni['replication_slots'] = { '<slot_name>' => { 'type' => 'physical' } } to gitlab.rb, where <slot_name> is the name of the replication slot for your Geo secondary. This ensures that Patroni recognizes the replication slot as permanent and does not drop it upon a restart.
  3. If database replication to the secondary was paused before migration, resume replication once Patroni is confirmed working on the primary.

Migrating a single PostgreSQL node to Patroni

Before the introduction of Patroni, Geo had no Omnibus support for HA setups on the secondary node.

With Patroni it's now possible to support that. To migrate the existing PostgreSQL to Patroni:

  1. Make sure you have a Consul cluster setup on the secondary (similar to how you set it up on the primary).
  2. Configure a permanent replication slot.
  3. Configure the internal load balancer.
  4. Configure a PgBouncer node.
  5. Configure a Standby Cluster on that single node machine.

You end up with a "Standby Cluster" with a single node. That allows you to add additional Patroni nodes later by following the same instructions above.

Configuring Patroni cluster for the tracking PostgreSQL database

Secondary sites use a separate PostgreSQL installation as a tracking database to keep track of replication status and automatically recover from potential replication issues. Omnibus automatically configures a tracking database when roles(['geo_secondary_role']) is set.

If you want to run this database in a highly available configuration, don't use the geo_secondary_role above. Instead, follow the instructions below.

A production-ready and secure setup requires at least three Consul nodes, two Patroni nodes and one PgBouncer node on the secondary site.

Because of omnibus-6587, Consul can't track multiple services, so these must be different from the nodes used for the Standby Cluster database.

Be sure to use password credentials and other database best practices.

Step 1. Configure a PgBouncer node on the secondary site

Follow the minimal configuration for the PgBouncer node for the tracking database:

  1. SSH into your PgBouncer node and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following:

    # Disable all components except Pgbouncer and Consul agent
    roles(['pgbouncer_role'])
    
    # PgBouncer configuration
    pgbouncer['users'] = {
      'pgbouncer': {
        password: 'PGBOUNCER_PASSWORD_HASH'
      }
    }
    
    pgbouncer['databases'] = {
      gitlabhq_geo_production: {
        user: 'pgbouncer',
        password: 'PGBOUNCER_PASSWORD_HASH'
      }
    }
    
    # Consul configuration
    consul['watchers'] = %w(postgresql)
    
    consul['configuration'] = {
      retry_join: %w[CONSUL_TRACKINGDB1_IP CONSUL_TRACKINGDB2_IP CONSUL_TRACKINGDB3_IP]
    }
    
    consul['monitoring_service_discovery'] =  true
    
    # GitLab database settings
    gitlab_rails['db_database'] = 'gitlabhq_geo_production'
    gitlab_rails['db_username'] = 'gitlab_geo'
    
  3. Reconfigure GitLab for the changes to take effect:

    gitlab-ctl reconfigure
    
  4. Create a .pgpass file so Consul is able to reload PgBouncer. Enter the PLAIN_TEXT_PGBOUNCER_PASSWORD twice when asked:

    gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
    
  5. Restart the PgBouncer service:

    gitlab-ctl restart pgbouncer
    

Step 2. Configure a Patroni cluster

For each Patroni instance on the secondary site for the tracking database:

  1. SSH into your Patroni node and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following:

    # Disable all components except PostgreSQL, Patroni, and Consul
    roles(['patroni_role'])
    
    # Consul configuration
    consul['services'] = %w(postgresql)
    
    consul['configuration'] = {
      server: true,
      retry_join: %w[CONSUL_TRACKINGDB1_IP CONSUL_TRACKINGDB2_IP CONSUL_TRACKINGDB3_IP]
    }
    
    # PostgreSQL configuration
    postgresql['listen_address'] = '0.0.0.0'
    postgresql['hot_standby'] = 'on'
    postgresql['wal_level'] = 'replica'
    
    postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
    postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
    postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
    
    postgresql['md5_auth_cidr_addresses'] = [
       'PATRONI_TRACKINGDB1_IP/32', 'PATRONI_TRACKINGDB2_IP/32', 'PATRONI_TRACKINGDB3_IP/32', 'PATRONI_TRACKINGDB_PGBOUNCER/32',
       # Any other instance that needs access to the database as per documentation
    ]
    
    # Add patroni nodes to the allowlist
    patroni['allowlist'] = %w[
      127.0.0.1/32
      PATRONI_TRACKINGDB1_IP/32 PATRONI_TRACKINGDB2_IP/32 PATRONI_TRACKINGDB3_IP/32
    ]
    
    # Patroni configuration
    patroni['username'] = 'PATRONI_API_USERNAME'
    patroni['password'] = 'PATRONI_API_PASSWORD'
    patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
    patroni['postgresql']['max_wal_senders'] = 5 # A minimum of three for one replica, plus two for each additional replica
    
    # GitLab database settings
    gitlab_rails['db_database'] = 'gitlabhq_geo_production'
    gitlab_rails['db_username'] = 'gitlab_geo'
    gitlab_rails['enable'] = true
    
    # Disable automatic database migrations
    gitlab_rails['auto_migrate'] = false
    
  3. Reconfigure GitLab for the changes to take effect. This is required to bootstrap PostgreSQL users and settings:

    gitlab-ctl reconfigure
    

Step 3. Configure the tracking database on the secondary nodes

For each node running the gitlab-rails, sidekiq, and geo-logcursor services:

  1. SSH into your node and login as root:

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb and add the following attributes. You may have other attributes set, but the following must be set.

    # Tracking database settings
    geo_secondary['db_username'] = 'gitlab_geo'
    geo_secondary['db_password'] = 'PLAIN_TEXT_PGBOUNCER_PASSWORD'
    geo_secondary['db_database'] = 'gitlabhq_geo_production'
    geo_secondary['db_host'] = 'PATRONI_TRACKINGDB_PGBOUNCER_IP'
    geo_secondary['db_port'] = 6432
    geo_secondary['auto_migrate'] = false
    
    # Disable the tracking database service
    geo_postgresql['enable'] = false
    
  3. Reconfigure GitLab for the changes to take effect.

    gitlab-ctl reconfigure
    
  4. Run the tracking database migrations:

    gitlab-rake db:migrate:geo
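
    As an optional sanity check afterward, a sketch using the standard Geo health-check Rake task confirms the basic Geo configuration, including database connectivity:

    gitlab-rake gitlab:geo:check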
    

Migrating a single tracking database node to Patroni

Before the introduction of Patroni, Geo had no Omnibus support for HA setups on the secondary node.

With Patroni, it's now possible to support that. Due to restrictions in the Patroni implementation in Omnibus that prevent us from managing two different clusters on the same machine, we recommend setting up a new Patroni cluster for the tracking database by following the same instructions above.

The secondary nodes backfill the new tracking database, and no data synchronization is required.

Troubleshooting

Read the troubleshooting document.