---
type: reference, concepts
stage: Enablement
group: Alliances
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
{::options parse_block_html="true" /}
# Provision GitLab Cloud Native Hybrid on AWS EKS **(FREE SELF)**
GitLab "Cloud Native Hybrid" is a hybrid of the cloud native technology Kubernetes (EKS) and EC2. While as much of the GitLab application as possible runs in Kubernetes or on AWS services (PaaS), the GitLab service Gitaly must still run on EC2. Gitaly is a layer designed to overcome limitations of the Git binaries in a horizontally scaled architecture. You can read more about why Gitaly was built, and why the limitations of Git mean it must currently run on instance compute, in Git Characteristics That Make Horizontal Scaling Difficult.
Amazon provides a managed Kubernetes service offering known as Amazon Elastic Kubernetes Service (EKS).
Tested AWS Bills of Materials, by reference architecture size, are provided in the GitLab Cloud Native Hybrid on AWS section below.
## Available Infrastructure as Code for GitLab Cloud Native Hybrid
The AWS Quick Start for GitLab Cloud Native Hybrid on EKS is developed by AWS, GitLab, and the community that contributes to AWS Quick Starts, whether directly to the GitLab Quick Start or to the underlying Quick Start dependencies GitLab inherits (for example, EKS Quick Start).
NOTE: This automation is in Developer Preview. GitLab is working with AWS to resolve the outstanding issues before it is fully released.
The GitLab Environment Toolkit (GET) is an effort made by GitLab to create a multi-cloud, multi-GitLab (Omnibus + Cloud Native Hybrid) toolkit to provision GitLab. GET is developed by GitLab developers and is open to community contributions. It is helpful to review the GitLab Environment Toolkit (GET) Issues to understand if any of them may affect your provisioning plans.
| | AWS Quick Start for GitLab Cloud Native Hybrid on EKS | GitLab Environment Toolkit (GET) |
| ----------------------------------------------------------------- | --------------------------------------------------------------------------------- | ------------------------------------------- |
| GitLab Reference Architecture Compliant | Yes | Yes |
| GitLab Performance Tool (GPT) Tested | Yes | Yes |
| Amazon Well Architected Compliant | Yes (via Quick Start program) | Critical portions reviewed by AWS |
| Target Cloud Platforms | AWS | AWS, Google, Azure |
| IaC Languages | CloudFormation (Quick Starts) | Terraform, Ansible |
| Community Contributions and Participation (Ecosystem) | GitLab QSG: Getting Started. For QSG dependencies (for example, EKS): Substantial | Getting Started |
| Compatible with AWS Meta-Automation Services (via CloudFormation) | Service Catalog (Direct Import), Control Tower Quick Starts, SaaS Factory | No |
| Results in a Ready-to-Use instance | Yes | Manual Actions or Supplemental IaC Required |
| **Configuration Features** | | |
| Can deploy Omnibus GitLab (non-Kubernetes) | No | Yes |
| Complete Internal Encryption | 85%, Targeting 100% | Manual |
| AWS GovCloud Support | Yes | TBD |
## Streamlined Performance Testing of AWS Quick Start Prepared GitLab Instances
A set of performance testing instructions has been abbreviated for testing a GitLab instance prepared using the AWS Quick Start for GitLab Cloud Native Hybrid on EKS. They assume zero familiarity with the GitLab Performance Tool. They can be accessed here: Performance Testing an Instance Prepared using AWS Quick Start for GitLab Cloud Native Hybrid on EKS.
## AWS GovCloud Support for AWS Quick Start for GitLab CNH on EKS
The AWS Quick Start for GitLab Cloud Native Hybrid on EKS has been tested with GovCloud and works with the following restrictions and understandings.
- GovCloud does not have public Route 53 hosted zones, so you must set the following parameters (a launch sketch follows this list):

  | CloudFormation Quick Start form field | CloudFormation Parameter | Setting |
  | ------------------------------------------------ | -------------------- | ------- |
  | Create Route 53 hosted zone | CreatedHostedZone | No |
  | Request AWS Certificate Manager SSL certificate | CreateSslCertificate | No |

- The Quick Start creates public load balancer IPs so that you can easily configure your local hosts file to reach the GitLab GUI when deploying tests. However, you may need to manually alter this if public load balancers are not part of your provisioning plan. Making non-public load balancers a configuration option is planned (issue: Short Term: Documentation and/or Automation for private GitLab instance with no internet Ingress).
- As of 2021-08-19, AWS GovCloud has Graviton instances available for Aurora PostgreSQL, but not yet for ElastiCache Redis.
- It is challenging to get the Quick Start template to load in GovCloud from the standard Quick Start URL, so the generic ones are provided here:
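Once you have a working template URL, launching it with the GovCloud parameter settings from the table above can be sketched as follows in Python with boto3. The stack name and template URL are placeholders, not Quick Start values; substitute the generic template URL referenced above.

```python
import boto3

# GovCloud region; the CloudFormation API is the same as in commercial regions.
cfn = boto3.client("cloudformation", region_name="us-gov-west-1")

cfn.create_stack(
    StackName="gitlab-cnh-eks",  # hypothetical stack name
    TemplateURL="https://example-bucket.s3.amazonaws.com/gitlab-eks.template.yaml",  # placeholder
    Parameters=[
        # GovCloud has no public Route 53 hosted zones, so disable both features:
        {"ParameterKey": "CreatedHostedZone", "ParameterValue": "No"},
        {"ParameterKey": "CreateSslCertificate", "ParameterValue": "No"},
        # All other Quick Start parameters still require appropriate values.
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"],
)
```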
## AWS PaaS qualified for all GitLab implementations
For both Omnibus GitLab and Cloud Native Hybrid implementations, the following GitLab service roles can be performed by AWS services (PaaS). Any PaaS solutions that require preconfigured sizing based on the scale of your instance are also listed in the per-instance-size Bill of Materials lists. PaaS that do not require specific sizing are not repeated in the BOM lists (for example, AWS Certificate Manager).
These services have been tested with GitLab.
Some services, such as log aggregation and outbound email, are not specified by GitLab, but are noted here where provided.
| GitLab Services | AWS PaaS (Tested) | Provided by AWS Cloud Native Hybrid Quick Start |
| -------------------------------------------------------------------------- | ------------------------------ | ----------------------------------------------- |
| **Tested PaaS Mentioned in Reference Architectures** | | |
| PostgreSQL Database | Aurora RDS | Yes |
| Redis Caching | Redis ElastiCache | Yes |
| Gitaly Cluster (Git Repository Storage) (Including Praefect and PostgreSQL) | ASG and Instances | Yes - ASG and Instances. Note: Gitaly cannot be put into a Kubernetes Cluster. |
| All GitLab storages besides Git Repository Storage (includes Git LFS, which is S3 compatible) | AWS S3 | Yes |
| **Tested PaaS for Supplemental Services** | | |
| Front End Load Balancing | AWS ELB | Yes |
| Internal Load Balancing | AWS ELB | Yes |
| Outbound Email Services | AWS Simple Email Service (SES) | Yes |
| Certificate Authority and Management | AWS Certificate Manager (ACM) | Yes |
| DNS | AWS Route53 (tested) | Yes |
| GitLab and Infrastructure Log Aggregation | AWS CloudWatch Logs | Yes (ContainerInsights Agent for EKS) |
| Infrastructure Performance Metrics | AWS CloudWatch Metrics | Yes |
| **Supplemental Services and Configurations (Tested)** | | |
| Prometheus for GitLab | AWS EKS (Cloud Native Only) | Yes |
| Grafana for GitLab | AWS EKS (Cloud Native Only) | Yes |
| Administrative Access to GitLab Backend | Bastion Host in VPC | Yes - HA - Preconfigured for Cluster Management |
| Encryption (In Transit / At Rest) | AWS KMS | Yes |
| Secrets Storage for Provisioning | AWS Secrets Manager | Yes |
| Configuration Data for Provisioning | AWS Parameter Store | Yes |
| AutoScaling Kubernetes | EKS AutoScaling Agent | Yes |
## GitLab Cloud Native Hybrid on AWS
### 2K Cloud Native Hybrid on EKS
**2K Cloud Native Hybrid on EKS Bill of Materials (BOM)**

**GPT Test Results**

**Deploy Now**

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation's Deployment steps section.

- Deploy Now: AWS Quick Start for 2 AZs
- Deploy Now: AWS Quick Start for 3 AZs
NOTE: On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates; instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize them with your desired savings plan.
- **BOM Total**: the Bill of Materials total. This is what you use when building this configuration.
- **Ref Arch Raw Total**: the total if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
- **Idle Configuration (Scaled-In)**: can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
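As a minimal illustration of how the example hourly figures in these tables are derived (node count x On Demand hourly rate), here is a short Python sketch. The rates below are the example US East figures used on this page, not live prices; use the AWS Calculator for real budgeting, as noted above.

```python
# Example On Demand US East hourly rates as used in the tables on this page.
ON_DEMAND_RATE = {
    "c5.2xlarge": 0.34,       # 8 vCPU / 16 GB EKS node
    "db.r6g.large": 0.26,     # Aurora PostgreSQL node
    "cache.m6g.large": 0.15,  # ElastiCache Redis node
}

def hourly_cost(instance_type: str, nodes: int) -> float:
    """BOM hourly figure: node count x hourly rate."""
    return ON_DEMAND_RATE[instance_type] * nodes

print(f"${hourly_cost('c5.2xlarge', 3):.2f}/hr")    # 2K full scale: $1.02/hr
print(f"${hourly_cost('c5.2xlarge', 2):.2f}/hr")    # 2K idle: $0.68/hr
print(f"${hourly_cost('db.r6g.large', 3):.2f}/hr")  # 2K Aurora: $0.78/hr
```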
| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| ------------------------------------------------------------------------------------------- | --------------- | ---------------------------------------------------------- | --------- |
| Webservice | 12 vCPU, 16 GB | | |
| Sidekiq | 2 vCPU, 8 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 vCPU, 8 GB | | |
| **GitLab Ref Arch Raw Total K8s Node Capacity** | 16 vCPU, 32 GB | | |
| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 8 vCPU, 16 GB | | |
| **Grand Total w/ Overheads**<br/>Minimum hosts = 3 | 24 vCPU, 48 GB | **c5.2xlarge** (8 vCPU/16 GB) x 3 nodes<br/>24 vCPU, 48 GB | $1.02/hr |
| **Idle Configuration (Scaled-In)** | 16 vCPU, 32 GB | c5.2xlarge x 2 | $0.68/hr |
NOTE: If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
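To reach the scaled-in idle configuration above, the node group's scaling bounds must allow it. A minimal sketch, assuming an EKS managed node group (the cluster and node group names are hypothetical):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Allow the cluster autoscaler to scale in to the 2-node idle configuration
# while permitting the 3-node full-scale configuration under load.
eks.update_nodegroup_config(
    clusterName="gitlab-cnh",        # hypothetical cluster name
    nodegroupName="gitlab-workers",  # hypothetical node group name
    scalingConfig={
        "minSize": 2,       # idle: c5.2xlarge x 2
        "maxSize": 3,       # full scale: c5.2xlarge x 3
        "desiredSize": 2,
    },
)
```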
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (Directly Usable in AWS Quick Start) | Example Cost US East, 3 AZ | Example Cost US East, 2 AZ |
| ---------------------- | ------------------ | -------------------------------------------- | -------------------------- | -------------------------- |
| **Bastion Host (Quick Start)** | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for perf. testing | | |
| **PostgreSQL** (AWS Aurora RDS Nodes Configuration, GPT tested) | 2 vCPU, 7.5 GB<br/>Tested with Graviton ARM | db.r6g.large x 3 nodes<br/>(6 vCPU, 48 GB) | 3 nodes x $0.26 = $0.78/hr | 3 nodes x $0.26 = $0.78/hr (Aurora is always 3) |
| **Redis** | 1 vCPU, 3.75 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m6g.large x 3 nodes<br/>(6 vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
| **Gitaly Cluster** Details | Gitaly & Praefect must have an uneven node count for HA | | | |
| Gitaly Instances (in ASG) | 12 vCPU, 45 GB (across 3 nodes) | m5.xlarge x 3 nodes<br/>(12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | $0.192 x 3 = $0.58/hr |
| The GitLab Reference Architecture for 2K is not highly available and therefore has a single Gitaly and no Praefect. AWS Quick Starts must be HA, so the Quick Start implements Praefect from the 3K Reference Architecture to meet that requirement. | | | | |
| Praefect (Instances in ASG with load balancer) | 6 vCPU, 10 GB (across 3 nodes) | c5.large x 3 nodes<br/>(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | $0.09 x 3 = $0.21/hr |
| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | N/A - Reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
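For readers unfamiliar with why "Aurora is always 3": the cluster consists of one writer and, here, two readers, and Aurora promotes a reader if the writer fails. A minimal boto3 sketch of such a cluster follows, with placeholder identifiers and credentials; the Quick Start provisions this for you via CloudFormation, so this is illustrative only.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cluster-level definition (engine, shared storage, credentials).
rds.create_db_cluster(
    DBClusterIdentifier="gitlab-postgres",       # hypothetical identifier
    Engine="aurora-postgresql",
    MasterUsername="gitlab",                     # placeholder
    MasterUserPassword="change-me-immediately",  # placeholder
)

# Three db.r6g.large instances: one becomes the writer, two become readers.
for i in range(3):
    rds.create_db_instance(
        DBInstanceIdentifier=f"gitlab-postgres-{i}",
        DBClusterIdentifier="gitlab-postgres",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-postgresql",
    )
```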
### 3K Cloud Native Hybrid on EKS
**3K Cloud Native Hybrid on EKS Bill of Materials (BOM)**

**GPT Test Results**

**Deploy Now**

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation's Deployment steps section.
NOTE: On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates; instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize them with your desired savings plan.
- **BOM Total**: the Bill of Materials total. This is what you use when building this configuration.
- **Ref Arch Raw Total**: the total if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
- **Idle Configuration (Scaled-In)**: can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| ------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------ | --------- |
| Webservice | 4 pods x (5 vCPU & 6.25 GB) = 20 vCPU, 25 GB | | |
| Sidekiq | 8 pods x (1 vCPU & 2 GB) = 8 vCPU, 16 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| **GitLab Ref Arch Raw Total K8s Node Capacity** | 32 vCPU, 56 GB | | |
| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads**<br/>Full Scale Minimum hosts = 3 | 48 vCPU, 88 GB | **c5.2xlarge** (8 vCPU/16 GB) x 5 nodes<br/>40 vCPU, 80 GB<br/>Full Scale GPT Test Results | $1.70/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**<br/>Pod autoscaling must also be adjusted to enable a lower idling configuration. | 24 vCPU, 48 GB | c5.2xlarge x 4 | $1.36/hr |
Other combinations of node type and quantity can be used to meet the Grand Total, as sketched below. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
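A minimal sketch of the aggregate sizing arithmetic behind "other combinations": for a candidate node type, take the larger of the vCPU and memory requirements. This ignores per-pod bin packing, and the published BOMs are GPT-tested, so they can land slightly under this ceiling (the 3K BOM uses 5 c5.2xlarge nodes where this arithmetic suggests 6).

```python
import math

def nodes_needed(total_vcpu: int, total_gb: int, node_vcpu: int, node_gb: int) -> int:
    """Nodes required to cover both the vCPU and memory Grand Totals."""
    return max(math.ceil(total_vcpu / node_vcpu), math.ceil(total_gb / node_gb))

# 2K Grand Total: 24 vCPU, 48 GB on c5.2xlarge (8 vCPU / 16 GB) -> 3 nodes
print(nodes_needed(24, 48, 8, 16))
# 10K Grand Total: 142 vCPU, 190 GB on c5.4xlarge (16 vCPU / 32 GB) -> 9 nodes
print(nodes_needed(142, 190, 16, 32))
```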
NOTE: If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (Directly Usable in AWS Quick Start) | Example Cost US East, 3 AZ | Example Cost US East, 2 AZ |
| ---------------------- | ------------------ | -------------------------------------------- | -------------------------- | -------------------------- |
| **Bastion Host (Quick Start)** | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for perf. testing | | |
| **PostgreSQL** (AWS Aurora RDS Nodes Configuration, GPT tested) | 18 vCPU, 36 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul)<br/>Tested with Graviton ARM | db.r6g.xlarge x 3 nodes<br/>(12 vCPU, 96 GB) | 3 nodes x $0.52 = $1.56/hr | 3 nodes x $0.52 = $1.56/hr (Aurora is always 3) |
| **Redis** | 6 vCPU, 18 GB (across 6 nodes for Redis Cache, Sentinel) | cache.m6g.large x 3 nodes<br/>(6 vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
| **Gitaly Cluster** Details | | | | |
| Gitaly Instances (in ASG) | 12 vCPU, 45 GB (across 3 nodes) | m5.xlarge x 3 nodes<br/>(12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes<br/>(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | N/A - Reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
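The "uneven node count" requirement for Gitaly Cluster and Praefect comes from majority-based failover elections: a strict majority of nodes must stay reachable, and adding a fourth node raises the quorum size without tolerating any additional failures. A small illustration of that general quorum arithmetic (not Praefect-specific code):

```python
def quorum(nodes: int) -> int:
    """Strict majority required to elect a leader."""
    return nodes // 2 + 1

for n in (2, 3, 4, 5):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {n - quorum(n)} failure(s)")
# 2 nodes: quorum 2, tolerates 0 failure(s)
# 3 nodes: quorum 2, tolerates 1 failure(s)
# 4 nodes: quorum 3, tolerates 1 failure(s)  <- extra cost, no extra resilience
# 5 nodes: quorum 3, tolerates 2 failure(s)
```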
### 5K Cloud Native Hybrid on EKS
**5K Cloud Native Hybrid on EKS Bill of Materials (BOM)**

**GPT Test Results**

**Deploy Now**

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation's Deployment steps section.
NOTE: On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates; instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize them with your desired savings plan.
- **BOM Total**: the Bill of Materials total. This is what you use when building this configuration.
- **Ref Arch Raw Total**: the total if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
- **Idle Configuration (Scaled-In)**: can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost (On Demand, US East) |
| -------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | -------------------------------------------------------------------------------------------- | --------- |
| Webservice | 10 pods x (5 vCPU & 6.25 GB) = 50 vCPU, 62.5 GB | | |
| Sidekiq | 8 pods x (1 vCPU & 2 GB) = 8 vCPU, 16 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| **GitLab Ref Arch Raw Total K8s Node Capacity** | 62 vCPU, 96.5 GB | | |
| One Node for Quick Start Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 8 vCPU, 16 GB | | |
| **Grand Total w/ Overheads**<br/>Full Scale Minimum hosts = 3 | 70 vCPU, 112.5 GB | **c5.2xlarge** (8 vCPU/16 GB) x 9 nodes<br/>72 vCPU, 144 GB<br/>Full Scale GPT Test Results | $2.38/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**<br/>Pod autoscaling must also be adjusted to enable a lower idling configuration. | 24 vCPU, 48 GB | c5.2xlarge x 7 | $1.85/hr |
Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
NOTE: If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM (Directly Usable in AWS Quick Start) | Example Cost US East, 3 AZ | Example Cost US East, 2 AZ |
| ---------------------- | ------------------ | -------------------------------------------- | -------------------------- | -------------------------- |
| **Bastion Host (Quick Start)** | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for perf. testing | | |
| **PostgreSQL** (AWS Aurora RDS Nodes Configuration, GPT tested) | 21 vCPU, 51 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul)<br/>Tested with Graviton ARM | db.r6g.2xlarge x 3 nodes<br/>(24 vCPU, 192 GB) | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr (Aurora is always 3) |
| **Redis** | 9 vCPU, 27 GB (across 6 nodes for Redis, Sentinel) | cache.m6g.xlarge x 3 nodes<br/>(12 vCPU, 39 GB) | 3 nodes x $0.30 = $0.90/hr | 2 nodes x $0.30 = $0.60/hr |
| **Gitaly Cluster** Details | | | | |
| Gitaly Instances (in ASG) | 24 vCPU, 90 GB (across 3 nodes) | m5.2xlarge x 3 nodes<br/>(24 vCPU, 96 GB) | $0.384 x 3 = $1.15/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes<br/>(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | N/A - Reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
### 10K Cloud Native Hybrid on EKS
**10K Cloud Native Hybrid on EKS Bill of Materials (BOM)**

**GPT Test Results**

**Deploy Now**

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation's Deployment steps section.
NOTE: On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates; instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize them with your desired savings plan.
- **BOM Total**: the Bill of Materials total. This is what you use when building this configuration.
- **Ref Arch Raw Total**: the total if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
- **Idle Configuration (Scaled-In)**: can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
| Service | Ref Arch Raw (Full Scaled) | AWS BOM (Directly Usable in AWS Quick Start) | Example Full Scaled Cost (On Demand, US East) |
| ------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ----------------------------------------------------------------------------------------------- | --------- |
| Webservice | 20 pods x (5 vCPU & 6.25 GB) = 100 vCPU, 125 GB | | |
| Sidekiq | 14 pods x (1 vCPU & 2 GB) = 14 vCPU, 28 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| **GitLab Ref Arch Raw Total K8s Node Capacity** | 128 vCPU, 158 GB | | |
| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads**<br/>Fully Scaled Minimum hosts = 3 | 142 vCPU, 190 GB | **c5.4xlarge** (16 vCPU/32 GB) x 9 nodes<br/>144 vCPU, 288 GB<br/>Full Scale GPT Test Results | $6.12/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**<br/>Pod autoscaling must also be adjusted to enable a lower idling configuration. | 40 vCPU, 80 GB | c5.4xlarge x 7<br/>AutoScale GPT Test Results | $4.76/hr |
Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
NOTE: If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost US East, 3 AZ | Example Cost US East, 2 AZ |
| ---------------------- | ------------------ | ------- | -------------------------- | -------------------------- |
| **Bastion Host (Quick Start)** | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for perf. testing | | |
| **PostgreSQL** (AWS Aurora RDS Nodes Configuration, GPT tested) | 36 vCPU, 102 GB (across 9 nodes for PostgreSQL, PgBouncer, Consul) | db.r6g.2xlarge x 3 nodes<br/>(24 vCPU, 192 GB) | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr (Aurora is always 3) |
| **Redis** | 30 vCPU, 114 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m5.2xlarge x 3 nodes<br/>(24 vCPU, 78 GB) | 3 nodes x $0.62 = $1.86/hr | 2 nodes x $0.62 = $1.24/hr |
| **Gitaly Cluster** Details | | | | |
| Gitaly Instances (in ASG) | 48 vCPU, 180 GB (across 3 nodes) | m5.4xlarge x 3 nodes<br/>(48 vCPU, 192 GB) | $0.77 x 3 = $2.31/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB (across 3 nodes) | c5.large x 3 nodes<br/>(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB (across 3 nodes) | N/A - Reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
### 50K Cloud Native Hybrid on EKS
**50K Cloud Native Hybrid on EKS Bill of Materials (BOM)**

**GPT Test Results**

**Deploy Now**

Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the Quick Start documentation's Deployment steps section.
NOTE: On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates; instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize them with your desired savings plan.
- **BOM Total**: the Bill of Materials total. This is what you use when building this configuration.
- **Ref Arch Raw Total**: the total if the configuration were built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
- **Idle Configuration (Scaled-In)**: can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
| Service | Ref Arch Raw (Full Scaled) | AWS BOM (Directly Usable in AWS Quick Start) | Example Full Scaled Cost (On Demand, US East) |
| ------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------ | ---------- |
| Webservice | 80 pods x (5 vCPU & 6.25 GB) = 400 vCPU, 500 GB | | |
| Sidekiq | 14 pods x (1 vCPU & 2 GB) = 14 vCPU, 28 GB | | |
| Supporting services such as NGINX, Prometheus, etc. | 2 allocations x (2 vCPU and 7.5 GB) = 4 vCPU, 15 GB | | |
| **GitLab Ref Arch Raw Total K8s Node Capacity** | 428 vCPU, 533 GB | | |
| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc.) | + 16 vCPU, 32 GB | | |
| **Grand Total w/ Overheads**<br/>Fully Scaled Minimum hosts = 3 | 444 vCPU, 565 GB | **c5.4xlarge** (16 vCPU/32 GB) x 28 nodes<br/>448 vCPU, 896 GB<br/>Full Scale GPT Test Results | $19.04/hr |
| **Possible Idle Configuration (Scaled-In 75% - round up)**<br/>Pod autoscaling must also be adjusted to enable a lower idling configuration. | 40 vCPU, 80 GB | c5.4xlarge x 10<br/>AutoScale GPT Test Results | $6.80/hr |
Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
NOTE: If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost US East, 3 AZ | Example Cost US East, 2 AZ |
| ---------------------- | ------------------ | ------- | -------------------------- | -------------------------- |
| **Bastion Host (Quick Start)** | 1 HA instance in ASG | t2.micro for prod, m4.2xlarge for perf. testing | | |
| **PostgreSQL** (AWS Aurora RDS Nodes Configuration, GPT tested) | 96 vCPU, 360 GB (across 3 nodes) | db.r6g.8xlarge x 3 nodes<br/>(96 vCPU, 768 GB total) | 3 nodes x $4.15 = $12.45/hr | 3 nodes x $4.15 = $12.45/hr (Aurora is always 3) |
| **Redis** | 30 vCPU, 114 GB (across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | cache.m6g.2xlarge x 3 nodes<br/>(24 vCPU, 78 GB total) | 3 nodes x $0.60 = $1.80/hr | 2 nodes x $0.60 = $1.20/hr |
| **Gitaly Cluster** Details | | | | |
| Gitaly Instances (in ASG) | 64 vCPU, 240 GB x 3 nodes | m5.16xlarge x 3 nodes<br/>(64 vCPU, 256 GB each) | $3.07 x 3 = $9.21/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect (Instances in ASG with load balancer) | 4 vCPU, 3.6 GB x 3 nodes | c5.xlarge x 3 nodes<br/>(4 vCPU, 8 GB each) | $0.17 x 3 = $0.51/hr | Gitaly & Praefect must have an uneven node count for HA |
| Praefect PostgreSQL(1) (AWS RDS) | 2 vCPU, 1.8 GB x 3 nodes | N/A - Reuses GitLab PostgreSQL | $0 | $0 |
| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |