Setting up a Private CICD Solution in Azure
Table of Contents
- 1 Introduction
- 2 Solution Overview
- 3 Main Components
- 4 Creating the CICD Solution
- 4.1 Create Resource Group
- 4.2 Create Virtual Network
- 4.3 Create VPN Gateway
- 4.4 Setup VPN Client
- 4.5 Create DMZ Network Security Group
- 4.6 Create Private Network #1 Security Group
- 4.7 Create Private Network #2 Security Group
- 4.8 Create Private VM Network Security Group
- 4.9 Create Private NAS Network Security Group
- 4.10 Create Ubuntu VMs
- 4.11 Setup Docker CE on Each VM
- 4.12 Setup Docker Swarm Cluster
- 4.13 Create Encrypted Overlay Network for Swarm Services
- 4.14 Setup SoftNAS VMs – Part 1
- 4.15 Setup SoftNAS VMs – Part 2 (With Replication and HA)
- 4.16 Setup NFS Mount Point on Each VM
- 4.17 Setup Private Docker Image Registry
- 4.18 Setup Private Primary DNS
- 4.19 Setup Private Secondary DNS
- 4.20 Change Ubuntu VMs to Use Private DNS Servers
- 4.21 Add Additional Encrypted Storage for CICD Builds
- 4.22 Install Test Container on Each VM
- 4.23 Setup Public Load Balancer
- 4.24 Setup Private Load Balancer
- 4.25 Setup Portainer for Docker Swarm Management
- 4.26 Setup Jenkins and Ephemeral Build Slaves
- 4.27 Setup GitLab
- 4.28 Setup Jenkins to Know About GitLab
- 4.29 Add Private Docker Registry Credentials to Jenkins
- 4.30 Setup Nexus
- 5 Maintaining the CICD Solution
- 6 Using the CICD Solution
- 7 Conclusion
- 8 Related
Introduction
Recently, I was tasked with creating a private CICD solution in the cloud. As more companies adopt DevOps, rapid iteration, Agile, and Lean Startup principles, having a versatile CICD solution that covers the whole gamut of software development is extremely important. One of my constraints was to create this solution in the cloud; I chose Azure for its simplicity and power. In this post, you'll see how to create an entire CICD solution in Azure that is private to the outside world, and how Azure makes building it a snap.
Keep in mind that the exact steps in this article may change as Microsoft updates Azure. However, the concepts within should be easily adaptable to any UI changes Microsoft may make.
Solution Overview
The big picture of this solution encompasses various services, networks, network security groups, load balancers, VMs, etc. The main CICD magic comes from the combination of GitLab, Jenkins, and Ephemeral Build Slaves. All of this is configured to support a clustered HA environment, which is important to any enterprise.
Keep in mind that, in order to retain full control, all VMs are created manually and all software is explicitly installed and configured. This is certainly more work than using native services but comes with the benefit of ultimate control. Feel free to adapt these instructions to use native services if so desired.

Main Components
Note About Names: I have prefixed everything with spacely. You should replace that with a name that is significant to you. Feel free to change the naming scheme entirely; however, keep in mind that it's important to have a good naming scheme so resources can be found much more easily.
Note About Locations: Feel free to use a different location for your services other than South Central US.
The Bottom Line: Feel free to customize your solution any way you want; you aren't bound by the instructions contained within. Use a different naming convention, region, subscription, etc. Whatever makes sense for you to customize, please do so.
Creating the CICD Solution
With all the high-level overview details out of the way, it’s time to start creating everything in Azure. I’d recommend setting aside a few hours of your time to do this and remember that you can always take breaks. Be sure to have an Azure account created with an active subscription before proceeding.
Also, where NGINX is used the example.com domain will need to be changed in its configuration files (e.g. nginx.conf, jenkins.conf, nexus.conf, etc.).
In addition, for all names used which reference **spacely** or **Spacely**, replace it with the name of your choice (e.g. MyCompany).
Create Resource Group
- In the Azure Portal, on the left side, click Resource Groups.
- Click the Create Resource Group button to add a new resource group. Fill out the required fields by taking inspiration from the below examples.
Resource Group Name: Spacely-Engineering-US-South-Central
Subscription: Visual Studio Enterprise
Resource Group Location: South Central US
FYI: Resource groups should be organized by business domain and Azure region.
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
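If you prefer scripting over the portal, the same resource group can be created with the Azure CLI. This is only a minimal sketch assuming the az CLI is installed and logged in; the names simply mirror the portal examples above.
# Create the resource group from the CLI (adjust the name and location to your own scheme)
az group create --name Spacely-Engineering-US-South-Central --location southcentralus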
Create Virtual Network
- In the Azure Portal, on the left side, click the plus to create a new service. Under the Networking section, select Virtual Network and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-VNet
Address Space: 10.0.0.0/20
Subnet Name: Spacely-Engineering-Private-Network-001
Subnet Address Range: 10.0.0.0/24
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section click on Address Space.
- Add the address ranges below and then click on the Save button.
10.0.255.224/27
10.0.250.0/24
- Under the Settings section, click on Subnets. Click the plus button with label Gateway Subnet to add the gateway subnet.
- Under the Address Range (CIDR Block) field, enter
10.0.255.224/27
and then click the Save button.
- Click the plus button with label Subnet to add a new subnet. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-DMZ-001
Address Range (CIDR Block): 10.0.250.0/24
- Click the OK button to add the subnet.
- After adding the subnet, click the plus button with label Subnet to add a new subnet. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Network-002
Address Range (CIDR Block): 10.0.1.0/24
- Click the OK button to add the subnet.
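The Virtual Network and its subnets can also be scripted. The sketch below assumes the Azure CLI and mirrors the address spaces above; exact flag names vary slightly between CLI versions, so treat it as a starting point rather than a definitive script.
# Create the VNet with its full address space and the first private subnet
az network vnet create \
  --resource-group Spacely-Engineering-US-South-Central \
  --name Spacely-Engineering-VNet \
  --address-prefixes 10.0.0.0/20 10.0.255.224/27 10.0.250.0/24 \
  --subnet-name Spacely-Engineering-Private-Network-001 \
  --subnet-prefix 10.0.0.0/24

# Add the gateway subnet (the name GatewaySubnet is required by Azure)
az network vnet subnet create \
  --resource-group Spacely-Engineering-US-South-Central \
  --vnet-name Spacely-Engineering-VNet \
  --name GatewaySubnet \
  --address-prefix 10.0.255.224/27

# Repeat the subnet command for Spacely-Engineering-DMZ-001 (10.0.250.0/24)
# and Spacely-Engineering-Private-Network-002 (10.0.1.0/24)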
Create VPN Gateway
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Virtual Network Gateway and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Gateway
Gateway Type: VPN
VPN Type: Route-Based
SKU: VpnGw1
Virtual Network: Spacely-Engineering-VNet
Public IP Address: Spacely-Engineering-Private-Gateway-Public-IP
- Click on the plus button with label Create New. Under the Name field, enter Spacely-Engineering-Private-Gateway-Public-IP.
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
FYI: It can take upward of 45 minutes to create the VPN Gateway.
- After deployment succeeds, under the Settings section click on Point-to-Site Configuration. Under the Address Pool field, enter
172.16.0.0/24
.
- Follow this article on creating an Admin root certificate. Under the Name field, enter
Admin
and under the Public Certificate Data field, enter the certificate data (refer to the mentioned article).
Be sure to avoid pasting -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- or an error will occur.
FYI: It is recommended to create individual Root Certificates for each user so that they can be revoked individually. If this is too much work, a shared Root Certificate can be issued to everyone, but if a user should no longer have access, the shared Root Certificate would need to be revoked and a new one created and distributed to everyone who uses it.
- Click the Save button when finished.
- Under the Settings section of the VPN Gateway, click Properties. Record the Public IP Address which will be needed for the VPN Client to connect.
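If you would rather generate the root certificate with OpenSSL instead of the method in the referenced article, a rough sketch is shown below; the file names are only illustrative. The Base64 output (without the BEGIN/END lines) is what gets pasted into the Public Certificate Data field, and client certificates still need to be issued from this root and installed on each client machine as described in that article.
# Generate a self-signed root certificate (illustrative names; keep the key somewhere safe)
openssl req -x509 -newkey rsa:2048 -sha256 -days 1825 -nodes \
  -keyout AdminRootCA.key -out AdminRootCA.pem -subj "/CN=AdminRootCA"

# Print the Base64-encoded public certificate data for the Azure portal field
openssl x509 -in AdminRootCA.pem -outform der | base64 -w0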
Setup VPN Client
- Select the created Virtual Network Gateway and return to the Point-to-Site Configuration section and then click the Download VPN Client button.
- Execute the installer to create the VPN connection on the local Windows machine.
FYI: The Windows operating systems below are currently supported.
- Windows 7 (32-bit and 64-bit)
- Windows Server 2008 R2 (64-bit only)
- Windows 8 (32-bit and 64-bit)
- Windows 8.1 (32-bit and 64-bit)
- Windows Server 2012 (64-bit only)
- Windows Server 2012 R2 (64-bit only)
- Windows 10
- Locate the VPN connection settings depending on OS (in Windows 10, clicking on the network icon in the system tray and then selecting Network and Internet Settings will allow changing the VPN settings).
Find the VPN name (e.g. Spacely-Engineering-VNet) and enter the VPN Gateway public IP for the server address. For the authentication method, change it from Username and Password to Certificate.
Be sure to Save when finished.
FYI: The VPN client will not work properly if the client certificate wasn't properly imported. See this article for more details.
- Connect to the VPN Gateway by clicking the Connect button and accept any security prompts. Private network resources should now be remotely accessible.
Create DMZ Network Security Group
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Network Security Group and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-DMZ-001-NSG
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section, click Inbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 172.16.0.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.250.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-DMZ-001-Allow-Inbound-VPN-Clients
- Click the OK button to add the Inbound Security Rule.
FYI: The above rule explicitly allows VPN clients to access the DMZ and pass through to the Private Network. This isn't strictly required since VPN clients already have access to the entire Virtual Network, but the rule is added to be very clear and specific about intentions. It is also a good idea in case any of the default rules are ever changed by Microsoft.
Also, there is no reason to create a rule to prevent internet traffic from hitting the DMZ since there is already a default rule achieving this. If a use case requires internet traffic to reach the DMZ, all that is needed is a rule allowing that traffic with a higher priority so it overrides the default DenyAllInBound rule.
- After saving the Inbound Security Rule, under the Settings section, click Outbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.250.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-DMZ-001-Allow-Outbound-Private-Network-001
- Click the OK button to add the Outbound Security Rule.
- After saving the Outbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.250.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 200
Name: Spacely-Engineering-DMZ-001-Allow-Outbound-Private-Network-002
- Click the OK button to add the Outbound Security Rule.
- After saving the Outbound Security Rule, under the Settings section, click Subnets.
- Click the plus button with the label Associate. Select the Virtual Network Spacely-Engineering-VNet and the subnet Spacely-Engineering-DMZ-001.
- Click the OK button to associate the Network Security Group with the DMZ subnet.
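For reference, the same NSG, its first inbound rule, and the subnet association can also be scripted with the Azure CLI. This is only a sketch that mirrors the portal values above; repeat the rule command with the appropriate values for the outbound rules and the other NSGs.
az network nsg create \
  --resource-group Spacely-Engineering-US-South-Central \
  --name Spacely-Engineering-DMZ-001-NSG

az network nsg rule create \
  --resource-group Spacely-Engineering-US-South-Central \
  --nsg-name Spacely-Engineering-DMZ-001-NSG \
  --name Spacely-Engineering-DMZ-001-Allow-Inbound-VPN-Clients \
  --direction Inbound --access Allow --protocol '*' --priority 100 \
  --source-address-prefixes 172.16.0.0/24 --source-port-ranges '*' \
  --destination-address-prefixes 10.0.250.0/24 --destination-port-ranges '*'

# Associate the NSG with the DMZ subnet
az network vnet subnet update \
  --resource-group Spacely-Engineering-US-South-Central \
  --vnet-name Spacely-Engineering-VNet \
  --name Spacely-Engineering-DMZ-001 \
  --network-security-group Spacely-Engineering-DMZ-001-NSG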
Create Private Network #1 Security Group
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Network Security Group and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Network-001-NSG
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section, click Inbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.250.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-Private-Network-001-Allow-Inbound-DMZ-001
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, under the Settings section, click Outbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.0.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-Private-Network-001-Allow-Outbound-Private-Network-002
- Click the OK button to add the Outbound Security Rule.
- After saving the Outbound Security Rule, under the Settings section, click Subnets.
- Click the plus button with the label Associate. Select the Virtual Network Spacely-Engineering-VNet and the subnet Spacely-Engineering-Private-Network-001.
- Click the OK button to associate the Network Security Group with the Private Network #1 subnet.
Create Private Network #2 Security Group
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Network Security Group and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Network-002-NSG
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section, click Inbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.250.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-Private-Network-002-Allow-Inbound-DMZ-001
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.0.0/24
Source Port Range: *
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: *
Protocol: Any
Action: Allow
Priority: 200
Name: Spacely-Engineering-Private-Network-002-Allow-Inbound-Private-Network-001
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, under the Settings section, click Subnets.
- Click the plus button with the label Associate. Select the Virtual Network Spacely-Engineering-VNet and the subnet Spacely-Engineering-Private-Network-002.
- Click the OK button to associate the Network Security Group with the Private Network #2 subnet.
Create Private VM Network Security Group
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Network Security Group and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-VM-001-NSG
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section, click Inbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.0.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 200
Name: Spacely-Engineering-Private-Swarm-001-Allow-Inbound-Private-Network-001-SSH
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.1.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 300
Name: Spacely-Engineering-Private-Swarm-001-Allow-Inbound-Private-Network-002-SSH
- Click the OK button to add the Inbound Security Rule.
Create Private NAS Network Security Group
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Network Security Group and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-NAS-001-NSG
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After deployment succeeds, under the Settings section, click Inbound Security Rules.
- Click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.0.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 100
Name: Spacely-Engineering-Private-NAS-001-Allow-Inbound-Private-Network-001-SSH-001
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.0.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 200
Name: Spacely-Engineering-Private-NAS-001-Allow-Inbound-Private-Network-001-SSH-002
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.1.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.0.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 300
Name: Spacely-Engineering-Private-NAS-001-Allow-Inbound-Private-Network-002-SSH-001
- Click the OK button to add the Inbound Security Rule.
- After saving the Inbound Security Rule, click the plus button with the label Add. Click on Advanced and fill out the required fields by taking inspiration from the below examples.
Source: Address Range (CIDR Block)
Source IP Address Range: 10.0.1.0/24
Source Port Range: 22
Destination: Address Range (CIDR Block)
Destination IP Address Range: 10.0.1.0/24
Destination Port Range: 22
Protocol: Any
Action: Allow
Priority: 400
Name: Spacely-Engineering-Private-NAS-001-Allow-Inbound-Private-Network-002-SSH-002
- Click the OK button to add the Inbound Security Rule.
Create Ubuntu VMs
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Compute section, select Ubuntu Server 16.04 LTS and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-VM-001
VM Disk Type: SSD
Username: spacely-eng-admin
Authentication Type: SSH Public Key
SSH Public Key: redacted – see this article for creating public and private SSH keys
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- Click the OK button to proceed to the next step.
- Choose DS2_V2 Standard for the VM size and click the Select button.
- For Availability Set, select Create New and give it a name (e.g. Spacely-Engineering-VM-001-HAS). For Fault Domains, enter 2. For Update Domains, enter 5. Finally, for Use Managed Disks select Yes and then click the OK button to proceed to the next step.
- Fill out the rest of the required fields by taking inspiration from the below examples.
Storage: Use Managed Disks – Yes
Virtual Network: Spacely-Engineering-VNet
Subnet: Spacely-Engineering-Private-Network-001
Public IP Address: none
Network Security Group: Spacely-Engineering-Private-VM-001-NSG
Extensions: No Extensions
Auto-Shutdown: Off
Monitoring: Boot Diagnostics – Enabled
Guest OS Diagnostics: Enabled
For Diagnostics Storage Account, select Create New and provide the following information below.
Name: spacelyengmainstorage001
Performance: Standard
Replication: Locally-redundant storage (LRS)
Click OK to save the storage account details.
- Click the OK button to proceed to the next step.
- Click the Purchase button to provision the VM.
- Repeat the same steps above to set up at least 4 more VMs (a total of 5 nodes). Feel free to add more if desired. Be sure to increment the node number for the VM name and choose the exact same settings as before, but this time use the newly created Availability Set and the Storage Account for diagnostics.
FYI: The Docker Swarm cluster will consist of 5 nodes. Two of those nodes will be used by Jenkins for CICD Docker builds, etc.
- Pin the storage account to the dashboard by selecting the storage account from all resources, clicking on the Files link under the File Service section, and then clicking on the thumb tack image next to the X in the upper right corner.
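If you want to script VM creation instead of clicking through the portal, something along these lines should work with the Azure CLI. It is a sketch only: the image URN and flag names can differ by CLI version, and the SSH key path is an assumption; adjust all names to your own scheme.
# Create the availability set once, then create each VM into it
az vm availability-set create \
  --resource-group Spacely-Engineering-US-South-Central \
  --name Spacely-Engineering-VM-001-HAS \
  --platform-fault-domain-count 2 --platform-update-domain-count 5

az vm create \
  --resource-group Spacely-Engineering-US-South-Central \
  --name Spacely-Engineering-VM-001 \
  --image Canonical:UbuntuServer:16.04-LTS:latest \
  --size Standard_DS2_v2 \
  --admin-username spacely-eng-admin \
  --ssh-key-value ~/.ssh/id_rsa.pub \
  --availability-set Spacely-Engineering-VM-001-HAS \
  --vnet-name Spacely-Engineering-VNet \
  --subnet Spacely-Engineering-Private-Network-001 \
  --nsg Spacely-Engineering-Private-VM-001-NSG \
  --public-ip-address ""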
Setup Docker CE on Each VM
- SSH into the first VM as the admin account.
- Run the command
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
to add Docker’s official GPG key. - Verify that the fingerprint is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88 by running the command
sudo apt-key fingerprint 0EBFCD88
. - Add the stable repository by running the following command:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- Run the command
sudo apt-get update
. - Install the latest version of Docker CE by running the command
sudo apt-get install docker-ce
. - Secure the Docker Daemon by creating a folder named .docker in the home directory by running the following commands below.
cd ~
sudo mkdir .docker
cd .docker
- Run the command
sudo openssl genrsa -aes256 -out ca-key.pem 4096
. Enter a passphrase and store that in a secure location (e.g. KeePass). - Run the command
sudo openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
and enter the password created in the previous step. Fill out the rest of the required fields by taking inspiration from the below examples.
Country Name: US
State or Province Name: Texas
Locality Name: Austin
Organization Name: Spacely Space Sprockets
Organizational Unit Name: Engineering
Common Name: Spacely-Engineering-VM-001
Email Address: spacely@spacely-space-sprockets.com - Run the command
sudo openssl genrsa -out server-key.pem 4096
. - Run the command
sudo openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
. Replace $HOST with the hostname of the VM (e.g. Spacely-Engineering-VM-001). - Run the command
sudo su
then the command
echo subjectAltName = DNS:$HOST,IP:10.0.0.4,IP:127.0.0.1 > extfile.cnf
. Replace $HOST with the hostname of the VM (e.g. Spacely-Engineering-VM-001). Replace the IP 10.0.0.4 with the IP of the VM. When done, run the commandexit
. - Run the command
sudo openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
. Enter the password that was created earlier. - Run the command
sudo openssl genrsa -out key.pem 4096
. - Run the command
sudo openssl req -subj '/CN=client' -new -key key.pem -out client.csr
. - Run the command
sudo su
then the command
echo extendedKeyUsage = clientAuth > extfile.cnf
. Now, run the commandexit
. - Run the command
sudo openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile.cnf
. Enter the password that was created earlier. - Run the command
sudo rm -v client.csr server.csr
. - Run the command
sudo mv ca-key.pem ca-key.bak
and then the commandsudo mv key.pem key.bak
. - Run the command
sudo su
and thenopenssl rsa -in ca-key.bak -text > ca-key.pem
. Enter the password that was created earlier. - Run the command
openssl rsa -in key.bak -text > key.pem
and then the commandexit
. - Run the command
sudo rm ca-key.bak key.bak extfile.cnf
. - Run the command
sudo chmod -v 0400 ca-key.pem key.pem server-key.pem
. - Run the command
sudo chmod -v 0444 ca.pem server-cert.pem cert.pem
. - Run the command
sudo vim /etc/docker/daemon.json
and then press i to enter Insert Mode. - Paste the following into the file (e.g. right click if using Putty) as shown below.
1234567{"tls": true,"tlsverify": true,"tlscacert": "/home/spacely-eng-admin/.docker/ca.pem","tlscert": "/home/spacely-eng-admin/.docker/server-cert.pem","tlskey": "/home/spacely-eng-admin/.docker/server-key.pem"}
Press ESC then : (colon) and wq, then press enter to save and exit.
- Run the command
sudo vim /lib/systemd/system/docker.service
and then press i to enter Insert Mode. Scroll down by pressing the down arrow and find the line ExecStart=/usr/bin/dockerd -H fd://.
- Remove -H fd:// from the line and then press ESC then : (colon) and wq, then press enter to save and exit.
- Run the following command
sudo su
and then the commandcd /etc/systemd/system
. Run the commandmkdir docker.service.d
. - Run the command
cd docker.service.d
and thenvim docker.conf
. Press i to enter Insert Mode and paste the following into the file (e.g. right click if using Putty) as shown below.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376
Press ESC then : (colon) and wq, then press enter to save and exit.
- Run the command
vim /etc/hosts
. Press i to enter insert mode and then add a new line under the first line (the first entry is for localhost). The new line should be as shown below.
127.0.0.1 Spacely-Engineering-VM-001
Be sure to change the IP address and host name to match the VM. Press ESC then : (colon) and wq, then press enter to save and exit.
- Run the command
exit
and then the commandsudo reboot
to reboot the server. - Log back into the server via SSH and then type the following command
sudo docker ps
. If no error has occurred, the Docker Daemon is now running in TLS mode. - Set the proper time zone by running the command
sudo timedatectl set-timezone America/Chicago
. Be sure to replace America/Chicago with the proper time zone. To get a list of valid time zones run the command
timedatectl list-timezones
and for information on the current timezone, run the command
timedatectl
by itself.
FYI: Network time syncing should already be enabled by default.
- Repeat the same steps above on the remaining VMs but ensure the hostname and IP are entered correctly for the given VM.
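To sanity-check the TLS setup from a remote machine (e.g. your workstation over the VPN), you can point a Docker client at the daemon's TLS port using the generated client certificates. This assumes ca.pem, cert.pem, and key.pem have been copied to the client machine; adjust the IP for each VM.
# Query the remote daemon over TLS; an (empty) container list means it works
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=tcp://10.0.0.4:2376 ps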
Setup Docker Swarm Cluster
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Run the command
sudo docker swarm init --advertise-addr <MANAGER-IP>
. Replace<MANAGER-IP>
with the VM’s private IP address (e.g. 10.0.0.4). Now this VM is a Swarm Manager node. - Run the command
sudo docker swarm join-token manager
.
FYI: Be sure to copy the command to add a manager to the Swarm which is output on the screen (e.g. docker swarm join --token redacted). This will be needed later.
- Run the command
sudo docker swarm join-token worker
.
FYI: Be sure to copy the command to add a worker to the Swarm which is output on the screen (e.g. docker swarm join --token redacted). This will be needed later.
- Login to the other VMs where a manager role is needed (e.g. Spacely-Engineering-VM-002 and Spacely-Engineering-VM-003) and run the command outputted on the screen earlier.
FYI: The first three VMs will serve as Docker Swarm Managers; using three managers maintains proper quorum.
- Login to the other VMs where a worker role is needed (e.g. Spacely-Engineering-VM-004 and Spacely-Engineering-VM-005) and run the command outputted on the screen earlier.
- When finished, SSH into a Manager node and run the command
sudo docker node update --label-add cicdBuildsOnly=true Spacely-Engineering-VM-004
and then run the command
sudo docker node update --label-add cicdBuildsOnly=true Spacely-Engineering-VM-005
. - While still logged into the Manager node, run the command
sudo docker info
. Look for a section named Swarm and verify it is active with the right number of nodes.
For more information on what is happening in the above steps, please see this article.
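The cicdBuildsOnly label added above becomes useful when deploying services, because a placement constraint can pin work to those two dedicated nodes. A quick illustration from a manager node follows; the service name and image are only examples and not part of this solution.
# Confirm all 5 nodes joined and show their roles
sudo docker node ls

# Example of constraining a service to the dedicated CICD build nodes
sudo docker service create \
  --name cicd-build-test \
  --constraint 'node.labels.cicdBuildsOnly==true' \
  alpine:3.6 sleep 1d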
Create Encrypted Overlay Network for Swarm Services
- SSH into one of the Swarm VMs.
- Run the command
sudo docker network create --driver overlay --subnet=172.16.255.0/24 --opt encrypted spacely-eng-core
.
Failing to specify the subnet option may result in the overlay network using the same subnet as the Docker host network. This would prevent proper communications from Swarm services to other Azure Private Network resources or communications to the outside internet.
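To double-check the result, the network can be listed and inspected from any manager node; the encrypted option and the 172.16.255.0/24 subnet should show up in the output.
sudo docker network ls --filter driver=overlay
sudo docker network inspect spacely-eng-core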
Setup SoftNAS VMs – Part 1
It is possible to use Azure File Storage, but the only mounting option is CIFS, which isn't as granular as NFS. I have found that this will not work properly with GitLab since there are various folders (some nested) that GitLab requires with special permissions. Nested folders appear to be the problem with CIFS mounts: when the parent folder is mounted, the permissions of a nested folder cannot be changed to differ from the parent folder.
There is good news on the horizon in that Azure will soon have a native NFS solution. I will eventually be investigating that when it’s available.
- Begin by creating the necessary Azure Service Administrator account that SoftNAS will need by following this article. Scroll down and begin at the section titled Adding a Service Administrator Account.
FYI: When following the directions in the referenced article to add the user (for the Service Administrator account) in the classic Azure Manager, it is possible that the new Azure Manager may be required to complete the steps. See this article for more details. What ultimately needs to be done is to create another Azure user named softnas and assign it the Service Admin role. Then make that user account a Co-Administrator. Also, ensure multi-factor authentication is disabled on this account.
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Storage section, select SoftNAS Cloud NAS 1TB – General Purpose and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-NAS-VM-001
VM Disk Type: SSD
Username: softnas
Authentication Type: SSH Public Key
SSH Public Key: redacted – see this article for creating public and private SSH keys
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
Click the OK button to continue.
- Choose DS3_V2 Standard for the VM size and click the Select button.
- For Availability Set, select Create New and give it a name (e.g. Spacely-Engineering-NAS-001-HAS). For Fault Domains, enter 2. For Update Domains, enter 5. Finally, for Use Managed Disks select Yes and then click the OK button to proceed to the next step.
- Fill out the rest of the required fields by taking inspiration from the below examples.
Availability Set: Spacely-Engineering-NAS-001-HAS
Storage: Use Managed Disks – Yes
Virtual Network: Spacely-Engineering-VNet
Subnet: Spacely-Engineering-Private-Network-001
Public IP Address: none
Network Security Group: Spacely-Engineering-Private-NAS-001-NSG
Extensions: No Extensions
Auto-Shutdown: Off
Monitoring: Boot Diagnostics – Enabled
Guest OS Diagnostics: Enabled
Diagnostics Storage Account: spacelyengmainstorage001
- Click the OK button to proceed to the next step.
- Click the Purchase button to provision the VM.
- After the VM is provisioned and selected, under the Settings section, select Disks. Click the Add Data Disk button. Under Name, click the dropdown and select Create Disk.
- Fill out the required fields by taking inspiration from the examples below.
Name: Spacely-Engineering-NAS-VM-001-Disk-001
Resource Group: Spacely-Engineering-US-South-Central
Account Type: Premium (SSD)
Source Type: None (empty disk)
Size: 1023 GB
Click the Create button when finished.
- When the Data Disks list appears showing the newly created disk, under Host Caching select Read/Write and then click the Save button.
- Click the Add Data Disk button. Under Name, click the dropdown and select Create Disk.
- Fill out the required fields by taking inspiration from the examples below.
Name: Spacely-Engineering-NAS-VM-001-Disk-002
Resource Group: Spacely-Engineering-US-South-Central
Account Type: Premium (SSD)
Source Type: None (empty disk)
Size: 1023 GB
Click the Create button when finished.
- When the Data Disks list appears showing the newly created disk, under Host Caching select Read/Write and then click the Save button.
- SSH into the VM and then run the command
sudo passwd softnas
. Assign a password and make note of it as it will be needed shortly.
FYI: If the above command to assign a password asks for the current password of the user softnas, you will need to use the Reset Password feature in the Azure Portal, since this VM was created with a public SSH key rather than a password. To do this, select the VM and then on the left side under Support + Troubleshooting, click on Reset Password. Tick the radio button for Reset Password instead of Reset SSH Public Key. Under the username field, enter softnas and under the password field, enter the password of your choice. Confirm the password in the next field and then click the Update button.
Keep in mind this will not clear out any SSH public keys you have assigned to the VM. This method can be considered an alternate approach to the above step which uses the passwd command.
- Open a browser and login to the SoftNAS GUI using the username softnas and the password just set in the previous step by going to https://10.0.0.9 (replace the IP address with the applicable one if need be). Be sure to ignore any certificate warnings in order to continue.
- Click the I Agree button to accept the license agreement and continue.
- On the left side under the Settings section, select Software Updates. If a new update exists, immediately apply it.
- On the left side under the Settings section, select Administrator. On the General Settings tab, fill out the required fields by taking inspiration from the below examples.
SMTP Mailserver: smtp.example.com
SMTP Port: 587
Authentication: checked
SMTP Username: admin@example.com
SMTP Password: redacted
When finished click the Save Settings button.
- On the Monitoring tab, under the field titled Notification Email, enter a desired email address to receive notifications, not send them (e.g. spacely@example.com). Check the Use SMTP box and fill out the required fields by using the same settings from the previous step. Keep SMTP Encryption set to TLSV1.
When finished click the Save Settings button.
- On the left side under the Settings section, select System Time. Under the Change Timezone tab, select the desired timezone and then click the Save button.
- Under the Time Server Sync tab, keep the server time.windows.com and be sure to set Synchronize When Webmin Starts to Yes and Synchronize on Schedule to Yes and keep the default options. Click the Sync and Apply button when finished.
- On the left side under the Storage section, select Disk Devices. For each 1023 GB disk showing Device Needs Partition under Device Usage, click the disk and then click the Create Partition button.
- On the left side click on Storage Pools and then click Create. Select Standard and then click the Next button. For pool name enter spacely-eng-nas-main-pool. Keep the Raid Level set to RAID 1/10 (mirror/striped mirrors). Select both newly created disks from the list to be used for this storage pool.
Optional: Check the box for LUKS Encryption and then provide the chosen encryption details. This is recommended for sensitive data.
Click on Create when finished and select Yes to confirm.
- (Skip this and remaining steps on NAS Node 2) On the left side click on Volumes and LUNs and then click Create. Under Volume Name, enter spacely-eng-nas-main-vol. Click the Storage Pool button and click on spacely-eng-nas-main-pool and then click the Select Pool button.
- For the Volume Type ensure Export via NFS is selected and others (e.g. CIFS) are unchecked.
- Under the Storage Provisioning Options select Thin Provision.
- Under Storage Optimization Options, uncheck Compression and ensure Deduplication is also left unchecked. Keep the default option of Sync Mode to Standard and then click the Snapshots tab. Select Enable Scheduled Volume Snapshots and for Snapshot Schedule select Default. Under Scheduled Snapshot Retention, for hourly, enter 4383. For daily, enter 182. Finally, for weekly, enter 26. When finished, click the Create button.
FYI: The above values for snapshot retention will ensure snapshots are retained for at least 6 months. It is important to ensure the storage being used for the SoftNAS VMs is enough to support this snapshot retention policy. Also, keep in mind the snapshot data will have the same level of redundancy as the disk setup (in this case it's RAID 1 for the VM itself but the actual disks on Azure use Premium Locally Redundant Storage which keeps 3 copies of data within a single region).
If additional redundancy is preferred, you may want to use a disk type of HDD instead of SSD since Geo-Redundant Storage can be chosen which spans data copies across multiple regions. Keep in mind that the Azure VM Backup option is currently unsupported for SoftNAS VMs. This would have backed up the data to a Storage Vault with Geo-Redundant Storage and further added to the amount of data copies and redundancy. You may want to use an additional backup solution to periodically copy the data in the NFS to another location. For example, you could have another VM connected to the NAS copy all the data from the NFS mount to an Azure Storage Account. Duplicity coupled with a wrapper script would be a great tool for this as it can archive the files with compression and encryption and also can delete old backups, etc.
- (Come back to this and remaining steps after following the steps in the next section if HA is desired, otherwise proceed right away) Still logged into the first NAS VM, on the left side under the Storage section, click NFS Exports.
- Two exports should appear. One is called /export (pseudo filesystem) and the other is called /export/spacely-eng-nas-main-pool/spacely-eng-nas-main-vol (pseudo filesystem). On the first export click the link to the right under Exported To titled Everyone.
FYI: Be sure to document the second export as it will be needed later to mount the share to each VM.
- In the Edit Export screen, under Active select No and then click the Save button.
- Back at the NFS Exports screen, on the second export under Exported To click Everyone. Under the Export To section, in the Host(s) field remove the asterisk and then enter the IP addresses of each VM (e.g. 10.0.0.4, 10.0.0.5, 10.0.0.6, 10.0.0.7, 10.0.0.8).
FYI: An alternative option would be to select IPv4 Network and specify the network (e.g. 10.0.0.0) and the netmask (e.g. 255.255.255.0). This would allow the share to be available to anything on that specific network.
- Under the Export Security section, for Clients Must Be on Secure Port select Yes. Click the Save button when finished.
Setup SoftNAS VMs – Part 2 (With Replication and HA)
- Repeat all the steps in the previous section (except for the last two steps since the NFS exports will be carried over when setting up HA) and be sure to increment the names (e.g. Spacely-Engineering-NAS-VM-001 becomes Spacely-Engineering-NAS-VM-002), etc. Also, be sure to provision the second VM into the second private subnet (e.g. Spacely-Engineering-Private-Network-002). This means the second VM will likely have a private IP address of 10.0.1.4.
FYI: Ensure the Storage Pool is named the same as it was on NAS Node 1. Also, do not create the volume on this VM as it will be created through replication.
- Return to the first node (e.g. https://10.0.0.9) and on the left side under the Storage section, click SnapReplicate. Click on Add Replication and then click the Next button. Under the Hostname or IP Address field enter the IP address of the second NAS node (e.g. 10.0.1.4) and then click the Next button.
- Enter the admin user ID and password for the second NAS node and then click the Next button and then click the Finish button.
FYIThis will create the volume on the second node which will be a mirror of the volume on the first node.
- Once the replication is complete while still logged into the first node, click Add SNAP HA and then click the Next button. Enter the account details for the SoftNAS Azure account created earlier and then click the Next button.
- Specify a Virtual IP that will be used to refer to the HA NAS that is not in the same CIDR block as the virtual machines (e.g. 50.50.50.50) and then click the Next button and then click the Finish button.
- Return to the previous section Setup SoftNAS VMs – Part 1 and follow steps 29 – 33 to configure the NFS exports.
Setup NFS Mount Point on Each VM
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Switch to the root directory by running the command
cd /
. - Make a new directory for the NFS mount by running the command
sudo mkdir nfs
. - Run the command
sudo apt-get install nfs-common
to install the necessary tools to mount the NFS share. - Run the command
sudo mount -o rsize=32768,wsize=32768,noatime,intr <ip-address>:<export-path> /nfs
to mount the NFS share from the NAS. Replace<ip-address>
with the virtual IP (e.g. 50.50.50.50) of the NAS if HA is enabled or the private IP address of the first NAS VM (e.g. 10.0.0.9). Replace<export-path>
with the NFS export desired (e.g. /export/spacely-eng-nas-main-pool/spacely-eng-nas-main-vol). - Switch to the nfs directory by running the command
cd nfs
and then create a file there by running the commandsudo touch test.txt
. - Ensure after a system restart the NFS share is mounted automatically by running the command
sudo vim /etc/fstab
. Press i to enter Insert Mode and add a new line to the end of the file with the following contents below.
<ip-address>:<export-path> /nfs nfs4 rsize=32768,wsize=32768,noatime,intr 0 0
Again, be sure to replace
<ip-address>
and<export-path>
with the appropriate information. - Restart the VM by running the command
sudo reboot
and then SSH back into it. Ensure the test file created earlier can be found in the /nfs directory. - Repeat the above steps on each VM. However, creating the test file only needs to happen once to properly verify NFS is working.
Using this approach doesn't require installing a Docker NFS volume plugin or specifying options for it such as NFSv4, etc. All of this has already been taken care of since each VM already has an NFS mount with those details specified, which Docker will then simply use for bind mounts.
This approach works great with a small number of VMs, but when the number of VMs is large, it's recommended to use a Docker NFS volume plugin and let it take care of the details. That works especially well when a volume is used by a Docker Swarm service, since the volumes are then created automatically on each applicable VM, meaning there is no need to manually add the NFS mount to each VM.
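As an illustration of that volume-driven route, newer Docker releases can also mount NFS through the built-in local volume driver, which is a lighter-weight alternative to a dedicated NFS volume plugin. The sketch below reuses the same placeholders as above; the mount options may need tuning for your NFS version.
sudo docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=<ip-address>,rw,noatime \
  --opt device=:<export-path> \
  spacely-eng-nas-main

# A container or service could then reference it with:
#   --mount type=volume,source=spacely-eng-nas-main,target=/data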
Setup Private Docker Image Registry
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Containers section, select Azure Container Registry and then click the Create button.
- Fill out the rest of the required fields by taking inspiration from the below examples.
Registry Name: spacelydockerimageregistry
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
Admin User: Enable
SKU: Classic
Storage Account: spacelyengmainstorage001
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- After the successful deployment, under the Settings section, click Access Keys. Make note of the Login Server, Username, and Password used to login to the registry to push and pull Docker images. This information will be needed later when deploying Docker Swarm services.
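As a quick smoke test of the registry, log in from one of the VMs and push any image using the Login Server noted above. The registry URL and image below are placeholders; substitute your own Login Server value.
sudo docker login spacelydockerimageregistry.azurecr.io
sudo docker pull alpine:3.6
sudo docker tag alpine:3.6 spacelydockerimageregistry.azurecr.io/backbone/alpine:3.6
sudo docker push spacelydockerimageregistry.azurecr.io/backbone/alpine:3.6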
Setup Private Primary DNS
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Run the command
sudo mkdir -p /nfs/docker-data/build-data/backbone/bind9
. - Copy the build files for the Bind9 Private DNS server to /nfs/docker-data/build-data/backbone/bind9.
- Run the command
sudo mkdir -p /nfs/docker-data/container-data/backbone/bind9/ns1/data
. - Ensure Docker Compose is installed by following the instructions found at its GitHub Repo. For example, run the command
sudo su
then the following command:
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Then run the command
chmod +x /usr/local/bin/docker-compose,
then the commandexit
, and then the commandsudo docker-compose version
. - Create a disparate network for the service by running the command
sudo docker network create spacely-eng-disparate
. Swap the network name with the one desired but be sure to update the build files. - Run the command
sudo docker login https://<container-registry-url>
. Replace the<container-registry-url>
with the one notated earlier and enter the username and password to login. - Edit the file docker-compose-ns1.yml in /nfs/docker-data/build-data/backbone/bind9 and under image, change the URL before /backbone/bind:9.10.3 to the container registry URL with the one notated earlier.
Be sure to also change the ROOT_PASSWORD environment variable which will be used later to login.
- Run the command
cd /nfs/docker-data/build-data/backbone/bind9
and then the command
sudo docker-compose -f docker-compose-ns1.yml build bind-ns1
. - Run the command
sudo docker push <container-registry-url>/backbone/bind:9.10.3
. Replace the<container-registry-url>
with the one notated earlier. - Run the command
sudo docker-compose -f docker-compose-ns1.yml up -d bind-ns1
. - Open a browser and type the private IP address of the node along with the port 10001 (e.g. https://10.0.0.4:10001). Accept certificate warnings to proceed.
- Enter the username as root and the password which was specified in the docker-compose-ns1.yml file as an environment variable.
- Once logged in, click on Servers then BIND DNS Server.
- Under Access Control Lists, add an ACL named trusted and enter the following:
10.0.250.0/24
172.16.0.0/24
10.0.0.0/24
localhost
localnets
- Under Zone Defaults, then under Default Zone Settings, change Allow Queries From to Listed. In the edit box specify the ACL created earlier in the previous step (e.g. trusted) then click Save.
- Click Edit Config File then in the edit config file dropdown, select /etc/bind/named.conf.options and then click the Edit button.
- Replace the contents of that file with that of the contents listed below.
options {
  directory "/var/cache/bind";
  dnssec-enable yes; # enables DNS Security Extensions
  dnssec-validation auto; # indicates that a resolver (a caching or caching-only name server) will attempt to validate
                          # replies from DNSSEC enabled (signed) zones
  recursion yes; # allows recursive queries
  allow-recursion { trusted; }; # allows recursive queries only from clients defined in the "trusted" acl
  allow-query { trusted; }; # allows queries only from clients defined in the "trusted" acl
  allow-transfer { none; }; # do not allow zone transfers
  auth-nxdomain no; # conform to RFC1035
  version "Elvis has left the building."; # hides the version information of Bind enabling security by obscurity
  forwarders {
    8.8.8.8;
    8.8.4.4;
  };
};
Click on the Save button when finished and then click on the Apply Configuration button. Then click the Return to Zone List button.
- Test the DNS server by running the command host google.com 10.0.0.4. Be sure to replace the IP address with that of the desired VM. If everything went well, the below output (or similar) will be shown.
Using domain server:
Name: 10.0.0.4
Address: 10.0.0.4#53
Aliases:
google.com has address 216.58.198.174
google.com has IPv6 address 2a00:1450:4009:80f::200e
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
- Create a Reverse Zone by clicking on Create Master Zone and then click Reverse.
- In the Domain Name / Network field, enter the private IP address of the desired VM (e.g. 10.0.0.4).
- In the Master Server field, enter ns1.example.com. Be sure to replace this domain with the one desired. Then in the Email Address field, enter the desired email address.
FYI: If a wildcard certificate has been obtained for use throughout the Docker Swarm Cluster, keep in mind that it will only be valid for *.example.com. If later creating A records and choosing something like server.qa.example.com, a certificate warning will occur if using HTTPS because the wildcard certificate will not match the server subdomain. Granted, this depends on how the wildcard certificate has been issued and depends on the issuer.
- Click the Create button to create the Reverse Zone and then click the Return to Zone List button.
- Create a Forward Zone by clicking on Create Master Zone. For Zone Type, select Forward. In the Domain Name / Network box, enter the desired domain (e.g. example.com).
- In the Master Server field, enter ns1.example.com. Be sure to replace this domain with the one desired. Then in the Email Address field, enter the desired email address.
- Click the Create button to create the Forward Zone and then click the Address button.
- In the Name field, enter ns1 and then in the Address field, enter the private IP address of the desired VM (e.g. 10.0.0.4).
- Click the Create button to create the A record. The next screen will allow for adding additional A records.
- Create additional A records for all the VMs. For example, in the Name field enter spacely-eng-vm-001 and in the Address field enter the private IP address of the desired VM (e.g. 10.0.0.4).
FYI: Do this for the rest of the VMs. Also, add any additional A records here, perhaps for a load balancer created later.
- When finished, click the Apply Configuration button.
Setup Private Secondary DNS
- SSH into the second VM (e.g. Spacely-Engineering-VM-002).
- Run the command
sudo mkdir -p /nfs/docker-data/container-data/backbone/bind9/ns2/data
. - Ensure Docker Compose is installed by following the instructions found at its GitHub Repo. For example, run the command
sudo su
then the following command:
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Then run the command
chmod +x /usr/local/bin/docker-compose,
then the commandexit
, and then the commandsudo docker-compose version
. - Create a disparate network for the service by running the command
sudo docker network create spacely-eng-disparate
. Swap the network name with the one desired but be sure to update the build files. - Run the command
sudo docker login https://<container-registry-url>
. Replace the<container-registry-url>
with the one notated earlier and enter the username and password to login. - Run the command
sudo docker pull <container-registry-url>/backbone/bind:9.10.3
. Replace the<container-registry-url>
with the one notated earlier. - Run the command
cd /nfs/docker-data/build-data/backbone/bind9
and then run the command
sudo docker-compose -f docker-compose-ns2.yml up -d bind-ns2
. - Open a browser and type the private IP address of the node along with the port 10001 (e.g. https://10.0.0.5:10001). Accept certificate warnings to proceed.
- Enter the username as root and the password which was specified in the docker-compose-ns2.yml file as an environment variable.
- Once logged in, click on Servers then BIND DNS Server.
- Under Access Control Lists, add an ACL named trusted and enter the following:
10.0.250.0/24
172.16.0.0/24
10.0.0.0/24
localhost
localnets
- Under Zone Defaults, then under Default Zone Settings, change Allow Queries From to Listed. In the edit box specify the ACL created earlier in the previous step (e.g. trusted) then click Save.
- Click Edit Config File then in the edit config file dropdown, select /etc/bind/named.conf.options and then click the Edit button.
- Replace the contents of that file with that of the contents listed below.
options {
  directory "/var/cache/bind";
  dnssec-enable yes; # enables DNS Security Extensions
  dnssec-validation auto; # indicates that a resolver (a caching or caching-only name server) will attempt to validate
                          # replies from DNSSEC enabled (signed) zones
  recursion yes; # allows recursive queries
  allow-recursion { trusted; }; # allows recursive queries only from clients defined in the "trusted" acl
  allow-query { trusted; }; # allows queries only from clients defined in the "trusted" acl
  allow-transfer { none; }; # do not allow zone transfers
  auth-nxdomain no; # conform to RFC1035
  version "Elvis has left the building."; # hides the version information of Bind enabling security by obscurity
  forwarders {
    8.8.8.8;
    8.8.4.4;
  };
};
Click on the Save button when finished and then click on the Apply Configuration button. Then click the Return to Zone List button.
- Create a Reverse Zone by clicking on Create Slave Zone and then click Reverse.
- In the Domain Name / Network field, enter the private IP address of the desired VM (e.g. 10.0.0.5).
- In the Master Server field, enter the private IP address of the NS1 VM (e.g. 10.0.0.4).
- Click the Create button to create the Reverse Zone and then click the Return to Zone List button.
- Create a Forward Zone by clicking on Create Slave Zone. For Zone Type, select Forward. In the Domain Name / Network box, enter the desired domain (e.g. example.com).
- In the Master Server field, enter the private IP address of the NS1 VM (e.g. 10.0.0.4).
- Click the Create button to create the Forward Zone and then click the Return to Zone List button.
- When finished, click the Apply Configuration button.
- Open a browser and type the private IP address of the NS1 node along with the port 10001 (e.g. https://10.0.0.4:10001). Accept certificate warnings to proceed.
- Enter the username as root and the password which was specified in the docker-compose-ns1.yml file as an environment variable.
- Once logged in, click on Servers then BIND DNS Server.
- Click on the zone button matching the domain specified earlier (e.g. example.com). Click on the Edit shortcut.
- Click on the Name Server button. Under the Zone Name field, enter the domain specified earlier (e.g. example.com). Under the Name Server field, enter ns2.example.com (change example.com to match the desired domain) and then click the Create button.
- Click the Return to Record Types button and then click the Address button. Under the Name field, enter ns2.example.com (change example.com to match the desired domain) and under the Address field, enter the private IP address of the NS2 node (e.g. 10.0.0.5) and click the Create button.
- Click the Return to Zone List button and then click the Zone Defaults button.
- In the Allow Transfers From field, select Listed and enter the private IP address of the NS2 node (e.g. 10.0.0.5).
- In the Also Notify Slaves field, select Listed and enter the private IP address of the NS2 node (e.g. 10.0.0.5).
- Change the option Notify Slaves of Changes to Yes.
- When finished, click the Save button and then the Apply Configuration button.
Change Ubuntu VMs to Use Private DNS Servers
- On the Azure Portal dashboard, under the All Resources tile, click on See More. Locate the desired Virtual Machine (e.g. Spacely-Engineering-VM-001) and click on it.
- Under the Settings section, click Network Interfaces. Click on the only network interface that shows up (if there is more than one, something went wrong).
- Under the Settings section, click DNS Servers. Click the Custom option and then add the following DNS servers created earlier:
10.0.0.4
10.0.0.5
- Click the Save button and then SSH into the VM.
- Run the command
sudo vim /etc/docker/daemon.json
. Add the dns property with the two DNS server IP addresses shown earlier. Press i to enter Insert Mode and then update the file to look like the one shown below.
{
    "tls": true,
    "tlsverify": true,
    "tlscacert": "/home/spacely-eng-admin/.docker/ca.pem",
    "tlscert": "/home/spacely-eng-admin/.docker/server-cert.pem",
    "tlskey": "/home/spacely-eng-admin/.docker/server-key.pem",
    "dns": ["10.0.0.4", "10.0.0.5"]
}
Press ESC then : (colon) and wq, then press enter to save and exit.
FYI: Like before, the above IP addresses may be different depending on the specific configuration of the private name servers. - Test it by running the command host yahoo.com and host ns1.example.com (replace example.com with the desired domain setup in the previous Private DNS sections).
- If a response is received then everything worked correctly. Follow the same steps on the remaining VMs.
Add Additional Encrypted Storage for CICD Builds
Later, when the last two VMs are set up to be dedicated CICD nodes used for building Docker images, amongst other things, it will be important to have extra storage to support the catalog of built images. In addition, some of these images will likely contain source code. Therefore, this extra storage should be encrypted.
The good news is that as of June 10th, 2017 this is done by default (encrypted-at-rest) and handled by Microsoft, which requires no intervention. They manage everything including the keys, etc. Feel free to add additional encryption to meet your requirements or make any modifications.
In a later step in this section, Docker will be customized on these last two VMs to store all image data to the extra encrypted storage.
- In the Azure Portal, on the dashboard, select the second to last VM (e.g. Spacely-Engineering-VM-004).
- On the left side under the Settings section, scroll down and select Disks.
- Click the Add Data Disk button and under the Name dropdown, select Create Disk.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-VM-004-Disk-001
Resource Group: Spacely-Engineering-US-South-Central
Account Type: Premium (SSD)
Source Type: None (empty disk)
Size: 1023 (GiB)
- Click the Create button to proceed.
- Under the list that shows the newly created disk, under Host Caching, select Read/Write.
- Save the changes by clicking the Save button in the upper left.
- SSH into the VM and then run the command
sudo fdisk -l
to show the current disks. Verify that the newly created disk is present. It should be listed as shown below.
Disk /dev/sdc: 1023 GiB, 1098437885952 bytes, 2145386496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
- Create a partition by running the command
sudo fdisk /dev/sdc
. When prompted, press n to create a new partition. - When prompted, press p to make the partition primary. Press 1 to make it the first partition, and then press enter to accept the remaining default values.
- Press p to see information about the disk that is to be partitioned. If satisfied, press w to persist everything to disk. Be sure to take note of the UUID of the new partition as it will be needed later.
- Create a file system for the partition by running the command
sudo mkfs -t ext4 /dev/sdc1
. - Create the directory where the partition will be mounted by running the command
sudo mkdir /graph
. - Mount the drive by running the command
sudo mount /dev/sdc1 /graph
. - Make the drive writeable by running the command
sudo chmod go+w /graph
. - Run the command
sudo vim /etc/fstab
and then press i to enter Insert Mode. PasteUUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /graph ext4 defaults,nofail 1 2
at the end of the file. Be sure to replace the UUID of 33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e with the UUID recorded earlier. Press ESC then : (colon), wq and press enter to save and exit. - Modify the Docker configuration file daemon.json and change the graph location to use the encrypted disk by running the command
sudo vim /etc/docker/daemon.json
. Press i to enter Insert Mode and then right click to paste and replace the below contents. Press ESC then : (colon), wq and press enter to save and exit.
{
    "tls": true,
    "tlsverify": true,
    "tlscacert": "/home/spacely-eng-admin/.docker/ca.pem",
    "tlscert": "/home/spacely-eng-admin/.docker/server-cert.pem",
    "tlskey": "/home/spacely-eng-admin/.docker/server-key.pem",
    "dns": ["10.0.0.4", "10.0.0.5"],
    "graph": "/graph",
    "storage-driver": "overlay2"
}
- Copy all the contents from /var/lib/docker to /graph by running the command
sudo cp -R /var/lib/docker/* /graph
. - Reboot the machine and verify that /graph is accessible. Also, run the command
sudo docker ps
to ensure the Docker Daemon is up and running. - Repeat the above steps on the last VM (e.g. Spacely-Engineering-VM-005).
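On each of the two VMs, it can help to confirm the daemon really is using the encrypted disk after the reboot. A minimal check follows; if the UUID was not noted earlier, blkid will print it for the fstab entry.
# Print the filesystem UUID used in /etc/fstab (handy if it was not recorded earlier)
sudo blkid /dev/sdc1
# Confirm Docker's data root now points at the encrypted disk; this should print /graph
sudo docker info --format '{{ .DockerRootDir }}'
# Confirm the mount survived the reboot
df -h /graph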
Install Test Container on Each VM
The reason this test container is so useful is that when setting up the private load balancer later, a health probe can be created to test port 80 on each VM. In contrast, the health probe could be set up to test the Docker Daemon via port 2376 on each VM, but this causes the Docker Daemon log to constantly output TLS handshake error messages. Even though that health probe works, for this reason it is not recommended. See this article for more details.
- On each VM running Docker, run the command
sudo docker run -d -p 80:80 --name Hello-World --restart=always dockercloud/hello-world:latest
. There is no need to pull the image first; if it isn’t there already, it will be pulled automatically. - Test that it works by opening a browser and going to the VM’s IP (e.g. http://10.0.0.4).
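The same check the upcoming load balancer health probe will perform can also be run from the command line of any VM on the subnet; a quick sketch (the IP address is the example used throughout this article):
# Expect HTTP 200 from the Hello-World container listening on port 80
curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.4/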
Setup Public Load Balancer
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Load Balancer and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Public-LB
Type: Public
For Public IP Address, click the Create New link. Under the Name field, enter Spacely-Engineering-Public-LB-Public-IP (change this to something that matches the desired settings) and then select Static.
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- Take note of the Public IP Address assigned to this load balancer as it may be needed if a service is later provisioned which requires an ACL for security.
- Under the Settings section, click Backend Pools. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Backend-Pool
IP Version: IPv4
Associated To: Availability Set
Availability Set: Spacely-Engineering-VM-001-HAS
Click the Add a Target Network IP Configuration button. Under Target Virtual Machine, select the first VM. Under Network IP Configuration, select ipconfig1 (10.0.0.4).
- Click the Add a Target Network IP Configuration button again and add the remaining VMs. When finished, click the OK button.
- Under the Settings section, click Health Probes. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Docker-Hello-World-Probe
IP Version: IPv4
Protocol: HTTP
Port: 80
Path: /
Interval: 5
Unhealthy Threshold: 2
When finished, click the OK button.
- Under the Settings section, click Load Balancing Rules. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Docker-Hello-World-LB-Rule
IP Version: IPv4
Frontend IP Address: redacted (LoadBalancerFrontEnd)
Protocol: HTTP
Port: 80
Backend Port: 80
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 virtual machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: None
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
When finished, click the OK button.
- SSH into each VM and run the command
curl ipinfo.io/ip
. Ensure the IP address matches the public IP address of the load balancer.
Even though this load balancer is internet facing, the health probe and load balancing rule defined cannot be reached by the outside world. In order for that to happen, an Inbound Rule in the applicable Network Security Group allowing access from the public to port 80 would need to be created. If the health probe and load balancing rule weren’t created then the public IP address of the load balancer wouldn’t be used by the VMs for outbound connections.
To confirm this, open a browser and go to http:// followed by the public IP address of the load balancer. The connection should time out. Even though the Docker Hello World container is running at port 80 on each VM running Docker, it is unreachable since there isn’t an Inbound Rule for it in the applicable Network Security Group.
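If public access to port 80 were ever genuinely desired, the Inbound Rule described above could be added through the Portal as done elsewhere in this article, or with the Azure CLI. The sketch below is only an illustration; the NSG name and priority are placeholders and must be replaced with the values from your own Network Security Group setup.
# Example only: allow TCP 80 from the internet on the applicable NSG (name and priority are placeholders)
az network nsg rule create \
  --resource-group Spacely-Engineering-US-South-Central \
  --nsg-name <applicable-network-security-group> \
  --name Allow-Public-HTTP \
  --priority 310 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 80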
Setup Private Load Balancer
- In the Azure Portal, on the left side, click the plus button to create a new service. Under the Networking section, select Load Balancer and then click the Create button.
- Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-LB
Type: Internal
Virtual Network: Spacely-Engineering-VNet
Subnet: Spacely-Engineering-Private-Network-001
IP Address Assignment: Dynamic
Subscription: Visual Studio Enterprise
Resource Group: Spacely-Engineering-US-South-Central
Location: South Central US
- For easy asset location, check the Pin to Dashboard option and then click the Create button.
- Under the Settings section, click Backend Pools. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Private-Backend-Pool
IP Version: IPv4
Associated To: Availability Set
Availability Set: Spacely-Engineering-VM-001-HAS
Click the Add a Target Network IP Configuration button. Under Target Virtual Machine, select the first VM. Under Network IP Configuration, select ipconfig1 (10.0.0.4).
- Click the Add a Target Network IP Configuration button again and add the remaining VMs. When finished, click the OK button.
- Under the Settings section, click Health Probes. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Docker-Hello-World-Probe
IP Version: IPv4
Protocol: HTTP
Port: 80
Path: /
Interval: 5
Unhealthy Threshold: 2
Click the OK button when finished.
- Under the Settings section, click Load Balancing Rules. Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-GitLab-HTTPS-LB-Rule
IP Version: IPv4
Frontend IP Address: 10.0.0.10 (LoadBalancerFrontEnd)
Protocol: TCP
Port: 51443
Backend Port: 51443
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 Virtual Machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: Client IP and Protocol
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
Click the OK button when finished.
- Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-GitLab-SSH-LB-Rule
IP Version: IPv4
Frontend IP Address: 10.0.0.10 (LoadBalancerFrontEnd)
Protocol: TCP
Port: 51022
Backend Port: 51022
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 Virtual Machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: Client IP and Protocol
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
Click the OK button when finished.
- Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Portainer-HTTPS-LB-Rule
IP Version: IPv4
Frontend IP Address: 10.0.0.10 (LoadBalancerFrontEnd)
Protocol: TCP
Port: 50443
Backend Port: 50443
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 Virtual Machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: Client IP and Protocol
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
Click the OK button when finished.
- Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Jenkins-HTTPS-LB-Rule
IP Version: IPv4
Frontend IP Address: 10.0.0.10 (LoadBalancerFrontEnd)
Protocol: TCP
Port: 52443
Backend Port: 52443
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 Virtual Machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: Client IP and Protocol
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
Click the OK button when finished.
- Click the plus button with the label Add. Fill out the required fields by taking inspiration from the below examples.
Name: Spacely-Engineering-Nexus-HTTPS-LB-Rule
IP Version: IPv4
Frontend IP Address: 10.0.0.10 (LoadBalancerFrontEnd)
Protocol: TCP
Port: 53443
Backend Port: 53443
Backend Pool: Spacely-Engineering-Private-Backend-Pool (5 Virtual Machines)
Health Probe: Spacely-Engineering-Docker-Hello-World-Probe (HTTP:80)
Session Persistence: Client IP and Protocol
Idle Timeout (Minutes): 4
Floating IP (Direct Server Return): Disabled
Click the OK button when finished.
Setup Portainer for Docker Swarm Management
- Run the command
sudo mkdir -p /nfs/docker-data/build-data/administration/portainer
. - Copy the build files for Portainer to
/nfs/docker-data/build-data/administration/portainer
. - Run the command
sudo mkdir -p /nfs/docker-data/container-data/administration/portainer
. Then run the command cd /nfs/docker-data/container-data/administration/portainer
. Finally, run the command sudo mkdir certs data
. - Obtain a certificate and install it in /nfs/docker-data/container-data/administration/portainer/certs. See the build files for more details.
- Run the command
cd /nfs/docker-data/build-data/administration/portainer
and thensudo docker stack deploy --compose-file docker-stack.yml spacely-engineering
. - Find out which VM is running the container by running the command
sudo docker service ps spacely-engineering_portainer
. - Open a browser and type https:// along with the private IP address of the VM running the container and the port 50443 (e.g. https://10.0.0.4:50443).
This service later will be accessible via the Azure Private Load Balancer.
Setup Jenkins and Ephemeral Build Slaves
While the major CICD components of this solution run as Docker Swarm Services, the build slaves which run in the containers will not. This is because the plugin (as of the writing of this post) doesn’t yet support new Docker Swarm Services.
There is a discussion to add Docker Swarm functionality to this plugin and the future state seems promising. The instructions within will be updated later when Docker Swarm support is an option for ephemeral build slaves. The existing solution should be easily migrated over at that time.
Regardless of the above, this solution will improvise to obtain a highly available load balanced Jenkins Build Slave. This is done by using the last two VMs specifically for Jenkins builds (via Jenkins Ephemeral Slave containers). In addition, the Jenkinsfile will use custom logic to obtain HA and load balancing.
All Jenkins Slave containers will have access to all Docker commands that are exposed to the host machine as root. In other words, they could run commands to delete images, run containers, etc. Great care should be taken inside each applicable Jenkinsfile when running commands so the Docker host is not negatively affected. If enough care is taken, the security implications can be mitigated. In addition, since the Jenkins Slaves are short lived, the window of opportunity to potentially hack one will be reduced.
For more information on potential security vulnerabilities, visit this article and this one as well. With the addition of convenience, reduced security is usually the trade-off. Think carefully about what potential problems this could cause and if the lessened security is justified. If not, consider a different approach.
Configure Last Two VMs to Allow Connection to Docker Socket
- SSH into the first of the last two VMs (e.g. Spacely-Engineering-VM-004).
- Run the command
sudo crontab -e
and then select an editor (e.g. /usr/bin/vim.basic) then press i to enter Insert Mode. - Paste the text below into the editor at the very bottom.
@reboot sleep 30 && setfacl -m u:1000:rw /var/run/docker.sock
Press ESC then : (colon) and wq, then press enter to save and exit.
FYI: If this step is missed, the jenkins user in the Slave container will not have the necessary permissions to access the Docker Daemon running on the host (it will be accessible later by bind mounting it from the container – more info here). This will be used later for pushing built Docker images to the private Docker registry, deploying built apps onto the Docker host, etc. For more context on the above command, see this article and this one as well.
Another commonly discussed option (also found in official Docker docs) is to add the jenkins user to the Docker group, which would accomplish the same thing as above. However, doing this means that the jenkins user can do anything to applicable files belonging to the Docker group. The setfacl command above will ensure the jenkins user, which is UID 1000, only has access to docker.sock and nothing else.
The reason this has been added to a cron job invoked at system reboot is so the ACL on the file is re-applied automatically; without this, the ACL is lost on each reboot. The command has a 30 second delay to ensure all previous startup commands have a chance to run so the ACL is not set prematurely.
As a final note, the approach this solution uses for connecting to the Docker socket works well for a single connection to the same Docker host the container is running on. However, if a remote connection to another Docker Daemon is needed, a different approach will need to be taken.
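For reference, the same ACL can also be applied by hand without waiting for the cron entry, and the UID-to-user mapping discussed below can be checked with getent; a minimal sketch:
# Apply the ACL immediately (the @reboot cron entry re-applies it after every restart)
sudo setfacl -m u:1000:rw /var/run/docker.sock
# Show which host account owns UID 1000 (the UID the Jenkins slave containers run as)
getent passwd 1000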
- Reboot the VM by running the command
sudo reboot
. When it comes back up, SSH into it again. - Verify the setfacl command has taken effect by running the command
sudo getfacl /var/run/docker.sock
. If the command was successful, the below output should be shown (pay attention to the second user entry).
getfacl: Removing leading '/' from absolute path names
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
user:spacely-eng-admin:rw-
group::rw-
mask::rw-
other::---
FYI: The username above shows as spacely-eng-admin instead of jenkins because this user has a UID of 1000, the same that is used by the Jenkins Docker containers. In other words, the UID of 1000 on the Docker host is assigned to the spacely-eng-admin user. Don’t worry if it doesn’t say jenkins as it will still work just fine. - Repeat the above steps on the last VM (e.g. Spacely-Engineering-VM-005).
Setup Disparate Docker Network for Build Nodes
- SSH into the second to last VM (e.g. Spacely-Engineering-VM-004).
- Run the command
sudo docker network create spacely-eng-disparate
. Swap the network name with the one desired but be sure to reference the appropriate network when later configuring Yet Another Docker Plugin for Jenkins. - Repeat these steps on the last VM (e.g. Spacely-Engineering-VM-005).
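A quick way to confirm the network exists on each build VM (the network name below is the example used above):
# List and inspect the user-defined network created for the build containers
sudo docker network ls --filter name=spacely-eng-disparate
sudo docker network inspect spacely-eng-disparate --format '{{ .Name }}: {{ .Driver }} ({{ .Scope }})'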
Build and Setup Jenkins
- SSH into a Docker Swarm Manager VM (e.g. Spacely-Engineering-VM-001).
- Run the command
sudo mkdir -p /nfs/docker-data/build-data/cicd/jenkins
. - Copy the build files for Jenkins and Ephemeral Build Slaves to
/nfs/docker-data/build-data/cicd/jenkins
. - Edit the Jenkins Slave ./jenkins-slave/config/resolv.conf file to ensure it uses the private name servers setup earlier (e.g. 10.0.0.4, 10.0.0.5).
- Edit the Jenkins NGINX ./jenkins-nginx/config/jenkins.conf so that line 21 uses the appropriate chosen domain (this article uses example.com).
- Edit the Jenkins NGINX ./jenkins-nginx/config/nginx.conf so that line 20 uses the appropriate chosen domain (this article uses example.com).
- Run the command
sudo mkdir -p /nfs/docker-data/container-data/cicd/jenkins/jenkins-master/home
. Then run the command sudo mkdir -p /nfs/docker-data/container-data/cicd/jenkins/jenkins-master/logs
and then the command sudo mkdir -p /nfs/docker-data/container-data/cicd/jenkins/jenkins-nginx/certs
. - Run the command
sudo chown -R 1000:1000 /nfs/docker-data/container-data/cicd/jenkins/jenkins-master/home
and then the command sudo chown -R 1000:1000 /nfs/docker-data/container-data/cicd/jenkins/jenkins-master/logs
. - Ensure Docker Compose is installed by following the instructions found at its GitHub Repo. For example, run the command
sudo su
then the following command:
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Then run the command chmod +x /usr/local/bin/docker-compose, then the command exit, and then the command sudo docker-compose version
. If everything was installed correctly, no errors should occur. - Run the command
sudo docker login https://<container-registry-url>
. Replace the<container-registry-url>
with the one notated earlier and enter the username and password to login. - Run the command
cd /nfs/docker-data/build-data/cicd/jenkins
. Then run the command sudo apt-get install make
. - Run the command
sudo make build
. - Run the following commands:
sudo docker push <container-registry-url>/cicd-tools/jenkins-master:2.101-alpine
sudo docker push <container-registry-url>/cicd-tools/jenkins-nginx:1.13.8-alpine
sudo docker push <container-registry-url>/cicd-tools/jenkins-slave:8u151-jre-alpine
Replace the
<container-registry-url>
with the one notated earlier. - Copy the certificate files (server-cert.pem and server-key.pem) obtained from a self-signed or trusted CA to /nfs/docker-data/container-data/cicd/jenkins/jenkins-nginx/certs.
FYI: It is recommended to obtain a wildcard certificate from a trusted CA using a desired registered domain name. Even though the domain is publicly registered, no entries will exist in its public name servers that point to private assets within the Azure Virtual Network. The private name servers set up earlier will align to the private assets.
Keep in mind that registering a domain is required when using a certificate obtained from a trusted CA since domain ownership must be verified.
- Run the command
sudo docker stack deploy --compose-file docker-stack.yml --with-registry-auth spacely-engineering
to deploy everything as a Docker Swarm Service. - SSH into the second to last VM (e.g. Spacely-Engineering-VM-004) and run the command
sudo docker login https://<container-registry-url>
. Replace the<container-registry-url>
with the one notated earlier and enter the username and password to login. - Run the command
sudo docker pull <container-registry-url>/cicd-tools/jenkins-slave:8u151-jre-alpine
. - Repeat steps 16 – 17 on the last VM (e.g. Spacely-Engineering-VM-005).
FYI: It is possible to configure Jenkins to automatically pull the slave image from the Private Docker Registry when needed. For the sake of simplicity, this has been avoided.
- On the main development machine being used to VPN into the Virtual Network, modify the hosts file to point to Jenkins. The same FQDN entered here should also be entered into the private DNS set up earlier for consistency (this will be done later). Be sure to change the IP and FQDN to match the private load balancer IP pointing to Jenkins.
Windows
Hosts File Location: C:\Windows\System32\drivers\etc\hosts
Entry to Add: 10.0.0.10 jenkins-spacely-engineering.example.com
Example CMD (elevated prompt): notepad C:\Windows\System32\drivers\etc\hosts
Flush DNS CMD (elevated prompt): ipconfig /flushdns
Linux
Hosts File Location: /etc/hosts
Entry to Add: 10.0.0.10 jenkins-spacely-engineering.example.com
Example CMD: sudo vim /etc/hosts
Reload Networking CMD: sudo /etc/init.d/networking restart
The hosts file info for Linux is shown here even though the VPN client used to access the solution only works on Windows. It is included in case there is another access mechanism being used to reach the private resources in the cloud (e.g. OpenVPN, Site-to-Site VPN, etc.).
FYI: If using Windows, ensure the hosts file is being edited with administrator privileges. Also, be sure to flush the DNS after saving the hosts file. If using Linux, be sure to reload networking after modifying the hosts file.
FYI: The reason for editing the hosts file has to do with DNS. The amount of configuration involved to use the Virtual Network’s private name servers is greater than editing the hosts file. In addition, if each development machine is connected to a corporate network, overriding DNS settings may cause problems connecting to corporate assets. Certainly there are ways around this but this is out of the scope of this article.
- Login to the primary DNS server and add an A record for the FQDN added in the previous step and apply the configuration.
- Open a browser and type the address using HTTPS on the FQDN given to Jenkins in the previous steps (e.g. https://jenkins-spacely-engineering.example.com:52443). Be sure not to forget https:// in the beginning and the port :52443 at the end.
FYI: If a self-signed certificate has been used, certificate warnings will occur. In some situations the certificate can be imported into the certificate store to prevent this from happening. If the certificate was received from a trusted CA, no warnings should be shown.
- After receiving the message to unlock Jenkins, run the following command
cat /nfs/docker-data/container-data/cicd/jenkins/jenkins-master/home/secrets/initialAdminPassword
. Copy the value and then return to the browser to paste it into the required field and then click the Continue button. - Instead of clicking either Install Suggested Plugins or Select Plugins to Install, click the X in the upper right corner. Then click the Start Using Jenkins button.
- On the left side, click Manage Jenkins and then click Manage Users. Then again on the left side click Create User.
- Fill out the required fields by taking inspiration from the below examples.
Username: spacely-eng-admin
Password: redacted
Confirm Password: redacted
Full Name: Spacely Engineering Administrator
Email Address: spacely-eng-admin@example.com
- Click the Create User button and then click the Log Out link in the upper right.
- Login using the newly created user and then on the left side click Manage Jenkins. Then click on Manage Users and locate the admin user (not the newly created user). Click on the red circle with a line running through it to delete the old admin user and then click the Yes button to confirm.
- On the left side click on Manage Jenkins and then click Configure Global Security.
- Setup the necessary options by taking inspiration from the below examples.
Enable Security: checked
Security Realm: Jenkins’ Own User Database
Authorization: Matrix-Based Security
Ensure the Jenkins Admin user has access to everything. For Anonymous Users, enable the following:
Agent: Configure, Connect, Create
For Authenticated Users, enable the following:
Overall: Read
Job: Build, Configure, Create, Discover, Read, Workspace
Markup Formatter: Safe HTML
Agents: Fixed – port 50000
CSRF Protection
Prevent Cross Site Request Forgery Exploits: checked
Crumb Algorithm: Default Crumb Issuer
Enable Proxy Compatibility: checked
CLI
Enable CLI Over Remoting: unchecked
Hidden Security Warnings
Enable Agent -> Master Access Control: checked
Click the Save button when finished.
- Back in Manage Jenkins, click Configure System. Scroll down to the section titled Jenkins Location and under the field Jenkins URL, ensure the proper URL is populated (e.g. https://jenkins-spacely-engineering.example.com:52443).
- Under the field titled System Admin Email Address, enter the email address that will be used to send messages to users of Jenkins (e.g. admin@example.com).
- Scroll down to the section titled Extended Email Notification and then fill out the required fields by taking inspiration from the below examples.
SMTP Server: smtp.example.com
Default Content Type: HTML (text/html)
Use List-ID Email Header: unchecked
Add ‘Precedence: bulk’ Email Header: checked
- Scroll down to the next section titled Email Notification and fill out the SMTP Server field with the one used in the previous step. Then click the Advanced button and fill out the required fields by taking inspiration from the below examples.
Use SMTP Authentication: checked
Username: admin@example.com
Password: redacted
Use SSL: unchecked
SMTP Port: 587
Reply-To Address: admin@example.com
Test Configuration By Sending Test Email: checked
Test Email Recipient: spacely@example.com
When done entering the information, click the Test Configuration button.
FYI: If the Use SSL box is checked, the email will not send. This is because the above configuration is for TLS instead of SSL. This is the recommended configuration. - To ensure all settings are saved up until this point, click the Apply button.
- Scroll up to the top of the page and on the left side click Credentials. Click on (global) and then on the left side click Add Credentials.
- Under the dropdown field titled Kind, select Docker Host Certificate Authentication. Keep the Scope set to the default of Global (Jenkins, Nodes, Items, All Child Items, Etc).
- SSH into the second to last VM (e.g. Spacely-Engineering-VM-004) and then run the following commands. Be sure to copy the output of each cat command and notate which file the output came from (this will be needed shortly).
cd ~/.docker
sudo cat key.pem
sudo cat cert.pem
sudo cat ca.pem
- Return to the browser Jenkins is running in and then paste the contents of key.pem into the Client Key field. Then paste the contents of cert.pem into the Client Certificate field. Next, paste the contents of ca.pem into the Server CA Certificate field.
- Fill out the Description field with a name that describes these credentials (e.g. Spacely-Engineering-VM-004-Docker-Daemon). Click the OK button to add the credentials.
- Repeat steps 35 – 39 for the last VM (e.g. Spacely-Engineering-VM-005).
FYI: These credentials will allow the Yet Another Docker Plugin to securely and successfully communicate with the Docker Daemon on the last two VMs. This is needed so the plugin can spawn Docker containers to serve as ephemeral build slaves.
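Independently of the plugin’s Test Connection button, the same certificate material can be exercised directly with the Docker CLI; a sketch assuming the key.pem, cert.pem, and ca.pem files from the steps above are in the current directory and the daemon listens on tcp://10.0.0.7:2376:
# Talk to the remote Docker Daemon over TLS using the same client certificates Jenkins will use
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H tcp://10.0.0.7:2376 version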
- Click on the Jenkins logo in the upper left corner to return to the main page.
- On the left side click Manage Jenkins and then click Configure System.
- Scroll down to the section titled Cloud and then click the dropdown titled Add a New Cloud. Select Yet Another Docker and then fill out the required fields by taking inspiration from the below examples.
Cloud Name: Spacely-Engineering-VM-004
Docker URL: tcp://10.0.0.7:2376
Host Credentials: Spacely-Engineering-VM-004-Docker-Daemon
Type: Netty
Click the Test Connection button before proceeding to the next step.
- Under the Max Containers field, enter 50.
FYI: Depending on the size of the VM, the maximum number of containers may need to be adjusted. It is recommended to adjust this when necessary based on empirical data.
- Under Images, click Add Docker Template and then select Docker Template. Fill out the required fields by taking inspiration from the below examples.
Max Instances: 50
Docker Container Lifecycle Section:
Docker Image Name:
<container-registry-url>/cicd-tools/jenkins-slave:8u151-jre-alpine
Pull Image Settings Section:
Pull Strategy: Pull Never
FYI: The Max Instances value should match the Max Containers value set in the previous step. - Click the Create Container Settings button and then fill out the required fields by taking inspiration from the below examples.
Create Container Settings Section:
Volumes: /var/run/docker.sock:/var/run/docker.sock
Extra Hosts: jenkins-spacely-engineering.example.com:127.0.0.1
FYI: The reason the FQDN of jenkins-spacely-engineering.example.com is being mapped to the IP address 127.0.0.1 is to prevent name resolution of the private load balancer, but only in the context of the Jenkins Slave container. Even though later on a different Jenkins Master URL will be used to bypass the private load balancer, Jenkins remoting will still attempt to locate a server among the entries: Main Jenkins URL and Different Jenkins Master URL. The Slave Agent will produce this in the logs: INFO: Locating server among [https://jenkins-spacely-engineering.example.com:52443/, https://spacely-eng-vm-004.example.com:52443/]. The second entry is for the URL used as a different Jenkins Master URL (more on this later). Mapping the main FQDN (which resolves to the IP address of the private load balancer) jenkins-spacely-engineering.example.com to 127.0.0.1 will force the Slave Agent to use the second entry, which is exactly what is necessary; otherwise, timeouts may occur from time to time.
Network Mode: spacely-eng-disparate
FYI: More information on bind mounting the Docker socket can be found here.
Remove Container Settings Section:
Remove Volumes: checked
Jenkins Slave Config Settings Section:
Labels: spacely-engineering-vm-004
Slave (slave.jar) Options: -workDir /home/jenkins
Slave JVM Options: -Xmx8192m -Djava.awt.headless=true -Duser.timezone=America/Chicago
Different Jenkins Master URL: https://spacely-eng-vm-004.example.com:52443
FYI: Be sure to use the correct URL for Different Jenkins Master URL. In the above example, the URL being used is the FQDN of VM 004 (this was entered into the primary name server earlier) which resolves to the IP address 10.0.0.7. In order to understand why this is being done, let’s walk through what happens when the Jenkins Slave container attempts to contact Jenkins Master using this URL.
When the Jenkins Slave container runs, it will attempt to connect to Jenkins Master through the NGINX reverse proxy. The FQDN of spacely-eng-vm-004.example.com will be resolved to the private IP address specified in the primary name server, in this case 10.0.0.7. This IP address is for Spacely-Engineering-VM-004. Keep in mind that node has been dedicated to running only Jenkins jobs and will not have Jenkins Master or Jenkins NGINX running on it. However, through the awesomeness of mesh networking, since the port 52443 was specified, Docker will automatically resolve the connection to the proper node.
The reason this matters is because it’s essential to bypass the private load balancer for JNLP connections since they will often fail when working through a load balancer. Now the load balancer can be bypassed and everything will work fine.
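To see the routing mesh behavior described above in action, the published port can be probed directly against the build VM’s own IP; a rough check (Jenkins serves a lightweight page at /login, and -k skips certificate validation for self-signed setups):
# From Spacely-Engineering-VM-004: the VM runs no Jenkins containers itself, yet the swarm
# routing mesh answers on the published port and forwards the request to the proper node
curl -k -s -o /dev/null -w "%{http_code}\n" https://10.0.0.7:52443/login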
- Click the Apply button when finished to save.
- Repeat steps 41 – 47 for the last VM (e.g. Spacely-Engineering-VM-005). Be sure to enter the right IP address for the VM, name, FQDN, etc.
FYI: It’s important to note that the Docker Template settings are identical for both clouds. This may seem confusing: as of right now, which VM will spawn the slave containers? The answer is the first one. However, through some added logic in the Jenkinsfile used for each project, both can be used.
- Click the Jenkins logo in the upper left corner to return to the main page and then on the left side click New Item. Under the field titled Enter an Item Name, enter Test. Then click on Pipeline and then click the OK button.
- Scroll down to the Pipeline section and keep the Definition field set to Pipeline Script. In the Script box, paste the following:
node ('spacely-engineering-vm-004') {
    stage 'Stage 1'
    sh 'echo "Hello from your favorite test slave!"'
}
- Click the Save button and then on the left side click Build Now.
- Once the build is finished, find it under the Build History section and then click on it. On the left side click Console Output. If everything went well, the following output will be shown:
[Test] Running shell script
+ echo Hello from your favorite test slave!
Hello from your favorite test slave!
FYI: As mentioned before, this test script will only execute on the second to last VM (e.g. Spacely-Engineering-VM-004). Once the added logic is used in the Jenkinsfile for other projects, both VMs will be used.
Setup GitLab
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Run the command
sudo mkdir -p /nfs/docker-data/build-data/cicd/gitlab
. - Copy the build files for GitLab to /nfs/docker-data/build-data/cicd/gitlab.
- Customize docker-stack.yml and change the environment variables for external_url, time_zone, and desired email settings.
FYI: The external_url environment variable should match the FQDN that will be set up as a DNS record and hosts file entry in later steps (e.g. https://gitlab-spacely-engineering.example.com:51443).
- Run the command
sudo mkdir -p /nfs/docker-data/container-data/cicd/gitlab/data
. Then run the command cd /nfs/docker-data/container-data/cicd/gitlab
. Next, run the command sudo mkdir config certs logs
. - Switch to the GitLab data folder by running the command
cd data
. Then run the commands below to create additional folders.
sudo mkdir -p git-data/repositories
sudo mkdir -p gitlab-rails/shared/artifacts
sudo mkdir -p gitlab-rails/shared/lfs-objects
sudo mkdir -p gitlab-rails/uploads
sudo mkdir -p gitlab-rails/shared/pages
sudo mkdir -p gitlab-ci/builds
sudo mkdir .ssh
- Change the owner and permissions of these folders by running the commands below.
sudo chown -R 998:0 /nfs/docker-data/container-data/cicd/gitlab/data/git-data
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/git-data
sudo chown -R 998:998 /nfs/docker-data/container-data/cicd/gitlab/data/git-data/repositories
sudo chmod -R 2770 /nfs/docker-data/container-data/cicd/gitlab/data/git-data/repositories
sudo chown -R 998:999 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared
sudo chmod -R 0751 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared
sudo chown -R 998:0 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/artifacts
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/artifacts
sudo chown -R 998:0 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/lfs-objects
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/lfs-objects
sudo chown -R 998:0 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/uploads
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/uploads
sudo chown -R 998:999 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/pages
sudo chmod -R 0750 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-rails/shared/pages
sudo chown -R 998:0 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-ci/builds
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/gitlab-ci/builds
sudo chown -R 998:998 /nfs/docker-data/container-data/cicd/gitlab/data/.ssh
sudo chmod -R 0700 /nfs/docker-data/container-data/cicd/gitlab/data/.ssh
FYI: Normally, GitLab would handle the creation of these folders and permissions automatically. However, these folders have been explicitly created to ensure no permissions errors occur. This is an extra precaution which is especially helpful when using a network share (e.g. NFS). GitLab is not handling this anymore due to the manage_storage_directories[‘enable’] option being set to false in the docker-stack.yml file. For more information on GitLab configuration options, see this article.
- Copy the certificate files (server-cert.pem and server-key.pem) obtained from a self-signed or trusted CA to /nfs/docker-data/container-data/cicd/gitlab/certs.
FYI: As mentioned before, it is recommended to obtain a wildcard certificate from a trusted CA using a desired registered domain name. Even though the domain is publicly registered, no entries will exist in its public name servers that point to private assets within the Azure Virtual Network. The private name servers set up earlier will align to the private assets.
Keep in mind that registering a domain is required when using a certificate obtained from a trusted CA since domain ownership must be verified.
- On the main development machine being used to VPN into the Virtual Network, modify the hosts file to point to GitLab. The same FQDN entered here should also be entered into the private DNS set up earlier for consistency (this will be done later). Be sure to change the IP and FQDN to match the private load balancer IP pointing to GitLab.
Windows
Hosts File Location: C:\Windows\System32\drivers\etc\hosts
Entry to Add: 10.0.0.10 gitlab-spacely-engineering.example.com
Example CMD (elevated prompt): notepad C:\Windows\System32\drivers\etc\hosts
Flush DNS CMD (elevated prompt): ipconfig /flushdns
Linux
Hosts File Location: /etc/hosts
Entry to Add: 10.0.0.10 gitlab-spacely-engineering.example.com
Example CMD: sudo vim /etc/hosts
Reload Networking CMD: sudo /etc/init.d/networking restart
As mentioned before, the hosts file info for Linux is shown here even though the VPN client used to access the solution only works on Windows. It is included in case there is another access mechanism being used to reach the private resources in the cloud (e.g. OpenVPN, Site-to-Site VPN, etc.).
FYI: If using Windows, ensure the hosts file is being edited with administrator privileges. Also, be sure to flush the DNS after saving the hosts file. If using Linux, be sure to reload networking after modifying the hosts file.
FYI: As mentioned before, the reason for editing the hosts file has to do with DNS. The amount of configuration involved to use the Virtual Network’s private name servers is greater than editing the hosts file. In addition, if each development machine is connected to a corporate network, overriding DNS settings may cause problems connecting to corporate assets. Certainly there are ways around this but this is out of the scope of this article.
- Login to the primary DNS server and add an A record for the FQDN added in the previous step and apply the configuration.
- In the first VM while in the directory /nfs/docker-data/build-data/cicd/gitlab, run the command
sudo docker stack deploy --compose-file docker-stack.yml spacely-engineering
. FYI: Please be patient as the first time setup for GitLab may take a little while. A quick verification sketch follows the steps in this section. Also, the service will be deployed with the name spacely-engineering-gitlab. In addition, if using a custom image for GitLab other than what is publicly available, be sure it can be found on an internal private image registry and add the --with-registry-auth option to the stack command after logging into the private registry (e.g. docker login url). - Open a web browser and open the GitLab URL (https://gitlab-spacely-engineering.example.com:51443). Immediately change the root password as prompted.
- In the upper right corner, select the profile icon with the down arrow and then select Settings. Under the Name field, enter Spacely Engineering Administrator. Under the Email field, enter admin@example.com. Change any additional desired settings and then click the Update Profile Settings button.
- In the upper right corner, click the icon that looks like a wrench (admin area). Click on Users and then click on the Edit button under the administrator account. In the Username field, change it to something other than root (e.g. spacely-eng-admin). Scroll to the bottom and then click the Save Changes button.
- In the upper right, click License and then click the Upload New License button. Click the Browse button and find the license file and then click the Upload License button.
- In the upper right, click the cog with an arrow pointing down and then select Settings. Under Restricted Visibility Levels, select Public. Uncheck the setting titled Allow Users to Register Any Application to Use GitLab as an OAuth Provider. Check the box titled Send Confirmation Email on Sign-Up. Under the box titled Whitelisted Domains for Sign-Ups, paste the domain name currently being used (e.g. example.com). Under the Abuse Reports Notification Email field, enter a valid email address (e.g. admin@example.com).
Click the Save button when finished.
- In the upper right under the account dropdown, select Settings. Click on the Account tab and then copy the Private Token. This will be used by Jenkins later.
FYI: It is possible to use a Personal Access Token instead. These types of tokens can be revoked if needed, which is very useful if the token is ever exposed.
Also, it is important that the user account associated with the token have the necessary admin rights and permissions or else Jenkins pipeline jobs may fail to work properly.
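Before wiring Jenkins to GitLab in the next section, it can help to confirm the GitLab service converged and to watch its logs while the first-time setup finishes; a minimal sketch (the exact service name depends on the docker-stack.yml, so confirm it with the first command):
# List the services deployed in the stack and note GitLab's full service name
sudo docker stack services spacely-engineering
# Follow the logs during first-time configuration (the service name here is an assumption)
sudo docker service logs -f spacely-engineering_gitlab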
Setup Jenkins to Know About GitLab
- Open a web browser and login to Jenkins.
- On the left side, select Credentials then click on global.
- On the left side, select Add Credentials and then under Kind select GitLab API Token. Paste the API token obtained from GitLab in the API Token field, provide a valid description (e.g. GitLab Admin User API Token), and then click the OK button.
- On the left side, select Manage Jenkins then click Configure System.
- Scroll down to the section titled GitLab and fill out the necessary options by taking inspiration from the examples below.
Enable Authentication for ‘/project’ End-Point: checked
Connection Name: Spacely Engineering GitLab
GitLab Host URL: https://gitlab-docker-only.example.com
Credentials: GitLab API Token (GitLab Admin User API Token)
FYI: Be sure to use the correct URL for GitLab Host URL. In the above example, the URL being used is a network alias of the GitLab service defined in the docker-stack.yml file. This is being done on purpose so the internal Docker DNS FQDN matches the domain on the certificate being used by GitLab. This will ensure the Ignore SSL Certificate Errors box is left unchecked. However, in some situations this box will need to be checked depending on how the certificate was provisioned. Using this method, Jenkins will connect to GitLab through the internal port of 443.
As mentioned before, keep in mind the Azure Private Load Balancer is being bypassed and instead all the details are being handled directly by the Docker Overlay Network. In other words, the entry for GitLab Host URL uses a mesh network alias, and Docker takes care of the details (e.g. which node the service can be accessed at, etc.).
- Click the Test Connection button and then click the Save button when finished.
Add Private Docker Registry Credentials to Jenkins
- Open a web browser and login to Jenkins.
- On the left side, select Credentials then click on global.
- On the left side, select Add Credentials and then under Kind select Username with Password. Keep the Scope set to Global and enter the username and password for the Private Docker Registry.
- Enter a valid Description for easy identification (e.g. Spacely-Engineering-Private-Docker-Registry).
- Click OK when finished.
- Select the added credentials from the list and on the left side, click Update.
- Make note of the ID as it will be needed for each project’s Jenkinsfile.
Setup Nexus
Certain dependencies could be considered internal-only and shouldn’t be published for the public to consume (e.g. Maven dependencies). These can instead be published to Nexus for private consumption.
- SSH into a Docker Swarm Manager VM (e.g. Spacely-Engineering-VM-001).
- Run the command
sudo mkdir -p /nfs/docker-data/build-data/cicd/nexus
. - Copy the build files for Nexus to
/nfs/docker-data/build-data/cicd/nexus
. - Edit the Nexus NGINX ./nexus-nginx/config/nexus.conf so that all references to example.com use the appropriate chosen domain.
- Edit the Nexus NGINX ./nexus-nginx/config/nginx.conf so that all references to example.com use the appropriate chosen domain.
- Run the command
sudo mkdir -p /nfs/docker-data/container-data/cicd/nexus/nexus/data
. Then run the command sudo mkdir -p /nfs/docker-data/container-data/cicd/nexus/nexus-nginx/certs
. - Run the command
sudo chown -R 200:200 /nfs/docker-data/container-data/cicd/nexus/nexus/data
. - Ensure Docker Compose is installed by following the instructions found at its GitHub Repo. For example, run the command
sudo su
then the following command:
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Then run the command chmod +x /usr/local/bin/docker-compose, then the command exit, and then the command sudo docker-compose version
. If everything was installed correctly, no errors should occur. - Run the command
sudo docker login https://<container-registry-url>
. Replace the<container-registry-url>
with the one notated earlier and enter the username and password to login. - Run the command
cd /nfs/docker-data/build-data/cicd/nexus
. Then run the command sudo docker-compose build nexus
. Then run the command sudo docker-compose build nexus-nginx
. - Run the following commands:
sudo docker push <container-registry-url>/cicd-tools/nexus:3.7.1-02
sudo docker push <container-registry-url>/cicd-tools/nexus-nginx:1.13.8-alpine
Replace the
<container-registry-url>
with the one notated earlier. - Copy the certificate files (server-cert.pem and server-key.pem) obtained from a self-signed or trusted CA to /nfs/docker-data/container-data/cicd/nexus/nexus-nginx/certs.
FYI: As mentioned before, it is recommended to obtain a wildcard certificate from a trusted CA using a desired registered domain name. Even though the domain is publicly registered, no entries will exist in its public name servers that point to private assets within the Azure Virtual Network. The private name servers set up earlier will align to the private assets.
Keep in mind that registering a domain is required when using a certificate obtained from a trusted CA since domain ownership must be verified.
- On the main development machine being used to VPN into the Virtual Network, modify the hosts file to point to Nexus. The same FQDN entered here should also be entered into the private DNS set up earlier for consistency (this will be done later). Be sure to change the IP and FQDN to match the private load balancer IP pointing to Nexus.
Windows
Hosts File Location: C:\Windows\System32\drivers\etc\hosts
Entry to Add: 10.0.0.10 nexus-spacely-engineering.example.com
Example CMD (elevated prompt): notepad C:\Windows\System32\drivers\etc\hosts
Flush DNS CMD (elevated prompt): ipconfig /flushdns
Linux
Hosts File Location: /etc/hosts
Entry to Add: 10.0.0.10 nexus-spacely-engineering.example.com
Example CMD: sudo vim /etc/hosts
Reload Networking CMD: sudo /etc/init.d/networking restart
As mentioned before, the hosts file info for Linux is shown here even though the VPN client used to access the solution only works on Windows. It is included in case there is another access mechanism being used to reach the private resources in the cloud (e.g. OpenVPN, Site-to-Site VPN, etc.).
FYI: If using Windows, ensure the hosts file is being edited with administrator privileges. Also, be sure to flush the DNS after saving the hosts file. If using Linux, be sure to reload networking after modifying the hosts file.
FYI: As mentioned before, the reason for editing the hosts file has to do with DNS. The amount of configuration involved to use the Virtual Network’s private name servers is greater than editing the hosts file. In addition, if each development machine is connected to a corporate network, overriding DNS settings may cause problems connecting to corporate assets. Certainly there are ways around this but this is out of the scope of this article.
- Login to the primary DNS server and add an A record for the FQDN added in the previous step and apply the configuration.
- Run the command
sudo docker stack deploy --compose-file docker-stack.yml --with-registry-auth spacely-engineering
to deploy everything as a Docker Swarm Service. - Open a web browser and go to the Nexus GUI (e.g. https://nexus-spacely-engineering.example.com:53443).
- In the upper right corner, click on Sign In.
- Login using the default username admin and password admin123.
- In the upper right corner, click on the username admin.
- Under the Email field, change it to something useful (e.g. admin@example.com) then click the Save button.
- Click on the Change Password button and enter the current default password to authenticate.
- Enter a new password and again in the second field to confirm it matches and then click the Change Password button.
- Consult with the official Nexus documentation for assistance on creating the desired repositories.
Port 5000 is used for pushing Docker images and port 5001 is used for pulling them. Unfortunately, due to Nexus Docker image registry implementation details, different ports must be used for the two functions instead of one. For this reason, the Azure Container Service is preferred. See this for more details.
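As a concrete illustration of the port split, a push and a pull against a Nexus-hosted Docker repository target different ports; the hostname, ports, and image name below are examples and depend entirely on how the repository connectors were configured in Nexus:
# Push through the connector configured for pushes (port 5000 in this example)
sudo docker login nexus-spacely-engineering.example.com:5000
sudo docker push nexus-spacely-engineering.example.com:5000/spacely/sample-app:1.0.0
# Pull through the connector configured for pulls (port 5001 in this example)
sudo docker pull nexus-spacely-engineering.example.com:5001/spacely/sample-app:1.0.0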
Maintaining the CICD Solution
Since most of the CICD solution relies on non-native services to function, the vast majority of maintenance will not come for free. We’ll need to manually install OS and software updates, perform backups, rotate logs, and handle other important tasks to ensure the long-term viability of the solution.
Setting Up Backups
Just as Steve Ballmer once famously said “Developers, Developers, Developers!”, I will say to you “Backups, Backups, Backups!” The importance of backups cannot be overstated, so they should be a huge priority.
Setting Up Ubuntu VM Backups
- Login to the Azure Portal and on the dashboard, select the first VM (e.g. Spacely-Engineering-VM-001).
- Under the Operations section on the left side, click on Backup.
- Fill out the required fields by taking inspiration from the below examples.
Recovery Services Vault: Create New
Vault Name: spacelyengvault001
Resource Group: Spacely-Engineering-US-South-Central
- Click on the default Backup Policy and under the Choose Backup Policy dropdown, select Create new.
- Fill out the required fields by taking inspiration from the below examples.
Policy Name: WeeklyPolicy
Backup Frequency: Weekly, Saturday, 11:00AM, UTC
Retention of Weekly Backup Point: checked, 27 weeks
Click the OK button to save the new backup policy.
FYI: It’s important to choose a backup time when the solution is least used. This may vary depending on your current circumstances. - Click the Enable Backup button when finished.
- Navigate back to the Operations section and click on Backup.
- Create an initial backup by clicking on Backup Now. Under the Retain Backup Till field, select a date that is about 6 months from now then click on the Backup button.
- Repeat the above steps on the remaining Ubuntu VMs. Be sure to select the newly created weekly backup policy.
Enable VM Backup Notifications
It is always a good idea to know what is going on in your environment, especially when it comes to backups. However, keep in mind the steps to enable notifications below only result in messages when failures occur.
- Login to the Azure portal and on the dashboard, under All Resources, click on See More.
- Click on the applicable vault (e.g. spacelyengvault001).
- On the left side under the section Monitoring and Reports, click on Alerts and Events.
- Click on Backup Alerts and then click on Configure Notifications.
- Under Email Notifications, click On to enable them.
- Under Recipients (Email), add the email addresses where the notifications will be sent.
- Under Notify, select Per Alert.
- Under Severity, check all options.
- Click on Save when finished.
Managing Software Updates
It’s important to update software on a regular basis to ensure any potential vulnerabilities are patched ASAP. However, it’s important to ensure software updates are handled in the proper fashion. I like to describe the proper way to update software running in a cluster as domino updating. The first domino falls and then gravity causes it to knock into the second domino, and so on (they don’t all fall at once). In other words, update one machine then proceed to the next, repeating this until all machines are updated.
It is recommended to have a maintenance plan where software updates are performed on a regular frequency during a time when the system is least used. The frequency is up to you but depending on the severity of a patch that becomes available, it may be wise to perform an early update before the normally scheduled maintenance. Tools such as apticron will help with this so emails are received when patches are available for each machine (more on this later).
Get Notified of Software Updates
Although it is possible to setup automatic updates, it isn’t recommended due to the issues outlined above. Rather, an alternative proactive measure is to be informed of new updates on a daily basis. That way, you can make a decision to update the machine early if necessary or wait for the regular scheduled maintenance window.
This can be achieved with apticron. However, it may not be a good idea to install and setup on every machine. This could cause some serious redundant messages and spam if each machine is running identical software. Rather, consider setting it up only where it needs to be. If five machines are running the same software, apticron only needs to be installed on one of them.
- SSH into the VM of your choice where apticron will be setup.
- Run the command
sudo apt install apticron
- Edit the file /etc/apticron/apticron.conf and change the email address (along with the from address) to the appropriate address where notifications should be sent (e.g. admin@example.com).
EMAIL="admin@example.com"
CUSTOM_FROM="admin@example.com"
- Install ssmtp in order for apticron to send email notifications. Do this by running the command
sudo apt-get install ssmtp
- Edit the file /etc/ssmtp/ssmtp.conf and change it by taking inspiration from the below example.
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
#root=postmaster
root=admin@example.com

# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
#mailhub=mail
mailhub=smtp.example.com:587
AuthUser=admin@example.com
AuthPass=ChangeMe
UseTLS=YES
UseSTARTTLS=YES

# Where will the mail seem to come from?
#rewriteDomain=
rewriteDomain=example.com

# The full hostname
#hostname=MyMediaServer.home
hostname=Spacely-Engineering-VM-001

# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
- Edit the file /etc/ssmtp/revaliases and add the following line:
root:admin@example.com:smtp.example.com:587
FYI: For more information on installing ssmtp and for information on testing it, see this article.
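As a quick sanity check that ssmtp is delivering mail (a minimal sketch; the recipient is just the example address used above), a test message can be piped through the sendmail wrapper that ssmtp provides:
# Send a simple test message; -v prints the SMTP conversation for troubleshooting
printf "Subject: apticron test\n\nThis is a test message.\n" | sendmail -v admin@example.com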
Install Software Updates on Ubuntu VMs
As mentioned previously, software updates should be installed following the domino updating approach. Also, each machine that is a part of the Docker Swarm will need to be drained or put into an inactive state before any updates are applied.
Moreover, it is important to test the machine after updates have been applied to ensure it is working as intended. That way, if something is no longer working then you can prevent installing updates on other machines until a solution is identified.
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Drain the node to put it into an inactive state so Docker Swarm no longer uses it. Run the command
sudo docker node ls
and record the ID of the node, which will be used for the next command.
FYI: For more information on draining a node, see this article.
- Run the command
sudo docker node update --availability drain <NODE-ID>
. Replace <NODE-ID> with the ID recorded earlier.
FYI: Any command which updates nodes must be run on a manager node. Keep this in mind when working with a worker node; in that situation, a manager node will need to be used to drain the worker node, and then the standard update commands can be run directly against the worker.
- Run the command
sudo docker node ls
to verify the node has been drained. Under Availability, it should show Drain.
- Run the command
sudo apt-get update
to get the list of latest updates. - Run the command
sudo apt-get dist-upgrade
to install the latest updates.
FYI: When using the dist-upgrade command, all necessary dependencies will be installed and sometimes previous versions removed. This ensures updates are installed intelligently. However, if this isn’t desirable, the alternative is the upgrade command, which keeps previous packages and won’t install anything missing. See this article for more details.
- Run the command
sudo reboot
to reboot the machine.
- Once the machine comes back up, run the command
sudo docker node update --availability active <NODE-ID>
. Replace <NODE-ID> with the ID recorded earlier.
- Test that the machine is running as expected. If everything is working properly, repeat the above steps on the remaining machines, one by one.
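For convenience, the same domino sequence for a single node is collapsed below into a few commands. This is a sketch only: it assumes the node being updated is a manager node, and <NODE-ID> must be replaced with the ID from sudo docker node ls as described above; when updating a worker, run the drain/activate commands from a manager instead.
# Drain the node, patch it, reboot, then reactivate it once it is back up
sudo docker node update --availability drain <NODE-ID>
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo reboot
# ...after logging back in following the reboot:
sudo docker node update --availability active <NODE-ID>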
Install Software Updates on SoftNAS VMs
For our NFS solution, SoftNAS has been chosen to provide fast and redundant network storage. The SoftNAS VMs are special and, due to their implementation, it is not recommended to update any software on them by manually issuing commands. Rather, follow the proper update procedure through the SoftNAS UI.
Keep in mind that since there are two VMs setup for proper HA and failover, updating each VM will require following a proper set of procedures or catastrophic failure may ensue.
- Open a web browser and login to the first SoftNAS VM GUI (e.g. https://10.0.0.9).
- On the left side, expand Settings and then click on Software Updates.
- If a software update exists, follow this guide from SoftNAS.
FYIThe bottom line is that SoftNAS VMs have to be updated through the GUI and not using the traditional method. Be very cautious with updating these VMs and ensure everything is backed up before proceeding.
Updating Docker Images to Latest Versions
It is important to update Docker images from time to time for many of the reasons outlined previously. The biggest reason to do so is for bug fixes or new features. Depending on how the Docker image is being utilized, getting the latest update may just be a simple change in a Docker Compose file.
Using an Already Available Image (no Dockerfile)
- Update the docker-compose.yml or docker-stack.yml file to reference the latest image version.
- Redeploy the container or service by running the appropriate command. This command will depend on whether Docker Swarm is being used or not. Previous sections of this article describe the commands.
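As a reminder of what those redeploy commands look like (using the stack name from earlier in this article; the standalone service name myservice is just an example):
# Docker Swarm: re-deploy the stack so services pick up the new image tag
sudo docker stack deploy --compose-file docker-stack.yml --with-registry-auth spacely-engineering
# Without Swarm: pull the new image tag and recreate the container
sudo docker-compose pull myservice
sudo docker-compose up -d myservice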
Using a Custom Image (with a Dockerfile)
- Update the docker-compose.yml or docker-stack.yml file to reference the latest image version.
- Update each applicable Dockerfile to use the latest software and base image.
- Build the updated images where applicable (e.g. sudo docker-compose build myservice).
- Redeploy the container or service by running the appropriate command. This command will depend on whether Docker Swarm is being used or not. Previous sections of this article describe the commands.
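For a custom image, the flow adds a build (and, when the image lives in the private registry, a push) before the redeploy. A sketch, reusing the example registry host from this article and a hypothetical myservice image:
# Rebuild the image from the updated Dockerfile
sudo docker-compose build myservice
# Optionally tag and push it to the private registry so other nodes can pull it
sudo docker tag myservice:latest nexus-spacely-engineering.example.com:5000/myservice:latest
sudo docker push nexus-spacely-engineering.example.com:5000/myservice:latest
# Redeploy (Swarm example)
sudo docker stack deploy --compose-file docker-stack.yml --with-registry-auth spacely-engineering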
Rotating Logs
It is very important to rotate logs output by Docker containers otherwise they will start to consume very valuable disk space. One solution to this problem is logrotate. This program will automatically rotate logs based on the specified settings.
- SSH into the first VM (e.g. Spacely-Engineering-VM-001).
- Create a file (e.g. docker) with logrotate settings for all Docker containers and copy it to /etc/logrotate.d. This would rotate all the logs for all Docker containers.
- Copy and paste the following into the newly created file.
/var/lib/docker/containers/*/*.log {
  rotate 52
  weekly
  compress
  size=1M
  missingok
  delaycompress
  copytruncate
}
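Once the file is in place, logrotate can be exercised in debug mode to confirm the rules parse and match the container logs; the debug flag only simulates rotation, while the force flag performs a real run.
# Simulate rotation using only the new Docker rules
sudo logrotate -d /etc/logrotate.d/docker
# Force an immediate rotation against the full configuration
sudo logrotate -f /etc/logrotate.conf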
A Look At Fluentd
A better long-term solution to log management is to use something like Fluentd. This aggregates all your logs and makes them easily searchable. Granted, Fluentd does much more than this so it’s recommended to check out the official docs. The great news is Docker has a native Fluentd driver.
If Fluentd has been setup and you wish to use it, docker-compose.yml or docker-stack.yml will need to be modified to ensure it is used instead of the default logging driver. Under each defined service, add the following:
logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"
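This assumes a Fluentd collector is already listening on port 24224 on each host. One hedged way to get there with the setup in this article would be a global Swarm service with host-mode publishing, for example (the image tag and the lack of a custom Fluentd configuration are simplifications; the stock image simply forwards received logs to its own stdout):
sudo docker service create --name fluentd --mode global \
  --publish mode=host,target=24224,published=24224 \
  fluent/fluentd:edge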
Adding Additional Administrators
It’s important to give at least one additional person (the more the better) shared keys to the kingdom. Granted, the person being given access should be trusted and not considered a risk factor. Nobody likes waking up at 3AM to a failure notification caused by a novice admin taking something offline by mistake.
- Give co-administrator access to the Azure account.
- Create a new user account for each Ubuntu VM with sudo access. This is done by selecting the VM in the Azure Portal, going to the Support + Troubleshooting Section, and clicking on Reset Password. Change the username from spacely-eng-admin to the desired username and paste that user’s public key and then click Update.
FYI: This will actually create a new user with the desired public key and do all the heavy lifting for you. Be sure not to mistakenly update the public key on the existing admin account (ensure the username is changed).
This approach will not apply to SoftNAS VMs. Instead, you will have to manually SSH into each SoftNAS VM and add the user and any authorized keys (see the sketch after this list).
- Create user accounts with admin access to all desired services (e.g. Jenkins, GitLab, Nexus, etc.).
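Since the Reset Password shortcut above doesn’t apply to SoftNAS VMs, below is a minimal sketch of adding an extra admin by hand; the username, key, and sudo group are placeholders and may need adjusting for the distribution the SoftNAS appliance is based on.
# On each SoftNAS VM, create the user and install their public key
sudo useradd -m -s /bin/bash newadmin
sudo mkdir -p /home/newadmin/.ssh
echo "ssh-rsa AAAA...their-public-key..." | sudo tee /home/newadmin/.ssh/authorized_keys
sudo chown -R newadmin:newadmin /home/newadmin/.ssh
sudo chmod 700 /home/newadmin/.ssh
sudo chmod 600 /home/newadmin/.ssh/authorized_keys
# Grant sudo access (the group may be 'wheel' instead of 'sudo' depending on the base OS)
sudo usermod -aG sudo newadmin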
Troubleshooting by Viewing Logs
Anytime something goes wrong, it is always best to check the logs first. This will save a large amount of time. Depending on what needs to be checked, the process for checking specific logs varies. See below for the commands needed to access certain logs.
Docker Daemon: journalctl -u docker.service
Docker Container: sudo docker logs <container-id>
Docker Swarm Service: sudo docker service logs <service-id>
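When digging into an issue, it often helps to narrow or follow these logs; a few variations of the same commands:
# Follow the Docker daemon log and limit it to the last hour
journalctl -u docker.service -f --since "1 hour ago"
# Show only the last 100 lines of a container log and keep following it
sudo docker logs --tail 100 -f <container-id>
# Follow a Swarm service log
sudo docker service logs -f <service-id>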
Using the CICD Solution
Now that all the hard work has been done setting up the CICD solution, we can begin to take advantage of it.
Development Machine Setup
The following steps will need to be followed on each machine where access is desired to the CICD solution.
- Ensure the following software is installed: a) Git (Windows), b) Comparison Tool (Beyond Compare / KDiff3 / Meld), c) (optional) Git GUI (SourceTree [recommended] / Others).
- Configure Git to use desired comparison tool (General Instructions / Beyond Compare Instructions).
- Request an Azure VPN Gateway client certificate from the CICD Administrator and install it by following these instructions under the section titled Install an Exported Client Certificate.
FYI: CICD Administrators will need to generate a new Client Certificate and then give that to the CICD User for installation. Follow these instructions under the section titled Generate a Client Certificate to generate the necessary client certificate.
FYI: Important Security Notice
Each client certificate needs to be generated from a root certificate. Only 20 root certificates can be configured with the VPN Gateway at a time. Therefore, depending on the number of users who need to access the private Azure CICD resources, there will likely be groups of client certificates created from a single root certificate.
Based on these limitations, it is recommended to create a single root certificate for a particular group or organization and then create multiple client certificates from it. If a CICD User leaves the organization, their client certificate can independently be revoked without having to revoke the root certificate or affect other client certificates generated from the same root certificate.
For Maximum Security
It may be possible for distributed client certificates to be given to other users and allow for additional undesirable authentication. The installation of each client certificate involves entering a password. Therefore, it is highly recommended NOT to distribute this password in writing. Rather, ensure the password is unique and hard to remember, and verbally walk a CICD User through the client certificate installation process, giving them the password one character at a time. This may be a harder approach but is much more secure.
Also, be sure to check all VPN Gateway Connection Logs to check for any unrecognized users. Keep in mind, even if a person is able to connect to the Azure Private Network through the VPN Gateway, that doesn’t mean they’ll have the keys to the kingdom. Each asset (e.g. VM, Azure Portal, Azure Native Service, etc.) will require authentication. However, they will be able to access raw network resources and potentially check for vulnerabilities. Therefore, it is important to harden all assets even though they are technically not visible to everyone on the outside.
- Request the VPN Client installation package from the CICD Administrator.
FYI: CICD Administrators can get this file by following these instructions.
- Request the public IP address of the VPN Gateway and configure the VPN connection to use the public IP of the VPN Gateway. Also, configure it to use Certificate Authentication so that the installed client certificate will be used for authentication.
- Add the necessary entries to the Windows hosts file for CICD services (e.g. GitLab, Jenkins, etc.). To do this, open PowerShell with Administrator privileges and then run the command
notepad C:\Windows\System32\drivers\etc\hosts
. Add the following lines and then click Save:
10.0.0.10 jenkins-spacely-engineering.example.com
10.0.0.10 gitlab-spacely-engineering.example.com
10.0.0.10 portainer-spacely-engineering.example.com
10.0.0.10 nexus-spacely-engineering.example.com
FYI: The IP address listed here should be that of the private load balancer. Keep in mind the IP you will use may vary depending on service provisioning order. In addition, not all services will require an entry in the hosts file. Most normal developers will only require access to Jenkins and GitLab. However, even if undesired services are in the hosts file, each service still requires its own account for access.
- Flush the DNS cache by running the command
ipconfig /flushdns
- Connect to the Azure Private Network using the VPN client. Please notify the CICD Administrator of any problems encountered.
- Open a browser and connect to the GitLab server. This will be the FQDN entry for GitLab added to the Windows hosts file.
- Create a new GitLab account by clicking the Register tab next to the Sign In tab. Be sure to login after creating the account and setup the proper profile details.
- If necessary, request a Jenkins account (if allow signups is disabled) from the CICD Administrator (this account will need to be created in Jenkins). This is only needed for creating Jenkins pipeline jobs which will be a part of every GitLab repository. It is recommended to have a limited number of users with this capability.
FYI: Permissions for new Jenkins users are currently set up to allow most actions for authenticated users. Feel free to change this to something more granular. This can be done by going to Manage Jenkins -> Configure Global Security.
- A proxy may cause issues when performing Git operations such as cloning repos. Because of this, in order to bypass the proxy strictly for GitLab, the no_proxy Windows environment variable needs to be modified to include the domain for GitLab.
Variable Name: no_proxy
Variable Value: 127.0.0.1,gitlab-spacely-engineering.example.com
Required Git Development Workflow
In order to properly use the CICD solution, the below workflow is required. If this isn’t followed, CICD tasks will not be invoked.
(Diagram: the required Git development workflow.)
Jenkins and GitLab Project Configuration
In order to enable CICD, both Jenkins and GitLab configuration is required. Jenkins will require at least one disparate pipeline job for each GitLab repository. Moreover, GitLab repositories will require Jenkins integration by connecting them to the created Jenkins pipeline jobs. The steps below outline the setup process to enable CICD.
Create GitLab Upstream Repository
- Login to GitLab (e.g. https://gitlab-spacely-engineering.example.com:51443) and create an upstream repository by creating a Project. Feel free to take advantage of grouping repositories together by creating a Group.
- After creating the repository, create three branches named master, develop, and jenkinsfile.
- Checkout the jenkinsfile branch, obtain the Jenkinsfile template from the CICD Administrator, and modify it to work with the repository. See the comments inside the Jenkinsfile for instructions.
Add the Jenkinsfile to the jenkinsfile branch. Only the Jenkinsfile should exist inside its own branch. This is due to the lightweight checkout option used to obtain only the Jenkinsfile for getting the pipeline logic. The rest of the Git checkout logic exists within this file.
FYI: The Jenkinsfile should exist only in its own branch. All other files will be in the other branches, including the Dockerfile that’s used to build the project. Also, make sure to customize the Jenkinsfile for your project. It has been set up like a template and all details of the file are captured in comments. Finally, please review the comment at the top of the Jenkinsfile about script approvals. The initial job may fail until the appropriate script approvals are granted.
- Be sure to lock down the repository so that only those with appropriate access can push to it directly. The idea is that people work in forks and submit changes through Merge Requests to the upstream repository (usually targeting the develop branch).
To lock it down, select the repository and then go to Settings -> General -> General Project Settings. Change the Default Branch to develop.
Click the Save Changes button when finished.
- Go to Settings -> General -> Merge Request Settings. Tick the box for Merge Request Approvals and add the appropriate approvers. For more details on this feature, see this article. Also, tick the box for Can Override Approvers and Required Per Merge Request, Remove All Approvals in a Merge Request When New Commits are Pushed to Its Source Branch, and Only Allow Merge Requests to be Merged if the Pipeline Succeeds.
Click the Save Changes button when finished.
- Go to Settings -> Repository -> Protected Branches. For each branch, select the proper branch name. Under Allowed to Merge, select the appropriate role (e.g. Masters). Do the same thing for Allowed to Push.
Click the Protect button for each branch.
Create a Jenkins Pipeline Job
- Login to Jenkins and then on the left side click on New Item.
- Enter the name of the project (e.g. spring-boot-demo) and then click Pipeline. Click the OK button when finished.
FYIIt is important to adopt a naming convention that allows for easy identification of Jenkins pipeline jobs and their related GitLab repositories.
- Configure the new pipeline job applicable to open merge requests by filling out the necessary fields and options by taking inspiration from the examples below.
With the default options, make the changes below.
Pipeline Name:
<populated from previous step>
Description: <description which ensures pipeline job is easily identifiable>
Discard Old Build: checked
Strategy: Log Rotation
Max # of Builds to Keep: 25
Do Not Allow Concurrent Builds: checked
GitLab Connection: Spacely Engineering GitLab
FYI: The reason concurrent builds are disallowed is that the Jenkins Slave container issues commands to the Docker Daemon to build and run images. Errors or collisions could occur if these actions happen at the same time. Thus, with this setting enabled, only one job will run at a time for this project; jobs are queued in order and run one after the other. This is very helpful when several outstanding merge requests trigger new builds after the upstream target branch changes.
Under the Build Triggers section:
Build When a Change is Pushed to GitLab: checked (be sure to copy the GitLab CI Service URL displayed)
Push Events: checked
Opened Merge Request Events: checked
Accepted Merge Request Events: checked
Rebuild Open Merge Requests: On push to source or target branch
Comments: unchecked
Enable [ci-skip]: unchecked
Ignore WIP Merge Requests: checked
Set Build Description to Build Cause: checked
Under the Pipeline section:
Definition: Pipeline script from SCM
SCM: Git
Under the Repositories sub-section:
Repository URL: https://gitlab-docker-only.example.com/cicd-demos/spring-boot.git
Credentials: <click Add, select Jenkins>
FYI: Be sure to use the correct URL for Repository URL. In the above example, the URL being used is a network alias of the GitLab service defined in the docker-stack.yml file used to set up GitLab. This is done on purpose to bypass the private load balancer and access GitLab directly through the overlay network. If the private load balancer URL is used instead, errors may occur.
Continuing to add credentials, select the Global Credentials option under Domain. For Kind, select Username with Password. For Scope, select Global. Then enter the username and password for a GitLab account which has admin/owner access to the desired upstream repository. Provide a good description (e.g. GitLab User [mr-spacely]) and then click the Add button.
FYI: It is important for the GitLab user credentials to have access to all upstream and forked repositories which are a part of the job.
Now select the newly created credentials so that they are used for access to the repos.
Branch Specifier: */jenkinsfile
Script Path: Jenkinsfile
Lightweight Checkout: checked
- Click the Save button when finished.
Enable Jenkins Integration in GitLab Repository
- Login to GitLab and navigate to the desired upstream repository.
- Click on the Settings tab and then click on the Integrations tab.
- Scroll down to Project Services and then select Jenkins CI.
FYI: Do not select Jenkins CI (Deprecated).
- Check the Active box and then uncheck the Push box. Select the Merge Request box.
- Under the Jenkins URL field, enter the URL to access Jenkins (e.g. https://jenkins-nginx-docker-only.example.com).
FYI: Be sure to use the correct URL for Jenkins URL. In the above example, the URL being used is a network alias of the Jenkins NGINX service defined in the docker-stack.yml file used to set up Jenkins. This is done on purpose to bypass the private load balancer and access Jenkins directly through the overlay network. If the private load balancer URL is used instead, errors may occur.
- Under the Project Name field, enter the project name (e.g. spring-boot-demo).
- Enter the Jenkins username and password for a Jenkins user with admin rights in the remaining fields and then click the Test Settings and Save Changes button.
Create Fork of Upstream Repository
When ready to begin development work, the upstream repository needs to be forked. As mentioned earlier when discussing the Git Development Workflow, all work should be done inside the fork with a disparate branch created specifically for the task at hand. When changes are ready to be introduced to the upstream repository, a Merge Request is created with the source branch being the fork’s disparate branch and the target branch being upstream’s develop branch.
Once the merge request is submitted, the Jenkins pipeline job will run, which will lint, build, test, deploy, etc. (depending on how the Jenkinsfile has been customized). If a failure occurs, the merge request will not be allowed to merge until the issue has been corrected. Once the merge request has been accepted, the Jenkins pipeline job will run again and perform additional logic. Be sure to review the Jenkinsfile so as to completely understand what it does.
- In GitLab, navigate to the upstream repository.
- At the top of the page where the title of the repository is, click on the Fork button.
- Select the user or group where the fork should be created.
- Clone the newly created fork by running the command
git clone <fork repo url>
- Navigate to the cloned repo folder and then run the command
git remote add upstream <upstream repo url>
.
Create Branch in Fork and Submit Merge Request
- When navigated to the forked repo folder, create a new branch to begin working in by running the command
git checkout -b <branch name>
- In order to properly test and follow along with this article, add the Spring Boot Demo files to your branch.
- Modify the pom.xml file in the Spring Boot demo root folder and change the URL under the repository and snapshotRepository sections to use the Nexus server (e.g. https://nexus-spacely-engineering.example.com:53443).
- Modify the settings.xml file from the Spring Boot demo root folder and, under the mirror section, change the URL to use the Nexus server (e.g. https://nexus-spacely-engineering.example.com:53443).
- Copy the settings.xml file from the Spring Boot demo root folder into the data volume folder (see below FYI). Modify it to include the private Maven repo credentials. For more information, see this article.
FYI: The Spring Boot demo project contains a file named settings.xml in its root folder. This file will be added to the project’s Docker image during the build process. This file alone will allow for using the Nexus server to get the latest artifacts. In other words, Nexus serves as a Maven proxy and will cache the files it retrieves from the public Maven repo, saving bandwidth.
However, during the push stage in the Jenkinsfile it will attempt to publish the built JAR into the Nexus private Maven repo. Using the settings.xml file in the image will fail since it is missing the repo credentials. This is on purpose since you never want to publish secrets in a Docker image (just like in a Git repo). Therefore, this same file will need to be copied into the data volume location used by the Jenkins Slave container for the Maven settings (see the volume mapping for the Jenkins Slave container in this file). After this same file is copied to that location where it will be used by the running Jenkins Slave container, the credentials can be added and they will remain on the data volume and not in the Docker image.
To wrap up, the settings.xml file which gets added to the built Docker image is used during the image build process so it gets Maven artifacts from the Nexus server. In contrast, the settings.xml file which exists on the data volume with the Maven repo credentials will be used by the running container in the CICD process for pushes (e.g. pushing the latest Docker image and Maven artifact), effectively overwriting the version contained in the image itself.
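As a concrete (but hypothetical) example of that copy step, assuming the Jenkins Slave’s Maven settings volume is an NFS-backed folder such as /mnt/nfs/jenkins-slave/maven; your actual path is whatever the volume mapping in the Jenkins Slave definition points at.
# Copy the demo settings.xml to the data volume used by the Jenkins Slave container
sudo cp settings.xml /mnt/nfs/jenkins-slave/maven/settings.xml
# Then edit the copy on the volume to add the Nexus repo credentials (never commit these)
sudo nano /mnt/nfs/jenkins-slave/maven/settings.xml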
- Work on the project by making commits, squashes, etc.
- When ready, push the latest changes to the forked repository remote by running the command
git push -u origin <branch name>
- Navigate to the upstream repository and look for the Create Merge Request button that appears. This happens because GitLab knows a recent change has been made to the forked repository branch and allows us to easily create a Merge Request when looking at the upstream repository.
FYI: For more information on creating a merge request inside GitLab, see this article.
- Fill out the appropriate details and ensure the source branch is the forked repository branch where the work has been done. Make sure the target branch is the upstream repository’s develop branch.
- If no conflicts exist then click on the Submit Merge Request button.
Always create a new branch and ensure your fork’s develop branch and newly created branches are in sync with the upstream repository, or merge conflicts may occur. A simple way to pull changes and get everything in sync is to run the command git pull upstream develop.
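A short sequence that keeps a fork in sync before branching (branch names follow the conventions above; the task branch name is only an example):
# Sync the fork's develop branch with upstream
git checkout develop
git pull upstream develop
git push origin develop
# Start new work from the freshly synced develop branch
git checkout -b feature/my-task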
Jenkinsfile and the Build Process
If you’ve made it to this point, you’ve likely already reviewed the Jenkinsfile and customized it for your project. Many things are happening here, and it’s important to understand the build process and how everything works.
How Docker Fits In
This solution uses two dedicated servers to run Jenkins jobs. When Jenkins runs the jobs and executes the logic from the Jenkinsfile, it will run Docker commands against the chosen server. These commands access the Docker Daemon directly, as if someone SSH’d into the server and started running docker commands. This is possible because the Jenkins Slave container has the Docker binaries as well as Docker Compose, etc. Docker isn’t actually running inside the container, but when Docker commands are executed they are passed through to the host running the Jenkins Slave container.
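One common way this kind of pass-through is wired (an assumption for illustration, not necessarily this article’s exact configuration) is to bind-mount the host’s Docker socket into the slave container, so the docker CLI inside the container talks straight to the host’s Docker Daemon; the image name below is hypothetical.
# Run a slave container that reuses the host's Docker Daemon via the mounted socket
sudo docker run -d --name jenkins-slave-example \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-slave-image:latest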
Therefore, we are using Docker to do our builds. The sample Spring Boot Demo mentioned earlier has its own Dockerfile which describes how to build and test the application. This keeps the tooling for the project out of the Jenkins Slave container and in its own separate container that gets built, run for tests, etc. This enforces responsibility for the project setup, build settings, dependencies, etc. to the developers.
Using this approach, we can actually deploy the application onto the same server running the build if desired or even a different one. This approach offers a great deal of flexibility. We can even push our newly built Docker image for our project to our private Docker Registry, all automatically from the Jenkinsfile just like running the sudo docker push <image>
command directly on the server.
To ensure builds for the same image do not conflict with each other, the Do Not Allow Concurrent Builds option in Jenkins was enabled. Therefore, only one job runs at any time and the others are queued.
The Jenkinsfile also uses Docker Compose to make running Docker commands even easier. Pay attention to the configuration of these files.
How Jenkins Responds to Events
The way Jenkins has been configured is to primarily respond to merge request events. When a merge request is created, Jenkins will fire a job and ensure the code within is clean before allowing the merge request to be accepted. Naturally, over time merge requests will queue up.
When one of them is accepted and thus merged into the development branch, the other outstanding merge requests will be queued again in Jenkins and their jobs fired. This is important because the last merge request that was accepted could potentially break one of the outstanding merge requests. This ensures that is caught so it can be fixed before being introduced into the upstream development branch.
Jenkins will also fire a job when it detects a push to the upstream development and master branch. When a push has been made to the upstream development branch in the case of accepting a merge request, the job will run again but this time won’t run tests or perform linting since those would have been done when the merge request was first introduced. Also, it will perform a push and deployment. This wasn’t done before because it wouldn’t make sense until the merge request was approved and truly made its way to the upstream development branch.
Also, if a push has been detected on the master branch, it is likely it was done intentionally for a release. In this case, a job will fire and the Jenkinsfile will have special logic to handle this. Generally, if something has been pushed to master it is considered ready for production and therefore Jenkins can handle a MTP (move to production) for us.
Finally, the logic in the Jenkinsfile is setup so that if someone triggers a manual build from within Jenkins (the job wasn’t triggered by an event) it will perform actions only against the upstream development branch. This is helpful if someone wants to manually trigger a new deployment. Perhaps someone messed up the files for the previous deployment and we want to fix that with a new deployment. Jenkins has us covered here.
Load Balancing Jenkins Build Servers
As mentioned before, the Jenkins plugin (YADP) for enabling ephemeral build slaves doesn’t support modern Docker Swarm mode at the moment. As a result, the ephemeral build containers do not run as Docker Swarm services, which would otherwise spawn them dynamically across the Docker Swarm cluster. Therefore, in order to enable scaling and high availability, multiple servers are being used for Jenkins jobs.
However, having multiple servers isn’t enough on its own. There needs to be some logic to ensure the jobs are spread across all the available servers. This is taken care of for us in the Jenkinsfile. There is logic there which performs a health check (by hitting the Hello World container set up on each server earlier) on a map of servers and then builds a list of those that are online. Once this list is created, a server is picked from the list at random.
If no servers are available, the job will fail and someone will likely be getting a phone call. The more servers added to run builds, the more resilient the build system will be. Please review the Jenkinsfile to see how this works. It’s the next best thing until modern Swarm Mode is available on the plugin.
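The idea is simple enough to sketch outside the Jenkinsfile as well. Below is a rough shell equivalent; the server IPs, the port, and the health-check endpoint are all placeholders for whatever your Hello World containers actually expose.
#!/bin/bash
# Build a list of build servers whose Hello World health check responds, then pick one at random.
SERVERS="10.0.0.5 10.0.0.6"
ONLINE=()
for server in $SERVERS; do
  if curl -fsS --max-time 5 "http://$server:8080/" > /dev/null; then
    ONLINE+=("$server")
  fi
done
if [ ${#ONLINE[@]} -eq 0 ]; then
  echo "No build servers are online" >&2
  exit 1
fi
CHOSEN=${ONLINE[$RANDOM % ${#ONLINE[@]}]}
echo "Using build server $CHOSEN"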
Conclusion
This has been my most ambitious article yet. We have covered a large amount of information and all the necessary steps to create a private CICD solution in Azure. I’m confident once you have a working CICD solution and you see its benefits first hand, you’ll never look back.
Let this article serve as a foundation for setting up your own CICD solution. Feel free to adapt it to meet your needs or provide any comments or feedback. I’m always open to new ideas and ways of improving things. I will continue to revisit this article and update it when applicable.