How to configure an NGINX Load Balancer on Ubuntu 22?

Introduction

In this post we will set up a load balancer using nginx's HTTP load balancing on Ubuntu 22. The requirement was a load balancer running over https that balances connections across 4 Polkadot based RPC servers. Please note that this setup also works in other environments, including standard web servers over https.

Prerequisites

  • Ubuntu 22 is set up on the Load Balancer server.
  • All backend servers are created and working properly.
  • The load balancer domain lb.yourdomain.com resolves correctly to the server.

Create the SSL certificate

We use certbot to create the SSL certificate for lb.yourdomain.com using the following commands:

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot certonly --standalone --noninteractive --agree-tos --cert-name lb -d lb.yourdomain.com -m yourmail@yourdomain.com -v

This will generate 2 certificate files:

/etc/letsencrypt/live/lb/fullchain.pem
/etc/letsencrypt/live/lb/privkey.pem
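If you want to double check the result (an optional sanity check, not part of the original flow), certbot and openssl can show the certificate details and its expiry date:

# list the certificate managed by certbot under the name "lb"
sudo certbot certificates --cert-name lb
# print the validity dates of the issued certificate
sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/lb/fullchain.pem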

Install the nginx server.

sudo apt install nginx  -y

Create an nginx.conf file with the content below, replacing the domain and SSL parameters with your own settings.

upstream backend {
        server server1.yourdomain.com:443;
        server server2.yourdomain.com:443;
        server server3.yourdomain.com:443;
        server server4.yourdomain.com:443;
}

server {
        server_name lb.yourdomain.com;
        root /var/www/html;
        location / {
          try_files $uri $uri/ =404;
          proxy_buffering off;
          proxy_pass https://backend;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
        }
        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/lb/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/lb/privkey.pem;
        ssl_dhparam /snap/certbot/current/lib/python3.8/site-packages/certbot/ssl-dhparams.pem;
        ssl_session_cache shared:cache_nginx_SSL:1m;
        ssl_session_timeout 1440m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE
-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-A
ES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AE
S256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH
-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";
}

Copy the nginx.conf file to its final destination and remove the old config.

sudo cp --verbose nginx.conf /etc/nginx/sites-available/nginx.conf
sudo ln -s /etc/nginx/sites-available/nginx.conf /etc/nginx/sites-enabled/nginx.conf
sudo rm -rf /etc/nginx/sites-enabled/default

Restart the nginx server to activate your configuration.

sudo service nginx restart
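Optionally, before relying on the new setup, you can validate the configuration syntax and confirm that the load balancer answers over https (this assumes lb.yourdomain.com already resolves to this server):

# check the nginx configuration for syntax errors
sudo nginx -t
# request only the response headers from the load balancer
curl -I https://lb.yourdomain.com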

Even though certbot schedules automatic renewal of the SSL certificates, it won't restart the nginx server, and the renewed certificate only takes effect once nginx has been restarted. To handle this, you can add the following line to crontab.

0 */12 * * * /usr/bin/certbot renew --quiet && /usr/bin/systemctl restart nginx

This will attempt to renew the SSL certificate every 12 hours and, if the renewal run succeeds, restart the nginx server.
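If you want to be sure the renewal itself works before the certificate is actually due, you can trigger a dry run manually (the dry run uses the Let's Encrypt staging environment and doesn't replace the real certificate):

# simulate a renewal without saving a new certificate
sudo certbot renew --dry-run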

How to add a country flag to a YouTube Video Title – Step by Step Guide

After watching a number of badly made and unnecessarily long YouTube videos on how to do this, I decided to write a very simple and quick explanation of how to add a flag to your YouTube title. I hope this helps others and saves some time.

  • Copy the highlighted flag icon.
  • Start editing your youtube video title and paste the icon there.
  • Once pasted it should display the flag in the title.
  • That’s it folks, it is as easy as that.

How to Fix the Spinning Blue Circle on Windows 10

This morning I encountered a never ending spinning circle on my laptop. After some research it was clear that one of the applications was faulty. I checked the application logs in the Event Viewer and found that the UIhost.exe process was causing the issue.

After a quick google search it turned out that this process is associated with McAfee WebAdvisor, so the solution was to remove McAfee WebAdvisor altogether. The never ending spinning blue circle issue is usually caused by a faulty executable, and the easiest way to identify it is to check the Windows Event Viewer Application logs.

The easiest way to open the Windows Event Viewer is to search for “event” in the search field of the Windows Start Menu.

A reference to a resource type must be followed by at least one attribute access, specifying the resource name.

There are multiple posts going around about this error. In my case the problem was caused by missing quotes around the value of provisioningMode. So instead of this:

resource "kubernetes_storage_class" "us-east-1a" {
  metadata {
    name = "us-east-1a"
  }
  storage_provisioner = "efs.csi.aws.com"
  reclaim_policy      = "Retain"
  parameters = {
    provisioningMode = efs-ap
    fileSystemId     = var.us-east-1a-vol
    directoryPerms   = "777"
  }
  mount_options = ["file_mode=0700", "dir_mode=0777", "mfsymlinks", "uid=1000", "gid=1000", "nobrl", "cache=none"]
}

Use this:

resource "kubernetes_storage_class" "us-east-1a" {
  metadata {
    name = "us-east-1a"
  }
  storage_provisioner = "efs.csi.aws.com"
  reclaim_policy      = "Retain"
  parameters = {
    provisioningMode = "efs-ap"
    fileSystemId     = var.us-east-1a-vol
    directoryPerms   = "777"
  }
  mount_options = ["file_mode=0700", "dir_mode=0777", "mfsymlinks", "uid=1000", "gid=1000", "nobrl", "cache=none"]
}
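Mistakes like the missing quotes are caught before an apply, so a quick way to verify the fix is to run Terraform's built-in validation in the module directory (a sketch, assuming the providers have not been initialised yet):

# initialise providers without configuring a backend
terraform init -backend=false
# check the configuration for syntax and reference errors
terraform validate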

How to automatically register GitLab Runners using the runnerCreate GraphQL API mutation and Project Runners?

GitLab has changed its GitLab Runner registration process from version 15.10. Previously there was a simple one step registration process where a runner could be registered by simply running the gitlab-runner register command on the server/entity where the runner was installed. The old procedure used a single registration token which was linked to the project and was shared by all the runners registered for that one project.

The registration process has now changed from one step to two, and each runner has to obtain its own unique authentication token.

Step 1: A new project runner will typically be created manually on the gitlab.com interface (scroll down for the automated approach).

Many of the options previously passed to the gitlab-runner register command as parameters, such as run-untagged, locked and tag-list, are now set here instead.

This new project runner will generate a unique authentication token.

Step 2: Run the gitlab-runner register command using this unique authentication token to link this runner with the project.

Needless to say, this breaks any automation that was previously built around runner registration, so it causes issues for anyone relying on automatic registration.

After browsing some forums I found out that a new project runner can be created automatically using GitLab's GraphQL API. There is a mutation called runnerCreate which does this part automatically, and running this call returns the authentication token, which can then be used to register the runner. I used the following call to get an authentication token:

mutation {
  runnerCreate(
    input: {projectId: "gid://gitlab/Project/00000000", runnerType: PROJECT_TYPE, tagList: "yourtag"}
  ) {
    errors
    runner {
      ephemeralAuthenticationToken
    }
  }
}

Replace the 00000000 with your project id; the call will create a new project runner and return the authentication token.

I have also written a bash script that automates the entire process end to end. The script should be run on the server/entity where the runner is installed.

 #!/usr/bin/bash
 # Personal Access Token with the api scope.
 export PRIVATE_TOKEN="Replace with your Personal Access Token"
 # Go to Settings -> General and copy the numeric value from "Project ID".
 export PROJECT_ID="Replace with your Project ID"
 export TAGLIST="yourtag"
 export RUN_UNTAGGED="true"
 export LOCKED="true"
 # Change this to your own self-hosted GitLab URL; if you use gitlab.com leave the value as it is.
 export GITLAB_URL="https://gitlab.com"
 export TOKEN=$(curl "$GITLAB_URL/api/graphql" --header "Authorization: Bearer $PRIVATE_TOKEN" --header "Content-Type: application/json" --request POST --data-binary '{"query": "mutation { runnerCreate( input: {projectId: \"gid://gitlab/Project/'$PROJECT_ID'\", runnerType: PROJECT_TYPE, tagList: \"'$TAGLIST'\", runUntagged: '$RUN_UNTAGGED', locked: '$LOCKED'} ) { errors runner { ephemeralAuthenticationToken } } }"}' | jq '.data.runnerCreate.runner.ephemeralAuthenticationToken' | tr -d '"')
 sudo gitlab-runner register --non-interactive --url "$GITLAB_URL" --token "$TOKEN" --executor shell

This script is also available on github.
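Once the script has run, you can check, for example with the commands below, that the runner was registered and that its token is accepted by GitLab:

# list the runners configured on this host
sudo gitlab-runner list
# check that the registered runners can authenticate against GitLab
sudo gitlab-runner verify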

How to install TA-Lib and its python library on Ubuntu 22?

Installing TA-Lib on an Ubuntu server has its challenges, as not only does the python library have to be installed, but the underlying C library has to be downloaded and compiled first. Use the following steps to perform the installation:

mkdir -p /app
sudo apt-get install build-essential autoconf libtool pkg-config python3-dev -y
cd /app
sudo wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
sudo tar -xzf ta-lib-0.4.0-src.tar.gz
cd ta-lib/
sudo ./configure
sudo make
sudo make install
sudo pip3 install --upgrade pip
sudo pip3 install TA-Lib
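As a quick sanity check (my addition, not part of the original steps), you can confirm that the python wrapper can load the compiled library:

# the import fails with a linker error if the compiled C library is missing
python3 -c "import talib; print('TA-Lib OK')"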

If you are using GitLab pipelines, you can use the following job to do the same:

 stages:
   - prepare

 prepare:
   stage: prepare
   script:
     - mkdir -p /app
     - sudo apt-get install build-essential autoconf libtool pkg-config python3-dev -y
     - cd /app
     - sudo wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
     - sudo tar -xzf ta-lib-0.4.0-src.tar.gz
     - cd ta-lib/
     - sudo ./configure
     - sudo make
     - sudo make install
     - sudo pip3 install --upgrade pip
     - sudo pip3 install TA-Lib

Happy trading!

Azure Terraform – Error: expected “user_data” to be a base64 string, got #!/usr/bin/bash

I am testing an Azure setup with Terraform to automate VM instance provisioning using user data scripts. User data is a set of scripts or other metadata that is passed to an Azure virtual machine at provision time. It runs right after the OS has been installed and the server boots. I typically use it to update the OS and install the tools and software I need on the provisioned node, using a shell script defined as a template_file data source like this:

data "template_file" "init-wp-server" {
  template = file("./init-wp-server.sh")
}

I tried to specify the user data file the same way I used to do on AWS and got the following error:

Error: expected "user_data" to be a base64 string, got #!/usr/bin/bash

This is because Azure requires the user data to be Base64-encoded. So instead of using this in the azurerm_linux_virtual_machine definition:

  user_data = data.template_file.init-wp-server.rendered

The following should be used:

  user_data = base64encode(data.template_file.init-wp-server.rendered)
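As far as I know, Azure exposes user data through the Instance Metadata Service, so once the VM is up you can verify on the VM itself that the script arrived intact (a sketch; the api-version may differ on your setup):

# query the IMDS endpoint from inside the VM and decode the base64 payload
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" | base64 --decode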

Azure Terraform Error – Please change your resource to Standard sku for Availability Zone support.

I am testing creating resources with Terraform on Azure. I tried to force one availability zone per public IP and ran the following code:

# Create public IPs
resource "azurerm_public_ip" "external_ip" {
  count               = length(var.wp_test_instance_location)
  name                = "external_ip-0${count.index}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Dynamic"
  zones               = tolist([var.wp_test_instance_location[count.index], ])
  tags = {
    environment = "test"
    terraform   = "Y"
  }
}

Then I ran into the following error message:

Error: creating/updating Public Ip Address: (Name "external_ip-01" / Resource Group "rg-bright-liger"): network.PublicIPAddressesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ZonesNotAllowedInBasicSkuResource" Message="Request for resource /subscriptions/[MASKED]/resourceGroups/rg-bright-liger/providers/Microsoft.Network/publicIPAddresses/external_ip-01 is invalid as it is Basic sku and in an Availability Zone which is a deprecated configuration.  Please change your resource to Standard sku for Availability Zone support." Details=[]

The issue is that the SKU of the public IP has to be Standard instead of Basic (which is the default, and I had not set it before). The allocation_method parameter also has to be changed from Dynamic to Static. The corrected code looks like this:

# Create public IPs
resource "azurerm_public_ip" "external_ip" {
  count               = length(var.wp_test_instance_location)
  name                = "external_ip-0${count.index}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  zones               = tolist([var.wp_test_instance_location[count.index], ])
  sku                 = "Standard"
  tags = {
    environment = "test"
    terraform   = "Y"
  }
}
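If you want to double check the deployed resource outside Terraform, the Azure CLI can show the SKU of a created public IP (assuming the az CLI is installed and logged in; the resource group and IP names below are just the example ones from this post):

# print the SKU name of the first public IP
az network public-ip show --resource-group rg-bright-liger --name external_ip-00 --query "sku.name" -o tsv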

StandardOutput log file is not updating after the linux service restarts

I ran into this issue recently. I created a linux service on Ubuntu and defined StandardOutput to redirect logging to a file, but whenever I restarted the service the log file didn't seem to update.

The first problem was that the log file actually did update, but it was rewritten from the beginning of the file, keeping the older lines further down and gradually overwriting them, which is very confusing behaviour.

The fix was to change the StandardOutput definition in my service file from:

StandardOutput=file:/var/log/application.log

to

StandardOutput=append:/var/log/application.log

This means that if the file doesn't exist it will be created, and if it does exist the new log lines will simply be appended to the end instead of overwriting the file from the very beginning, which avoids the confusion.
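Keep in mind that systemd only picks up changes to a unit file after a reload; something along these lines applies the change and lets you follow the log (application.service and the log path are just the placeholders used above):

# re-read the changed unit file
sudo systemctl daemon-reload
# restart the service so the new StandardOutput setting is used
sudo systemctl restart application.service
# follow the log file and confirm new lines are appended
tail -f /var/log/application.log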

How to create a new Virtual server instance for VPC in IBM Cloud – Step by Step Guide

In this step by step guide we will create a new Virtual server instance for VPC on IBM Cloud. Creating a virtual machine under this new option tends to be more complicated than creating a simple virtual machine in the Classic Infrastructure. Let's get right to it.

Prerequisites

There are a number of prerequisites that need to be fulfilled before the actual creation of the virtual machine.

Create an SSH key pair

The virtual machine will be created without a root password. In order to log in to your new virtual machine you will need an SSH key pair, which has to be generated manually.

We used an Ubuntu linux session to generate a key pair for the root user by executing the following command:

ssh-keygen -t rsa -C "root"

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /app/root
Enter passphrase (empty for no passphrase):

Enter same passphrase again:
Your identification has been saved in /app/root
Your public key has been saved in /app/root.pub

This procedure generates a private and a public key. The private key was saved as /app/root and the public key as /app/root.pub.
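Before uploading the key you can optionally print its fingerprint to make sure you are working with the right file:

# show the fingerprint and comment of the public key
ssh-keygen -lf /app/root.pub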

Open IBM Cloud in your browser and navigate to the SSH Keys. VPC Infrastructure -> SSH keys

Click on Create at the SSH keys for VPC screen.

Fill in the information in the next pop up window. Name your key and copy and paste the public key into the text area at the bottom (we won't show that here). If you filled in all the details correctly the Create button will turn blue. Click it to continue.

If you face an error here stating that the key is invalid, it might be that you copied it across from a Linux shell by cat-ing the file, which can introduce line breaks. Open the public key in a text editor and copy it across from there.

Once you have created the ssh key it will show up on the SSH keys for VPC list.

Create Floating IPs for VPC

Floating IPs are basically public IP addresses. To be able to access the server over SSH for the first time, a floating IP will be bound to the Virtual server instance for VPC which we will create later on.

Navigate to VPC Infrastructure -> Floating IPs then click Reserve.

Enter the Floating IP name, then click on Reserve. We used testip for the name.

The floating ip is now created.

Create the Virtual server instance for VPC

Now that we have the prerequisites in place, it is time to create our VM for VPC.

Navigate to VPC Infrastructure -> Virtual server instances and click on Create

Add the name of your choice and select the SSH key you created previously, then click Create virtual server.

Your virtual server is now created. The vm will only have a Private IP.

Navigate to VPC Infrastructure -> Floating IPs, open the drop down menu on the right of your floating IP and select Bind.

Select the VM instance you have created at Resource to bind then click Bind.

The status now shows Bound in green and the Targeted device should show your VM.

Log into the server using SSH from another Linux server or desktop using the following command:

ssh -i root root@xxx.xxx.xxx.xxx <- Floating ip

After the -i flag you have to specify the private key filename, which in our case is root.

root@localhost:/app# ssh -i root root@xxx.xxx.xxx.xxx
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
ECDSA key fingerprint is SHA256:+wb+ApkNLds5hup2vMWEuvUSoabXppaG1ZCh0FzLrVw.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'xxx.xxx.xxx.xxx' (ECDSA) to the list of known hosts.
Enter passphrase for key 'root':
[root@test01 ~]#

If you would like to use PuTTY to log in from Windows, the private key will likely be incompatible, so you will have to use the puttygen utility to load the key and save it in the proper format.

Open the puttygen utility and load your private key. Once loaded, click on Save private key and you will get a PuTTY compatible .ppk file.

Open PuTTY, create the session and navigate to the Connection -> SSH -> Auth menu. Click Browse, select the newly generated .ppk file, then click Open to start the PuTTY session.

The putty session will now open to the server.

Alternative ways to log in to your server

Once you are logged in you can set the root password using the passwd command, and you no longer need to keep your server on a public IP address.

Feel free to unbind the Floating IP and use it for a different server or just simply delete it.

Navigate to VPC Infrastructure -> Virtual server instances and click on your VM. Once you are on the main screen of your VM, select Actions in the top right corner and pick either Open VNC console or Open serial console.

This will open a console to your VM without the need for a public IP address. Use your new root password to log in.