I have been testing an Azure setup with Terraform to automate VM provisioning using user data scripts. User data is a set of scripts or other metadata that is passed to an Azure virtual machine at provision time. It runs right after the OS has been installed and the server boots. I typically use it to update the OS and install the tools and software I need on the provisioned node, using a shell script defined as a template_file data source like this:
data "template_file" "init-wp-server" {
template = file("./init-wp-server.sh")
}
I tried to specify the user data file the same way I used to on AWS and got the following error:
Error: expected "user_data" to be a base64 string, got #!/usr/bin/bash
This is because Azure expects the user_data value to be Base64-encoded, so in the azurerm_linux_virtual_machine definition the rendered script has to be wrapped with Terraform's base64encode() function instead of being passed as plain text.
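A minimal sketch of the fix, assuming the template_file data source shown at the top (the resource name and the omitted arguments are illustrative):

```hcl
resource "azurerm_linux_virtual_machine" "wp_server" {
  # ... name, size, image and network settings omitted ...

  # Azure expects user_data to arrive already Base64-encoded,
  # so wrap the rendered script with base64encode():
  user_data = base64encode(data.template_file.init-wp-server.rendered)
}
```

Alternatively, filebase64("./init-wp-server.sh") reads and encodes the file in one step when no template interpolation is needed.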
Error: creating/updating Public Ip Address: (Name "external_ip-01" / Resource Group "rg-bright-liger"): network.PublicIPAddressesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ZonesNotAllowedInBasicSkuResource" Message="Request for resource /subscriptions/[MASKED]/resourceGroups/rg-bright-liger/providers/Microsoft.Network/publicIPAddresses/external_ip-01 is invalid as it is Basic sku and in an Availability Zone which is a deprecated configuration. Please change your resource to Standard sku for Availability Zone support." Details=[]
The issue is that the SKU of the public IP has to be Standard instead of Basic (which is the default, and I hadn't set it before). The allocation_method parameter also has to be changed from Dynamic to Static, because the Standard SKU requires static allocation. The corrected code looks like:
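A sketch of the corrected azurerm_public_ip resource, reusing the names from the error message above (the location is an assumption):

```hcl
resource "azurerm_public_ip" "external_ip-01" {
  name                = "external_ip-01"
  resource_group_name = "rg-bright-liger"
  location            = "westeurope" # assumed; use your own region

  # Standard SKU is required for Availability Zone support,
  # and Standard public IPs must use static allocation:
  sku               = "Standard"
  allocation_method = "Static"
}
```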
I ran into this issue recently. I had created a Linux service on Ubuntu and defined StandardOutput to redirect logging to a file, but every time I restarted the service the log file didn't seem to update.
The first surprise was that the log file actually did update, but it was rewritten from the beginning, gradually overwriting the older lines instead of appending new ones, which is rather odd behaviour, I would say.
The fix was to change the StandardOutput definition in my service file from:
StandardOutput=file:/var/log/application.log
to
StandardOutput=append:/var/log/application.log
This means that if the file doesn't exist it will be created, and if it does exist the new log lines will simply be appended to it instead of the file being overwritten from the very beginning, causing confusion.
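For reference, a minimal unit file using the append mode (the service name and paths are illustrative):

```ini
[Unit]
Description=Example application

[Service]
ExecStart=/usr/local/bin/application
# append: creates the file if it is missing, otherwise appends to it
StandardOutput=append:/var/log/application.log
StandardError=append:/var/log/application.log

[Install]
WantedBy=multi-user.target
```

Note that the append: mode requires systemd 240 or newer; on older releases only file: is available.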
In this step-by-step guide we will create a new Virtual server instance for VPC on IBM Cloud. Creating a virtual machine under this new option tends to be more complicated than creating a simple virtual machine in the Classic Infrastructure. Let's get right to it.
Prerequisites
There are a number of prerequisites that need to be fulfilled before the actual creation of the virtual machine.
Create an SSH key pair
The virtual machine will be created without a root password. In order to log in to your new virtual machine you will need an SSH key pair, which has to be generated manually.
We used an Ubuntu Linux session to generate a key pair for the root user by executing the following command:
ssh-keygen -t rsa -C "root"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /app/root
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /app/root
Your public key has been saved in /app/root.pub
This procedure generates a private and a public key. Since we entered /app/root as the file name, the private key was saved as root and the public key as root.pub.
Open IBM Cloud in your browser and navigate to VPC Infrastructure -> SSH keys.
Click on Create at the SSH keys for VPC screen.
Fill in the information in the next pop-up window. Name your certificate and copy-paste the public key into the text area at the bottom (not shown here). If you have filled in all the details correctly, the Create button will turn blue. Click it to continue.
If you get an error here stating that the key is invalid, it may be because you copied it from a Linux shell by cat-ing the file, which can mangle line breaks. Open the public key in a text editor and copy it from there instead.
Once you have created the ssh key it will show up on the SSH keys for VPC list.
Create Floating IPs for VPC
Floating IPs are essentially public IP addresses. To be able to access the server over SSH for the first time, a floating IP will be bound to the Virtual server instance for VPC that we will create later on.
Navigate to VPC Infrastructure -> Floating IPs then click Reserve.
The floating IP is now created.
Create the Virtual server instance for VPC
Now that we have the prerequisites in place, it is time to create our VM for VPC.
Navigate to VPC Infrastructure -> Virtual server instances and click Create.
Add the name of your choice and select the SSH key you created previously, then click Create virtual server.
Your virtual server is now created. At this point the VM will only have a private IP.
Navigate to VPC Infrastructure -> Floating IPs, open the drop-down menu next to your floating IP and select Bind.
Select the VM instance you have created at Resource to bind then click Bind.
The status now shows Bound in green, and the Targeted device should show your VM.
Log in to the server over SSH from another Linux server or desktop using the following command:
ssh -i root root@xxx.xxx.xxx.xxx
Here xxx.xxx.xxx.xxx is the floating IP. After -i you have to specify the private key file name, which in our case is root.
root@localhost:/app# ssh -i root root@xxx.xxx.xxx.xxx
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
ECDSA key fingerprint is SHA256:+wb+ApkNLds5hup2vMWEuvUSoabXppaG1ZCh0FzLrVw.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'xxx.xxx.xxx.xxx' (ECDSA) to the list of known hosts.
Enter passphrase for key 'root':
[root@test01 ~]#
If you would like to use PuTTY to log in from Windows, the private key will likely be incompatible, so you will have to use the PuTTYgen utility to load the private key and save it in the proper format.
Open PuTTYgen and load your private key. Once it is loaded, click Save private key and you will get a PuTTY-compatible .ppk file.
Open PuTTY, create the session and navigate to the Connection -> SSH -> Auth menu. Click Browse, select the newly generated .ppk file, then click Open to start the PuTTY session.
The PuTTY session will now open a connection to the server.
Alternative ways to log in to your server
Once you are logged in, you can set the root password using the passwd command, after which you no longer need to keep your server on a public IP address.
Feel free to unbind the Floating IP and use it for a different server or just simply delete it.
Navigate to VPC Infrastructure -> Virtual server instances and click on your VM. Once you are on the main screen of your VM, select Actions in the top right corner and pick either Open VNC console or Open serial console.
This opens a console to your VM without the need for a public IP address. Use your new root password to log in.
We have run into this issue several times: on a high-performance MySQL server the binary logs kept filling up the filesystem. Previously we purged them manually with the following command from the mysql CLI:
PURGE BINARY LOGS BEFORE NOW();
Then we did a bit of research on how to do this automatically and found the binlog_expire_logs_seconds variable, which can be set in the /etc/mysql/mysql.conf.d/mysqld.cnf file. Adding the following line…
binlog_expire_logs_seconds = 259200
…will keep only 3 days' worth of binlogs. Don't forget to restart the mysql service afterwards.
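The retention value is just a number of seconds; a quick sanity check of the arithmetic, plus the restart (the service name is assumed to be mysql, as on Ubuntu; on other distributions it may be mysqld):

```shell
# 3 days expressed in seconds:
echo $((3*24*60*60))

# apply the new setting (service name assumed):
# sudo systemctl restart mysql
```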
By default you do not have root access in any of the pods created on OpenShift. If you still need root access for development or other purposes, follow these simple steps to gain it:
Log in to your bastion box and switch project to the one you would like to work with:
oc project projectname
Create a service account whose name resembles the project. We were installing a Zabbix container, hence we used zabbix in the name.
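A sketch of the commands involved, assuming a service account called zabbix-sa and a deployment called zabbix (both names are illustrative; the anyuid SCC is what allows containers to run as any UID, including root):

```shell
# create the service account in the current project
oc create serviceaccount zabbix-sa

# allow pods running under this service account to use any UID
oc adm policy add-scc-to-user anyuid -z zabbix-sa

# switch the deployment over to the new service account
oc set serviceaccount deployment/zabbix zabbix-sa
```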
When you manage multiple store instances from Magento 2, you will sometimes need the id of a specific store, especially if you want to manipulate data by accessing the database directly.
One might ask why not use the methods provided by Magento 2; the answer is simply that they are far too slow to manage a store with over 1000 products effectively. Since we have 20K+ products, it was necessary to change and manipulate data directly in the database across a huge, enterprise-level, multi-store environment.
So, back to the original topic, the easiest way to find out your store id is:
Log in to the Magento2 backend
Navigate to Stores->(Settings)->Configuration
From the store selector menu (Scope, upper left corner), select the store you would like to see the store id for
Have a look at the very end of the URL in your browser; you will see something like: …/section/general/store/6/
The number in that URL is the store id of the shop you picked from the store selector menu.
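Since the whole point of knowing the id is direct database access, the same mapping can also be read straight from the database; Magento 2 keeps it in the store table:

```sql
-- list every store view with its id and code
SELECT store_id, code, name FROM store;
```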
I recently ran into an issue where I had to change Magento 2's default contact form on my websites due to a request from Google.
I found a lot of articles stating that the contact form would be among my blocks or pages. My contact form's URL is "contact", and I couldn't find it in either blocks or pages. After some research I finally figured out that there was no option to change this from the back-end; instead I had to look for the file holding this markup on my server's filesystem.
The file containing the default Magento 2 contact form is called form.phtml and it is located in the /vendor/magento/module-contact/view/frontend/templates/ directory on your server. You can add HTML of your choice to this file; once updated, the change shows up straight away, even if you use extensive caching like I do.
If you make changes in nova.conf, the nova service must be restarted. Since MicroStack under snap does not install the same services as a regular OpenStack deployment, it was quite difficult to find out how to restart the nova service. You can use the following command to restart it:
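A sketch, assuming a stock MicroStack snap installation (check the exact service name first, since it differs from a regular OpenStack deployment):

```shell
# list the services the microstack snap provides
snap services microstack

# restart the nova compute service (name taken from the listing above)
sudo snap restart microstack.nova-compute
```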
We performed housekeeping on our Zabbix instance and by the morning the whole system had become unresponsive; despite multiple restarts, the database kept hanging. Using the command journalctl -u mariadb.service we managed to pull the following information:
Feb 05 04:09:07 bezabbix001 mysqld[4042]: 2020-02-05 4:09:07 139744107072640 [Note] /usr/sbin/mysqld (mysqld 10.1.43-MariaDB-0ubuntu0.18.04.1) starting as process 4042 …
Feb 05 04:09:07 bezabbix001 mysqld[4042]: 2020-02-05 4:09:07 139744107072640 [Warning] Could not increase number of max_open_files to more than 16364 (request: 20115)
Feb 05 04:10:37 bezabbix001 systemd[1]: mariadb.service: Start operation timed out. Terminating.
Feb 05 04:12:07 bezabbix001 systemd[1]: mariadb.service: State 'stop-sigterm' timed out. Skipping SIGKILL.
Feb 05 04:13:37 bezabbix001 systemd[1]: mariadb.service: State 'stop-final-sigterm' timed out. Skipping SIGKILL. Entering failed mode.
Feb 05 04:13:37 bezabbix001 systemd[1]: mariadb.service: Failed with result 'timeout'.
Feb 05 04:13:37 bezabbix001 systemd[1]: Failed to start MariaDB 10.1.43 database server.
The first issue indicated an insufficient open-files limit. The solution was to raise the limits by setting the following in the /lib/systemd/system/mariadb.service file:
LimitNOFILE=200000
LimitMEMLOCK=200000
Once we had set the values above and run a reload with the following command, the warning message went away:
sudo systemctl daemon-reload
The server, however, still wasn't willing to start, showing the following in the log:
Feb 05 04:28:19 bezabbix001 mysqld[4932]: 2020-02-05 4:28:19 140431509490816 [Note] /usr/sbin/mysqld (mysqld 10.1.43-MariaDB-0ubuntu0.18.04.1) starting as process 4932 …
Feb 05 04:29:49 bezabbix001 systemd[1]: mariadb.service: Start operation timed out. Terminating.
Feb 05 04:31:19 bezabbix001 systemd[1]: mariadb.service: State 'stop-sigterm' timed out. Skipping SIGKILL.
Feb 05 04:32:49 bezabbix001 systemd[1]: mariadb.service: State 'stop-final-sigterm' timed out. Skipping SIGKILL. Entering failed mode.
Feb 05 04:32:49 bezabbix001 systemd[1]: mariadb.service: Failed with result 'timeout'.
Feb 05 04:32:49 bezabbix001 systemd[1]: Failed to start MariaDB 10.1.43 database server.
Feb 05 04:34:20 bezabbix001 systemd[1]: mariadb.service: Got notification message from PID 4932, but reception only permitted for main PID which is currently not known
Feb 05 04:34:22 bezabbix001 systemd[1]: mariadb.service: Got notification message from PID 4932, but reception only permitted for main PID which is currently not known
Feb 05 04:34:22 bezabbix001 systemd[1]: mariadb.service: Got notification message from PID 4932, but reception only permitted for main PID which is currently not known
This indicated that the service didn't start within the default 90 seconds, although it did come up later. Since service control had already timed it out, the server was shut down immediately once it came up. Looking at /var/log/mysql/error.log, the error log for MariaDB/MySQL, we found that the server was simply busy processing data and the startup was taking much longer than 90 seconds.
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: The InnoDB memory heap is disabled
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Using Linux native AIO
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Using SSE crc32 instructions
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Initializing buffer pool, size = 3.0G
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Completed initialization of buffer pool
2020-02-05 4:39:55 140524339281024 [Note] InnoDB: Highest supported file format is Barracuda.
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 86545366 row operations to undo
InnoDB: Trx id counter is 311544320
2020-02-05 4:54:54 140524339281024 [Note] InnoDB: 128 rollback segment(s) are active.
2020-02-05 4:54:54 140520315680512 [Note] InnoDB: Starting in background the rollback of recovered transactions
2020-02-05 4:54:54 140520315680512 [Note] InnoDB: To roll back: 1 transactions, 86545366 rows
2020-02-05 4:54:54 140524339281024 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.45-86.1 started; log sequence number 376642022837
2020-02-05 4:54:54 140520315680512 [Warning] InnoDB: Difficult to find free blocks in the buffer pool (21 search iterations)! 0 failed attempts to flush a page!
2020-02-05 4:54:54 140520315680512 [Note] InnoDB: Consider increasing the buffer pool size.
2020-02-05 4:54:54 140520315680512 [Note] InnoDB: Pending flushes (fsync) log: 0 buffer pool: 1 OS file reads: 341095 OS file writes: 3 OS fsyncs: 2
2020-02-05 4:54:55 140520240146176 [Note] InnoDB: Dumping buffer pool(s) not yet started
2020-02-05 4:54:55 140524339281024 [Note] Plugin 'FEEDBACK' is disabled.
2020-02-05 4:54:55 140524339281024 [Note] Server socket created on IP: '0.0.0.0'.
2020-02-05 4:54:55 140524338853632 [Note] /usr/sbin/mysqld: Normal shutdown
2020-02-05 4:54:56 140524339281024 [Note] /usr/sbin/mysqld: ready for connections. Version: '10.1.43-MariaDB-0ubuntu0.18.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Ubuntu 18.04
2020-02-05 4:54:56 140524338853632 [Note] Event Scheduler: Purging the queue. 0 events
2020-02-05 4:54:56 140520273716992 [Note] InnoDB: FTS optimize thread exiting.
2020-02-05 4:54:56 140524338853632 [Note] InnoDB: Starting shutdown…
2020-02-05 4:54:56 140524338853632 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2020-02-05 4:54:58 140524338853632 [Note] InnoDB: Shutdown completed; log sequence number 376642023203
2020-02-05 4:54:58 140524338853632 [Note] /usr/sbin/mysqld: Shutdown complete
The following settings were added to the /lib/systemd/system/mariadb.service file to increase the service start and stop timeouts:
TimeoutStartSec=infinity
TimeoutStopSec=infinity
Run reload to apply these settings:
sudo systemctl daemon-reload
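Instead of editing /lib/systemd/system/mariadb.service directly, which a package upgrade can silently overwrite, the same settings can live in a drop-in override; a sketch:

```ini
# created with: sudo systemctl edit mariadb
# which writes /etc/systemd/system/mariadb.service.d/override.conf
[Service]
TimeoutStartSec=infinity
TimeoutStopSec=infinity
```

systemctl edit opens the override file in an editor and reloads the daemon when you save it.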
After increasing the timeouts, the MariaDB server slowly but surely started:
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: The InnoDB memory heap is disabled
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Using Linux native AIO
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Using SSE crc32 instructions
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Initializing buffer pool, size = 5.0G
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Completed initialization of buffer pool
2020-02-05 5:12:06 139656180632704 [Note] InnoDB: Highest supported file format is Barracuda.
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 86545366 row operations to undo
InnoDB: Trx id counter is 311544832
2020-02-05 5:25:38 139656180632704 [Note] InnoDB: 128 rollback segment(s) are active.
2020-02-05 5:25:38 139649825634048 [Note] InnoDB: Starting in background the rollback of recovered transactions
2020-02-05 5:25:38 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 86545366 rows
2020-02-05 5:25:38 139656180632704 [Note] InnoDB: Waiting for purge to start
2020-02-05 5:25:38 139656180632704 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.45-86.1 started; log sequence number 376642023203
2020-02-05 5:25:38 139649825634048 [Warning] InnoDB: Difficult to find free blocks in the buffer pool (21 search iterations)! 0 failed attempts to flush a page!
2020-02-05 5:25:38 139649825634048 [Note] InnoDB: Consider increasing the buffer pool size.
2020-02-05 5:25:38 139649825634048 [Note] InnoDB: Pending flushes (fsync) log: 1 buffer pool: 0 OS file reads: 341115 OS file writes: 5 OS fsyncs: 4
2020-02-05 5:25:39 139649750099712 [Note] InnoDB: Dumping buffer pool(s) not yet started
2020-02-05 5:25:39 139656180632704 [Note] Plugin 'FEEDBACK' is disabled.
2020-02-05 5:25:39 139656180632704 [Note] Server socket created on IP: '0.0.0.0'.
2020-02-05 5:25:40 139656180632704 [Note] /usr/sbin/mysqld: ready for connections. Version: '10.1.43-MariaDB-0ubuntu0.18.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Ubuntu 18.04
2020-02-05 5:25:53 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 86544705 rows
2020-02-05 5:26:08 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 86543690 rows
2020-02-05 5:26:23 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 86464992 rows
2020-02-05 5:26:38 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 85968784 rows
2020-02-05 5:26:53 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 85674936 rows
2020-02-05 5:27:08 139649825634048 [Note] InnoDB: To roll back: 1 transactions, 85274020 rows