Migration Guide: CentOS to AlmaLinux, Rocky Linux and Other Alternatives

CentOS 8 Reached End-of-Life on December 31, 2021

With Red Hat officially ending support for CentOS 8, users are forced to migrate to a different operating system. While CentOS 7 remains supported until June 2024, the life cycle of the newer CentOS 8 was cut short by Red Hat to the end of 2021. Red Hat's decision to end CentOS 8 support remains highly controversial in the Linux community and has stirred a lot of criticism. But the EOL of CentOS 8 is a fact, so we are going to focus on recommending reasonable alternatives that can swiftly replace the no-longer-supported CentOS 8.

Alternatives to CentOS 8

If your server runs CentOS, the natural solution is to migrate to another distribution based on the Red Hat Enterprise Linux (RHEL) code. Migrating to one of the recommended RHEL derivatives will ensure a seamless experience and uninterrupted performance regardless of what you run on the server. We describe four RHEL derivatives in this article:

• AlmaLinux
• Rocky Linux
• Oracle Linux
• CentOS Stream

All of them aim to be 100% binary compatible with the original CentOS, meaning your current projects running on CentOS should run on each of them in the same way. From a technical perspective and from a user's experience, there is not much difference between the original CentOS 8 and the above-mentioned distributions.

The decision on which distribution to migrate to is less about the functionality of the operating system and more about how it is developed and maintained. Some of the alternatives are free community projects, while others are backed by large-scale commercial businesses.

AlmaLinux

The first distribution we will cover is AlmaLinux, an increasingly popular semi-commercial project.

AlmaLinux is developed as a direct successor to CentOS and is the binary equivalent of CentOS 8. But unlike Rocky Linux, its development is run by a private company: CloudLinux, which founded and funds the AlmaLinux OS Foundation. CloudLinux invested approximately $1 million to develop AlmaLinux after Red Hat announced the end of support for CentOS 8. The combination of private funding and a strong community makes this distribution very promising.

This distribution is widely recognized in the IT community. For example, cPanel, the developers of a popular server management panel, decided to support AlmaLinux from day one, and so did Plesk. The endorsement from both well-known panel developers is not only a matter of reputation; it also guarantees easy migration for all users working with these panels.

Key features:

  • Backed by a reputable company
  • Active community
  • Works with cPanel and Plesk

Rocky Linux

When Red Hat announced the end of support for CentOS 8, one of the co-founders of the original CentOS operating system stepped in. Gregory Kurtzer founded CentOS together with his colleague Rocky McGaugh in 2004. When Kurtzer decided to start working on a new substitute for CentOS, he named the new operating system after the late Rocky McGaugh. Like all other alternatives mentioned in this article, the Rocky Linux distribution is based on the RHEL code, which means there are only marginal differences in code compared to CentOS.

Rocky Linux is an independent project with free community support. The stable release 8.4 came out in June 2021 under the codename "Green Obsidian". Of all the distributions mentioned here, this is the youngest, and one that is still maturing.

Since mid-2021, Gregory Kurtzer has been offering paid support plans via his company CIQ, aiming to offer "enterprise-grade" services. Still, the Rocky Linux distribution is at heart a community project preserving the original open-source spirit.

Key features:

  • Active community
  • Led by a CentOS co-founder
  • Slower release cycle
  • Young project
  • Plesk / cPanel support is missing

CentOS Stream

CentOS Stream is an alternative to CentOS 8 that comes directly from Red Hat. The main difference is its release cycle: unlike the other CentOS 8 alternatives, CentOS Stream is a rolling-release distribution that sits just ahead of RHEL, meaning there are almost constant updates with all the newest developments in the core, libraries and applications.

On one hand this means access to the newest software very quickly; on the other hand there is a higher risk of compatibility issues. It is a great choice for a development sandbox, but not necessarily for a production system.

Key features:

  • Backed by a renowned company
  • Constantly Updated

Oracle Linux

One of the less mentioned alternatives is Oracle Linux. Like all other alternatives, it is 100 percent binary compatible with the original Red Hat distribution, which also makes it very close to the CentOS distribution: Oracle ensures that Oracle Linux is fully compatible with existing CentOS 8 apps.

The biggest advantage of Oracle Linux is obvious: the support of one of the most prominent IT companies. The customer support isn't free, but everything else is. Some users also see the ability to perform updates via Oracle's public yum server as an advantage.

On the other hand, many users are now wary of Linux distributions developed by corporations. They argue that just as Red Hat discontinued support for CentOS, Oracle could end up doing the same with Oracle Linux. But at least for now, Oracle Linux remains a perfectly viable alternative, especially if you can afford the support.

Key Features:

  • Backed by a renowned company

Contabo Recommendation

All the distributions mentioned above are very close to the original CentOS 8, and their developers have made it easy to migrate using just a few commands.

Your decision on which alternative to use should be based on who stands behind each project. Do you trust an open community project like Rocky Linux, or are you more keen on projects backed by big corporations like Oracle?

If we were to recommend just one distribution, it would undoubtedly be AlmaLinux. It has stability thanks to the backing of CloudLinux and energy coming from its community. That's why we recommend AlmaLinux as a safe bet for CentOS 8 users.

How to Migrate from CentOS 8

Below, you will find a brief description of the migration process for AlmaLinux, Rocky Linux, CentOS Stream and Oracle Linux.

Don’t forget to back up your data before performing the migration. If you don’t care about the data on the instance, you can simply reinstall the virtual machine without performing a migration. Contabo customers can do this in Contabo Customer Panel and choose AlmaLinux, Rocky Linux or CentOS Stream.

First, log in to your instance using SSH and make sure you have sufficient admin rights. The easiest way to avoid permission issues is to enter the command sudo -i at the beginning of each session:
sudo -i

How to migrate from CentOS 8 to AlmaLinux

Before migrating, update your current distribution by running this command:
dnf update

Download the migration script from GitHub using
curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

Assign all necessary permissions to the script:
chmod +x almalinux-deploy.sh

Now run the script by typing
./almalinux-deploy.sh

And finally
reboot

How to migrate from CentOS 8 to Rocky Linux

Before migrating, update your current distribution by running this command:
dnf update

Rocky Linux developers have created a migration script called migrate2rocky. Download it using this command:
curl -O https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh

When the file has finished downloading, give the script the necessary permissions:
chmod u+x migrate2rocky.sh

Now execute the script itself:
./migrate2rocky.sh -r

And finally
reboot

How to migrate from CentOS 8 to CentOS Stream

Before migrating, update your current distribution by running this command:
dnf update

Start the installation by typing:
dnf install centos-release-stream

After the installation is done, you have to change the repository for CentOS Stream:
dnf swap centos-linux-repos centos-stream-repos

And at last, let’s sync all your existing packages with the new distribution:
dnf distro-sync

and finally
reboot

How to Migrate From CentOS 8 to Oracle Linux

Before migrating, update your current distribution by running this command:
dnf update

Download the migration script from GitHub:
curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh

Make the script executable, then run it to replace your CentOS 8 with Oracle Linux:
chmod +x centos2ol.sh
./centos2ol.sh

And finally
reboot

How to Install Laravel on Ubuntu 22.04 with Nginx

Follow these steps to install and configure Laravel with Nginx on Ubuntu 22.04:

  • Step 1 – Install Required PHP Modules
  • Step 2 – Creating a Database for Laravel
  • Step 3 – Install Composer in Ubuntu 22.04
  • Step 4 – Install Laravel in Ubuntu 22.04
  • Step 5 – Configure Laravel in Ubuntu 22.04
  • Step 6 – Configure NGINX to Serve Laravel Application
  • Step 7 – Accessing Laravel Application from a Web Browser

Step 1 – Install Required PHP Modules

First, install the required PHP modules. Open a terminal and execute the following commands:

$ sudo apt update
$ sudo apt install php-fpm php-common php-json php-mbstring php-zip php-xml php-tokenizer php-mysql

Step 2 – Creating a Database for Laravel

Next, create a MariaDB/MySQL database for the Laravel application by executing the following commands:

$ sudo mysql
MariaDB [(none)]> CREATE DATABASE laraveldb;
MariaDB [(none)]> GRANT ALL ON laraveldb.* to 'webmaster'@'localhost' IDENTIFIED BY 'tecmint';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> quit

Step 3 – Install Composer in Ubuntu 22.04

Then use the following commands to install Composer (a dependency manager for PHP) on the Ubuntu 22.04 system:

$ curl -sS https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/local/bin/composer
$ sudo chmod +x /usr/local/bin/composer

Step 4 – Install Laravel in Ubuntu 22.04

Once the Composer installation is done, execute the following commands to install Laravel on the Ubuntu 22.04 system:

$ cd /var/www/html
$ composer create-project --prefer-dist laravel/laravel example.com

Step 5 – Configure Laravel in Ubuntu 22.04

Now configure the Laravel application. First, set permissions on the Laravel directory using the following commands:

$ sudo chown -R :www-data /var/www/html/example.com/storage/
$ sudo chown -R :www-data /var/www/html/example.com/bootstrap/cache/
$ sudo chmod -R 0777 /var/www/html/example.com/storage/

The default .env contains a default application key, but you need to generate a new one for your Laravel deployment for security purposes.

$ cd /var/www/html/example.com
$ sudo php artisan key:generate

We also need to configure the Laravel database connection details in the .env file:

$ sudo nano /var/www/html/example.com/.env
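For reference, with the database, user and password created in Step 2, the relevant .env lines would look like this (adjust the values if you chose different credentials):

```
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laraveldb
DB_USERNAME=webmaster
DB_PASSWORD=tecmint
```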

Step 6 – Configure NGINX to Serve Laravel Application

Create a server block for the Laravel application within the NGINX configuration, under the /etc/nginx/sites-available/ directory:

$ sudo nano /etc/nginx/sites-available/example.com.conf

Also, the fastcgi_pass directive should point to the socket or address PHP-FPM is listening on for requests. On Ubuntu 22.04 the default PHP version is 8.1, so this is typically fastcgi_pass unix:/run/php/php8.1-fpm.sock (adjust the path to match your installed PHP version):

server{
        server_name www.example.com;
        root        /var/www/html/example.com/public;
        index       index.php;

        charset utf-8;
        gzip on;
        gzip_types text/css application/javascript text/javascript application/x-javascript  image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location / {
                try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php {
                include fastcgi.conf;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/run/php/php8.1-fpm.sock; # match your installed PHP version
        }
        location ~ /\.ht {
                deny all;
        }
}

Save the file, then enable the Laravel site configuration by creating a link from /etc/nginx/sites-available/example.com.conf to the /etc/nginx/sites-enabled/ directory, and remove the default server block configuration.

$ sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/
$ sudo rm /etc/nginx/sites-enabled/default

Next, check if the NGINX configuration syntax is correct by running the following command before restarting the service.

$ sudo nginx -t
$ sudo systemctl restart nginx

Step 7 – Accessing Laravel Application from a Web Browser

Now open a web browser on your local computer and navigate to the following address:

http://www.example.com/

Conclusion

In this tutorial, we learned how to install and configure Laravel with NGINX on Ubuntu 22.04.

How to Fix the Buffalo Linkstation NAS – Partition Not Found Error

Recently I picked up a Buffalo Linkstation 220 to play around with at home, as I felt I could use a bit of additional storage. Note that this previous statement is pretty much a lie. I have tons of storage and was really just looking for a new toy to play around with. Basically, I just had a few disks laying around that I wanted to put to use.

However, much to my dismay, I was unable to configure the device once I shoved in the disks, powered it up, and connected to it with the Buffalo Smart Phone Navigator. I figured this was not a big deal, so I tried the installable Windows app from my Windows 7 VM. The Buffalo NAS Navigator was also able to connect to the device; however, the device showed that it was booted in what is called "Emergency Mode". Not sensing a real emergency, I did not panic.

See borrowed image below.

Fortunately, the site that I borrowed the above image from (here) and this site (here) give advice on how to fix the issue. The first step is to download the Buffalo Linkstation firmware updater, which you can get here. Both pages advise you to modify the LSUpdater.ini file, but their instructions did not work for me. The exact changes, and the LSUpdater.ini file in its entirety, are below.

[Application]
Title = BUFFALO LinkStation Series Updater Ver.1.62
WaitReboot = 1200
WaitFormat = 600
WaitFileSend = 600
WaitDiscover = 120

[Target]
ProductID = 0x80000000
ProductID2 = 0x0000001D
ProductID3 = 0x0000300D
ProductID4 = 0x0000300E
ProductID5 = 0x00003011

Name = LinkStation

[Flags]
VersionCheck = 0
NoFormatting = 0

[SpecialFlags]
Debug = 1

At this point, launch the updater again and select "Update". This fully partitions the drives and then updates the firmware. The process takes a while, so be patient. Afterwards, you can launch the NAS Navigator and configure the device.

How To Set Up Automatic Deployment with Git with a VPS

Introduction

For an introduction to Git and how to install it, please refer to the introduction tutorial.

This article will teach you how to use Git to deploy your application. While there are many ways to use Git for deployment, this tutorial focuses on the most straightforward one. I assume you already know how to create and use a repository on your local machine. If not, please refer to this tutorial.

When you use Git, the workflow is generally geared toward version control only: you have a local repository where you work and a remote repository that you keep in sync, which lets you work with a team and across different machines. But you can also use Git to move your application to production.

Server Setup

Our fictitious workspace:

Your server live directory: /var/www/html

Your server repository: /var/www/site.git

What should we do if we want to push to site.git and at the same time make all the content available at /var/www/html?

To create the repository, run the following in /var/www/site.git (the first command makes main the default branch for new repositories):

sudo git config --global init.defaultBranch main
sudo git init --bare

You will see a few files and folders, including the 'hooks' folder. Change into that folder:

cd hooks

sudo touch post-receive
sudo nano post-receive

Put the following into the post-receive hook:

#!/bin/bash
git --work-tree=/var/www/html --git-dir=/var/www/site.git checkout --force

Then make the hook executable:

sudo chmod +x post-receive

On your local machine, add the server repository as a remote:

git remote add production ssh://root@ip-address/var/www/site.git

If needed, take ownership of the repository directory on the server:

sudo chown -R username /var/www/site.git

And adjust its permissions:

sudo chmod -R 775 /var/www/site.git

How To Install and Use Docker on Ubuntu 22.04

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 22.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Prerequisites

To follow this tutorial, you will need the following:

  • One Ubuntu 22.04 server with a non-root user with sudo privileges
  • An account on Docker Hub if you wish to push your own images to it

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

  1. sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

  1. sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

  1. curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker repository to APT sources:

  1. echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update your existing list of packages again for the addition to be recognized:

  1. sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

  1. apt-cache policy docker-ce

You’ll see output like this, although the version number for Docker may be different:

Output of apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 5:20.10.14~3-0~ubuntu-jammy
  Version table:
     5:20.10.14~3-0~ubuntu-jammy 500
        500 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
     5:20.10.13~3-0~ubuntu-jammy 500
        500 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 22.04 (jammy).

Finally, install Docker:

  1. sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

  1. sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-04-01 21:30:25 UTC; 22s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 7854 (dockerd)
      Tasks: 7
     Memory: 38.3M
        CPU: 340ms
     CGroup: /system.slice/docker.service
             └─7854 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  1. sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

  1. su - ${USER}

You will be prompted to enter your user’s password to continue.

Confirm that your user is now added to the docker group by typing:

  1. groups

Output
sammy sudo docker

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

  1. sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let’s explore the docker command next.

Step 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

  1. docker [option] [command] [arguments]

To view all available subcommands, type:

  1. docker

As of Docker version 20.10.14, the complete list of available subcommands includes:

Output
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

  1. docker docker-subcommand --help

To view system-wide information about Docker, use:

  1. docker info

Let’s explore some of these commands. We’ll start by working with images.

Step 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need will have images hosted there.

To check whether you can access and download images from Docker Hub, type:

  1. docker run hello-world

The output will indicate that Docker is working correctly:

Output
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:bfea6278a0a267fad2634554f4f0c6f31981eea41c553fdf5a83e95a41d40c38
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

  1. docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME                             DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
ubuntu                           Ubuntu is a Debian-based Linux operating sys…   14048     [OK]
websphere-liberty                WebSphere Liberty multi-architecture images …   283       [OK]
ubuntu-upstart                   DEPRECATED, as is Upstart (find other proces…   112       [OK]
neurodebian                      NeuroDebian provides neuroscience research s…   88        [OK]
open-liberty                     Open Liberty multi-architecture images based…   51        [OK]
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

  1. docker pull ubuntu

You’ll see the following output:

Output
Using default tag: latest
latest: Pulling from library/ubuntu
e0b25ef51634: Pull complete
Digest: sha256:9101220a875cee98b016668342c489ff0674f247f6ca20dfc91b91c0f28581ae
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

  1. docker images

The output will look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              1d622ef86b13        3 weeks ago         73.9MB
hello-world         latest              bf756fb1ae65        4 months ago        13.3kB

As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let’s look at how to run containers in more detail.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let’s run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

  1. docker run -it ubuntu

Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

Output
root@d9b100f2f636:/#

Note the container id in the command prompt. In this example, it is d9b100f2f636. You’ll need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let’s update the package database inside the container. You don’t need to prefix any command with sudo, because you’re operating inside the container as the root user:

  1. apt update

Then install any application in it. Let’s install Node.js:

  1. apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

  1. node -v

You’ll see the version number displayed in your terminal:

Output
v12.22.9

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.

Let’s look at managing the containers on our system next.

Step 6 — Managing Docker Containers

After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

  1. docker ps

You will see output similar to the following:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED

In this tutorial, you started two containers; one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers, both active and inactive, run docker ps with the -a switch:

  1. docker ps -a

You’ll see output similar to this:

Output
CONTAINER ID   IMAGE         COMMAND   CREATED         STATUS                     PORTS     NAMES
1c08a7a0d0e4   ubuntu        "bash"     About a minute ago   Exited (0) 7 seconds ago             dazzling_taussig
587000e49d53   hello-world   "/hello"   5 minutes ago        Exited (0) 5 minutes ago             adoring_kowalevski

To view the latest container you created, pass it the -l switch:

  1. docker ps -l

Output
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS                     PORTS     NAMES
1c08a7a0d0e4   ubuntu    "bash"    3 minutes ago   Exited (0) 2 minutes ago             dazzling_taussig

To start a stopped container, use docker start, followed by the container ID or the container’s name. Let’s start the Ubuntu-based container with the ID of 1c08a7a0d0e4:

  1. docker start 1c08a7a0d0e4

The container will start, and you can use docker ps to see its status:

Output
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS         PORTS     NAMES
1c08a7a0d0e4   ubuntu    "bash"    6 minutes ago   Up 8 seconds             dazzling_taussig

To stop a running container, use docker stop, followed by the container ID or name. This time, we’ll use the name that Docker assigned the container, which is dazzling_taussig:

  1. docker stop dazzling_taussig

Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name of the container associated with the hello-world image and remove it.

  1. docker rm adoring_kowalevski

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it’s stopped. See the docker run help command for more information on these options and others.

Containers can be turned into images which you can use to build new containers. Let’s look at how that works.

Step 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container earlier in this tutorial, you have a container running off an image, but the container now differs from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

To do so, commit the changes to a new Docker image using the following command:

  1. docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

  1. docker commit -m "added Node.js" -a sammy d9b100f2f636 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

  1. docker images

You’ll see output like this:

Output
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
sammy/ubuntu-nodejs   latest              7c1f35226ca6        7 seconds ago       179MB
...

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made; here, the change was installing Node.js. The next time you need to run a container using Ubuntu with Node.js pre-installed, you can use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that’s outside the scope of this tutorial.

Now let’s share the new image with others so they can create containers from it.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

To push your image, first log into Docker Hub.

  1. docker login -u docker-registry-username

You’ll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

  1. docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

Then you may push your own image using:

  1. docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

  1. docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed

...


After pushing an image to a registry, it should be listed on your account’s dashboard, like the one shown in the image below.

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

How to Get Started With Portainer, a Web UI for Docker

Portainer is a popular Docker UI that helps you visualise your containers, images, volumes and networks. Portainer helps you take control of the Docker resources on your machine, avoiding lengthy terminal commands.

Portainer recently reached version 2.0 which added support for Kubernetes clusters. The tool also supports Docker Swarm and Azure ACI environments. In this tutorial, we’ll be keeping it simple and using Portainer to manage a local Docker installation.

Two editions of the software are available, the free and open-source CE and commercial Business. The extra capabilities of Business are mostly focused on enhanced access, quota management, and administrator controls.

Install Portainer

Make sure you’ve got Docker installed and running before proceeding any further. Docker 19.01 is required for all Portainer features to be fully supported.

First of all, you’ll need to create a new Docker volume. Portainer will use this to store its persistent data. Ours is going to be called portainer_data.

docker volume create portainer_data

Next, use Docker to start a new Portainer container:

docker run -d -p 9000:9000 --name=portainer --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

This command will pull the portainer/portainer-ce image and start a new container from it. The container will be detached and executing in the background (-d).

The volume created earlier is mounted to /data within the container, where Portainer stores all its application data. The host’s Docker socket is also mounted into the container, so that Portainer has access to your machine’s Docker instance. Finally, port 9000 on the host is bound to port 9000 within the container. This is the port Portainer exposes its web UI on.

First Run

You can now log in to Portainer by visiting http://localhost:9000 in your browser. You’ll need to set a password for the admin user upon first use. You’ll then land on the Home screen.

Before beginning to use Portainer, it’s worth looking at the app’s own configuration options. Click the “Settings” link in the left navigation menu. Here, you can change Portainer security settings, set a custom application logo and opt out of anonymous usage statistics collection. Most of the settings should be fairly self-explanatory, with the majority focused on limiting the power afforded to non-administrator users.

The “Authentication” submenu in the navigation bar lets you configure how users log in to Portainer. Portainer uses its own internal user management system by default, but you can choose to use an existing LDAP server or OAuth provider. Select the method to use and then fill out the form fields to set up your preferred authentication system. When using the built-in users database, you can use the “Users” link in the sidebar to create additional users and sort them into teams.

Endpoints

Portainer lets you manage multiple Docker endpoints. To begin with, you’ll see a single “local” endpoint, representing the Docker Engine running on your own machine.

To add an additional endpoint, click the “Endpoints” link in the sidebar. Next, click the blue “Add endpoint” button. Choose the type of endpoint you’ll be using and supply your connection details.

All being well, you’ll be able to add your endpoint. It’ll show up as a new selectable tile on the Portainer homescreen. Detailed guidance on adding additional endpoints is outside the scope of this introductory guide as success will require correct configuration of the host you’re connecting to.

Managing Containers

You’re now ready to begin using Portainer to interact with Docker. From the homescreen, click your “local” endpoint. It will become selected within Portainer, giving you access to the full management UI. You’ll arrive at a simple dashboard giving you an overview of your containers, images and volumes.

Click “Containers” on the dashboard or in the sidebar to open the container management screen. You’ll see a table displaying all your Docker containers.

To take an action against a container, click the checkbox next to its name. You can now use the button row at the top of the screen to start, stop, restart or remove the container. Containers which are currently running will show a green “running” state while stopped ones get a red “stopped”.

If you’re using a fresh Docker installation, your only container might be Portainer itself. Take care not to stop this container, as it’s serving the Portainer web UI you’re using!

Click the name of a container to view and change its details. This screen allows you to inspect the container’s properties, create a new Docker image from its current state and manage its network connections.

At the top of the screen, you’ll find five buttons under “Container status” that allow you to view the container’s logs (“Logs”), inspect its Docker manifest (“Inspect”), view resource usage statistics (“Stats”), access an interactive console (“Console”) or attach a console to the foreground process in the container (“Attach”).

Create a Container

To create a new container, return to the Containers screen and click the blue “Add container” button. You may also edit an existing container – effectively destroying it and replacing it with a new one with modified properties – by using the “Duplicate/Edit” button on the container details screen. Both operations display the same interface.

First, type a name for your new container. Next, specify the Docker image to use. For public images on Docker Hub, such as wordpress:latest, you can type an image name without providing any additional configuration.

To use images stored within a private registry, you’ll first need to add the registry’s details to Portainer. Click the “Registries” link under the Settings heading in the left sidebar. Press the blue “Add registry” button and define the URL, username and password of your registry. You’ll then be able to select it in the “Registry” dropdown on the container creation screen. You may also use the Registries screen to set credentials for Docker Hub connections, allowing you to pull private images and avoid the rate limits applied to unauthenticated users.

You’re now ready to deploy your container by pressing the “Deploy the container” button at the bottom of the form. Before proceeding, review the additional settings which are displayed above the button. You can configure port binding, force Portainer to pull the image before deploying and choose to remove the container automatically when it exits.

At the bottom of the screen, you’ll find an advanced settings UI that offers even more options – too many to cover exhaustively here. These replicate the entire functionality of the docker run CLI command, enabling you to set up the container’s command, entrypoint, volumes, network interfaces and environment variables. Much of this UI should feel intuitive if you’re already familiar with Docker’s capabilities.

Using Container Stacks

The container creation screen only permits you to spin up one container at a time. Portainer has built-in support for “stacks” which allow you to deploy linked containers. This functionality is based on docker-compose version 2.

Click the “Stacks” item in the navigation bar, then press the “Add stack” button. There’s no support for creating stacks graphically – you have to paste or upload a docker-compose.yml file. You may also choose to connect to a Git repository and use its docker-compose.yml directly.

Before deploying the stack, you’re able to set environment variables that will be made available to the containers. Choose which level of Portainer access control to apply and then click “Deploy the stack”. Portainer will pull all the images and create all the containers specified by the Compose file.

Select your stack from the Stacks screen to manage its containers collectively. You can stop all the containers in the stack, or delete the stack entirely, using the buttons at the top of the screen. There are also controls to duplicate the stack or create a reusable template from its current state.

Templates can be accessed from the stack creation screen and allow you to quickly spin up new instances of frequently used services. Portainer also ships with a number of built-in templates, accessible from the “App Templates” link in the navigation bar.


Portainer’s Convenience

Portainer helps you quickly create, manage and monitor Docker containers. It provides a graphical interface to Docker CLI commands that can sometimes become long and unwieldy. It also makes Docker accessible to users who may be unfamiliar with command-line interfaces.

Besides its container management capabilities, Portainer also provides visibility into the other fundamental Docker resources. The Images screen allows you to view, pull, import, export and delete the images available on your endpoint. The Networks and Volumes screens act similarly, enumerating and providing control over their respective resources. Finally, the Events table offers a comprehensive listing of all the actions taken by the Docker engine. This can be useful when reviewing past actions and identifying when certain containers were created or destroyed.

Bilanc Public Rest API Documentation

Item Public Rest API

Title: Show All Items
URL: /public/items
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
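
As an illustration, a request to this endpoint can be built with Python's standard library. The base URL below is a placeholder (the documentation does not state one), and build_request/get_items are hypothetical helper names, not part of this API:

```python
import json
import urllib.request

BASE_URL = "https://example.invalid/api"  # placeholder; substitute your server's base URL
TOKEN = "y8AgQDAFFqo="                    # the Authorization header value issued to you

def build_request(path, token=TOKEN):
    # The API authenticates via a plain Authorization header (no "Bearer" prefix)
    return urllib.request.Request(BASE_URL + path, headers={"Authorization": token})

def get_items():
    # GET /public/items returns a JSON array of item objects
    with urllib.request.urlopen(build_request("/public/items")) as resp:
        return json.loads(resp.read())
```

Calling get_items() performs the GET and decodes the JSON array shown in the success response below.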
Success Response Example:

Code: 200

Content: [
                    {
                    "id": 22433,
                    "code": "SHP03095",
                    "code2": "SHP03095",
                    "code3": "",
                    "description": "01 AUREA VERE E BARDHE",
                    "longDescription": "",
                    "minimumBalance": 0,
                    "itemType": 1,
                    "createDate": "31/12/2013",
                    "lastUpdateDate": "13/03/2018",
                    "deleted": false,
                    "purchasePrice": 130,
                    "purchaseDiscount": 0,
                    "salesDiscount": 0,
                    "active": true,
                    "unit2Coeficient": 0,
                    "activeForPurchase": true,
                    "depreciationMethod": -1,
                    "depreciationCoeficient": 0,
                    "netWeight": 0,
                    "grossWeight": 0,
                    "customsCountryOfOrigin": "",
                    "customsCode": "",
                    "volume": 0,
                    "excludedFromSalesDiscount": false,
                    "customsTaxPercentage": 0,
                    "userID": 70,
                    "groupID": 343,
                    "subGroupID": 344,
                    "defaultSupplierID": 3083,
                    "itemClassifier1ID": 207,
                    "itemClassifier2ID": 713,
                    "itemClassifier3ID": 202,
                    "itemClassifier4ID": 652,
                    "itemClassifier5ID": 773,
                    "VATTypeID": 1,
                    "unitID": 20,
                    "unit2ID": -1,
                    "accountingSchemeID": 1
                    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Item Group Public Rest API

Title: Show All Item Groups
URL: /public/items/groups
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
                    {
        "id": 343,
        "code": "MAPO",
        "description": "MAPO",
        "VAT": 0,
        "parentID": -1,
        "deleted": false,
        "createDate": "07/08/2013",
        "lastUpdateDate": "07/08/2013"
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Item SubGroup Public Rest API

Title: Show All Item SubGroups By ItemGroupID
URL: /public/items/groups/subgroups
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params (required):

parentID=[integer]

Data Params: NONE
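
The required parentID value travels as an ordinary query-string parameter, as the example request below shows. A sketch in Python (the base URL and helper names are placeholders, not from this documentation):

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.invalid/api"  # placeholder base URL
TOKEN = "y8AgQDAFFqo="                    # your Authorization header value

def subgroups_url(parent_id):
    # parentID is encoded into the query string of the GET URL
    query = urllib.parse.urlencode({"parentID": parent_id})
    return "%s/public/items/groups/subgroups?%s" % (BASE_URL, query)

def get_subgroups(parent_id):
    # Returns the JSON array of subgroups whose parentID matches
    req = urllib.request.Request(subgroups_url(parent_id),
                                 headers={"Authorization": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```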
Success Response Example:

Request: /public/items/groups/subgroups?parentID=394

Code: 200

Content: [
      {
        "id": 460,
        "code": "ALCOHOLIC",
        "description": "ALCOHOLIC",
        "VAT": 0,
        "parentID": 394,
        "deleted": false,
        "createDate": "31/12/2013",
        "lastUpdateDate": "31/12/2013"
    },
    {
        "id": 542,
        "code": "BEER",
        "description": "BEER",
        "VAT": 0,
        "parentID": 394,
        "deleted": false,
        "createDate": "04/09/2015",
        "lastUpdateDate": "04/09/2015"
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Item Classifier Public Rest API

Title: Show All Item Classifiers
URL: /public/items/classifiers
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
                     {
        "id": 342,
        "code": "BRANCA",
        "description": "BRANCA",
        "classifierType": 2,
        "userID": 70,
        "createDate": "31/12/2013",
        "lastUpdateDate": "31/12/2013",
        "deleted": false
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Item Unit Public Rest API

Title: Show All Item Units
URL: /public/items/units
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
     {
        "id": 15,
        "code": "KG",
        "description": "KG",
        "userID": 70,
        "createDate": "29/11/2004",
        "lastUpdateDate": "29/11/2004",
        "deleted": false
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Unpaid Sales Document Headers Public Rest API

Title: Show All Unpaid Sales Document Headers
URL: /public/documents/sales/headers/unpaid/
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
                    {
        "id": 53637,
        "documentNumber": "72296",
        "description": "FATURE PER RRUMBULLAKOSJE ARKA ",
        "total": 8333.3333,
        "documentDate": "03/01/2016",
        "declarationDate": "03/01/2016",
        "dueDate": "08/07/2016",
        "generateInventoryDoc": true,
        "withVAT": true,
        "downPayment": false,
        "serialNumber": "",
        "amountPaid": 5.688028000000581,
        "totalWithVAT": 9999.99996,
        "totalDiscount": 0,
        "rate": 1,
        "paymentStatus": "Faturë e Paguar Pjesërisht",
        "generatedSourceID": -1,
        "officialClassificationID": -1,
        "createDate": "08/07/2016",
        "lastUpdateDate": "16/09/2016",
        "deleted": false,
        "userID": 1028,
        "hasChildren": false,
        "docTypeID": 4,
        "fiscalRegisterMode": 3,
        "exportSerialNumber": "",
        "transportType": 0,
        "transportNotes": "",
        "transportTime": "",
        "notes": "",
        "salesOfferID": -1,
        "salesOrderID": -1,
        "salesAgentID": -1,
        "agentCommission": 0,
        "currencyID": 7,
        "clientID": 25958,
        "serviceUnitID": 129,
        "transporterID": -1
    }
                    ]
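
The outstanding balance of each unpaid document can be derived from the fields in the response above. A sketch, assuming amountPaid is recorded in the same currency as totalWithVAT (the documentation does not state this explicitly), with outstanding_amount as a hypothetical helper name:

```python
def outstanding_amount(header):
    # Remaining amount due = VAT-inclusive total minus what has been paid so far
    return header["totalWithVAT"] - header["amountPaid"]

# Field values taken from the example response above
header = {"totalWithVAT": 9999.99996, "amountPaid": 5.688028000000581}
```

The example header above yields roughly 9994.31 still due, consistent with its paymentStatus of "Faturë e Paguar Pjesërisht" (partially paid invoice).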
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Service Units Public Rest API

Title: Show All Service Units
URL: /public/serviceunits/
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
                    {
         "id": 136,
        "code": "MAG TIRANA 2016",
        "description": "MAG TIRANA 2016",
        "active": true,
        "address": "",
        "contact": "",
        "mobile": "",
        "phone": "",
        "email": "",
        "niptSekondar": "",
        "notes": "",
        "createDate": "28/03/2016",
        "lastUpdateDate": "28/03/2016",
        "deleted": false,
        "customsDepot": false,
        "typeID": 0,
        "userID": 921,
        "cityID": -1,
        "defaultCostCenter1ID": -1,
        "defaultCostCenter2ID": -1
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Sales Agents Public Rest API

Title: Show All Sales Agents
URL: /public/salesagents/
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
      {
        "id": 1182,
        "description": "SARDIAN MEZANI",
        "address": "ARTUR BIBO",
        "phone": "",
        "mobile": "",
        "email": "",
        "commission": 0,
        "active": false,
        "hasSpecificItems": false,
        "hasSpecificClients": false,
        "createDate": "02/02/2017",
        "lastUpdateDate": "21/03/2017",
        "deleted": false,
        "userID": 2084,
        "salesAgentUserID": 2109
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Cash Units Public Rest API

Title: Show All Cash Units
URL: /public/cashunits/
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
      {
        "id": 80,
        "code": "ARKA 1",
        "description": "",
        "contact": "",
        "active": true,
        "createDate": "17/10/2017",
        "lastUpdateDate": "17/10/2017",
        "deleted": false,
        "userID": 2134,
        "cityID": -1,
        "accountID": 42713,
        "currencyID": 7,
        "DefaultCostCenter1ID": -1,
        "DefaultCostCenter2ID": -1
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Sales Types Public Rest API

Title: Show All Sales Types
URL: /public/salestypes/
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
      {
        "id": 69,
        "salesType": "SHUMICE",
        "createDate": "24/06/2013",
        "lastUpdateDate": "24/06/2013",
        "deleted": false,
        "userID": 947,
        "currencyID": 7
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Items Sales Price Public Rest API

Title: Show All Items Sales Prices By Sales Type ID
URL: /public/items/prices
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params (required):

salesTypeID=[integer]

Data Params: NONE
Success Response Example:

Request: /public/items/prices?salesTypeID=70

Code: 200

Content: [
    {
        "itemID": 32949,
        "price": 65.04
    },
    {
        "itemID": 32950,
        "price": 335.39
    },
    {
        "itemID": 32951,
        "price": 0
    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Save Sales Document Public Rest API

Title: Save Sales Document
URL: /public/documents/sales
Method: PUT
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params:
                {

                "header":{
                "id":-1,
                "documentNumber":"24",
                "description":"test",
                "documentDate":"16/03/2018",
                "declarationDate":"16/03/2018",
                "dueDate":"16/03/2018",
                "withVAT":true,
                "serialNumber":"",
                "rate":1,
                "currencyID":7,
                "serviceUnitID":140,
                "salesAgentID":140,
                "agentCommission":0,
                "transporterID":140,
                "clientID":30998
                },
                "body":[
                {"id":1113195,
                "price":350.8666666666667,
                "quantity":1,
                "VATCoeficient":0.19999999999999998,
                "discountPercentage":0,
                "itemID":22433,
                "vatTypeClassifierID":1
                }
                ]
                }
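
The document is sent as a JSON body on a PUT request; judging by the example, id = -1 in the header appears to ask the server to create a new record. A sketch in Python (the base URL and build_save_sales_request are placeholders, not from this documentation, and the payload is a trimmed version of the Data Params above):

```python
import json
import urllib.request

BASE_URL = "https://example.invalid/api"  # placeholder base URL
TOKEN = "y8AgQDAFFqo="                    # your Authorization header value

def build_save_sales_request(document):
    # Serialize the document and attach it as the PUT body
    return urllib.request.Request(
        BASE_URL + "/public/documents/sales",
        data=json.dumps(document).encode("utf-8"),
        method="PUT",
        headers={"Authorization": TOKEN, "Content-Type": "application/json"},
    )

# Minimal illustrative payload; field values are examples only
doc = {
    "header": {"id": -1, "documentNumber": "24", "documentDate": "16/03/2018",
               "withVAT": True, "rate": 1, "currencyID": 7, "clientID": 30998},
    "body": [{"id": -1, "price": 350.87, "quantity": 1,
              "VATCoeficient": 0.2, "discountPercentage": 0, "itemID": 22433}],
}
```

Passing the built request to urllib.request.urlopen would submit the document and, on success, return the saved header and body with server-assigned IDs, as in the example response below.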
Success Response Example:

Request: /public/documents/sales

Code: 200

Content: [
   {
    "header": {
        "id": 216328,
        "documentNumber": "24",
        "description": "test",
        "total": 350.8666666666667,
        "documentDate": "16/03/2018",
        "declarationDate": "16/03/2018",
        "dueDate": "16/03/2018",
        "generateInventoryDoc": true,
        "withVAT": true,
        "downPayment": false,
        "serialNumber": "",
        "amountPaid": 0,
        "totalWithVAT": 421.04,
        "totalDiscount": 0,
        "rate": 1,
        "paymentStatus": "Faturë e Papaguar",
        "generatedSourceID": -1,
        "officialClassificationID": -1,
        "createDate": "16/03/2018",
        "lastUpdateDate": "16/03/2018",
        "deleted": false,
        "userID": 70,
        "hasChildren": false,
        "docTypeID": 4,
        "fiscalRegisterMode": 3,
        "exportSerialNumber": "",
        "transportType": 0,
        "transportNotes": "",
        "transportTime": "",
        "notes": "",
        "salesOfferID": -1,
        "salesOrderID": -1,
        "agentCommission": 0,
        "currencyID": 7,
        "clientID": 30998,
        "serviceUnitID": 140,
        "transporterID": -1,
        "salesAgentID": 140
    },
    "body": [
        {
            "id": 1113196,
            "price": 350.8666666666667,
            "quantity": 1,
            "VATCoeficient": 0.19999999999999998,
            "discountPercentage": 0,
            "itemID": 22433,
            "vatTypeClassifierID": 1
        }
    ]
}
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

Code: 501

                    [
    {
        "date": "16/03/2018",
        "item": {
            "id": 22433,
            "code": "SHP03095",
            "code2": "SHP03095",
            "code3": "",
            "description": "01 AUREA VERE E BARDHE",
            "longDescription": "",
            "minimumBalance": 0,
            "itemType": 1,
            "createDate": "31/12/2013",
            "lastUpdateDate": "13/03/2018",
            "deleted": false,
            "purchasePrice": 130,
            "purchaseDiscount": 0,
            "salesDiscount": 0,
            "active": true,
            "unit2Coeficient": 0,
            "activeForPurchase": true,
            "depreciationMethod": -1,
            "depreciationCoeficient": 0,
            "netWeight": 0,
            "grossWeight": 0,
            "customsCountryOfOrigin": "",
            "customsCode": "",
            "volume": 0,
            "excludedFromSalesDiscount": false,
            "customsTaxPercentage": 0,
            "userID": 70,
            "groupID": 343,
            "subGroupID": 344,
            "defaultSupplierID": 3083,
            "itemClassifier1ID": 207,
            "itemClassifier2ID": 713,
            "itemClassifier3ID": 202,
            "itemClassifier4ID": 652,
            "itemClassifier5ID": 773,
            "VATTypeID": 1,
            "unitID": 20,
            "unit2ID": -1,
            "accountingSchemeID": 1
        },
        "quantity": -99821
    }
]

OR

Code: 409

Content: 'Exception message'

Notes

Items Inventory Public Rest API

Title: Show All Items Inventory By Service Unit
URL: /public/items/inventories
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params (required):

serviceUnitID=[integer]

Data Params: NONE
Success Response Example:

Request: /public/items/inventories?serviceUnitID=139

Code: 200

Content: [
                {
                "itemID": 28619,
                "inventory": 1
                },
                {
                "itemID": 34063,
                "inventory": 2
                }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Receive Payment Document Public Rest API

Title: Save Receive Payment Document
URL: /public/documents/receivepayment
Method: PUT
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params:
                {

	"clientID":26117,

	"header":
		{
			"id":-1,
			"description":"test Rest",
			"amount":120,
			"exchangeRate":1,
			"documentDate":"19/03/2018",
			"cashUnitID":65
		},
	"body":
		[
			{
				"amountPaid":120,
				"exchangeRate":1,
				"invoiceID":216334,
				"invoiceType":4,
				"docID":-1,
				"partnerAccountID":31137,
				"currencyID":7
			}
		]

}
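
A corresponding PUT call can be sketched the same way; the base URL and build_receive_payment_request are placeholders, and the payload mirrors the Data Params above (one 120-unit payment applied to invoice 216334):

```python
import json
import urllib.request

BASE_URL = "https://example.invalid/api"  # placeholder base URL
TOKEN = "y8AgQDAFFqo="                    # your Authorization header value

def build_receive_payment_request(payment):
    # Payment is sent as a JSON body; on success the API returns 200 with content 1
    return urllib.request.Request(
        BASE_URL + "/public/documents/receivepayment",
        data=json.dumps(payment).encode("utf-8"),
        method="PUT",
        headers={"Authorization": TOKEN, "Content-Type": "application/json"},
    )

payment = {
    "clientID": 26117,
    "header": {"id": -1, "description": "test Rest", "amount": 120,
               "exchangeRate": 1, "documentDate": "19/03/2018", "cashUnitID": 65},
    "body": [{"amountPaid": 120, "exchangeRate": 1, "invoiceID": 216334,
              "invoiceType": 4, "docID": -1, "partnerAccountID": 31137,
              "currencyID": 7}],
}
```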
Success Response Example:

Request: /public/documents/receivepayment

Code: 200

Content: 1
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Get Clients Public Rest API

Title: Show All Clients
URL: /public/clients
Method: GET
Required Authorization: Yes
Header Params: Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params: NONE
Data Params: NONE
Success Response Example:

Code: 200

Content: [
                    {
                      "id": 7259,
                      "code": "",
                      "nipt": "3213123123",
                      "debtLimit": 0,
                      "discountPercentage": 0,
                      "salesTypeID": 35,
                      "phone": "",
                      "mobile": "345345345",
                      "contactPerson": "",
                      "notes": "Pocket-PC : ",
                      "fiscalVATNo": "",
                      "fiscalBusinessNo": "",
                      "bankAccount1": "",
                      "bankAccount2": "",
                      "bankAccount3": "",
                      "paymentPeriodInDays": 0,
                      "isBlockSalesOnMaturityPeriodActive": false,
                      "accountID": 13070,
                      "cityID": 2,
                      "address": "",
                      "deleted": false,
                      "isActive": true,
                      "name": "Aasdasdasdad",
                      "email": "",
                      "sex": -1,
                      "education": -1,
                      "latitude": 0,
                      "longitude": 0,
                      "zoneID": -1,
                      "subZoneID": -1,
                      "salesAgentID": 3,
                      "transporterID": 5,
                      "isBirthdaySet": false,
                      "preferredPaymentType": -1,
                      "salesDiscountTypeID": 0,
                      "birthDay": "",
                      "clientClassifier1ID": -1,
                      "clientClassifier2ID": -1,
                      "clientClassifier3ID": -1,
                      "clientClassifier4ID": -1,
                      "clientClassifier5ID": -1,
                      "clientClassifier6ID": -1,
                      "userID": 70,
                      "createDate": "05/03/2018",
                      "lastUpdateDate": "05/03/2018"
                    }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes
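As a rough illustration, the Show All Clients call above can be issued from Python's standard library. The base URL here is a placeholder, since the docs list only relative paths:

```python
import json
import urllib.request

# Assumption: the docs give only relative paths, so this base URL is a placeholder.
BASE_URL = "https://example.com"

def build_get_clients_request(auth_token: str) -> urllib.request.Request:
    """Build the GET /public/clients request with the Authorization header."""
    return urllib.request.Request(
        BASE_URL + "/public/clients",
        method="GET",
        headers={"Authorization": auth_token},
    )

req = build_get_clients_request("y8AgQDAFFqo=")
print(req.get_method())                 # GET
print(req.get_header("Authorization"))  # y8AgQDAFFqo=

# Sending it (needs a reachable server):
# with urllib.request.urlopen(req) as resp:
#     clients = json.loads(resp.read())  # list of client objects as shown above
```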

Save Client Public Rest API

Title Save Client
URL /public/clients
Method PUT
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params NONE
Data Params
                {
                "id": -1,
                "code": "",
                "nipt": "",
                "debtLimit": 0,
                "discountPercentage": 0,
                "salesTypeID": 35,
                "phone": "",
                "mobile": "",
                "contactPerson": "",
                "notes": "",
                "fiscalVATNo": "",
                "fiscalBusinessNo": "",
                "bankAccount1": "",
                "bankAccount2": "",
                "bankAccount3": "",
                "paymentPeriodInDays": 0,
                "isBlockSalesOnMaturityPeriodActive": false,
                "cityID": 2,
                "address": "",
                "deleted": false,
                "isActive": true,
                "name": "111111222222222",
                "email": "",
                "sex": -1,
                "education": -1,
                "latitude": 0,
                "longitude": 0,
                "zoneID": -1,
                "subZoneID": -1,
                "salesAgentID": -1,
                "transporterID": -1,
                "isBirthdaySet": false,
                "preferredPaymentType": -1,
                "salesDiscountTypeID": 0,
                "birthDay": "",
                "clientClassifier1ID": -1,
                "clientClassifier2ID": -1,
                "clientClassifier3ID": -1,
                "clientClassifier4ID": -1,
                "clientClassifier5ID": -1,
                "clientClassifier6ID": -1,
                "userID": 70,
                "createDate": "20/03/2018",
                "lastUpdateDate": "20/03/2018"
                }
Success Response Example:

Code: 200

Content: [ClientID Created]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes
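A minimal sketch of issuing the Save Client call from Python, again with a placeholder base URL. Note the example above sends every field; whether the server accepts a smaller body like this one is an assumption, so send the full field set in production:

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder: the docs list only relative paths

def build_save_client_request(auth_token: str, client: dict) -> urllib.request.Request:
    """PUT /public/clients creates a client; the body uses id=-1 for a new
    record and -1 sentinels for unset optional IDs, as in the example above."""
    return urllib.request.Request(
        BASE_URL + "/public/clients",
        data=json.dumps(client).encode("utf-8"),
        method="PUT",
        headers={"Authorization": auth_token,
                 "Content-Type": "application/json"},
    )

client = {
    "id": -1,                 # -1 = create a new record
    "name": "Example Client",
    "cityID": 2,
    "salesTypeID": 35,
    "userID": 70,
    "deleted": False,
    "isActive": True,
    # unset optional references use the -1 sentinel
    "zoneID": -1, "subZoneID": -1, "salesAgentID": -1, "transporterID": -1,
}
req = build_save_client_request("y8AgQDAFFqo=", client)
print(req.get_method())  # PUT
```

On success the API returns the created ClientID in the response body.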

Update Client Public Rest API

Title Update Client
URL /public/clients
Method POST
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params NONE
Data Params
                {
                "id": 8263,
                "code": "",
                "nipt": "",
                "debtLimit": 0,
                "discountPercentage": 0,
                "salesTypeID": 35,
                "phone": "",
                "mobile": "",
                "contactPerson": "",
                "notes": "",
                "fiscalVATNo": "",
                "fiscalBusinessNo": "",
                "bankAccount1": "",
                "bankAccount2": "",
                "bankAccount3": "",
                "paymentPeriodInDays": 0,
                "isBlockSalesOnMaturityPeriodActive": false,
                "cityID": 2,
                "address": "",
                "deleted": false,
                "isActive": true,
                "name": "1111112222222223333333333",
                "email": "",
                "sex": -1,
                "education": -1,
                "latitude": 0,
                "longitude": 0,
                "zoneID": -1,
                "subZoneID": -1,
                "salesAgentID": -1,
                "transporterID": -1,
                "isBirthdaySet": false,
                "preferredPaymentType": -1,
                "salesDiscountTypeID": 0,
                "birthDay": "",
                "clientClassifier1ID": -1,
                "clientClassifier2ID": -1,
                "clientClassifier3ID": -1,
                "clientClassifier4ID": -1,
                "clientClassifier5ID": -1,
                "clientClassifier6ID": -1,
                "userID": 70,
                "createDate": "20/03/2018",
                "lastUpdateDate": "20/03/2018"
                }
Success Response Example:

Code: 200

Content: [Updated OK]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Delete Client Public Rest API

Title Delete Client
URL /public/clients/8263
Method DELETE
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params Required:

clientID=[integer]

Data Params NONE
Success Response Example:

Code: 200

Content: [Delete OK]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes
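The Delete call takes the clientID in the URL path (8263 in the example above). A sketch, once more with a placeholder base URL:

```python
import urllib.request

BASE_URL = "https://example.com"  # placeholder base URL

def build_delete_client_request(auth_token: str, client_id: int) -> urllib.request.Request:
    # clientID is embedded in the path, as in /public/clients/8263 above
    return urllib.request.Request(
        f"{BASE_URL}/public/clients/{client_id}",
        method="DELETE",
        headers={"Authorization": auth_token},
    )

req = build_delete_client_request("y8AgQDAFFqo=", 8263)
print(req.full_url)      # https://example.com/public/clients/8263
print(req.get_method())  # DELETE
```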

Get ClientRecords Public Rest API

Title Get ClientRecords
URL /public/clients/getClientRecords?clientID=119&startDate=01/01/2018&endDate=01/02/2018
Method GET
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params Required:

clientID=[integer]

startDate=[string] format(dd/MM/yyyy)

endDate=[string] format(dd/MM/yyyy)

Data Params NONE
Success Response Example:

Code: 200

Content: [
                    {
                       "situationBefore": 24338,
                       "situationAfter": 79512.41,
                       "clientRecords": [
                          {
                             "docTypeCode": "FS",
                             "docNumber": "5/318*POS*",
                             "docDate": "04/01/2018",
                             "serviceUnitGeneric": "Mag",
                             "debtOnDefaultCurrency": 110,
                             "creditOnDefaultCurrency": 0
                          },
                          {
                             "docTypeCode": "FS",
                             "docNumber": "5/319*POS*",
                             "docDate": "04/01/2018",
                             "serviceUnitGeneric": "Mag",
                             "debtOnDefaultCurrency": 110,
                             "creditOnDefaultCurrency": 0
                          }
                        ]
                     }
                    ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes
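The date parameters must be dd/MM/yyyy strings. Building the query string from real date objects avoids format mistakes; a sketch using the stdlib:

```python
from datetime import date
from urllib.parse import urlencode

def client_records_query(client_id: int, start: date, end: date) -> str:
    """Query string for /public/clients/getClientRecords; the API expects
    dd/MM/yyyy dates, which urlencode then percent-encodes."""
    fmt = "%d/%m/%Y"
    return urlencode({
        "clientID": client_id,
        "startDate": start.strftime(fmt),
        "endDate": end.strftime(fmt),
    })

qs = client_records_query(119, date(2018, 1, 1), date(2018, 2, 1))
print(qs)  # clientID=119&startDate=01%2F01%2F2018&endDate=01%2F02%2F2018
```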

Save Sales Order Document Public Rest API

Title Save Sales Order Document
URL /public/documents/salesorder
Method PUT
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params NONE
Data Params
                {

                    "header":{
                    "id":-1,
                    "documentNumber":"24",
                    "description":"test",
                    "documentDate":"18/03/2018",
                    "deliveryDate":"18/03/2018",
                    "deliveryTime":"00:00",
                    "orderStatus" : 1,
                    "withVAT":true,
                    "rate":1,
                    "currencyID":7,
                    "serviceUnitID":15,
                    "salesAgentID":140,
                    "agentCommission":0,
                    "transporterID":140,
                    "clientID":4
                    },
                    "body":[
                    {"id":1113195,
                    "price":350.8666666666667,
                    "quantity":1,
                    "VATCoeficient":0.19999999999999998,
                    "discountPercentage":0,
                    "itemID":4,
                    "vatTypeClassifierID":1
                    }
                    ]
                }
Success Response Example:

Request: /public/documents/salesorder

Code: 200

Content: [
               {
                "header": {
                    "id": 1125,
                    "documentNumber": "24",
                    "description": "test",
                    "total": 350.8666666666667,
                    "documentDate": "18/03/2018",
                    "deliveryDate": "27/03/2019",
                    "deliveryTime": "00:00",
                    "withVAT": true,
                    "totalWithVAT": 0,
                    "rate": 1,
                    "orderStatus": 1,
                    "createDate": "27/03/2019",
                    "lastUpdateDate": "27/03/2019",
                    "deleted": false,
                    "userID": 70,
                    "hasChildren": false,
                    "docTypeID": 74,
                    "transportType": 0,
                    "transportNotes": "",
                    "notes": "",
                    "offerID": -1,
                    "agentCommission": 0,
                    "currencyID": 7,
                    "clientID": 4,
                    "serviceUnitID": 15,
                    "transporterID": -1,
                    "salesAgentID": -1
                },
                "body": [
                    {
                        "id": 1847,
                        "price": 350.8666666666667,
                        "quantity": 1,
                        "VATCoeficient": 0.19999999999999998,
                        "discountPercentage": 0,
                        "itemID": 4,
                        "vatTypeClassifierID": 1
                    }
                    ]
                }
            ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Items Inventory By Service Unit Public Rest API

Title Show Items Inventory Situation By Service Unit
URL /public/items/inventories/{}
Method GET
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params NONE
Data Params
            {
                 "id" = "21",
                 "code" = "BRUSHE DHEMBESH COLGATE",
                 "serviceUnitID" = "3"
            }
Success Response Example:

Request: /public/items/inventories/{id=21, serviceUnitID=3}

Code: 200

Content:[
                {
                    "itemID": 21,
                    "itemCode": "BRUSHE DHEMBESH COLGATE",
                    "serviceUnitID": 3,
                    "serviceUnitCode": "MAGAZINA",
                    "situation": 219
                }
            ]
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Save Purchase Order Document Public Rest API

Title Save Purchase Order Document
URL /public/documents/purchaseorder
Method PUT
Required Authorization Yes
Header Params Authorization

Example:

Authorization = 'y8AgQDAFFqo='

URL Params NONE
Data Params
                {

                "header":{
                "id":4000,
                "documentNumber":"342",
                "notes":"From Purchase Order Public REST API",
                "description":"Test Rest API",
                "documentDate":"01/07/2019",
                "withVAT":true,
                "serialNumber":"",
                "rate":1,
                "currencyID":9,
                "serviceUnitID":3,
                "supplierID":1,
                "orderStatus":0
                },
                "body":[
                {"id":1113195,
                "price":350.8666666666667,
                "quantity":1,
                "VATCoeficient":0.19999999999999998,
                "discountPercentage":0,
                "itemID":4,
                "vatTypeClassifierID":1,
                "notes":""
                }
                ]
                }
Success Response Example:

Request: /public/documents/purchaseorder

Code: 200

Content: {
            "header": {
                "id": 1026,
                "documentNumber": "342",
                "description": "Test Rest API",
                "total": 350.8666666666667,
                "documentDate": "01/07/2019",
                "withVAT": true,
                "totalWithVAT": 421.04,
                "rate": 1,
                "orderStatus": 0,
                "createDate": "01/07/2019",
                "lastUpdateDate": "01/07/2019",
                "deleted": false,
                "userID": 70,
                "hasChildren": false,
                "docTypeID": 66,
                "notes": "From Purchase Order Public REST API",
                "currencyID": 9,
                "supplierID": 1,
                "serviceUnitID": 3
            },
            "body": [
                {
                    "id": 1113195,
                    "price": 350.8666666666667,
                    "quantity": 1,
                    "VATCoeficient": 0.19999999999999998,
                    "discountPercentage": 0,
                    "notes": "",
                    "itemID": 4,
                    "vatTypeClassifierID": 1
                }
            ]
        }
Error Response Example:

Code: 401

Content: 'UNAUTHORIZED'

OR

Code: 409

Content: 'Exception message'

Notes

Version 022.022

– Added the option to settle Sales / Purchase invoices from the Cash / Bank documents environment

– Added the new option "Item recipes for the Production Sheet", with which the user can select production recipes directly from the Item module

– Added an option to copy all Sales Invoice data (document header + body) so it can be pasted into another company

– Applied the legally required changes (April 2022) for the calculation of TAP (personal income tax) under Law No. 113/2021

– Added withholding tax logic for Kosovo legislation, for the rental case (the withholding tax, 9% of the value excluding VAT, is paid by the client and must be deducted from the sale)

– In Import Purchase Invoices with simple items (goods), several analytical accounts under 375 are now posted, not just one

– The following options were added to the "Data Import" module:

   a. "Import Items"
   b. "Import Clients"
   c. "Import Suppliers"
   d. "Import Composite Items"
   e. "Import Assets"

– In the "Entity Change History" module, the option "Supplier Change History" was added

– Added an option to check the Cash account so it cannot go into a negative balance

– In Bilanc Mobile, a new option was added to select several items as a group when registering a Sales Order or Sales Invoice

Reports

– The "Income and Expenses Statement (PASH)" was changed according to the new Kosovo legal requirements

– The "Statement of Financial Position (Balance Sheet)" was changed according to the new Kosovo legal requirements

– Added a new report, "Margin by Clients"

– Added the "Show Ledger Card" option/button to the accounting report "Account balances by currency"

Setting up a WordPress cluster for huge sites

If you have a huge site, chances are you also do a lot of data processing – imports, exports, calculations etc.

These kinds of batch-processing jobs, which max out the CPU and disk, are the mortal enemy of real-time transactions. Your web visitors demand real-time interaction and fast responses from your site, so if you run imports that max out CPU and disk on the same server hosting your web traffic, your users will regularly encounter slowness. This leads to loss of interest from your visitors, loss of sales and loss of SEO rank.

Ultimately, to solve this, once you have exhausted scaling up, you need to architect a better solution.

Scaling Up WordPress – check before building your cluster!

If you are considering building a cluster, it means you think you can't get more speed from a single server. If you have not yet used our Super Speedy Pack, you should definitely try that before building a cluster. We built our Super Speedy Pack to solve search, filtering and underlying scalability issues exhibited in WordPress and WooCommerce.

It is not uncommon for customers with large sites to get a 10x or greater speed boost from our Super Speedy plugin pack, so check it out before building your cluster.

Scaling Out with a WordPress Cluster

You need to separate the batch processing from the realtime stuff. That means you need a minimum of 2 servers. 1 server processes all the data imports, exports, calculations, category counts, etc – the data is replicated to the 2nd server and that server serves your web traffic.

If you’re going to the bother of getting 2 servers, you’re better off going further and getting 3 servers. It’s very little extra hassle and then gives you the ability to have 3 servers online at once with no batch processing, or 1 or 2 of the servers handling batch processing and the remaining ones serving web traffic.

Using this model, you can also easily switch servers offline to upgrade them without interrupting visitors to your website. That means you can be online 100% of the time!

Note that this setup technically uses 4 servers – the 4th server being a load balancer. Instead of this server, you could use the Digital Ocean load balancer feature/server instead but I provide details below for installing this easily.

If you’re looking at building a cluster for more speed, you may find our plugin pack will help give you the speed boost you need.

Step by step guide to building your cluster

This is the guide I use to install these clusters, so hopefully it helps some of you out there who wish to go huge with your WordPress sites.

Create 3 Ubuntu 16.04 servers

I like Digital Ocean, so this guide is using their hosting, but you can use any host provided they offer private networking and Ubuntu 16.04.

Create 3 Ubuntu 16.04 droplets (or 3 servers on any platform) – Digital Ocean makes it easy to create multiple at once – and make sure to enable private networking and add your SSH key.

Install PerconaDB XtraDB Cluster on your cluster-nodes

Log into your 3 droplets and run the following commands on each node:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
apt-get update
apt-get upgrade
apt-get install percona-xtradb-cluster-57

Note: You will be asked to enter a new root password for the cluster. To make life easier, use the same password on each PerconaDB node, or leave the root password blank, in which case MySQL will use socket authentication when you connect locally as root.

Configure private networking

We want the nodes to share data over the private network, rather than out and in from your hosting company. This prevents crazy bandwidth costs, speeds things up and improves security.

Even though private networking is already enabled, we need to be able to reliably use eth1 (rather than eth0) as the private network device.

On each node edit the grub networking file. I prefer vi to edit files, but you can use nano or even edit the files with Filezilla.

vi /etc/default/grub.d/50-cloudimg-settings.cfg

Find the line that begins GRUB_CMDLINE_LINUX_DEFAULT and alter it as follows (add net.ifnames=0):

GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 net.ifnames=0"

Save the file, then run the update-grub command and reboot (the only time I know of where you need to reboot a Linux box!).

update-grub
shutdown -r now

Repeat the above for all your nodes. Then you can check config with this:

ifconfig -a

You should see the public IP address against eth0 and the private address against eth1.

You can also view each ethernet device's configuration here:

cat /etc/network/interfaces.d/50-cloud-init.cfg

The file above will already be configured if you selected private networking when you created the droplet.

Take a note of the private IP address for each of your 3 nodes. This information is also available from your Digital Ocean interface when you click through to each droplet.

You can test private networking is working by pinging the private IP address of another node from one of the nodes:

ping 10.130.45.161

Configure replication

Firstly, we need a replication user. Create this user on all 3 nodes.

Log into mysql:

mysql

or if you chose a password for your mysql server earlier, use this:

mysql -u root -p

Enter the root DB password you chose earlier then create a new user for replication purposes (choose a strong password and note it down so we can add it to the configuration files):

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'password';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;

Next, exit MySQL by typing 'exit' and hitting enter, then stop MySQL on all 3 nodes using:

service mysql stop

On node 1, customise the configuration file below with your private IP addresses and replication user password, then enter it into this file:

vi /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf
  1. Enter the 3 private IP addresses for wsrep_cluster_address, separated by commas.
  2. Enter node 1's private IP address for wsrep_node_address.
  3. Enter the sst password for wsrep_sst_auth.
  4. Change the name of the node on the wsrep_node_name line.

Your file will end up looking something like this (the values to alter from the default config are the four described above):

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://10.130.45.161,10.130.47.4,10.130.47.11

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=10.130.45.161
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=pxc-cluster-node-1

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="sstuser:password"

Note: You will also need to remove the # comment from the beginning of the lines with the wsrep_node_address and the wsrep_sst_auth.

Copy the contents of the file and then save it. Configure node 2 and node 3 by editing the same file on those nodes and altering 2 rows from the file above:

  1. Change wsrep_node_address to be the private IP address of node 2 (or node 3 for that node)
  2. Change wsrep_node_name to pxc-cluster-node-2 or pxc-cluster-node-3

Once you’ve done this, you’re ready to bootstrap your cluster.

Bootstrap your cluster

On node 1, run the following command:

/etc/init.d/mysql bootstrap-pxc

Check it’s running by logging into mysql and running this command:

show status like 'wsrep%';

Note: The above command can be useful in future to check for replication status – you can see things like how many items are queued to be replicated amongst other details.

On node 2 and 3, run the following:

/etc/init.d/mysql start

You now have a Percona cluster with 3 nodes replicating data to each other.

Install Nginx and PHP 7 on all 3 nodes

On each node, install Nginx and PHP 7 using the following sequence of commands:

apt-get install nginx
apt-get install php7.0
apt-get install php7.0-curl 
apt-get install php7.0-gd 
apt-get install php7.0-intl 
apt-get install php7.0-mysql 
apt-get install php-memcached
apt-get install php7.0-mbstring
apt-get install php7.0-zip
apt-get install php7.0-xml
apt-get install php7.0-mcrypt
apt-get install unzip

A faster way to run all of the above would be using this single line:

apt-get install -y nginx php7.0 php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php-memcached php7.0-mbstring php7.0-zip php7.0-xml php7.0-mcrypt unzip

 

Install Unison for file replication

After much testing, it turns out GlusterFS is not well-suited to WordPress file replication – it slows down a LOT when there are a lot of files in each directory. This guide has been updated to use Unison instead. The Unison setup uses a star schema for file replication, with node 1 at the centre of the star.

node 1 <--> node 2 file replication	
node 1 <--> node 3 file replication

That means a file edit on node 3 will replicate to node 1 and then to node 2, while a file edit on node 1 replicates directly to nodes 2 and 3. It therefore makes sense to make node 1 our wp-admin server, where we upload plugin files. This star schema also makes node 1 your most important node: if it goes down, or you switch it off, file replication is paused until you bring it back online.

On each node, install unison:

apt-get -y install unison openssh-server

This will allow us to run the replication commands later once we have installed the WordPress files.

Configure SSH so nodes can connect to each other

SSH access is required for Unison to be able to replicate files. Run the following on all 3 nodes:

ssh-keygen

Hit enter 3 times to accept the 3 defaults, including 2 blank passwords for the keyfile, so it works non-interactively.

Now grab a copy of the id_rsa.pub file from each node and paste it into the authorized_keys file of the other 2 nodes. Find the public key of each node by running this command:

cat /root/.ssh/id_rsa.pub

Then paste those public keys into the authorized_keys file of the other 2 nodes:

vi /root/.ssh/authorized_keys

Authenticate each node

On node 1, run:

ssh ipofnode2
ssh ipofnode3

You will be asked if you wish to trust the other node. Answer yes.

Repeat this on node 2 and node 3, connecting to the other 2 nodes.

Replicate the web folder files using Unison

Now that we have ssh authentication, we can set up Unison to replicate the website files to node 2 and 3. Run the following commands on node 1 of your cluster:

unison /var/www ssh://10.130.47.4//var/www -owner -group	
unison /var/www ssh://10.130.47.11//var/www -owner -group

Note: replace the IP addresses with your own and the folder names with your own.

Since you have no files yet in /var/www these commands will complete quickly.

Now set up a crontab/cron job for Unison. Run the following command:

crontab -e

Choose whatever editor you prefer when it asks, then append the following to the end of the file:

* * * * * unison -batch /var/www ssh://10.130.47.4//var/www &> /dev/null	
* * * * * unison -batch /var/www ssh://10.130.47.11//var/www &> /dev/null

Change IP addresses and folder locations. Use internal IP addresses so traffic goes over the faster internal network card.

Install WordPress files onto Node 1 only

Because we are using file replication and we already have database replication in our cluster, we only need to install WordPress onto node 1. On node 1, run the following:

wget https://wordpress.org/latest.zip -P /var/www/
unzip /var/www/latest.zip -d /var/www/
mv /var/www/wordpress /var/www/wpicluster
chown www-data:www-data /var/www/wpicluster -R
rm /var/www/latest.zip

Note: Instead of /var/www/wpicluster you could use /var/www/yourdomain.com but if you do, ensure you alter the nginx config files in the next section.

Configure Nginx to load your WordPress site on each node

I’ve created some configuration files to make this part quicker and easier. The configuration files set Nginx up to work over port 80 – later, we will add SSL to our load balancer. This reduces load on our servers since they won’t have to decrypt SSL traffic.

The configuration files here also configure the Nginx fastcgi-cache, so you don’t need to install Varnish. They’re also domain-name independent, so no configuration required.

On all 3 nodes, run the following commands:

git clone https://github.com/dhilditch/wordpress-cluster /root/wordpress-cluster/
cp /root/wordpress-cluster/etc/nginx/* -R /etc/nginx/
ln -s /etc/nginx/sites-available/wpintense.cluster.conf /etc/nginx/sites-enabled/
mkdir /sites/wpicluster/cache -p
service nginx restart

Set up your Load Balancer

Digital Ocean provide a load balancer, but with that approach you have to manually renew your SSL certificates. Plus you get less control – we want control so we can send wp-admin traffic to node 1. So follow the instructions below to set up your own load balancer.

First, create a droplet with Ubuntu 16.04 again, private networking and your SSH keys.

Then log onto your load balancer droplet and run the following commands:

add-apt-repository ppa:nginx/stable
apt-get update
apt-get install nginx

Then create a new file at /etc/nginx/conf.d/loadbalancer.conf.

vi /etc/nginx/conf.d/loadbalancer.conf

This will automatically be loaded when you restart nginx. Enter the following in the file, adjusted for your private IP addresses.

upstream clusterwpadmin {
    server 10.130.45.161;
}
upstream clusternodes {
    ip_hash;
    server 10.130.47.4 max_fails=3;
    server 10.130.47.11 max_fails=3;
}
server {
    listen 80;

    # this block is for letsencrypt
    root /var/www/html;
    location ~ /.well-known {
        allow all;
        try_files $uri $uri/ =404;
    }

    server_name _;
    #return 301 https://$host$request_uri;

    location ~ /wp-(admin/|login\.php\b|cron\.php) {
        proxy_pass http://clusterwpadmin;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
    location / {
        proxy_pass http://clusternodes;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
server {
    listen 443 ssl;
    #ssl_certificate /etc/letsencrypt/live/yourdomain.com/cert.pem;
    #ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location ~ /wp-(admin/|login\.php\b|cron\.php) {
        proxy_pass http://clusterwpadmin;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
    location / {
        proxy_pass http://clusternodes;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
# if a user connects to yourdomain.com:9443 they will be directed to node 1.
# This is where admins should connect to add plugins etc.
server {
    listen 9443 ssl;
    server_name _;
    #ssl_certificate /etc/letsencrypt/live/yourdomain.com/cert.pem;
    #ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://clusterwpadmin;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

Save that file, then restart nginx:
service nginx restart

Now, log into your DNS provider and point your new domain name at the public IP address of your load balancer node.

Configure WordPress

Now that we have database replication, file replication, and a load balancer set up, we can start the famous 5-minute install of WordPress.

On node 1, connect to mysql using:

mysql -p (or just mysql if no root password)

Note: you’ll be asked for your password, so paste it in. In PuTTY, right-click pastes, and it will look like nothing happened because it’s a password field, but the paste does go through.

create database wpicluster;
grant all privileges on wpicluster.* to wpi@localhost identified by 'CHOOSEASTRONGPASSWORD';
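Note that the `GRANT ... IDENTIFIED BY` shorthand was removed in MySQL 8.0. On newer servers, create the user first and then grant, using the same database, user, and password as above:

```sql
-- MySQL 8.0+: CREATE USER and GRANT are separate statements
CREATE USER 'wpi'@'localhost' IDENTIFIED BY 'CHOOSEASTRONGPASSWORD';
GRANT ALL PRIVILEGES ON wpicluster.* TO 'wpi'@'localhost';
FLUSH PRIVILEGES;
```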

Visit the URL you chose earlier for your loadbalancer, e.g. http://www.yourdomain.com.

Choose your language, then enter the database name: wpicluster, the username: wpi and the password you chose in the GRANT command above.

Set up WordPress Cron on only node 1

WP Cron is awful. It relies on users visiting your site in order to run scheduled tasks. In this case, we don’t even want scheduled jobs running on nodes 2 or 3, so we’ll disable WP Cron across all nodes and then implement a real cron job on node 1.

On node 1, edit /var/www/wpicluster/wp-config.php. This file edit will replicate to your other nodes.

vi /var/www/wpicluster/wp-config.php

and insert the following lines somewhere:

define('DISABLE_WP_CRON', true);
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}

Note: only the first line disables WP Cron. The rest is for later, when we forward traffic from our load balancer and want to make sure WordPress serves static files over HTTPS when that is what the user requested.

If you’re struggling to figure out where to put this code, you can stick it after the define('DB_NAME', ...); line.

This wp-config.php update will replicate out to the other nodes using GlusterFS, so you don’t need to modify this on the other nodes.

Now run:

crontab -e

And add an extra line as follows:

* * * * * wget -q -O /dev/null "https://yourdomain.com:9443/wp-cron.php?doing_wp_cron" > /dev/null 2>&1

Set up SSL on your load balancer

Now get your free SSL certificates from LetsEncrypt. On your load balancer node, run the following:

add-apt-repository ppa:certbot/certbot
apt-get update
apt-get install certbot
certbot certonly --webroot --webroot-path=/var/www/html -d yourdomain.com -d www.yourdomain.com
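LetsEncrypt certificates expire after 90 days, so it is worth scheduling automatic renewal as well. A sketch of a crontab entry (the `--deploy-hook` flag assumes a reasonably recent certbot; older releases used `--renew-hook`):

```
# Attempt renewal daily at 03:00; reload nginx only when a cert is actually renewed
0 3 * * * certbot renew --quiet --deploy-hook "service nginx reload"
```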

You should get a message saying CONGRATULATIONS, which also tells you where the key files were saved. Now edit the loadbalancer.conf file from earlier to set up SSL. (The WordPress installer does not work well over SSL, which is why we add SSL after installation.)

vi /etc/nginx/conf.d/loadbalancer.conf

Uncomment the ssl_certificate (x2) and ssl_certificate_key (x2) lines and replace the paths with the ones reported by LetsEncrypt.

Also uncomment the line "return 301 https://$host$request_uri;" so plain HTTP traffic is redirected to HTTPS.

service nginx restart

Once you have edited the loadbalancer.conf file and restarted nginx, you will have a working SSL certificate on your load balancer.

Note: At this point, if you access your website with https, some CSS will appear broken. There is one final stage we have to complete in order to fix this, which is almost the final step in the entire process.

Update your Site URL in WordPress

Log into node1.yourdomain.com. Visit the WordPress dashboard, then Settings->General.

You will see 2 domain entries, both of which are probably currently tied to your node 1 subdomain, and both of which will be http instead of https.

Replace both of these entries with https://www.yourdomain.com.

Note: Here you enter the domain name you chose for your load balancer, normally www.yourdomain.com or similar.
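If the dashboard is hard to reach at this point (for example because of a redirect loop), the same change can be made directly in the database. A sketch, assuming the default wp_ table prefix:

```sql
-- siteurl and home are the two entries shown on Settings->General
UPDATE wp_options SET option_value = 'https://www.yourdomain.com'
WHERE option_name IN ('siteurl', 'home');
```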

If you didn’t already, edit your wp-config.php file on Node 1 and just below where you disabled WP_CRON, add the following lines:

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}

Traffic is served to your users over HTTPS, but between the load balancer and the nodes it is plain HTTP, so you need to make sure WordPress knows the original request was HTTPS so that static files are loaded over HTTPS too.

Go forth and conquer!

That’s it, a mammoth task complete.

You can visit wp-admin from any server, but you can also force traffic to node 1 for your own admin purposes by visiting https://www.yourdomain.com:9443/wp-admin/. With the configuration above, node 1 never serves traffic to front-end users, so you can run all kinds of admin jobs on there without slowing down user traffic.

If anyone has any questions, fire away!

WordPress translate month names


// Replace English month names with their Albanian equivalents
// in archive output (e.g. from wp_get_archives()).
function translate_archive_month($list) {
    $patterns = array(
        '/January/', '/February/', '/March/', '/April/', '/May/', '/June/',
        '/July/', '/August/', '/September/', '/October/', '/November/', '/December/'
    );
    $replacements = array(
        'Janar', 'Shkurt', 'Mars', 'Prill', 'Maj', 'Qershor',
        'Korrik', 'Gusht', 'Shtator', 'Tetor', 'Nentor', 'Dhjetor'
    );
    return preg_replace($patterns, $replacements, $list);
}
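On its own the function does nothing; it has to be hooked into WordPress, typically from your theme's functions.php. A minimal sketch using the standard `get_archives_link` filter, which fires for each link that wp_get_archives() generates:

```php
// Translate month names in archive links as WordPress builds them
add_filter('get_archives_link', 'translate_archive_month');
```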
