Load balancing your SIPIS installation across multiple SIPIS processes on a single machine

If your SIPIS installation hosts between 15,000 and 50,000 users, we recommend splitting the load across multiple SIPIS processes running concurrently on a single machine.

As a running example, we are going to split the load across four SIPIS processes. The steps to enable load balancing are as follows.

1. Prepare additional SIPIS settings files

The default SIPIS installation comes with a single /etc/sipis/Settings.xml file, which systemd then passes as a command line parameter to the SIPIS process.

In fact, the SIPIS systemd script starts a sipis process for each Settings*.xml file it finds in the /etc/sipis/ directory. Therefore, to start four SIPIS processes we need to create four settings files by renaming the original /etc/sipis/Settings.xml to Settings1.xml and then copying it as Settings2.xml, Settings3.xml and Settings4.xml.
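For example, assuming the default /etc/sipis/ layout described above, the four files can be prepared with commands along these lines (a sketch only; adjust paths, ownership and permissions to match your installation):

cd /etc/sipis
mv Settings.xml Settings1.xml     # keep the original configuration as the first settings file
cp Settings1.xml Settings2.xml
cp Settings1.xml Settings3.xml
cp Settings1.xml Settings4.xml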

Next, we need to edit each of the four settings files, supplying unique values for the following attributes:

<Sipis>

  <Server Name="Sipis" Port="14998" />

  <HttpServer Port="5000" />

  <Lock FileName="/var/run/sipis/sipis.lock"/>

  <Log FileName="/var/log/sipis/sipis.log" />

</Sipis>

The following table shows the values used in each settings file.

Attribute           Settings1.xml     Settings2.xml     Settings3.xml     Settings4.xml
Server[Name]        Sipis1            Sipis2            Sipis3            Sipis4
Server[Port]        14998             14988             14978             14968
HttpServer[Port]    5000              5010              5020              5030
Lock[FileName]      …/sipis1.lock     …/sipis2.lock     …/sipis3.lock     …/sipis4.lock
Log[FileName]       …/sipis1.log      …/sipis2.log      …/sipis3.log      …/sipis4.log

Note

You may have noticed a somewhat arbitrary convention: Server[Port] is decreased by 10 and HttpServer[Port] is increased by 10 in each consecutive settings file.
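As a quick sanity check (a suggestion only, not part of the official procedure), you can print the relevant attributes from all four files and confirm that every value is unique:

# list the Name, Port and FileName attributes from each settings file
grep -E 'Name=|Port=|FileName=' /etc/sipis/Settings*.xml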

Additionally, make sure each settings file has the FilterInstancesAccordingToLoadBalancer2BackendSection attribute set to Yes, as follows.

<Sipis>

  <Server Name="…" Port="…"
      FilterInstancesAccordingToLoadBalancer2BackendSection="Yes" />

</Sipis>

2. Update LoadBalancer2 settings file

In the second step we’ll update the Backend section of the LoadBalancer2 settings file located at /etc/acrobits/LoadBalancer2/Settings.xml.

For our example, it should look like the following:

<LoadBalancer2>

  <Backend>

    <Sipis Name="Sipis1" SelectorPrefix="0/2" Address="127.0.0.1:14998" />

    <Sipis Name="Sipis2" SelectorPrefix="4/2" Address="127.0.0.1:14988" />

    <Sipis Name="Sipis3" SelectorPrefix="8/2" Address="127.0.0.1:14978" />

    <Sipis Name="Sipis4" SelectorPrefix="C/2" Address="127.0.0.1:14968" />

  </Backend>

</LoadBalancer2>

Make sure the Sipis[Name] attributes in the LoadBalancer2 settings file exactly match the Server[Name] attributes in the SIPIS settings files, as these attributes are used by each SIPIS process to discover its selector prefix(es).

A selector is a SHA-1 value calculated from SIP account credentials and is used to identify a particular SIP account in a SIPIS installation. The selector also serves as the basis for the load-balancing scheme, in which each SIPIS process is assigned SIP accounts based on a few of the selector’s most significant bits.

In our example we used the two most significant selector bits (the /2 part) to split the “selector space” into four sub-spaces corresponding to the binary prefixes 00, 01, 10 and 11. The prefix value in the Sipis[SelectorPrefix] attribute needs to be a hexadecimal string, though, so we padded each binary prefix with binary zeros to the nearest length divisible by four and converted the result into hexadecimal digits, i.e. 00b → 0000b → 0 hex, 01b → 0100b → 4 hex, 10b → 1000b → 8 hex and 11b → 1100b → C hex.
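To illustrate the mapping (a sketch only, using a made-up selector value; the real selector is computed internally by SIPIS), the first hexadecimal digit of a selector carries its two most significant bits and therefore determines which Backend entry the account falls under:

# hypothetical selector, for illustration only; real selectors are SHA-1 hashes of account credentials
selector=9f1c2ab34de56f7890c1d2e3f4a5b6c7d8e9f012

# inspect the first hex digit: 0-3 => 00xx, 4-7 => 01xx, 8-B => 10xx, C-F => 11xx
case "$(printf '%.1s' "$selector" | tr 'a-f' 'A-F')" in
  [0-3]) echo "prefix 00 -> Sipis1 (SelectorPrefix 0/2)" ;;
  [4-7]) echo "prefix 01 -> Sipis2 (SelectorPrefix 4/2)" ;;
  [89AB]) echo "prefix 10 -> Sipis3 (SelectorPrefix 8/2)" ;;
  [C-F]) echo "prefix 11 -> Sipis4 (SelectorPrefix C/2)" ;;
esac

For the made-up value above, the first digit 9 (1001 in binary) starts with 10, so the account would be handled by Sipis3.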

3. Restart SIPIS machine or SIPIS services

To apply all the changes we have made, we can invoke the following commands

systemctl stop sipis
systemctl daemon-reload
systemctl restart LoadBalancer2
systemctl start sipis

or restart the whole SIPIS machine.

Finally, you can invoke the

top -u sipis

command to check that there is one LoadBalancer2 process and four sipis processes running on the machine.
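Alternatively (an optional check, assuming the processes run under the sipis user and the process name is sipis), pgrep can count the SIPIS processes directly:

# count processes named exactly "sipis" owned by the sipis user; expected output: 4
pgrep -c -x -u sipis sipis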