**1) What is EPS?**

EPS stands for *Events Per Second* and expresses how much data a system produces per second. It can be defined as the number of log entries captured per second by the log server from one or all of the defined sources.

**2) Why is EPS Important for Log Systems?**

The EPS value has lost some of its importance in today's technology landscape. Although most log management vendors license their products by EPS, this licensing model doesn't always give accurate results, because the EPS value changes with the traffic of your devices. Still, knowing the average EPS of a log management system matters, because it lets us calculate the data volume and the required disk size.

As *Logsign* isn't licensed by EPS, EPS data matters to us only for calculating the required disk space. **Logsign** *provides unlimited EPS in all its products.*

When the average EPS of a system is "x", that value can jump to "y" if somebody hacks the system. The purchased EPS limit can then be exceeded, and the events above that limit won't be captured by the log management system. Because *Logsign* doesn't use an EPS-based licensing model, it guarantees that no logs are lost when an attack occurs.

**3) How is EPS Calculated?**

EPS values can never be calculated with 100% accuracy, but they can be estimated as averages. Even when all sources are integrated and all logs are being collected, it is almost impossible to predict the EPS correctly at the next moment, or at any other time "T", because the amount of data in the system can change at any time.

For example, consider a system that periodically receives internet traffic logs. If the traffic logs amount to a value of "x" at one moment, that value might drop to "y" during the lunch break as traffic decreases, while an attack might push the EPS up from "x" to "z".

*Logsign* gives its users the ability to measure the EPS in their system. Determining this before installation is very important for a smooth and reliable deployment.
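As an illustrative sketch of the averaging idea described above (the hourly event counts below are invented for illustration, not measured values), an average EPS can be estimated from sampled counts like this:

```python
# Hypothetical sample: event counts per one-hour window.
# These numbers are invented to illustrate the calculation only.
hourly_event_counts = [
    540_000,    # 09:00-10:00, normal business traffic
    612_000,    # 10:00-11:00
    310_000,    # 12:00-13:00, lunch-break dip
    1_450_000,  # 14:00-15:00, traffic spike (e.g. an attack)
]

total_events = sum(hourly_event_counts)
total_seconds = len(hourly_event_counts) * 3600

average_eps = total_events / total_seconds       # long-run average
peak_eps = max(hourly_event_counts) / 3600       # busiest hour

print(f"Average EPS: {average_eps:.0f}")
print(f"Peak EPS:    {peak_eps:.0f}")
```

Note the gap between the average and the peak: this is exactly why an EPS-based license limit can be exceeded during an attack while the long-run average stays low.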

**!!! Please contact Logsign technical support about the calculation of the EPS amount !!!**

**4) The Effect of EPS Values on Disk Space**

EPS values give a rough basis for calculating disk space. The calculation method can change according to the sources that are integrated. The templates used to calculate the average values are as follows.

One line of firewall log is about **0.3 KB** (it can vary by firewall vendor). If these logs are normalized, meaning the log management system ingests, processes, and parses them, the size of one log line can grow to **0.7** to **1 KB** (more characters or words per line mean a larger size).

One line of Microsoft log ranges between **0.8 and 1 KB**, because Microsoft includes many parameters in its logs.

These figures apply to systems that use a NoSQL architecture such as *Logsign*. In traditional SQL-based systems, these sizes can reach 3-4 times those of NoSQL architectures.
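As a hedged sketch of how these per-line averages turn an EPS figure into daily log volume (the average EPS of 1,000 is an assumed example; the 0.3 KB raw and 1 KB normalized sizes come from the averages above):

```python
SECONDS_PER_DAY = 86_400

def daily_volume_gb(avg_eps: float, event_size_kb: float) -> float:
    """Approximate daily log volume in GB for a given average EPS."""
    return avg_eps * event_size_kb * SECONDS_PER_DAY / (1024 * 1024)

avg_eps = 1000  # assumed average EPS for illustration

raw = daily_volume_gb(avg_eps, 0.3)         # raw firewall logs, ~0.3 KB/line
normalized = daily_volume_gb(avg_eps, 1.0)  # after normalization, ~1 KB/line

print(f"Raw:        {raw:.1f} GB/day")
print(f"Normalized: {normalized:.1f} GB/day")
```

At 1,000 EPS the difference between raw and normalized sizes already amounts to tens of gigabytes per day, which is why normalization must be factored into the sizing estimate.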

**5) Calculating the Disk Size**

Using the information above, a simple method can be used to calculate the average disk size the system will need.

In the *Logsign* architecture, you can follow the steps below to calculate the minimum disk size.

The amount of data to be kept in the index and in the archive must be specified or predicted in advance. Since the index data of the last 15 days are critical for security, it's recommended to keep at least 15 days of index data.

The relation between the index and archive data is as follows:

The index data are the logs saved without compression, and the archive data are the compressed (zipped) ones. While the compression ratio changes with the type of data, traditional calculations assume a ratio between **1/10 and 1/20**.

**NOTE:** While some data can be compressed at a ratio of 1/40, other data may not compress at all.

**The Formula**

It can be calculated as **Total Data = (Daily index size * Number of days) + (Daily archive size * Number of days)**

For example, think about a system that the index data are kept for 15 days, and the archive data will be saved for 1 year. The daily index size is considered as 100 GB, and the compression rate is considered as 1/20 for this system.

The formula must be as below;

Index size = 100 GB * 15 = 1.5 TB

Daily archive size = 100 GB * 1/20 = 5 GB

Archive data size = 5 GB * 365 ≈ 1.8 TB

Total data size = Index + Archive = 1.5 TB + 1.8 TB ≈ 3.3 TB
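The worked example above can be reproduced with a short script (all figures match the example; 1 TB is taken as 1,000 GB, as in the text):

```python
# Inputs from the worked example above.
index_retention_days = 15
archive_retention_days = 365
daily_index_gb = 100
compression_ratio = 1 / 20  # archive is ~1/20 the index size

# Total Data = (Daily index size * Number of days)
#            + (Daily archive size * Number of days)
index_gb = daily_index_gb * index_retention_days        # 1,500 GB = 1.5 TB
daily_archive_gb = daily_index_gb * compression_ratio   # 5 GB/day
archive_gb = daily_archive_gb * archive_retention_days  # 1,825 GB ≈ 1.8 TB

total_tb = (index_gb + archive_gb) / 1000
print(f"Total data size: {total_tb:.2f} TB")  # ≈ 3.3 TB
```

Changing the retention periods or the compression ratio in this sketch shows immediately how sensitive the total is to those assumptions.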

**NOTE:** The system supports compression in the index architecture.

**NOTE:** It's highly recommended to keep an extra 500 GB of space for possible peaks in the system.

**The Factors that Affect the Calculation of Disk Size**

**1. Log Normalization:** The log size will increase if the log contains geographical information, and it may also vary with the length of the vendor's log lines.

**2. Filtered Values:** If unwanted data are predefined for the logs received from the sources, those values will not be stored, so the logs can be saved at smaller sizes.

**3. Redundancy:** This feature can be used optionally in *Logsign*. If the same log is captured more than once within a defined time interval, **Logsign** will keep only the number of copies you have specified. For example, if the same log is captured 15 times within a 1-second period, you can have **Logsign** keep just 5 of them.

**!!! Please contact Logsign technical support about the calculation of disk size !!!**
