
Using NFS Storage with AccelOps

When you install FortiSIEM, you have the option to use either local storage or NFS storage. For cluster deployments using Workers, the use of an NFS Server is required for the Supervisor and Workers to communicate with each other. These topics describe how to set up and configure NFS servers for use with FortiSIEM.

Configuring NFS Storage for VMware ESX Server

This topic describes the steps for installing an NFS server on CentOS Linux 6.x and higher for use with VMware ESX Server. If you are using an operating system other than CentOS Linux, follow your typical procedure for NFS server set up and configuration.

  1. Log in to CentOS 6.x as root.
  2. Create a new directory in the large volume to share with the FortiSIEM Supervisor and Worker nodes, and change the access permissions to provide FortiSIEM with access to the directory.
  3. Check the shared directories (see the example commands after this list).
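For example, a minimal sketch of these steps on CentOS 6.x might look like the following. The share path /data and the 10.0.0.0/24 FortiSIEM subnet are assumptions; substitute the directory and network used in your deployment.

# mkdir -p /data
# chmod 755 /data
# echo "/data   10.0.0.0/24(rw,no_root_squash)" >> /etc/exports
# exportfs -ar
# showmount -e localhost
# chkconfig nfs on && service nfs start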

Related Links

Setting Up NFS Storage in AWS

 

Using NFS Storage with Amazon Web Services

Setting Up NFS Storage in AWS

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

Setting Up NFS Storage in AWS

Youtube Talk on NFS Architecture for AWS

An AWS Solutions Architect presents several architecture and partner options for setting up NFS storage that is highly available across Availability Zone failures in this talk (40 min); a link to the slides is also provided.

Using EBS Volumes

These instructions cover setting up EBS volumes for NFS storage. EBS volumes have a durability guarantee that is 10 times higher than traditional disk drives. This is because EBS data is replicated within an availability zone to protect against component failures (the equivalent of RAID), so adding another layer of RAID does not provide higher durability guarantees. EBS has an annual failure rate (AFR) of 0.1 to 0.5%. In order to have higher durability guarantees, it is necessary to take periodic snapshots of the volumes. Snapshots are stored in AWS S3, which has 99.999999999% durability (via synchronous replication of data across multiple data centers) and 99.99% availability. See the topic Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS for more information.

Using EC2 Reserved Instances for Production

If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Review these configuration options:
Network and Subnet Select the VPC you set up for your instance.
Public IP Clear the option Automatically assign a public IP address to your instances if you want to use VPN.
Placement Group A placement group is a logical grouping for your cluster instances. Placement groups have low latency, full-bisection 10Gbps bandwidth between instances. Select an existing group or create a new one.
Shutdown Behavior Make sure Stop is selected.
Enable Termination Protection Make sure Protect Against Accidental Termination is selected.
EBS Optimized Instance An EBS optimized instance enables dedicated throughput between Amazon EBS and Amazon EC2, providing improved performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  4. Click Next: Add Storage.
  5. Add EBS volumes up to the capacity you need for EventDB storage.

EventDB Storage Calculation Example

At 5000 EPS, you can calculate daily storage requirements to amount to roughly 22-30GB (300k events are 15-20MB on average in compressed format stored in EventDB). In order to have 6 months of data available for querying, you need 4-6TB of storage. On AWS, the maximum EBS volume size is 1TB. In order to have larger disks, you need to create software RAID-0 volumes. You can attach at most 8 volumes to an instance, which results in 8TB with RAID-0. There is no advantage in using a RAID configuration other than RAID-0, because it does not increase durability guarantees. In order to ensure much better durability guarantees, plan on performing regular snapshots, which store the data in S3 as described in Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS. Since RAID-0 stripes data across these volumes, the aggregate IOPS you get will be the sum of the IOPS on the individual volumes.
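As a rough check of these numbers: 5,000 EPS x 86,400 seconds is about 432 million events per day; at 15-20 MB per 300,000 compressed events that is roughly 1,440 x 15-20 MB, or about 22-29 GB per day, and 180 days at roughly 25 GB per day comes to approximately 4.5 TB for six months.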

  6. Click Next: Tag Instance.
  7. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  8. Click Next: Configure Security Group.
  9. Select Select an Existing Security Group, and then select the default security group for your VPC.

FortiSIEM needs access to HTTPS over port 443 for GUI and API access,  and access to SSH over port 22 for remote management, which are set in the default security group. This group will allow traffic between all instances within the VPC.
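If you manage security groups from the AWS CLI instead of the console, the two inbound rules can be added roughly as follows; the security group ID and source CIDR below are placeholders.

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.0/24
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24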

  10. Click Review and Launch.
  11. Review all your instance configuration information, and then click Launch.
  12. Select an existing or create a new Key Pair to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine from where you usually connect to these AWS instances.

  13. Click Launch Instances.
  14. When the EC2 Dashboard reloads, check that all your instances are up and running.
  15. Select the NFS server instance and click Connect.
  16. Follow the instructions to SSH into the instance as described in Configuring the Supervisor and Worker Nodes in AWS, and then configure the NFS mount point access to give the FortiSIEM internal IP full access.
# Update the OS and libraries with the latest patches
$ sudo yum update -y
$ sudo yum install -y nfs-utils nfs-utils-lib lvm2
$ sudo su -
# echo Y | mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdf /dev/sdg /dev/sd
# mdadm --detail --scan > /etc/mdadm.conf
# cat /etc/mdadm.conf
# dd if=/dev/zero of=/dev/md0 bs=512 count=1
# pvcreate /dev/md0
# vgcreate VolGroupData /dev/md0
# lvcreate -l 100%vg -n LogVolDataMd0 VolGroupData
# mkfs.ext4 -j /dev/VolGroupData/LogVolDataMd0
# echo "/dev/VolGroupData/LogVolDataMd0 /data       ext4    defaults        1 1" >> /etc/fstab
# mkdir /data
# mount /data
# df -kh
# vi /etc/exports
/data   10.0.0.0/24(rw,no_root_squash)
# exportfs -ar
# chkconfig --levels 2345 nfs on
# chkconfig --levels 2345 rpcbind on
# service rpcbind start
Starting rpcbind:                                          [  OK  ]
# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

In order to have high durability guarantees for FortiSIEM data, you should periodically create EBS snapshots on an hourly, daily, or weekly basis and store them in S3. The EventDB is typically hosted as a RAID-0 volume of several EBS volumes, as described in Setting Up NFS Storage in AWS. In order to reliably snapshot these EBS volumes together, you can use a script, ec2-consistent-snapshot, to briefly freeze the volumes and create a snapshot. You can then use a second script, ec2-expire-snapshots, to schedule cron jobs to delete old snapshots that are no longer needed. CMDB is hosted on a much smaller EBS volume, and you can also use the same scripts to take snapshots of it.

You can find details of how to download these scripts and set up periodic snapshots and expiration in this blog post: http://twigmon.blogspot.com/2013/09/installing-ec2-consistent-snapshot.html

You can download the scripts from these GitHub projects:

https://github.com/alestic/ec2-consistent-snapshot
https://github.com/alestic/ec2-expire-snapshots
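As a rough sketch of the resulting schedule once the scripts are installed, the cron entries might look like the following. The volume IDs, region, install paths, retention, and even the exact option names are assumptions here; check each script's documentation for the real options.

# /etc/cron.d/fortisiem-ebs-snapshots (illustrative only)
0 * * * *  root  /usr/local/bin/ec2-consistent-snapshot --region us-east-1 --description "EventDB hourly" vol-1111aaaa vol-2222bbbb vol-3333cccc vol-4444dddd
30 0 * * * root  /usr/local/bin/ec2-expire-snapshots --region us-east-1 --keep-most-recent 48 vol-1111aaaa vol-2222bbbb vol-3333cccc vol-4444dddd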

FortiSIEM General Installation

General Installation

Configuring Worker Settings

If you are using a FortiSIEM clustered deployment that includes both Workers and Collectors, you must define the Address of your Worker nodes before you register any Collectors. When you register your Collectors, the Worker information will be retrieved and saved locally to the Collector. The Collector will then upload event and configuration change information to the Worker.

Worker Address in a Non-Clustered Environment

If you are not using a FortiSIEM clustered deployment, you will not have any Worker nodes. In that case, enter the IP address of the Supervisor for the Worker Address, and your Collectors will upload their information directly to the Supervisor.

  1. Log in to your Supervisor node.
  2. Go to Admin > General Settings > System.
  3. For Worker Address, enter a comma-separated list of IP addresses or host names for the Workers.

The Collector will attempt to upload information to the listed Workers, starting with the first Worker address and proceeding until it finds an available Worker.
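For example, a Worker Address value of 10.10.1.21,10.10.1.22 (or worker1.example.com,worker2.example.com) causes a Collector to try 10.10.1.21 first and fall back to 10.10.1.22 if the first Worker is unreachable. These addresses are placeholders; use the addresses of your own Workers, or of the Supervisor in a non-clustered deployment.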

 

Registering the Supervisor
  1. In a Web browser, navigate to the Supervisor’s IP address: https://<Supervisor IP>
  2. Enter the login credentials associated with your FortiSIEM license, and then click Register.
  3. When the System is ready message appears, click the Here link to log in to FortiSIEM.
  4. Enter the default login credentials.
User ID admin
Password admin*1
Cust/Org ID super
  5. Go to Admin > Cloud Health and check that the Supervisor Health is Normal.
Registering the Worker
  1. Go to Admin > License Management > VA Information.
  2. Click Add, enter the new Worker’s IP address, and then click OK.
  3. When the new Worker is successfully added, click OK.

You will see the new Worker in the list of Virtual Appliances.

  4. Go to Admin > Cloud Health and check that the Worker Health is Normal.
Registering the Collector to the Supervisor

The process for registering a Collector node with your Supervisor node depends on whether you are setting up the Collector as part of an enterprise or multi-tenant deployment. For a multi-tenant deployment, you must first create an organization and add Collectors to it before you register them with the Supervisor. For an enterprise deployment, you install the Collector within your IT infrastructure and then register it with the Supervisor.

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments

Register the Collector with the Supervisor for Enterprise Deployments

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > Setup Wizard > Organizations.
  3. Click Add.
  4. Enter Organization Name, Admin User, Admin Password, and Admin Email.
  5. Under Collectors, click New.
  6. Enter the Collector Name, Guaranteed EPS, Start Time, and End Time.
  7. Click Save.

The newly added organization and Collector should be listed on the Organizations tab.

  8. In a Web browser, navigate to https://<Collector-IP>:5480.
  9. Enter the Collector setup information.
Name Collector Name
User ID Organization Admin User
Password Organization Admin Password
Cust/Org ID Organization Name
Cloud URL Supervisor URL

 

  10. Click

The Collector will restart automatically after registration succeeds.

  11. In the Supervisor interface, go to Admin > Collector Health and check that the Collector Health is Normal.
Register the Collector with the Supervisor for Enterprise Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > License Management and check that Collectors are allowed by the license.
  3. Go to Setup Wizard > General Settings and add at least the Supervisor’s IP address.

This should contain a list of the Supervisor and Worker accessible IP addresses or FQDNs.

  4. Go to Setup Wizard > Event Collector and add the Collector information.
Setting Description
Name Will be used in step 6
Guaranteed EPS This is the number of Events per Second (EPS) that this Collector will be provisioned for
Start Time Select Unlimited
End Time Select Unlimited
  5. Connect to the Collector at https://<IP Address of the Collector>:5480.
  6. Enter the Name from step 4.
  7. Userid and Password are the same as the admin userid/password for the Supervisor.
  8. The IP address is the IP address of the Supervisor.
  9. For Organization, enter Super.
  10. The Collector will reboot during the registration, and you will be able to see its status on the Collector Health page.

FortiSIEM Installing a Collector on Bare Metal Hardware

Installing a Collector on Bare Metal Hardware

You can install Collectors on bare metal hardware (that is, without a hypervisor layer). Be sure to read the section on Hardware Requirements for Collectors in Browser Support and Hardware Requirements before starting the installation process.

  1. Download the Linux collector ISO image from https://images.FortiSIEM.net/VMs/releases/CO/.
  2. Burn the ISO to a DVD so that you can boot from it to begin the setup.
  3. Before you begin the installation, make sure the host where you want to install the Collector has an Internet connection.
  4. Log into the server where you want to install the Collector as root and make sure your boot DVD is loaded.
  5. Go to /etc/yum.repos.d and make sure these configuration files are in the directory:

CentOS-Base.repo

CentOS-Debuginfo.repo

CentOS-Media.repo

CentOS-Vault.repo

  6. The system will reboot itself when installation completes.
  7. Follow the instructions in Registering the Collector to the Supervisor to complete the Collector set up.

FortiSIEM Installing in VMware ESX

Installing in VMware ESX

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

Setting the Network Time Protocol (NTP) for ESX

It’s important that your Virtual Appliance has the accurate time in order to correlate events from multiple devices within the environment.

  1. Log in to your VMWare ESX server.
  2. Select your ESX host server.
  3. Click the Configuration tab.
  4. Under Software, select Time Configuration.
  5. Click Properties.
  6. Select NTP Client Enabled.
  7. Click Options.
  8. Under General, select Start automatically.
  9. Under NTP Setting, click ...
  10. Enter the IP address of the NTP servers to use.

 

  11. Click Restart NTP service.
  12. Click OK to apply the changes.
Installing a Supervisor, Worker, or Collector Node in ESX

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node is the same. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. See Configuring NFS Storage for VMware ESX Server for more information. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

When you’re finished with the specific hypervisor setup process, you need to complete your installation by following the steps described under General Installation.

 

 

 

 

Importing the Supervisor, Collector, or Worker Image into the ESX Server

  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Log in to the VMware vSphere Client.
  3. In the File menu, select Deploy OVF Template.
  4. Browse to the .ova file (example: FortiSIEM-VA-4.3.1.1145.ova) and select it.

On the OVF Details page you will see the product and file size information.

  5. Click Next.
  6. Click Accept to accept the “End User Licensing Agreement,” and then click Next.
  7. Enter a Name for the Supervisor or Worker, and then click Next.
  8. Select a Storage location for the installed file, and then click Next.

 

Running on VMWare ESX 6.0

If you are importing FortiSIEM VA, Collector, or Report Server images for VMware on an ESXi 6.0 host, you will also need to “Upgrade VM Compatibility” to ESXi 6.0. If the VM is already started, you need to shut down the VM and use the “Actions” menu to do this. Due to an incompatibility introduced by VMware, our Collector VM processes restarted and the Collector could not register with the Supervisor. Similar problems are likely to occur on the Supervisor, Worker, or Report Server as well, so make sure their VM compatibility is upgraded too. More information about VM compatibility is available in the VMware KB article below:

https://kb.vmware.com/kb/1010675

Editing the Supervisor, Collector, or Worker Hardware Settings

Before you start the Supervisor, Worker, or Collector for the first time you need to make some changes to its hardware settings.

  1. In the VMware vSphere client, select the imported Supervisor, Worker, or Collector.
  2. Right-click on the node to open the Virtual Appliance Options menu, and then select Edit Settings… .
  3. Select the Hardware tab, and check that Memory is set to at least 16 GB and CPUs is set to 8.

Setting Local Storage for the Supervisor

Using NFS Storage

You can install the Supervisor using either native ESX storage or NFS storage. These instructions are for creating native ESX storage. See Configuring NFS Storage for VMware ESX Server for more information. If you are using NFS storage, you will set the IP address of the NFS server when you set the storage mount point during the Configuring the Supervisor, Worker, or Collector from the VM Console process.

  1. On the Hardware tab, click Add.
  2. In the Add Hardware dialog, select Hard Disk, and then click Next.
  3. Select Create a new virtual disk, and then click Next.
  4. Check that these selections are made in the Create a Disk dialog:
Disk Size 300GB

See the Hardware Requirements for Supervisor and Worker Nodes in the Browser Support and Hardware Requirements topic for more specific disk size recommendations based on Overall EPS.

Disk Provisioning Thick Provision Lazy Zeroed
Location Store to the Virtual Machine
  5. In the Advanced Options dialog, make sure that the Independent option for Mode is not selected.
  6. Check all the options for creating the virtual disk, and then click Finish.
  7. In the Virtual Machine Properties dialog, click OK. The Reconfigure virtual machine task will launch.

Troubleshooting Tips for Supervisor Installations

Check the Supervisor System and Directory Level Permissions
Check Backend System Health

Check the  Supervisor System and Directory Level Permissions

Use SSH to connect to the Supervisor and check that the cmdb, data, query, querywkr, and svn permissions match those in this table:

 

[root@super ~]# ls -l /
dr-xr-xr-x.   2 root     root      4096 Oct 15 11:09 bin
dr-xr-xr-x.   5 root     root      1024 Oct 15 14:50 boot
drwxr-xr-x    4 postgres postgres  4096 Nov 10 18:59 cmdb
drwxr-xr-x    9 admin    admin     4096 Nov 11 11:32 data
drwxr-xr-x   15 root     root      3560 Nov 10 11:11 dev
-rw-r--r--    1 root     root        34 Nov 11 12:09 dump.rdb
drwxr-xr-x.  93 root     root     12288 Nov 11 12:12 etc
drwxr-xr-x.   4 root     root      4096 Nov 10 11:08 home
dr-xr-xr-x.  11 root     root      4096 Oct 15 11:13 lib
dr-xr-xr-x.   9 root     root     12288 Nov 10 19:13 lib64
drwx------.   2 root     root     16384 Oct 15 14:46 lost+found
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 media
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 mnt
drwxr-xr-x.  10 root     root      4096 Nov 10 09:37 opt
drwxr-xr-x    2 root     root      4096 Nov 10 11:10 pbin
dr-xr-xr-x  289 root     root         0 Nov 10 11:13 proc
drwxr-xr-x    8 admin    admin     4096 Nov 11 00:37 query
drwxr-xr-x    8 admin    admin     4096 Nov 10 18:58 querywkr
dr-xr-x---.   7 root     root      4096 Nov 10 19:13 root
dr-xr-xr-x.   2 root     root     12288 Oct 15 11:08 sbin
drwxr-xr-x.   2 root     root      4096 Oct 15 14:47 selinux
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 srv
drwxr-xr-x    4 apache   apache    4096 Nov 10 18:58 svn
drwxr-xr-x   13 root     root         0 Nov 10 11:13 sys
drwxrwxrwt.   9 root     root      4096 Nov 11 12:12 tmp
drwxr-xr-x.  15 root     root      4096 Oct 15 14:58 usr
drwxr-xr-x.  21 root     root      4096 Oct 15 11:01 var

 

Check that the /data, /cmdb, and /svn directory level permissions match those in this table:

 

[root@super ~]# ls -l /data
drwxr-xr-x 3 root     root     4096 Nov 11 02:52 archive
drwxr-xr-x 3 admin    admin    4096 Nov 11 12:01 cache
drwxr-xr-x 2 postgres postgres 4096 Nov 10 18:46 cmdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 custParser
drwxr-xr-x 5 admin    admin    4096 Nov 11 00:29 eventdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 jmxXml
drwxr-xr-x 2 admin    admin    4096 Nov 11 11:33 mibXml

[root@super ~]# ls -l /cmdb
drwx------ 14 postgres postgres  4096 Nov 10 11:08 data

[root@super ~]# ls -l /svn
drwxr-xr-x 6 apache apache  4096 Nov 10 18:58 repos

 

Check Backend System Health

Use SSH to connect to the supervisor and run phstatus to see if the system status metrics match those in this table:

 

 

[root@super ~]# phstatus

Every 1.0s: /opt/phoenix/bin/phstatus.py

System uptime:  12:37:58 up 17:24,  1 user,  load average: 0.06, 0.01, 0.00

Tasks: 20 total, 0 running, 20 sleeping, 0 stopped, 0 zombie

Cpu(s): 8 cores, 0.6%us, 0.7%sy, 0.0%ni, 98.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st

Mem: 16333720k total, 5466488k used, 10867232k free, 139660k buffers

Swap: 6291448k total, 0k used, 6291448k free, 1528488k cached

PROCESS                  UPTIME         CPU%           VIRT_MEM       RES_MEM
phParser                 12:00:34       0              1788m          280m
phQueryMaster            12:00:34       0              944m           63m
phRuleMaster             12:00:34       0              596m           85m
phRuleWorker             12:00:34       0              1256m          252m
phQueryWorker            12:00:34       0              1273m          246m
phDataManager            12:00:34       0              1505m          303m
phDiscover               12:00:34       0              383m           32m
phReportWorker           12:00:34       0              1322m          88m
phReportMaster           12:00:34       0              435m           38m
phIpIdentityWorker       12:00:34       0              907m           47m
phIpIdentityMaster       12:00:34       0              373m           26m
phAgentManager           12:00:34       0              881m           200m
phCheckpoint             12:00:34       0              98m            23m
phPerfMonitor            12:00:34       0              700m           40m
phReportLoader           12:00:34       0              630m           233m
phMonitor                31:21          0              1120m          25m
Apache                   17:23:23       0              260m           11m
Node.js                  17:20:54       0              656m           35m
AppSvr                   17:23:16       0              8183m          1344m
DBSvr                    17:23:34       0              448m           17m

 

 

Configuring the Supervisor, Worker, or Collector from the VM Console
  1. In the VMware vSphere client, select the Supervisor, Worker, or Collector virtual appliance.
  2. Right-click to open the Virtual Appliance Options menu, and then select Power > Power On.
  3. In the Virtual Appliance Options menu, select Open Console.
  4. In the VM console, select Set Timezone and then press Enter.
  5. Select your Location, and then press Enter.
  6. Select your Country, and then press Enter.
  7. Select your Timezone, and then press Enter.
  8. Review your Timezone information, select 1, and then press Enter.
  9. When the Configuration screen reloads, select Login, and then press Enter.
  10. Enter the default login credentials.
Login root
Password ProspectHills
  11. Run the vami_config_net script to configure the network.

 

  12. When prompted, enter the information for these network components to configure the Static IP address: IP Address, Netmask, Gateway, DNS Server(s).
  13. Enter the Host name, and then press Enter.
  14. For the Supervisor, set either the Local or NFS storage mount point.

For a Worker, use the same IP address of the NFS server you set for the Supervisor.

Supervisor Local storage /dev/sdd
NFS storage <NFS_Server_IP_Address>:/<Directory_Path>
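For example, if the NFS server built in Configuring NFS Storage for VMware ESX Server has the IP address 192.168.20.10 and exports /data, the NFS storage mount point would be entered as 192.168.20.10:/data (the address here is a placeholder for your own NFS server).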

 

After you set the mount point, the Supervisor will automatically reboot, and in 15 to 25 minutes the Supervisor will be successfully configured.

ISO Installation

These topics cover installation of FortiSIEM from an ISO under a native file system such as Linux, also known as installing “on bare metal.”

Installing a Collector on Bare Metal Hardware

FortiSIEM Installing in Microsoft Hyper-V

Installing in Microsoft Hyper-V

These topics describe how to install FortiSIEM on a Microsoft Hyper-V virtual server.

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Supported Versions

FortiSIEM has been tested to run on Hyper-V on Microsoft Windows 2012.

 

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Using Local or NFS Storage for EventDB in Hyper-V

Before you install a FortiSIEM virtual appliance in Hyper-V, you should decide whether you plan to use NFS storage or local storage to store event information in EventDB. If you decide to use a local disk, you can add a data disk of appropriate size. Typically, this will be named /dev/sdd if it is the 4th disk. When using a local disk, choose the ‘Dynamically expanding’ (VHDX) format so that you are able to resize the disk if your EventDB grows beyond the initial capacity.

If you are going to use NFS storage for EventDB, follow the instructions in the topic Configuring NFS Storage for VMware ESX Server.

Disk Formats for Data Storage

FortiSIEM virtual appliances in Hyper-V use dynamically expanding VHD disks for the root and CMDB partitions, and a dynamically expanding VHDX disk for EventDB. Dynamically expanding disks are used to keep the exported Hyper-V image within reasonable limits. See the Microsoft documentation topic Performance Tuning Guidelines for Windows Server 2012 (or R2) for more information.

  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Start Hyper-V Manager.
  3. In the Action menu, select Import Virtual Machine.

The Import Virtual Machine Wizard will launch.

  4. Click Next.
  5. Browse to the folder containing the OVA package, and then click Next.
  6. Select the FortiSIEM image, and then click Next.
  7. For Import Type, select Copy the virtual machine, and then click
  8. Select the storage folders for your virtual machine files, and then click Next.
  9. Select the storage folder for your virtual machine’s hard disks, and then click Next.
  10. Verify the installation configuration, and then click Finish.
  11. In Hyper-V Manager, connect to the FortiSIEM virtual appliance and power it on.
  12. Follow the instructions in Configuring the Supervisor, Worker, or Collector from the VM Console to complete the installation.

Related Links

Configuring the Supervisor, Worker, or Collector from the VM Console

FortiSIEM Installing in Linux KVM

Installing in Linux KVM

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node in Linux KVM is the same as installing these nodes under VMware ESX, and so you should follow the instructions in Installing a Supervisor, Worker, or Collector Node in ESX. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Setting up a Network Bridge for Installing AccelOps in KVM

Importing the Supervisor, Collector, or Worker Image into KVM

Configuring Supervisor Hardware Settings in KVM

Setting up a Network Bridge for Installing AccelOps in KVM

If FortiSIEM is the first guest on KVM, then a bridge network may be required to enable network connectivity. For details see the KVM documentation provided by IBM.

In these instructions, br0 is the initial bridge network, em1 is connected as a management network, and em4 is connected to your local area network.

  1. In the KVM host, go to the directory /etc/sysconfig/network-scripts/.
  2. Create a bridge network config file ifcfg-br0.

 

DEVICE=br0

BOOTPROTO=none

NM_CONTROLLED=yes

ONBOOT=yes

TYPE=Bridge

NAME="System br0"

  3. Edit network config file ifcfg-em4.

 

DEVICE=em4

BOOTPROTO=shared

NM_CONTROLLED=no

ONBOOT=yes

TYPE=Ethernet

UUID="24078f8d-67f1-41d5-8eea-xxxxxxxxxxxx"

IPV6INIT=no

USERCTL=no

DEFROUTE=yes

IPV4_FAILURE_FATAL=yes

NAME="System em4"

HWADDR=F0:4D:00:00:00:00
BRIDGE=br0

  4. Restart the network service.
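For example, on a Red Hat style KVM host the restart and a quick verification might look like this (the interface names follow the br0/em4 example above):

# service network restart
# brctl show          (br0 should list em4 as an attached interface)
# ip addr show br0    (confirm the bridge has the expected address)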
Importing the Supervisor, Collector, or Worker Image into KVM
  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Start the KVM Virtual Machine Manager.
  3. Select and right-click on a host to open the Host Options menu, and then select New.
  4. In the New VM dialog, enter a Name for your FortiSIEM node.
  5. Select Import existing disk image, and then click Forward.
  6. Browse to the location of the OVA package and select it.
  7. Choose the OS Type and Version you want to use with this installation, and then click Forward.
  8. Allocate Memory and CPUs to the FortiSIEM node as recommended in the topic Browser Support and Hardware Requirements, and then click Forward.
  9. Confirm the installation configuration of your node, and then click Finish.
Configuring Supervisor Hardware Settings in KVM
  1. In KVM Virtual Machine Manager, select the FortiSIEM Supervisor, and then click Open.
  2. Click the Information icon to view the Supervisor hardware settings.
  3. Select the Virtual Network Interface.
  4. For Source Device, select an available bridge network.

See Setting up a Network Bridge for Installing FortiSIEM in KVM for more information.

  5. For Device model, select Hypervisor default, and then click Apply.
  6. In the Supervisor Hardware settings, select Virtual Disk.
  7. In the Virtual Disk dialog, open the Advanced options, and for Disk bus, select IDE.
  8. Click Add Hardware, and then select Storage.
  9. Select the Select managed or other existing storage option, and then browse to the location for your storage.

You will want to set up a disk for both CMDB (60GB) and SVN (60GB). If you are setting up FortiSIEM Enterprise, you may also want to create a storage disk for EventDB, with Storage format set to Raw.
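If you prefer to create the EventDB disk from the KVM host shell instead of through Virtual Machine Manager, a raw image can be pre-created and attached along these lines; the image path, size, and domain name are placeholders for your own values.

# qemu-img create -f raw /var/lib/libvirt/images/fortisiem-eventdb.img 300G
# virsh attach-disk FortiSIEM-Super /var/lib/libvirt/images/fortisiem-eventdb.img vdb --persistent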

  10. In the KVM Virtual Machine Manager, connect to the FortiSIEM Supervisor and power it on.
  11. Follow the instructions in Configuring the Supervisor, Worker, or Collector from the VM Console to complete the installation.

Related Links

Configuring the Supervisor, Worker, or Collector from the VM Console

Hypervisor Installations

Hypervisor Installations

Topics in this section cover the instructions for importing the AccelOps disk image into specific hypervisors and configuring the AccelOps virtual appliance. See the topics under General Installation for information on installation tasks that are common to all hypervisors.

Installing in Amazon Web Services (AWS)

Determining the Storage Type for EventDB in AWS

Configuring Local Storage in AWS for EventDB

Setting Up Supervisor, Worker and Collector Nodes in AWS

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

Setting up a Network Bridge for Installing AccelOps in KVM

Importing the Supervisor, Collector, or Worker Image into KVM

Configuring Supervisor Hardware Settings in KVM

Importing a Supervisor, Collector, or Worker Image into Microsoft Hyper-V

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

Installing in Amazon Web Services (AWS)

You Must Use an Amazon Virtual Private Cloud with AccelOps

You must set up a Virtual Private Cloud (VPC) in Amazon Web Services for FortiSIEM deployment rather than classic-EC2. FortiSIEM does not support installation in classic-EC2. See the Amazon VPC documentation for more information on setting up and configuring a VPC. See Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS for information on how to prevent the public IPs of your instances from changing when they are stopped and started.

Using NFS Storage with Amazon Web Services

If the aggregate EPS for your FortiSIEM installation requires a cluster (a FortiSIEM virtual appliance + Worker nodes), then you must set up an NFS server. If your storage requirements for the EventDB are more than 1TB, it is strongly recommended that you use an NFS server where you can configure LVM+RAID0. For more information, see Setting Up NFS Storage in AWS.

 

Determining the Storage Type for EventDB in AWS

Configuring Local Storage in AWS for EventDB

Setting Up Supervisor, Worker and Collector Nodes in AWS

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

Note: SVN password reset issue after system reboot for FortiSIEM 3.7.6 customers in AWS Virtual Private Cloud (VPC)

FortiSIEM uses SVN to store monitored device configurations. In AWS VPC setup, we have noticed that FortiSIEM SVN password gets changed if the system reboots – this prevents FortiSIEM from storing new configuration changes and viewing old configurations. The following procedure can be used to reset the SVN password to FortiSIEM factory default so that FortiSIEM can continue working correctly.

This script needs to be run only once.

  1. Log on to the Supervisor.
  2. Copy the attached “ao_svnpwd_reset.sh” script to the Supervisor on your EC2+VPC deployment.
  3. Stop all backend processes before running the script by issuing the following command: phtools --stop all
  4. Run following command to change script permissions: “chmod +x ao_svnpwd_reset.sh”
  5. Execute “ao_svnpwd_reset.sh” as root user: “./ao_svnpwd_reset.sh”
  6. The system will reboot
  7. Check SVN access to make sure that old configurations can be viewed.
Determining the Storage Type for EventDB in AWS

If the aggregate EPS for your FortiSIEM installation requires a cluster (a virtual appliance +  Worker nodes), then you must set up an NFS server as described in Using NFS Storage with Amazon Web Services. If your storage requirement for EventDB is more than 1TB, it is recommended that you use an NFS server where you can configure LVM+RAID0, which is also described in those topics. Although it is possible to set up a similar LVM+RAID0 on the FortiSIEM virtual appliance itself, this has not been tested.

Here’s an example of how to calculate storage requirements: At 5000 EPS, you can calculate daily storage requirements to be about 22-30GB (300k events take roughly 15-20MB on average in compressed format stored in eventDB). So, in order to have 6 months of data available for querying, you need to have 4 – 6TB of storage.

If you only need one FortiSIEM node and your storage requirements are lower than 1TB and are not expected to ever grow beyond this limit, you can avoid setting up an NFS server and use a local EBS volume for EventDB. For this option, see the topic Configuring Local Storage in AWS for EventDB.

Configuring Local Storage in AWS for EventDB

Create the Local Storage Volume

Attach the Local Storage Volume to the Supervisor

Create the Local Storage Volume

  1. Log in to AWS.
  2. In the EC2 dashboard, click Volumes.
  3. Click Create Volume.
  4. Set Size to 100 GB to 1 TB (depending on storage requirement).
  5. Select the same Availability Zone region as the FortiSIEM Supervisor instance.
  6. Click Create.

Attach the Local Storage Volume to the Supervisor

  1. In the EC2 dashboard, select the local storage volume.
  2. In the Actions menu, select Attach Volume.
  3. For Instance, enter the Supervisor ID.
  4. For Device, enter /dev/xvdi.
  5. Click Attach.
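If you prefer to script these two operations, the equivalent AWS CLI calls look roughly like this; the size, Availability Zone, volume ID, instance ID, and device name below are placeholders to replace with your own values.

$ aws ec2 create-volume --size 500 --availability-zone us-east-1a --volume-type gp2
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdi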

 

Setting Up Supervisor, Worker and Collector Nodes in AWS

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node is the same. Since Worker nodes are only used in deployments that use NFS storage, you should first configure your Supervisor node to use NFS storage, and then configure your Worker node using the Supervisor NFS mount point as the mount point for the Worker. See Configuring NFS Storage for VMware ESX Server for more information. Collector nodes are only used in multi-tenant deployments, and need to be registered with a running Supervisor node.

Setting Up AWS Instances

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

Configuring the Supervisor and Worker Nodes in AWS

Registering the Collector to the Supervisor in AWS

When you’re finished with the specific hypervisor setup process, you need to complete your installation by following the steps described under General Installation.


Setting Up AWS Instances


  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Click Community AMIs and search for the AMI ID associated with your version of FortiSIEM. The latest AMI IDs are on the image server where you download the other hypervisor images.
  4. Click Select.
  5. Click Compute Optimized.

Using C3 Instances

You should select one of the C3 instances with a Network Performance rating of High, or 10Gb performance. The current generation of C3 instances runs on the latest Intel Xeons that AWS provides. If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  6. Click Next: Configure Instance Details.
  7. Review these configuration options:
Network and Subnet Select the VPC you set up for your instance.
Number of Instances For enterprise deployments, set to 1. For a configuration of 1 Supervisor + 2 Workers, set to 3. You can also add instances later to meet your needs.
Public IP Clear the option Automatically assign a public IP address to your instances if you want to use VPN.
Placement Group A placement group is a logical grouping for your cluster instances. Placement groups have low latency, full-bisection 10Gbps bandwidth between instances. Select an existing group or create a new one.
EBS Optimized Instance An EBS optimized instance enables dedicated throughput between Amazon EBS and Amazon EC2, providing improved performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  8. Click Next: Add Storage.
  9. For Size, Volume Type, and IOPS, set options for your configuration.
  10. Click Next: Tag Instance.
  11. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  12. Click Next: Configure Security Group.
  13. Select Select an Existing Security Group, and then select the default security group for your VPC.

FortiSIEM needs access to HTTPS over port 443 for GUI and API access,  and access to SSH over port 22 for remote management, which are set in the default security group. This group will allow traffic between all instances within the VPC.

  14. Click Review and Launch.
  15. Review all your instance configuration information, and then click Launch.
  16. Select an existing or create a new Key Pair to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine from where you usually connect to these AWS instances.

  17. Click Launch Instances.
  18. When the EC2 Dashboard reloads, check that all your instances are up and running.
  19. All your instances will be tagged with the Name you assigned in Step 11. Select an instance to rename it according to its role in your deployment.
  20. For all types of instances, follow the instructions to SSH into the instances as described in Configuring the Supervisor and Worker Nodes in AWS, and then run the script sh to check the health of the instances.

Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS

You need to create VPC-based Elastic IPs and attach them to your nodes so the public IPs don’t change when you stop and start instances.

  1. Log in to the Amazon VPC Console.
  2. In the navigation pane, click Elastic IPs.
  3. Click Allocate New Address.
  4. In the Allocate New Address dialog box, in the Network platform list, select EC2-VPC, and then click Yes, Allocate.
  5. Select the Elastic IP address from the list, and then click Associate Address.
  6. In the Associate Address dialog box, select the network interface for the NAT instance. Select the address to associate the EIP with from the Private IP address list, and then click Yes, Associate.

Configuring the Supervisor and Worker Nodes in AWS

  1. From the EC2 dashboard, select the instance, and then click Connect.
  2. Select Connect with a standalone SSH client, and follow the instructions for connecting with an SSH client.

For the connection command, follow the example provided in the connection dialog, but substitute the FortiSIEM root user name for ec2-user@xxxxxx. The ec2-user name is used only for the Amazon Linux NFS server.
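For example, the standalone SSH command typically looks like the following, where the key file and address are placeholders:

$ ssh -i /path/to/your-keypair.pem root@<Supervisor_or_Worker_Elastic_IP>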

  1. SSH to the Supervisor.
  2. Run cd /opt/phoenix/deployment/jumpbox/aws.
  3. Run the script pre-deployment.sh to configure host name and NFS mount point.
  4. Accept the License Agreements.
NFS Storage <NFS Server IP>:/data

For <NFS Server IP>, use the 10.0.0.X IP address of the NFS Server running within the VPC

Local Storage /dev/xvdi
  1. The system will reboot.
  2. Log in to the Supervisor.
  3. Register the Supervisor by following steps in
  4. Run cd /opt/phoenix/deployment/jumpbox/aws.
  5. Run the script sh (now includes running post-deployment.sh automatically).
  6. The system will reboot and is now ready.
  7. To install a worker node, follow steps 1-9 and the worker is ready
  8. To add a Worker to the cluster (assume Worker is already installed)
    1. Log in to the FortiSIEM GUI
    2. Go to Admin > License Management > VA Information
    3. Click Add
    4. Enter the private address of the Worker Node

Registering the Collector to the Supervisor in AWS

  1. Locate a Windows machine on AWS.
  2. Open a Remote desktop session from your PC to that Windows machine on AWS.
  3. Within the remote desktop session, launch a browser and navigate to https://<Collector-IP>:5480
  4. Enter the Collector setup information.
Name Collector Name
User ID Admin User
Password Admin Password
Cust/Org ID Organization Name
Cloud URL Supervisor URL
  5. Click

The Collector will restart automatically after registration succeeds.

Browser Support and Hardware Requirements

Browser Support and Hardware Requirements

Supported Operating Systems and Browsers

Hardware Requirements for Supervisor and Worker Nodes

Hardware Requirements for Collector Nodes

Hardware Requirements for Report Server Nodes

Supported Operating Systems and Browsers

These are the browsers and operating systems that are supported for use with the FortiSIEM web client.

OS Supported | Browsers Supported
Windows | Firefox, Chrome, Internet Explorer 11.x, Microsoft Edge
Mac OS X | Firefox, Chrome, Safari
Linux | Firefox, Chrome

 

Hardware Requirements for Supervisor and Worker Nodes

The FortiSIEM Virtual Appliance can be installed using either storage configured within the ESX server or NFS storage. See the topic Configuring NFS Server for more information on working with NFS storage.

Event Data Storage Requirements

The storage requirement shown in the Event Data Storage column is only for the eventdb data, but the /data partition also includes CMDB backups and queries. You should set the /data partition to a larger amount of storage to accommodate for this.

Encryption for Communication Between FortiSIEM Virtual Appliances

All communication between Collectors that are installed on-premises and FortiSIEM Supervisors and Workers is secured by TLS 1.2 encryption. Communications are managed by OpenSSL/Apache HTTP Server/mod_ssl on the Supervisor/Worker side, and libcurl, using the NSS library for SSL, on the Collector side. The FortiSIEM Supervisors/Workers use an RSA certificate with 2048-bit keys by default.

 

You can control the exact ciphers used for communications between virtual appliances by editing the SSLCipherSuite section in the file /etc/httpd/conf.d/ssl.conf on FortiSIEM Supervisors and Workers. You can test the cipher suite for your Supervisor or Worker using the following nmap command:

nmap --script ssl-cert,ssl-enum-ciphers -p 443 <super_or_worker_fqdn>
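For example, to restrict the appliances to stronger ciphers you could set directives such as the following in /etc/httpd/conf.d/ssl.conf and then restart Apache. The exact cipher string shown here is an assumption and should follow your own security policy.

SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:!aNULL:!MD5:!RC4

# service httpd restart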

Calculating Events per Second (EPS) and Exceeding the License Limit

AccelOps calculates the EPS for your system using a counter that records the total number of received events in a three minute time interval. Every second, a thread wakes up and checks the counter value. If the counter is less than 110% of the license limit (using the calculation 1.1 x EPS License x 180) , then AccelOps will continue to collect events. If you exceed 110% of your licensed EPS, events are dropped for the remainder of the three minute window, and an email notification is triggered. At the end of the three minute window the counter resets and resumes receiving events.
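For example, with a 5,000 EPS license the threshold is 1.1 x 5,000 x 180 = 990,000 events per three-minute window. A device mix sending a sustained 6,000 EPS would reach 990,000 events after 165 seconds, so events arriving during the remaining 15 seconds of that window would be dropped and an email notification triggered.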

Overall EPS | Quantity | Host SW | Processor | Memory | OS/App and CMDB Storage | Event Data Storage (1 year)
1,500 | 1 | ESXi (4.0 or later preferred) | 4 Core 3 GHz, 64 bit | 16 GB (24 GB for 4.5.1+) | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | 3 TB
4,500 | 1 | ESXi (4.0 or later preferred) | 4 Core 3 GHz, 64 bit | 16 GB (24 GB for 4.5.1+) | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | 8 TB
7,500 | 1 Super, 1 Worker | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 12 TB
10,000 | 1 Super, 1 Worker | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 17 TB
20,000 | 1 Super, 3 Workers | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 34 TB
30,000 | 1 Super, 5 Workers | ESXi (4.0 or later preferred) | Super: 8 Core 3 GHz, 64 bit; Worker: 4 Core 3 GHz, 64 bit | Super: 24 GB; Worker: 16 GB | Super: 200GB (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 200GB (80GB OS/App) | 50 TB
Higher than 30,000 | Consult FortiSIEM
Hardware Requirements for Collector Nodes
Component | Quantity | Host SW | Processor | Memory | OS/App Storage
Collector | 1 | ESX | 2 Core 2 GHz, 64 bit | 4 GB | 40 GB
Collector | 1 | Native Linux (Suggested Platform: Dell PowerEdge R210 Rack Server) | 2 Core, 64 bit | 4 GB | 40 GB
Hardware Requirements for Report Server Nodes
Component | Quantity | Host SW | Processor | Memory | OS/App Storage | Reports Data Storage (1 year)
Report Server | 1 | ESX | 8 Core 3 GHz, 64 bit | 16 GB | 200GB (80GB OS/App, 60GB CMDB, 60GB SVN) | See recommendations under Hardware Requirements for Supervisor and Worker Nodes

 

 

 

Information Prerequisites for All FortiSIEM Installations

You should have this information ready before you begin installing the FortiSIEM virtual appliance on ESX:

  1. The static IP address and subnet mask for your FortiSIEM virtual appliance.
  2. The IP address of NFS mount point and NFS share name if using NFS storage. See the topics Configuring NFS Storage for VMware ESX Server and Setting Up NFS Storage in AWS for more information.
  3. The FortiSIEM host name within your local DNS server.
  4. The VMWare ESX datastore location where the virtual appliance image will be stored if using ESX storage.