
Upgrading a FortiSIEM Single Node Deployment

These instructions cover the upgrade process for a FortiSIEM Enterprise deployment with a single Supervisor.

  1. Using SSH, log in to the FortiSIEM virtual appliance as the root user.

Your console will display the progress of the upgrade process.

  2. When the upgrade process is complete, your FortiSIEM virtual appliance will reboot.
  3. Log in to your virtual appliance, and on the Admin > Cloud Health page, check that you are running the upgraded version of FortiSIEM.

Upgrading a FortiSIEM Cluster Deployment

Overview

Upgrading Supervisors and Workers

Upgrading Collectors

Overview

Follow these steps when upgrading a VA cluster:

  1. Shut down all Workers. Collectors can remain up and running.
  2. Upgrade the Supervisor first (while all Workers are shut down).
  3. After the Supervisor is up and running, upgrade the Workers one by one.
  4. Upgrade the Collectors.

Step 1 prevents the accumulation of report files while the Supervisor is unavailable during the upgrade in step 2 (a sketch of step 1 follows below). If these steps are not followed, the Supervisor may not be able to come up after the upgrade because of excessive unprocessed report file accumulation.
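
A minimal sketch of step 1, assuming SSH access as root and placeholder Worker hostnames (worker1, worker2):

  ssh root@worker1 "shutdown -h now"
  ssh root@worker2 "shutdown -h now"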

Note: Both Super and Worker MUST be on the same FortiSIEM version, else various software modules may not work properly. However, Collectors can remain on older versions; they will work, except that they may not have the latest discovery and performance monitoring features available in the Super/Worker version. FortiSIEM therefore recommends that you also upgrade Collectors within a short period of time.

If you have Collectors in your deployment, make sure you have configured an image server to use as a repository for the Collector upgrade images (see Setting Up the Image Server for Collector Upgrades).

Upgrading Supervisors and Workers

For both Supervisor and Worker nodes, follow the upgrade process described here, but be sure to upgrade the Supervisor node first.

  1. Using SSH, log in to the FortiSIEM virtual appliance as the root user.

Your console will display the progress of the upgrade process.

  2. When the upgrade process is complete, your FortiSIEM virtual appliance will reboot.
  3. Log in to your virtual appliance, and on the Admin > Cloud Health page, check that you are running the upgraded version of FortiSIEM.

Upgrading Collectors

The process for upgrading Collectors is similar to the process for Supervisors and Workers, but you must initiate the Collector upgrade from the Supervisor.

  1. Log in to the Supervisor node as an administrator.
  2. Go to Admin > General Settings.
  3. Under Image Server Settings, enter the download path to the upgrade image, and the Username and Password associated with your license.
  4. Go to Admin > Collector Health.
  5. Click Download Image, and then click Yes to confirm the download.

As the download progresses, you can click Refresh to check its status.

  6. When Finished appears in the Download Status column of the Collector Health page, click Install Image.

The upgrade process will begin, and when it completes, your virtual appliance will reboot. The amount of time it takes for the upgrade to complete depends on the network speed between your Supervisor node and the Collectors.

  7. When the upgrade is complete, make sure that your Collector is running the upgraded version of FortiSIEM.

Upgrading FortiSIEM Windows Agent and Agent Manager

Upgrade from V1.0 to V1.1

Upgrade from V1.1 to V2.0

Upgrade from V2.0 to V2.1

Upgrading Windows Agent License

Uninstalling Agents

Upgrade from V1.0 to V1.1

Version 1.0 and 1.1 Backward Incompatibility

Note: 1.0 Agents and Agent Managers communicate only over HTTP, while 1.1 Agents and Agent Managers communicate only over HTTPS. Consequently, 1.1 Agents and Agent Managers are not backward compatible with 1.0 Agents and Agent Managers. You have to completely upgrade the entire system of Agents and Agent Managers.

  1. Uninstall V1.0 Agents.
  2. Close the V1.0 Agent Manager application.
  3. Uninstall V1.0 Agent Manager.
  4. Bind the Default Website to HTTPS as described in the prerequisites in Installing FortiSIEM Windows Agent Manager.
  5. Install the V1.1 Agent Manager following Installing FortiSIEM Windows Agent Manager.
    1. In the Database Settings dialog, enter the V1.0 database path as the “FortiSIEM Windows Agent Manager” SQL Server database path (Procedure Step 6 in Installing FortiSIEM Windows Agent Manager).
    2. Enter the same Administrator username and password (as in the previous installation) in the Agent Manager Administrator account creation dialog.
  6. Install V1.1 Agents.
  7. Assign licenses again, using the Export and Import feature.
Upgrade from V1.1 to V2.0
Windows Agent Manager
  1. Enable TLS 1.2 on Agent Manager: FortiSIEM Supervisor/Worker 4.6.3 and above enforce the use of TLS 1.2 for tighter security. However, by default only SSL 3.0 / TLS 1.0 is enabled in Windows Server 2008 R2. Therefore, enable TLS 1.2 on Windows Agent Manager 2.0 to operate with FortiSIEM Supervisor/Worker 4.6.3 and above.
    1. Start an elevated Command Prompt (i.e., with administrative privileges) on the Windows Agent Manager 1.1 host.
    2. Run the required registry commands sequentially (see the sketch after this list).
    3. Restart the computer.
  2. Uninstall Agent Manager 1.1
  3. Install the SQL Server 2012 SP1 Feature Pack on the Agent Manager host, available at https://www.microsoft.com/en-in/download/details.aspx?id=35
    1. Select the language of your choice and mark the following two MSIs (choose x86 or x64 depending on your platform) for download:
      1. msi
      2. msi
    2. Click the Download button to download the two MSIs, then double-click each MSI to install them one by one.
  4. Install Agent Manager 2.0
    1. In Database Settings dialog, set the old database path as AccelOpsCAC database path.
    2. Enter the same Administrator username and password (as in the previous installation) in the new Agent Manager Administrator account creation dialog.
  5. Run the database migration utility to convert from 1.1 to 2.0.
    1. Open a Command Prompt window.
    2. Go to the installation directory (say, C:\Program Files\AccelOps\Server).
    3. Run AOUpdateManager.exe with script.zip as the command line parameter. You will find script.zip alongside the MSI.
  6. Register Windows Agent Manager 2.0 with FortiSIEM.
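
As a hedged sketch of the registry commands in step 1, the standard way to enable TLS 1.2 on Windows Server 2008 R2 is to set the SCHANNEL registry keys from the elevated Command Prompt; verify against Fortinet's published commands before applying:

  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f

The restart in step 3 is required for these changes to take effect.
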
Windows Agent
  1. Uninstall V1.1 Agents.
  2. Install V2.0 Agents.
Upgrade from V2.0 to V2.1
Windows Agent Manager
  1. Uninstall Agent Manager 2.0
  2. Install Agent Manager 2.1
    1. In Database Settings dialog, set the old database path as AccelOpsCAC database path.
    2. Enter the same Administrator username and password (as in the previous installation) in the new Agent Manager Administrator account creation dialog.
  3. Run the database migration utility to convert from 2.0 to 2.1.
    1. Open a Command Prompt window.
    2. Go to the installation directory (say, C:\Program Files\AccelOps\Server).
    3. Run AOUpdateManager.exe with script.zip as the command line parameter. You will find script.zip alongside the MSI.
  4. Register Windows Agent Manager 2.1 with FortiSIEM.
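
For example, the migration utility run in step 3 might look like this, assuming the default installation directory:

  cd "C:\Program Files\AccelOps\Server"
  AOUpdateManager.exe script.zip
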
Windows Agent
  1. Uninstall V2.0 Agents.
  2. Install V2.1 Agents.
Upgrading Windows Agent License

Follow these steps if you have bought additional Windows Agent licenses or extended the term of the license.

  1. Log in to the AccelOps Supervisor using an admin account.
  2. Go to Admin > License Management and make sure that the license is updated
  3. Go to Admin > Setup Wizard > Windows Agent
  4. Edit each Windows Agent Manager entry and modify the agent count and license expiry date if needed

The new license will be automatically pushed to each Windows Agent Manager. You can then log on to each Windows Agent Manager and allocate the additional licenses if needed.

Uninstalling Agents
Single Agent

Simply uninstall it as you would a regular Windows service.

Multiple Agents using Group Policy

Go to the Group Policy you created during Agent installation. Right-click it and select Edit.

In the Group Policy Management Editor, go to MyGPO > Computer Configuration > Policies > Software Settings > Software Installation.

Right-click FortiSIEM Windows Agent <version>.

Click All Tasks > Remove

In the Remove Software dialog, choose the option Immediately uninstall the software from users and computers, and then click OK.

The FortiSIEM Windows Agent <version> entry will disappear from the right pane. Close the Group Policy Management Editor, and then force the group policy update:

On the Domain Controller, open a command prompt and run gpupdate /force

On each Agent server, open a command prompt and run gpupdate

Restart each Agent computer to complete the uninstall.

Automatic OS Upgrades during Reboot

To patch CentOS and system packages for security updates and bug fixes, and to bring the system on par with a freshly installed FortiSIEM node, the following script is made available. Internet connectivity to CentOS mirrors must be working for the script to succeed; otherwise, the script will print an error and exit. This script is available on all nodes starting from 4.6.3: Supervisor, Workers, Collectors, and Report Server.

/opt/phoenix/phscripts/bin/phUpdateSystem.sh

The above script is also invoked during system boot, from the following script:

/etc/init.d/phProvision.sh

This ensures that the node is up to date right after an upgrade and system reboot. If you are running a node that was first installed in an older release and upgraded to 4.6.3, many OS/system packages will be downloaded and installed the first time, so the upgrade takes longer than usual. On subsequent upgrades and reboots, the updates will be small.

Nodes that are deployed in bandwidth-constrained environments can disable this by commenting out the phUpdateSystem.sh line in phProvision.sh above (see the sketch below). However, it is strongly recommended to keep this in place to ensure that your node has security fixes from CentOS, minimizing the risk of an exploit. Alternatively, in bandwidth-constrained environments, you can deploy a freshly installed Collector to ensure that security fixes are up to date.
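
A minimal sketch of disabling the boot-time update, assuming the line appears verbatim in the init script (inspect the file and verify the exact line before editing):

  # comment out every line that invokes phUpdateSystem.sh
  sed -i '/phUpdateSystem.sh/ s/^/#/' /etc/init.d/phProvision.sh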

Upgrading to 4.6.3 for TLS 1.2

Enforcing TLS 1.2 requires that the following steps be followed in strict order for upgrade to succeed. Additional steps for TLS 1.2 compatibility are marked in bold.

  1. Remove /etc/yum.repos.d/accelops* and run “yum update” on Collectors, Worker(s), and the Supervisor to get all TLS 1.2 related libraries up to date. Follow this yum update order: Collectors, then Worker(s), then the Supervisor (see the sketch after this list).
  2. If your environment has a collector and it is running AccelOps 4.5.2 or earlier (with JDK 1.7), then first patch the Collector for TLS 1.2 compatibility (see here). This step is not required for Collectors running AccelOps 4.6.1 or later.
  3. Pre-upgrade step for upgrading the Supervisor: stop FortiSIEM (previously AccelOps) processes on all Workers by running “phtools --stop ALL”.

Collectors can be up and running. This is to avoid a build-up of report files.

  4. Upgrade the Supervisor following the usual steps.
  5. If your environment has Worker nodes, upgrade the Workers following the usual steps.
  6. If your environment has AccelOps Windows Agents, upgrade Windows Agent Manager from 1.1 to 2.0. Note that there are special pre-upgrade steps to enable TLS 1.2 (see here).
  7. If your environment has Collectors, upgrade the Collectors following the usual steps.
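
A minimal sketch of the yum update in step 1, run on each node in the stated order (Collectors, then Workers, then the Supervisor):

  rm -f /etc/yum.repos.d/accelops*
  yum update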

Setting Up the Image Server for Collector Upgrades

If you want to upgrade a multi-tenant deployment that includes Collectors, you must set up and then specify an image server that will be used as a repository for the Collector upgrade files. You can use a standard HTTP server for this purpose, but there is a preferred directory structure for the server. These instructions describe how to set up that structure, and how to add a reference to the image server in your Supervisor node.

Setting Up the Image Server Directories
  1. Log in to the image server with Admin rights.
  2. Create the directory images/collector/upgrade.
  3. Download the latest Collector image upgrade file from https://images.FortiSIEM.net/upgrade/offline/co/latest4/ to images/collector/upgrade.
  4. Untar the file.
  5. Test the image server location by entering one of the following addresses into a browser:

http://images.myserver.net/vms/collector/upgrade/latest/
https://images.myserver.net/vms/collector/upgrade/latest/
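
A minimal sketch of steps 2-4 on a Linux image server, assuming /var/www/html as the web root and a placeholder package name:

  mkdir -p /var/www/html/images/collector/upgrade
  cd /var/www/html/images/collector/upgrade
  wget https://images.FortiSIEM.net/upgrade/offline/co/latest4/<collector-upgrade-package>.tar
  tar xvf <collector-upgrade-package>.tar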

Setting the Image Server in the Supervisor
  1. Log in to your Supervisor node.
  2. Go to Admin > General Settings > System.
  3. Under Image Server, enter the URL or IP address for your image server.
  4. Enter the authentication credentials for your image server.
  5. Click Save.

Migrating a KVM NFS-based Deployment via a Staging System

Overview

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1 system, and the Collectors are redirected to it. The upgraded 4.2.1 system now becomes the production system, while the old 3.7.x system can be decommissioned. The Collectors can then be upgraded one by one. The advantages of this method are minimal downtime in which incidents aren’t triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages of this method are the extra hardware required to set up the mirror 3.7.x system, and the longer time to complete the upgrade because of the time needed to set up the mirror system.

The steps in this process are:

Overview

Prerequisites

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory.

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in Step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.
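
A minimal sketch of steps 3-6, assuming the destination directory is passed as the archiver script's argument and /root/archive is used:

  cd /root
  tar xvf ao-db-migration-4.2.1.tar
  ls -al ao-db-migration.sh ao-db-migration-archiver.sh
  ./ao-db-migration-archiver.sh /root/archive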

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.
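
For example, step 3 might look like this, with placeholder file names as in the steps above:

  cd /opt/phoenix/deployment
  ./post-ao-db-migration.sh /root/phoenixdb_migration_xyz /root/opt-migration-xyz.tar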

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. Edit the /etc/fstab file on the Supervisor or Worker so that it uses the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.
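
A minimal sketch of steps 2-3, with a placeholder NFS server address and export path:

  mount             # note the current data mount
  vi /etc/fstab     # point the data mount at the 3.7.x NFS location, e.g.:
  # 192.0.2.10:/nfs/fortisiem  /data  nfs  defaults  0  0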

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating Collectors

  1. After migrating all your Supervisors and Workers to 4.2.1, install the 4.2.1 Collectors.
  2. SSH to the 3.7.x Collector as root.
  3. Change the directory to /opt/phoenix/cache/parser/events.
  4. Copy the files from this directory to the same directory on the 4.2.1 system.
  5. Change the directory to /opt/phoenix/cache/parser/upload/svn.
  6. Copy the files from this directory to the same directory on the 4.2.1 system.
  7. Power off the 3.7.x Collector.
  8. SSH to the 4.2.1 Collector and change its IP address to match the 3.7.x Collector by running the vami_config_net script.
  9. In a browser, navigate to https://<4.2.1_Collector_IP_address>:5480 and fill in the administration information to complete the Collector setup.
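
One way to copy the buffered files in steps 4 and 6, assuming SSH connectivity between the Collectors (the target address is a placeholder):

  scp -r /opt/phoenix/cache/parser/events/* root@<4.2.1_Collector_IP>:/opt/phoenix/cache/parser/events/
  scp -r /opt/phoenix/cache/parser/upload/svn/* root@<4.2.1_Collector_IP>:/opt/phoenix/cache/parser/upload/svn/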

 

 

Migrating the SVN Repository to a Separate Partition on a Local Disk

If you are using NFS storage, your SVN repository will be migrated to a local disk to improve performance and reliability. If you are using local storage only, the SVN repository will be moved out of the /data partition and into an /svn partition.

  1. Download the ao-svn-migration.sh script from the image server (https://images.FortiSIEM.net/upgrade/va/4.3.1).
  2. Copy or move the ao-svn-migration.sh script to /root.
  3. Run ls -al to check that root is the owner of ao-svn-migration.sh.
  4. Run chmod to change the permissions on ao-svn-migration.sh to 755.
  5. Reboot the machine.
  6. Log into the Supervisor as root.
  7. When the script executes, you will be asked to confirm that you have 60GB of local storage available for the migration. When the script completes, you will see the message Upgrade Completed. SVN disk migration done.
  8. Run df -h to confirm that the /svn partition was created.
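
A minimal sketch of steps 1-4, assuming the script name can be appended directly to the listed image server path:

  cd /root
  wget https://images.FortiSIEM.net/upgrade/va/4.3.1/ao-svn-migration.sh
  ls -al ao-svn-migration.sh
  chmod 755 ao-svn-migration.sh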

Special pre-upgrade instruction for 4.3.3

  1. SSH as root into the Supervisor node
  2. Download the phupdateinstall-4.3.3.sh script.
  3. Copy or move the phupdateinstall-4.3.3.sh script to /root
  4. Run chmod to change the permissions on phupdateinstall-4.3.3.sh to 755
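
A minimal sketch of steps 3-4:

  cd /root
  chmod 755 phupdateinstall-4.3.3.sh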

Special pre-upgrade instruction for 4.6.1

Instructions for Supervisor node

Run the following command as root.

Instructions for Collector nodes

Run the following command as root on each collector prior to upgrading the collector from the GUI, or the upgrade will fail:

Enabling TLS 1.2 Patch On Old Collectors

Older AccelOps Collectors (4.5.2 or earlier, running JDK 1.7) do not have TLS 1.2 enabled. To enable them to communicate with FortiSIEM 4.6.3, follow these steps:

  1. SSH to the Collector and edit /opt/phoenix/bin/runJavaAgent.sh.
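
As a hedged sketch, a common way to enable TLS 1.2 for JDK 1.7 clients is to add the https.protocols system property to the java command line inside runJavaAgent.sh (the trailing ... stands for the script's existing arguments); verify against Fortinet's published patch instructions before applying:

  java -Dhttps.protocols=TLSv1.1,TLSv1.2 ...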

 

Migrating a KVM NFS-based Deployment In Place

Overview

In this migration method, the production FortiSIEM systems are upgraded in place, meaning that the production 3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during the CMDB archive and upgrade process. During this downtime events are not lost but are buffered at the Collector; however, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you might want to take a snapshot of the CMDB to use as a backup if needed.

The steps for this process are:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools (“phtools --stop ALL”).
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance.

This file will be used during the CMDB restoration process.
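
For example, step 9 might look like this, with placeholder paths for the archived CMDB and the output directory:

  cd /root
  ./ao-db-migration.sh /root/archive/phoenixdb_migration_xyz /root/migrated/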

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. Edit the /etc/fstab file on the Supervisor or Worker so that it uses the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating a KVM Local Disk-based Deployment using an RSYNC Tool

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local disk, and where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x version. This process requires these steps:

Overview

Prerequisites

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

  1. Log in to the 4.2.1 virtual appliance as root.
  2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied over.
  3. Copy the directory /data from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool.
  4. After copying is complete, make sure that the size of the event database is identical to that on the 3.7.x system.
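
One way to perform steps 2-4, assuming root SSH access from the 4.2.1 appliance to the 3.7.x appliance (the source address is a placeholder):

  df -h                                     # confirm enough free space for the copy
  rsync -av root@<3.7.x_IP>:/data/ /data/
  du -sh /data                              # compare with the same directory on the 3.7.x system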

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools (“phtools --stop ALL”).
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. Power off the 3.7.x Supervisor.

The IP address of the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 

Migrating a KVM Local Disk-based Deployment In Place

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local disk, and where you intend to run a 4.2.x version on the same physical machine as the 3.7.x version, but as a new virtual machine. This process requires these steps:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Use More Storage for Your 4.2.1 Virtual Appliance

Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the original 3.7.x version. You will need the extra disk space for copying operations during the migration.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools (“phtools --stop ALL”).
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Removing the Local Disk from the 3.7.x Virtual Appliance

  1. Log in to Virtual Machine Manager.
  2. Select your 3.7.x virtual appliance and power it off.
  3. Open the Hardware properties for your virtual appliance.
  4. Select IDE Disk 2, and then click Remove.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Adding the Local Disk to the 4.2.1 Virtual Appliance

  1. Log in to Virtual Machine Manager.
  2. Select your 4.2.1 virtual appliance and power it off.
  3. Go to the Hardware settings for your virtual appliance and select IDE Disk 3.
  4. Click Remove.
  5. Click Add Hardware.
  6. Select
  7. Select the option to use managed or existing storage, and then browse to the location of the detached 3.7.x disk.
  8. Click Finish.
  9. Select Use an existing virtual disk, and then click Next.
  10. Browse to the location of the migrated virtual disk that was created by the migration script, and then click OK.
  11. Power on the virtual appliance.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. Power off the 3.7.x Supervisor.

The IP address of the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 

Migrating KVM-based deployments

This section covers migrating FortiSIEM KVM-based Virtual Appliances from 3.7.x to 4.2.1. Since FortiSIEM 4.2.1 has a new CentOS version, the procedure is unlike a regular upgrade (say, from 3.7.5 to 3.7.6); certain special procedures have to be followed.

Very broadly, the 3.7.x CMDB has to be first migrated to a 4.2.1 CMDB on a 3.7.x-based system, and then the migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.

There are four choices, based on whether:

An NFS-based or a single virtual appliance based deployment is used

An in-place, staging, or rsync-based method is chosen for data migration

The various methods are explained later, but stated simply:

The staging approach takes more hardware, but minimizes downtime and CMDB migration risk compared to the in-place approach.

The rsync method takes longer to finish, as the event database has to be copied.

If the in-place method is to be used, then taking a snapshot is highly recommended for recovery purposes.

 

Note: Internet access is needed for the migration to succeed. A third-party library needs to access the schema website.

Migrating an AWS EC2 NFS-based Deployment via a Staging System

Overview

Overview

Prerequisites

Create the 3.7.x CMDB Archive

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Change the IP Addresses Associated with Your Virtual Appliances

Registering Workers to the Supervisor

Setting the 4.2.1 SVN Password to the 3.7.x Password

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1 system, and the Collectors are redirected to it. The upgraded 4.2.1 system now becomes the production system, while the old 3.7.x system can be decommissioned. The Collectors can then be upgraded one by one. The advantages of this method are minimal downtime in which incidents aren’t triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages of this method are the extra hardware required to set up the mirror 3.7.x system, and the longer time to complete the upgrade because of the time needed to set up the mirror system.

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory.

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in Step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. Edit the /etc/fstab file on the Supervisor or Worker so that it uses the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Change the IP Addresses Associated with Your Virtual Appliances

  1. Log in to the AWS EC2 dashboard.
  2. Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
  3. Click Disassociate Address, and then Yes, Disassociate.
  4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
  5. Click Disassociate Address, and then Yes, Disassociate.
  6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.

The virtual appliance will reboot automatically after the IP address is changed.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.