Migrating an AWS EC2 NFS-based Deployment via a Staging System
Overview
Prerequisites
Create the 3.7.x CMDB Archive
Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
Mounting the NFS Storage on Supervisors and Workers
Change the IP Addresses Associated with Your Virtual Appliances
Registering Workers to the Supervisor
Setting the 4.2.1 SVN Password to the 3.7.x Password
In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1 system, and the collectors are redirected to it. The upgraded 4.2.1 system then becomes the production system, while the old 3.7.x system can be decommissioned. The collectors can then be upgraded one by one. The advantages of this method are minimal downtime during which incidents aren't triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages are the additional hardware needed to set up the mirror 3.7.x system, and the longer time to complete the upgrade because of the time needed to set up that mirror system.
Prerequisites
Contact AccelOps Support to reset your license
Take a snapshot of your 3.7.x installation for recovery purposes if needed (an AWS example is sketched after this list)
Make sure the 3.7.x virtual appliance has Internet access
Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.
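If your deployment runs in AWS, one way to take the recovery snapshot is to create an AMI of the 3.7.x instance. A minimal sketch using the AWS CLI; the instance ID, image name, and description are placeholders, not values from your deployment:

    # Create an AMI of the running 3.7.x instance (instance ID and name are illustrative)
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "accelops-3.7.x-pre-migration" \
        --description "Recovery snapshot before 4.2.1 migration"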
Create the 3.7.x CMDB Archive
- Log in to your running 3.7.x production AccelOps virtual appliance as root.
- Change the directory to /root.
- Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
- Untar the migration script.
- Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
- Run the archive script, specifying the directory where you want the archive files to be created (see the command sketch after this list).
- Check that the archived files were successfully created in the destination directory.
You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.
- Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you specified when you ran the archive script.
- Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.
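Taken together, the archive steps look like the following sketch. The destination directory /root/archive, the archiver script's argument, and the target hostnames are assumptions; check the script's usage output on your system:

    cd /root
    tar xvf ao-db-migration-4.2.1.tar             # unpack the migration scripts
    chown root:root ao-db-migration.sh ao-db-migration-archiver.sh
    ./ao-db-migration-archiver.sh /root/archive   # destination directory is illustrative
    ls -l /root/archive                           # expect cmdb-migration-*.tar and opt-migration-*.tar
    # copy the archives to the staging and 4.2.1 Supervisors (hostnames are placeholders)
    scp /root/archive/cmdb-migration-*.tar root@staging-supervisor:/root/archive/
    scp /root/archive/opt-migration-*.tar root@supervisor-421:/root/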
Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
- Log in to your 4.2.1 virtual appliance as root.
- Change the directory to /opt/phoenix/deployment/.
- Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar (see the sketch after this list).
- When the migration script completes, the virtual appliance will reboot.
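A minimal sketch of the restore step; the file names are placeholders for the migration files you copied to /root, and the argument order is an assumption:

    cd /opt/phoenix/deployment/
    ./post-ao-db-migration.sh /root/phoenixdb_migration_xyz /root/opt-migration-<timestamp>.tar
    # the virtual appliance reboots when the script finishes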
Mounting the NFS Storage on Supervisors and Workers
Follow this process for each Supervisor and Worker in your deployment.
- Log in to your virtual appliance as root over SSH.
- Run the mount command to check the mount location.
- Edit the /etc/fstab file on the Supervisor or Worker so that the NFS entry uses the 3.7.x mount path location (see the sketch after this list).
- Reboot the Supervisor or Worker.
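A sketch of what this looks like on an appliance; the NFS server address, export name, and mount point are illustrative, not values from your deployment:

    mount | grep nfs    # note where the NFS export is currently mounted
    # example /etc/fstab entry, edited to use the 3.7.x mount path
    #   10.0.0.10:/fsiem-data   /data   nfs   defaults   0   0
    reboot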
Change the IP Addresses Associated with Your Virtual Appliances
- Log in to the AWS EC2 dashboard.
- Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
- Click Disassociate Address, and then Yes, Disassociate.
- In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
- Click Disassociate Address, and then Yes, Disassociate.
- In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.
The virtual appliance will reboot automatically after the IP address is changed.
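If you prefer to script these steps instead of using the EC2 console, they map to the AWS CLI as sketched below. The association, allocation, and instance IDs are placeholders:

    # detach the Elastic IPs from the 4.2.1 and 3.7.x instances
    aws ec2 disassociate-address --association-id eipassoc-0aaa11112222bbbb3
    aws ec2 disassociate-address --association-id eipassoc-0ccc44445555dddd6
    # attach the 3.7.x production IP to the 4.2.1 instance
    aws ec2 associate-address --allocation-id eipalloc-0eee77778888ffff9 \
        --instance-id i-0123456789abcdef0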
Registering Workers to the Supervisor
- Log in to the Supervisor as admin.
- Go to Admin > License Management.
- Under VA Information, click Add, and add the Worker.
- Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal (a command-line check is sketched after this list).
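As an additional check, you can log in to each appliance over SSH and, if the phstatus utility is available on your appliance version, confirm that all backend processes are running; this tool is an assumption here, not part of the original steps:

    # on each Supervisor and Worker
    phstatus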
Setting the 4.2.1 SVN Password to the 3.7.x Password
- Log in to the 4.2.1 Supervisor as root over SSH.
- Change the directory to /opt/phoenix/deployment/jumpbox.
- Run the SVN password reset script ./phsetsvnpwd.sh.
- Enter the following full admin credentials to reset the SVN password (the interactive session is sketched below):
Organization: Super
User: admin
Password:****
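Put together, the session looks like the following sketch; supply your own 3.7.x admin password at the final prompt:

    cd /opt/phoenix/deployment/jumpbox
    ./phsetsvnpwd.sh
    # at the prompts, supply the full admin credentials:
    #   Organization: Super
    #   User: admin
    #   Password: <3.7.x admin password>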
Migration is now complete. Verify that all devices, user-created rules, reports, and dashboards were migrated successfully.