Configuring virtual clustering with two VDOMs and VDOM partitioning – web-based manager

These procedures assume you are starting with two FortiGate units with factory default settings.

 

To configure the FortiGate units for HA operation

1. Register and apply licenses to the FortiGate unit. This includes FortiCloud activation, FortiClient and FortiToken licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs).

2. You can also install any third-party certificates on the primary FortiGate before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to the backup FortiGate.

3. On the System Information dashboard widget, beside Host Name select Change.

4. Enter a new Host Name for this FortiGate unit.

 

New Name                     FGT_ha_1

5. Select OK.

6. Go to System > HA and change the following settings.

Mode                                           Active-Passive

Group Name                              vexample2.com

Password                                   vHA_pass_2

7. Select OK.

The FortiGate unit negotiates to establish an HA cluster. When you select OK you may temporarily lose connectivity with the FortiGate unit as the HA cluster negotiates and the FGCP changes the MAC address of the FortiGate unit interfaces (see Cluster virtual MAC addresses). The MAC addresses of the FortiGate interfaces change to the following virtual MAC addresses:

  • port1 interface virtual MAC: 00-09-0f-09-00-00
  • port10 interface virtual MAC: 00-09-0f-09-00-01
  • port11 interface virtual MAC: 00-09-0f-09-00-02
  • port12 interface virtual MAC: 00-09-0f-09-00-03
  • port13 interface virtual MAC: 00-09-0f-09-00-04
  • port14 interface virtual MAC: 00-09-0f-09-00-05
  • port15 interface virtual MAC: 00-09-0f-09-00-06
  • port16 interface virtual MAC: 00-09-0f-09-00-07
  • port17 interface virtual MAC: 00-09-0f-09-00-08
  • port18 interface virtual MAC: 00-09-0f-09-00-09
  • port19 interface virtual MAC: 00-09-0f-09-00-0a
  • port2 interface virtual MAC: 00-09-0f-09-00-0b
  • port20 interface virtual MAC: 00-09-0f-09-00-0c
  • port3 interface virtual MAC: 00-09-0f-09-00-0d
  • port4 interface virtual MAC: 00-09-0f-09-00-0e
  • port5 interface virtual MAC: 00-09-0f-09-00-0f
  • port6 interface virtual MAC: 00-09-0f-09-00-10
  • port7 interface virtual MAC: 00-09-0f-09-00-11
  • port8 interface virtual MAC: 00-09-0f-09-00-12
  • port9 interface virtual MAC: 00-09-0f-09-00-13
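The addresses above follow a recognizable pattern: a fixed 00-09-0f-09 prefix, a byte for the HA group ID (0 in this example), and a final byte matching the interface's position when the port names are sorted alphabetically (port1, port10, …, port19, port2, port20, port3, …). The following Python sketch reproduces the list under that assumption; it illustrates the pattern and is not FortiOS code.

```python
# Illustrative sketch of the virtual MAC pattern shown above.
# Assumptions: prefix 00-09-0f-09, fifth byte = HA group ID (0 here),
# last byte = the interface's index in the alphabetically sorted name list.

def fgcp_virtual_macs(interfaces, group_id=0):
    """Map each interface name to its assumed FGCP virtual MAC."""
    return {
        name: f"00-09-0f-09-{group_id:02x}-{index:02x}"
        for index, name in enumerate(sorted(interfaces))
    }

ports = [f"port{n}" for n in range(1, 21)]
macs = fgcp_virtual_macs(ports)
print(macs["port1"])   # 00-09-0f-09-00-00
print(macs["port2"])   # 00-09-0f-09-00-0b (port2 sorts after port19)
print(macs["port20"])  # 00-09-0f-09-00-0c
```

This also explains why port2 appears so late in the list: alphabetically, "port2" sorts after "port19".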

To reconnect sooner, you can update the ARP table of your management PC by deleting the ARP table entry for the FortiGate unit (or by deleting all ARP table entries). On most operating systems you can do this from a command prompt using a command similar to arp -d.

You can use the get hardware nic (or diagnose hardware deviceinfo nic) CLI command to view the virtual MAC address of any FortiGate unit interface. For example, use the following command to view the port1 interface virtual MAC address (Current_HWaddr) and the port1 permanent MAC address (Permanent_HWaddr):

get hardware nic port1

 

MAC: 00:09:0f:09:00:00

Permanent_HWaddr: 02:09:0f:78:18:c9
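If you gather this output through a script rather than an interactive session, the two addresses can be extracted with a short parser. A minimal sketch, assuming the field labels match the sample above (MAC and Permanent_HWaddr); some builds label the current address Current_HWaddr instead, so the pattern keys on whatever label precedes the address.

```python
import re

# Sample `get hardware nic port1` output, as shown above.
SAMPLE = """\
MAC: 00:09:0f:09:00:00
Permanent_HWaddr: 02:09:0f:78:18:c9
"""

def parse_nic_macs(output):
    """Return {field_label: mac} for lines shaped like 'Label: aa:bb:cc:dd:ee:ff'."""
    pattern = r"^(\w+):\s*((?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2})\s*$"
    return dict(re.findall(pattern, output, re.MULTILINE))

macs = parse_nic_macs(SAMPLE)
print(macs["MAC"])               # current (virtual) MAC while in the cluster
print(macs["Permanent_HWaddr"])  # factory-assigned MAC
```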

 

8. Power off the first FortiGate unit.

9. Repeat these steps for the second FortiGate unit.

Set the second FortiGate unit host name to:

New Name                                  FGT_ha_2

To connect the cluster to the network

1. Connect the port1 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the Internet.

2. Connect the port5 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the Internet.

You could use the same switch for the port1 and port5 interfaces.

3. Connect the port2 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the internal network.

4. Connect the port6 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the engineering network.

5. Connect the port3 interfaces of the cluster units together. You can use a crossover Ethernet cable or regular Ethernet cables and a switch.

6. Connect the port4 interfaces of the cluster units together. You can use a crossover Ethernet cable or regular Ethernet cables and a switch.

7. Power on the cluster units.

The units start and negotiate to choose the primary unit and the subordinate unit. This negotiation occurs with no user intervention.

When negotiation is complete you can continue.

Configuring HA for virtual clustering

If your cluster uses VDOMs, you are configuring virtual clustering. Most virtual cluster HA options are the same as normal HA options. However, virtual clusters include VDOM partitioning options. Other differences between configuration options for regular HA and for virtual clustering HA are described below.

To configure HA options for a cluster with VDOMs enabled:

  • Log into the global web-based manager and go to System > HA.
  • From the CLI, log into the Global Configuration:

The following example shows how to configure active-active virtual clustering:

config global
config system ha
set mode a-a
set group-name vexample1.com
set password vHA_pass_1
end
end

The following example shows how to configure active-passive virtual clustering:

config global
config system ha
set mode a-p
set group-name vexample1.com
set password vHA_pass_1
end
end

The following example shows how to configure VDOM partitioning for virtual clustering. In the example, the FortiGate unit is configured with three VDOMs (domain_1, domain_2, and domain_3) in addition to the root VDOM. The example shows how to set up a basic HA configuration that sets the device priority of virtual cluster 1 to 200. The example also shows how to enable vcluster2, how to set the device priority of virtual cluster 2 to 100 and how to add the virtual domains domain_2 and domain_3 to virtual cluster 2.

When you enable multiple VDOMs, vcluster2 is enabled by default. Even so, the command to enable vcluster2 is included in this example in case it has been disabled. When vcluster2 is enabled, override is also enabled.

The result of this configuration would be that the cluster unit that you are logged into becomes the primary unit for virtual cluster 1. This cluster unit processes all traffic for the root and domain_1 virtual domains.

config global
config system ha
set mode a-p
set group-name vexample1.com
set password vHA_pass_1
set priority 200
set vcluster2 enable
config secondary-vcluster
set vdom domain_2 domain_3
set priority 100
end
end
end

The following example shows how to use the execute ha manage command to change the device priorities for virtual cluster 1 and virtual cluster 2 for the other unit in the cluster. The commands set the device priority of virtual cluster 1 to 100 and virtual cluster 2 to 200.

The result of this configuration would be that the other cluster unit becomes the primary unit for virtual cluster 2. This other cluster unit would process all traffic for the domain_2 and domain_3 virtual domains.

 

config global
execute ha manage 1
config system ha
set priority 100
set vcluster2 enable
config secondary-vcluster
set priority 200
end
end
end
end
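The net effect of the two priority configurations can be modelled with a simple rule: with override enabled, the unit with the higher device priority becomes primary for that virtual cluster. The sketch below uses this simplification (real FGCP negotiation also weighs monitored interfaces, age, and serial number); the unit names reuse FGT_ha_1 and FGT_ha_2 from this document for illustration.

```python
# Simplified model of VDOM partitioning by device priority.
# Assumption: with override enabled, the higher-priority unit becomes
# primary for a virtual cluster; real FGCP negotiation considers more
# criteria (monitored interfaces, age, serial number).

def pick_primary(priorities):
    """priorities: {unit_name: device_priority} -> name of the primary unit."""
    return max(priorities, key=priorities.get)

# Device priorities from the two examples above
# (first unit: 200 / 100, other unit: 100 / 200):
vcluster1 = {"FGT_ha_1": 200, "FGT_ha_2": 100}  # root and domain_1
vcluster2 = {"FGT_ha_1": 100, "FGT_ha_2": 200}  # domain_2 and domain_3

print(pick_primary(vcluster1))  # FGT_ha_1
print(pick_primary(vcluster2))  # FGT_ha_2
```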

 

Example virtual clustering with two VDOMs and VDOM partitioning

This section describes how to configure the example virtual clustering configuration shown below. This configuration includes two virtual domains, root and Eng_vdm, and includes VDOM partitioning that sends all root VDOM traffic to FGT_ha_1 and all Eng_vdm VDOM traffic to FGT_ha_2. The traffic from the internal network and the engineering network is distributed between the two FortiGate units in the virtual cluster. If one of the cluster units fails, the remaining unit processes traffic for both VDOMs.

The procedures in this example describe some of many possible sequences of steps for configuring virtual clustering. For simplicity many of these procedures assume that you are starting with new FortiGate units set to the factory default configuration. However, this is not a requirement for a successful HA deployment. FortiGate HA is flexible enough to support a successful configuration from many different starting points.

 

Example virtual clustering network topology

The following figure shows a typical FortiGate HA virtual cluster consisting of two FortiGate units (FGT_ha_1 and FGT_ha_2) connected to an internal network, an engineering network, and the Internet. To simplify the diagram, the heartbeat connections are not shown.

The traffic from the internal network is processed by the root VDOM, which includes the port1 and port2 interfaces. The traffic from the engineering network is processed by the Eng_vdm VDOM, which includes the port5 and port6 interfaces. VDOM partitioning is configured so that all traffic from the internal network is processed by FGT_ha_1 and all traffic from the engineering network is processed by FGT_ha_2.

This virtual cluster uses the default FortiGate heartbeat interfaces (port3 and port4).

 

Example virtual cluster showing VDOM partitioning

General configuration steps

This section includes web-based manager and CLI procedures. These procedures assume that the FortiGate units are running the same FortiOS firmware build and are set to the factory default configuration.

 

General configuration steps

1. Apply licenses to the FortiGate units that will become the cluster.

2. Configure the FortiGate units for HA operation.

  • Optionally change each unit’s host name.
  • Configure HA.

3. Connect the cluster to the network.

4. Configure VDOM settings for the cluster:

  • Enable multiple VDOMs.
  • Add the Eng_vdm VDOM.
  • Add port5 and port6 to Eng_vdm.

5. Configure VDOM partitioning.

6. Confirm that the cluster units are operating as a virtual cluster and add basic configuration settings to the cluster.

  • View cluster status from the web-based manager or CLI.
  • Add a password for the admin administrative account.
  • Change the IP addresses and netmasks of the port1, port2, port5, and port6 interfaces.
  • Add a default route to each VDOM.

 

Active-active HA cluster in Transparent mode

This section describes a simple HA network topology that includes an HA cluster of two generic FortiGate units installed between an internal network and the Internet and running in Transparent mode.

 

Example Transparent mode HA network topology

The figure below shows a Transparent mode FortiGate HA cluster consisting of two FortiGate units (FGT_ha_1 and FGT_ha_2) installed between the Internet and internal network. The topology includes a router that performs NAT between the internal network and the Internet. The cluster management IP address is 10.11.101.100.

 

Transparent mode HA network topology

Port3 and port4 are used as the heartbeat interfaces. Because the cluster consists of two FortiGate units, you can make the connections between the heartbeat interfaces using crossover cables. You could also use switches and regular Ethernet cables.

 

General configuration steps

This section includes web-based manager and CLI procedures. These procedures assume that the FortiGate units are running the same FortiOS firmware build and are set to the factory default configuration.

In this example, the configuration steps are identical to the NAT/Route mode configuration steps until the cluster is operating. When the cluster is operating, you can switch to Transparent mode and add basic configuration settings to the cluster.

 

General configuration steps

1. Apply licenses to the FortiGate units that will become the cluster.

2. Configure the FortiGate units for HA operation.

  • Optionally change each unit’s host name.
  • Configure HA.

3. Connect the cluster to the network.

4. Confirm that the cluster units are operating as a cluster.

5. Switch the cluster to Transparent mode and add basic configuration settings to the cluster.

  • Switch to Transparent mode, add the management IP address and a default route.
  • Add a password for the admin administrative account.
  • View cluster status from the web-based manager or CLI.