
Operating a cluster


With some exceptions, you can operate a cluster in much the same way as you operate a standalone FortiGate unit. This chapter describes those exceptions and also the similarities involved in operating a cluster instead of a standalone FortiGate unit.

 

Operating a cluster

The configurations of all of the FortiGate units in a cluster are synchronized so that the cluster units can simulate a single FortiGate unit. Because of this synchronization, you manage the HA cluster instead of managing the individual cluster units. You manage the cluster by connecting to the web-based manager using any cluster interface configured for HTTPS or HTTP administrative access. You can also manage the cluster by connecting to the CLI using any cluster interface configured for SSH or telnet administrative access.

The cluster web-based manager dashboard displays the cluster name, the host name and serial number of each cluster member, and also shows the role of each unit in the cluster. The roles can be master (primary unit) and slave (subordinate units). The dashboard also displays a cluster unit front panel illustration.

You can also go to System > HA to view the cluster members list. This includes status information for each cluster unit. You can also use the cluster members list for a number of cluster management functions including changing the HA configuration of an operating cluster, changing the host name and device priority of a subordinate unit, and disconnecting a cluster unit from a cluster. See Cluster members list on page 1480.

You can use log messages to view information about the status of the cluster. See Clusters and logging on page 1472. You can use SNMP to manage the cluster by configuring a cluster interface for SNMP administrative access. Using an SNMP manager you can get cluster configuration information and receive traps.

You can configure a reserved management interface to manage individual cluster units. You can use this interface to access the web-based manager or CLI and to configure SNMP management for individual cluster units. See Managing individual cluster units using a reserved management interface on page 1465.

You can manage individual cluster units by using SSH, telnet, or the CLI console on the web-based manager dashboard to connect to the CLI of the cluster. From the CLI you can use the execute ha manage command to connect to the CLI of any unit in the cluster.

You can also manage individual cluster units by using a null-modem cable to connect to any cluster unit CLI. From there you can use the execute ha manage command to connect to the CLI of each unit in the cluster.
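For example, the following CLI sequence (the unit index and priority value are illustrative only) lists the other cluster units and then connects to one of them to change a unit-specific setting such as the device priority:

execute ha manage ?
execute ha manage 1
config system ha
set priority 50
end

The execute ha manage ? command lists the index numbers and serial numbers of the other cluster units; the index that you supply depends on your cluster. When you are finished, typing exit normally returns you to the CLI of the unit that you started from.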

 

Operating a virtual cluster

Managing a virtual cluster is very similar to managing a cluster that does not contain multiple virtual domains. Most of the information in this chapter applies to managing both kinds of clusters. This section describes what is different when managing a virtual cluster.

If virtual domains are enabled, the cluster web-based manager dashboard displays the cluster name and the role of each cluster unit in virtual cluster 1 and virtual cluster 2.

The configuration and maintenance options that you have when you connect to a virtual cluster web-based manager or CLI depend on the virtual domain that you connect to and the administrator account that you use to connect.

If you connect to a cluster as the administrator of a virtual domain, you connect directly to the virtual domain. Since HA virtual clustering is a global configuration, virtual domain administrators cannot see HA configuration options. However, virtual domain administrators see the host name of the cluster unit that they are connecting to on the web browser title bar or CLI prompt. This host name is the host name of the primary unit for the virtual domain. Also, when viewing log messages the virtual domain administrator can select to view log messages for either of the cluster units.

If you connect to a virtual cluster as the admin administrator you connect to the global web-based manager or CLI. Even so, you are connecting to an interface and to the virtual domain that the interface has been added to. The virtual domain that you connect to does not make a difference for most configuration and maintenance operations. However, there are a few exceptions. You connect to the FortiGate unit that functions as the primary unit for the virtual domain. So the host name displayed on the web browser title bar and on the CLI is the host name of this primary unit.

 

Managing individual cluster units using a reserved management interface

You can provide direct management access to all cluster units by reserving a management interface as part of the HA configuration. Once this management interface is reserved, you can configure a different IP address, administrative access and other interface settings for this interface for each cluster unit. Then by connecting this interface of each cluster unit to your network you can manage each cluster unit separately from a different IP address. Configuration changes to the reserved management interface are not synchronized to other cluster units.
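For example, the following is a minimal sketch of reserving a management interface and giving it a unit-specific address. It assumes that port9 is free to use for management, that your build uses the ha-mgmt-status and ha-mgmt-interface keywords, and that the IP address and access settings shown are placeholders for your own values:

config system ha
set ha-mgmt-status enable
set ha-mgmt-interface port9
end
config system interface
edit port9
set ip 10.10.10.1 255.255.255.0
set allowaccess ping https ssh snmp
end

Because reserved management interface settings are not synchronized, repeat the config system interface portion on each cluster unit (for example, from the console or by using execute ha manage), giving each unit a different IP address.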

The reserved management interface provides direct management access to each cluster unit and gives each cluster unit a different identity on your network. This simplifies using external services, such as SNMP, to separately monitor and manage each cluster unit.

The reserved management interface is not assigned an HA virtual MAC address like other cluster interfaces. Instead the reserved management interface retains the permanent hardware address of the physical interface unless you change it using the config system interface command.

The reserved management interface and IP address should not be used for managing a cluster using FortiManager. To correctly manage a FortiGate HA cluster with FortiManager use the IP address of one of the cluster unit interfaces.

If you enable SNMP administrative access for the reserved management interface you can use SNMP to monitor each cluster unit using the reserved management interface IP address. To monitor each cluster unit using SNMP, just add the IP address of each cluster unit’s reserved management interface to the SNMP server configuration. You must also enable direct management of cluster members in the cluster SNMP configuration.
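A sketch of the related CLI settings follows. It assumes that your build supports the ha-direct option under config system ha for sending SNMP and similar management traffic out the reserved management interface; if your build uses a different keyword, adjust accordingly:

config system ha
set ha-direct enable
end
config system snmp sysinfo
set status enable
end

You would then add the reserved management interface IP address of each cluster unit to your SNMP manager as a separate device.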

If you enable HTTPS or HTTP administrative access for the reserved management interfaces you can connect to the web-based manager of each cluster unit. Any configuration changes made to any of the cluster units are automatically synchronized to all cluster units. From a subordinate unit, the web-based manager has the same features as on the primary unit, except that unit-specific information is displayed for the subordinate unit, for example:

  • The Dashboard System Information widget displays the subordinate unit serial number but also displays the same information about the cluster as the primary unit
  • On the Cluster members list (go to System > HA) you can change the HA configuration of the subordinate unit that you are logged into. For the primary unit and other subordinate units you can change only the host name and device priority.
  • Log Access displays the logs of the subordinate unit that you are logged into first. You can use the HA Cluster list to view the log messages of other cluster units, including the primary unit.

If you enable SSH or Telnet administrative access for the reserved management interfaces you can connect to the CLI of each cluster unit. The CLI prompt contains the host name of the cluster unit that you have connected to. Any configuration changes made to any of the cluster units are automatically synchronized to all cluster units. You can also use the execute ha manage command to connect to other cluster unit CLIs.

The reserved management interface is available in NAT/Route and in Transparent mode. It is also available if the cluster is operating with multiple VDOMs. In Transparent mode you cannot normally add an IP address to an interface. However, you can add an IP address to the reserved management interface.

Full mesh HA


This chapter provides an introduction to full mesh HA and also contains general procedures and configuration examples that describe how to configure FortiGate full mesh HA.

The examples in this chapter include example values only. In most cases you will substitute your own values. The examples in this chapter also do not contain detailed descriptions of configuration parameters.

 

Full mesh HA overview

When two or more FortiGate units are connected to a network in an HA cluster, the reliability of the network is improved because the cluster eliminates a single FortiGate unit as a single point of failure: the single FortiGate unit is replaced by a cluster of two or more FortiGate units.

However, even with a cluster, potential single points of failure remain. The interfaces of each cluster unit connect to a single switch and that switch provides a single connection to the network. If the switch fails or if the connection between the switch and the network fails service is interrupted to that network.

The HA cluster still improves the reliability of the network because switches are less complex components than FortiGate units and so are less likely to fail. However, for even greater reliability, you need a configuration that includes redundant connections between the cluster and the networks that it is connected to.

FortiGate models that support 802.3ad Aggregate or Redundant interfaces can be used to create a cluster configuration called full mesh HA. Full mesh HA is a method of reducing the number of single points of failure on a network that includes an HA cluster.

This redundant configuration can be achieved using FortiGate 802.3ad Aggregate or Redundant interfaces and a full mesh HA configuration. In a full mesh HA configuration, you connect an HA cluster consisting of two or more FortiGate units to the network using 802.3ad Aggregate or Redundant interfaces and redundant switches. Each 802.3ad Aggregate or Redundant interface is connected to two switches and both of these switches are connected to the network. In addition you must set up an IEEE 802.1Q (also called Dot1Q) or ISL link between the redundant switches connected to the Aggregate or Redundant interfaces.

The resulting full mesh configuration, an example of which is shown below, includes redundant connections between all network components. If any single component or any single connection fails, traffic automatically switches to the redundant component and connection, and traffic flow resumes.
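As an illustration only, one of the 802.3ad aggregate interfaces in a full mesh configuration might be created as follows. The interface name, member ports, and IP address are assumptions, and your FortiGate model must support aggregate interfaces:

config system interface
edit Agg-internal
set type aggregate
set member port11 port12
set vdom root
set ip 10.11.101.100 255.255.255.0
set allowaccess ping
end

Each member port is then cabled to a different one of the redundant switches, and the two switches are linked with an 802.1Q trunk as described above.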

Troubleshooting virtual clustering


Troubleshooting virtual clusters is similar to troubleshooting any cluster (see FGCP configuration examples and troubleshooting on page 1354). This section describes a few testing and troubleshooting techniques for virtual clustering.

 

To test the VDOM partitioning configuration

You can do the following to confirm that traffic for different VDOMs will be distributed between the two FortiGate units in the virtual cluster. These steps assume the cluster is otherwise operating correctly.

1. Log into the web-based manager or CLI using the IP addresses of interfaces in each VDOM.

Confirm that you have logged into the FortiGate unit that should be processing traffic for that VDOM by checking the HTML title displayed by your web browser or the CLI prompt. Both of these should include the host name of the cluster unit that you have logged into. Also on the system Dashboard, the System Information widget displays the serial number of the FortiGate unit that you logged into. From the CLI the get system status command displays the status of the cluster unit that you logged into.

2. To verify that the correct cluster unit is processing traffic for a VDOM:

  • Add security policies to the VDOM that allow communication between the interfaces in the VDOM.
  • Optionally enable traffic logging and other monitoring for that VDOM and these security policies.
  • Start communication sessions that pass traffic through the VDOM.
  • Log into the web-based manager and go to System > HA > View HA Statistics. Verify that the statistics display shows more active sessions, total packets, network utilization, and total bytes for the unit that should be processing all traffic for the VDOM.
  • Optionally check traffic logging and the Top Sessions Widget for the FortiGate unit that should be processing traffic for that VDOM to verify that the traffic is being processed by this FortiGate unit.
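From the CLI, the following commands can also help confirm which physical unit you are logged into and how the virtual cluster is distributing VDOMs (the exact output varies by model and build):

get system status
get system ha status
diagnose sys ha status

The get system status command shows the host name and serial number of the unit that you are logged into, while the HA status commands summarize the role of each cluster unit.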

 

Configuring inter-VDOM links in a virtual clustering configuration


Configuring inter-VDOM links in a virtual clustering configuration is very similar to configuring inter-VDOM links for a standalone FortiGate unit. The main difference is that the config system vdom-link command includes the vcluster keyword. The default setting for vcluster is vcluster1, so you only have to use the vcluster keyword if you are adding an inter-VDOM link to virtual cluster 2.

 

To add an inter-VDOM link to virtual cluster 1

This procedure describes how to create an inter-VDOM link to virtual cluster 1 that results in a link between the root and vdom_1 virtual domains.

Inter-VDOM links are also called internal point-to-point interfaces.

1. Add an inter-VDOM link called vc1link.

config global

config system vdom-link
edit vc1link

end

Adding the inter-VDOM link also adds two interfaces. In this example, these interfaces are called vc1link0 and vc1link1. These interfaces appear in all CLI and web-based manager interface lists. These interfaces can only be added to virtual domains in virtual cluster 1.

2. Bind the vc1link0 interface to the root virtual domain and bind the vc1link1 interface to the vdom_1 virtual domain.

config system interface
edit vc1link0

set vdom root

next

edit vc1link1

set vdom vdom_1
end

 

To add an inter-VDOM link to virtual cluster 2

This procedure describes how to create an inter-VDOM link to virtual cluster 2 that results in a link between the vdom_2 and vdom_3 virtual domains.

1. Add an inter-VDOM link called vc2link.

config global

config system vdom-link
edit vc2link

set vcluster vcluster2
end

Adding the inter-VDOM link also adds two interfaces. In this example, these interfaces are called vc2link0 and vc2link1. These interfaces appear in all CLI and web-based manager interface lists. These interfaces can only be added to virtual domains in virtual cluster 2.

2. Bind the vc2link0 interface to the vdom_2 virtual domain and bind the vc2link1 interface to the vdom_3 virtual domain.

config system interface
edit vc2link0

set vdom vdom_2
next

edit vc2link1

set vdom vdom_3
end

 

Configuring virtual clustering with two VDOMs and VDOM partitioning – web-based manager


These procedures assume you are starting with two FortiGate units with factory default settings.

 

To configure the FortiGate units for HA operation

1. Register and apply licenses to the FortiGate unit. This includes FortiCloud activation, FortiClient licensing, FortiToken licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs).

2. You can also install any third-party certificates on the primary FortiGate before forming the cluster. Once the cluster is formed third-party certificates are synchronized to the backup FortiGate.

3. On the System Information dashboard widget, beside Host Name select Change.

4. Enter a new Host Name for this FortiGate unit.

 

New Name                     FGT_ha_1

5. Select OK.

6. Go to System > HA and change the following settings.

Mode                                           Active-Passive

Group Name                              vexample2.com

Password                                   vHA_pass_2

7. Select OK.

The FortiGate unit negotiates to establish an HA cluster. When you select OK you may temporarily lose connectivity with the FortiGate unit as the HA cluster negotiates and the FGCP changes the MAC address of the FortiGate unit interfaces (see Cluster virtual MAC addresses). The MAC addresses of the FortiGate interfaces change to the following virtual MAC addresses:

  • port1 interface virtual MAC: 00-09-0f-09-00-00
  • port10 interface virtual MAC: 00-09-0f-09-00-01
  • port11 interface virtual MAC: 00-09-0f-09-00-02
  • port12 interface virtual MAC: 00-09-0f-09-00-03
  • port13 interface virtual MAC: 00-09-0f-09-00-04
  • port14 interface virtual MAC: 00-09-0f-09-00-05
  • port15 interface virtual MAC: 00-09-0f-09-00-06
  • port16 interface virtual MAC: 00-09-0f-09-00-07
  • port17 interface virtual MAC: 00-09-0f-09-00-08
  • port18 interface virtual MAC: 00-09-0f-09-00-09
  • port19 interface virtual MAC: 00-09-0f-09-00-0a
  • port2 interface virtual MAC: 00-09-0f-09-00-0b
  • port20 interface virtual MAC: 00-09-0f-09-00-0c
  • port3 interface virtual MAC: 00-09-0f-09-00-0d
  • port4 interface virtual MAC: 00-09-0f-09-00-0e
  • port5 interface virtual MAC: 00-09-0f-09-00-0f
  • port6 interface virtual MAC: 00-09-0f-09-00-10
  • port7 interface virtual MAC: 00-09-0f-09-00-11
  • port8 interface virtual MAC: 00-09-0f-09-00-12
  • port9 interface virtual MAC: 00-09-0f-09-00-13

To be able to reconnect sooner, you can update the ARP table of your management PC by deleting the ARP table entry for the FortiGate unit (or just deleting all ARP table entries). You may be able to delete the ARP table of your management PC from a command prompt using a command similar to arp -d.

You can use the get hardware nic (or diagnose hardware deviceinfo nic) CLI command to view the virtual MAC address of any FortiGate unit interface. For example, use the following command to view the port1 interface virtual MAC address (Current_HWaddr) and the port1 permanent MAC address (Permanent_HWaddr):

get hardware nic port1

 

MAC: 00:09:0f:09:00:00

Permanent_HWaddr: 02:09:0f:78:18:c9

 

8. Power off the first FortiGate unit.

9. Repeat these steps for the second FortiGate unit.

Set the second FortiGate unit host name to:

New Name                                  FGT_ha_2

To connect the cluster to the network

1. Connect the port1 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the Internet.

2. Connect the port5 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the Internet.

You could use the same switch for the port1 and port5 interfaces.

3. Connect the port2 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the internal network.

4. Connect the port6 interfaces of FGT_ha_1 and FGT_ha_2 to a switch connected to the engineering network.

5. Connect the port3 interfaces of the cluster units together. You can use a crossover Ethernet cable or regular Ethernet cables and a switch.

6. Connect the port4 interfaces of the cluster units together. You can use a crossover Ethernet cable or regular Ethernet cables and a switch.

7. Power on the cluster units.

The units start and negotiate to choose the primary unit and the subordinate unit. This negotiation occurs with no user intervention.

When negotiation is complete you can continue.

Configuring HA for virtual clustering


If your cluster uses VDOMs, you are configuring virtual clustering. Most virtual cluster HA options are the same as normal HA options. However, virtual clusters include VDOM partitioning options. Other differences between configuration options for regular HA and for virtual clustering HA are described below.

To configure HA options for a cluster with VDOMs enabled:

  • Log into the global web-based manager and go to System > HA.
  • From the CLI, log into the Global Configuration and use the config system ha command.

The following example shows how to configure active-active virtual clustering:

config global
config system ha
set mode a-a
set group-name vexample1.com
set password vHA_pass_1
end
end

The following example shows how to configure active-passive virtual clustering:

config global
config system ha
set mode a-p
set group-name vexample1.com
set password vHA_pass_1
end
end

The following example shows how to configure VDOM partitioning for virtual clustering. In the example, the FortiGate unit is configured with three VDOMs (domain_1, domain_2, and domain_3) in addition to the root VDOM. The example shows how to set up a basic HA configuration that sets the device priority of virtual cluster 1 to 200. The example also shows how to enable vcluster2, how to set the device priority of virtual cluster 2 to 100 and how to add the virtual domains domain_2 and domain_3 to virtual cluster 2.

When you enable multiple VDOMs, vcluster2 is enabled by default. Even so, the command to enable vcluster2 is included in this example in case it has been disabled for some reason. When vcluster2 is enabled, override is also enabled.

The result of this configuration would be that the cluster unit that you are logged into becomes the primary unit for virtual cluster 1. This cluster unit processes all traffic for the root and domain_1 virtual domains.

config global
config system ha
set mode a-p
set group-name vexample1.com
set password vHA_pass_1
set priority 200
set vcluster2 enable
config secondary-vcluster
set vdom domain_2 domain_3
set priority 100
end
end
end

The following example shows how to use the execute ha manage command to change the device priorities for virtual cluster 1 and virtual cluster 2 for the other unit in the cluster. The commands set the device priority of virtual cluster 1 to 100 and virtual cluster 2 to 200.

The result of this configuration would be that the other cluster unit becomes the primary unit for virtual cluster 2. This other cluster unit would process all traffic for the domain_2 and domain_3 virtual domains.

 

config global
execute ha manage 1
config system ha
set priority 100
set vcluster2 enable
config secondary-vcluster
set priority 200
end
end
end
end

 

Example virtual clustering with two VDOMs and VDOM partitioning

This section describes how to configure the example virtual clustering configuration shown below. This configuration includes two virtual domains, root and Eng_vdm and includes VDOM partitioning that sends all root VDOM traffic to FGT_ha_1 and all Eng_vdom VDOM traffic to FGT_ha_2. The traffic from the internal network and the engineering network is distributed between the two FortiGate units in the virtual cluster. If one of the cluster units fails, the remaining unit will process traffic for both VDOMs.

The procedures in this example describe some of many possible sequences of steps for configuring virtual clustering. For simplicity many of these procedures assume that you are starting with new FortiGate units set to the factory default configuration. However, this is not a requirement for a successful HA deployment. FortiGate HA is flexible enough to support a successful configuration from many different starting points.

 

Example virtual clustering network topology

The following figure shows a typical FortiGate HA virtual cluster consisting of two FortiGate units (FGT_ha_1 and FGT_ha_2) connected to an internal network, an engineering network, and the Internet. To simplify the diagram the heartbeat connections are not shown.

The traffic from the internal network is processed by the root VDOM, which includes the port1 and port2 interfaces. The traffic from the engineering network is processed by the Eng_vdm VDOM, which includes the port5 and port6 interfaces. VDOM partitioning is configured so that all traffic from the internal network is processed by FGT_ha_1 and all traffic from the engineering network is processed by FGT_ha_2.

This virtual cluster uses the default FortiGate heartbeat interfaces (port3 and port4).

 

Example virtual cluster showing VDOM partitioning

General configuration steps

This section includes web-based manager and CLI procedures. These procedures assume that the FortiGate units are running the same FortiOS firmware build and are set to the factory default configuration.

 

General configuration steps

1. Apply licenses to the FortiGate units that will become the cluster.

2. Configure the FortiGate units for HA operation.

  • Optionally change each unit’s host name.
  • Configure HA.

3. Connect the cluster to the network.

4. Configure VDOM settings for the cluster (a CLI sketch follows this list):

  • Enable multiple VDOMs.
  • Add the Eng_vdm VDOM.
  • Add port5 and port6 to Eng_vdm.

5. Configure VDOM partitioning.

6. Confirm that the cluster units are operating as a virtual cluster and add basic configuration settings to the cluster.

  • View cluster status from the web-based manager or CLI.
  • Add a password for the admin administrative account.
  • Change the IP addresses and netmasks of the port1, port2, port5, and port6 interfaces.
  • Add a default route to each VDOM.
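For reference, a hedged CLI sketch of the VDOM-related steps in this list follows. It assumes that your build enables multiple VDOMs with set vdom-admin enable, and the interface addresses and gateway shown are placeholders for your own values:

config system global
set vdom-admin enable
end
config vdom
edit Eng_vdm
end
config global
config system interface
edit port5
set vdom Eng_vdm
set ip 172.20.120.10 255.255.255.0
set allowaccess ping
next
edit port6
set vdom Eng_vdm
set ip 10.120.101.100 255.255.255.0
set allowaccess ping
end
end
config vdom
edit Eng_vdm
config router static
edit 1
set gateway 172.20.120.2
set device port5
end
end

A similar default route would be added to the root VDOM through port1. VDOM partitioning itself is configured under config system ha, as shown in the virtual clustering examples in this chapter.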

 

Virtual clusters


This chapter provides an introduction to virtual clustering and also contains general procedures and configuration examples that describe how to configure FortiGate HA virtual clustering.

 

Virtual clustering overview

Virtual clustering is an extension of the FGCP for a cluster of 2 FortiGate units operating with multiple VDOMs enabled. Virtual clustering operates in active-passive mode to provide failover protection between two instances of a VDOM operating on two different cluster units. You can also operate virtual clustering in active-active mode to use HA load balancing to load balance sessions between cluster units. Alternatively, by distributing VDOM processing between the two cluster units you can also configure virtual clustering to provide load balancing by distributing sessions for different VDOMs to each cluster unit.

The figure below shows an example virtual cluster configuration consisting of two FortiGate units. The virtual cluster has two virtual domains, root and Eng_vdm.

The root virtual domain includes the port1 and port2 interfaces. The Eng_vdm virtual domain includes the port5 and port6 interfaces. The port3 and port4 interfaces (not shown in the diagram) are the HA heartbeat interfaces.

FortiGate virtual clustering is limited to a cluster of 2 FortiGate units with multiple VDOMs enabled. If you want to create a cluster of more than 2 FortiGate units operating with multiple VDOMs you could consider other solutions that either do not include multiple VDOMs in one cluster or employ a feature such as standalone session synchronization. See FortiGate Session Life Support Protocol (FGSP) on page 1579.

 

Virtual clustering and failover protection

Virtual clustering operates on a cluster of two (and only two) FortiGate units with VDOMs enabled. Each VDOM creates a cluster between instances of the VDOMs on the two FortiGate units in the virtual cluster. All traffic to and from the VDOM stays within the VDOM and is processed by the VDOM. One cluster unit is the primary unit for each VDOM and one cluster unit is the subordinate unit for each VDOM. The primary unit processes all traffic for the VDOM. The subordinate unit does not process traffic for the VDOM. If a cluster unit fails, all traffic fails over to the cluster unit that is still operating.

 

Virtual clustering and heartbeat interfaces

The HA heartbeat provides the same HA services in a virtual clustering configuration as in a standard HA configuration. One set of HA heartbeat interfaces provides HA heartbeat services for all of the VDOMs in the cluster. You do not have to add a heartbeat interface for each VDOM.
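For example, the heartbeat interfaces and their priorities are set once for the whole virtual cluster. A minimal sketch, using the default port3 and port4 heartbeat interfaces from the examples in this chapter with equal priorities, looks like this:

config global
config system ha
set hbdev port3 50 port4 50
end
end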

 

Virtual clustering and HA override

For a virtual cluster configuration, override is enabled by default for both virtual clusters when you:

  • Enable VDOM partitioning from the web-based manager by moving virtual domains to virtual cluster 2
  • Enter set vcluster2 enable from the CLI config system ha command to enable virtual cluster 2.

Usually you would enable virtual cluster 2 and expect one cluster unit to be the primary unit for virtual cluster 1 and the other cluster unit to be the primary unit for virtual cluster 2. For this distribution to occur override must be enabled for both virtual clusters. Otherwise you will need to restart the cluster to force it to renegotiate.

If override is enabled, the cluster may renegotiate too often. You can choose to disable override at any time. If you decide to disable override, for best results you should disable it for both cluster units.
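A sketch of disabling override on one cluster unit follows (repeat it on the other unit). It assumes that, with virtual cluster 2 enabled, the secondary-vcluster configuration accepts its own override setting:

config global
config system ha
set override disable
config secondary-vcluster
set override disable
end
end
end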

For more information about HA override see HA override.

Troubleshooting HA clusters


This section describes some HA clustering troubleshooting techniques.

 

Ignoring hardware revisions

Some FortiGate platforms have gone through multiple hardware versions. In some cases the hardware changes between versions have meant that by default you cannot form a cluster if the FortiGate units in the cluster have different hardware versions. If you run into this problem you can use the following command on each FortiGate unit to cause the cluster to ignore different hardware versions:

execute ha ignore-hardware-revision {disable | enable | status}

This command is only available on FortiGate units that have had multiple hardware revisions. By default the command is set to prevent FortiOS from forming clusters between FortiGate units with different hardware revisions. You can enable this command to be able to create a cluster consisting of FortiGate units with different hardware revisions. Use the status option to verify whether ignoring hardware revisions is enabled or disabled.
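For example, you might check the current setting and then enable it on each unit before forming the cluster:

execute ha ignore-hardware-revision status
execute ha ignore-hardware-revision enable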

Affected models include but are not limited to:

  • FortiGate-100D
  • FortiGate-300C
  • FortiGate-600C
  • FortiGate-800C
  • FortiGate-80C and FortiWiFi-80C
  • FortiGate-60C

 

Before you set up a cluster

Before you set up a cluster ask yourself the following questions about the FortiGate units that you are planning to use to create a cluster.

1. Do all the FortiGate units have the same hardware configuration, including the same hard disk configuration and the same AMC cards installed in the same slots?

2. Do all FortiGate units have the same firmware build?

3. Are all FortiGate units set to the same operating mode (NAT or Transparent)?

4. Are all the FortiGate units operating in single VDOM mode?

5. If the FortiGate units are operating in multiple VDOM mode do they all have the same VDOM configuration?

In some cases you may be able to form a cluster even if the FortiGate units have different firmware builds, different VDOM configurations, or different operating modes. However, if you encounter problems, they may be resolved by installing the same firmware build on each unit and giving all units the same VDOM configuration and operating mode.
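The following commands, run on each FortiGate unit before you form the cluster, answer most of these questions (the get hardware status command may not be available on every model):

get system status
get hardware status

The get system status output reports the firmware build, the operation mode, and whether virtual domains are enabled, so comparing it across units is a quick way to spot mismatches.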

 

Troubleshooting the initial cluster configuration

This section describes how to check a cluster when it first starts up to make sure that it is configured and operating correctly. This section assumes you have already configured your HA cluster.

 

To verify that a cluster can process traffic and react to a failure

1. Add a basic security policy configuration and send network traffic through the cluster to confirm connectivity.

For example, if the cluster is installed between the Internet and an internal network, set up a basic internal to external security policy that accepts all traffic. Then from a PC on the internal network, browse to a website on the Internet or ping a server on the Internet to confirm connectivity.

2. From your management PC, set ping to continuously ping the cluster, and then start a large download, or in some other way establish ongoing traffic through the cluster.

3. While traffic is going through the cluster, disconnect the power from one of the cluster units.

You could also shut down or restart a cluster unit. Traffic should continue with minimal interruption.

4. Start up the cluster unit that you disconnected.

The unit should re-join the cluster with little or no effect on traffic.

5. Disconnect a cable for one of the HA heartbeat interfaces.

The cluster should keep functioning, using the other HA heartbeat interface.

6. If you have port monitoring enabled, disconnect a network cable from a monitored interface.

Traffic should continue with minimal interruption.

 

 

To verify the cluster configuration – web-based manager

1. Log into the cluster web-based manager.

2. Check the system dashboard to verify that the System Information widget displays all of the cluster units.

3. Check the cluster member graphic to verify that the correct cluster unit interfaces are connected.

4. Go to System > HA and verify that all of the cluster units are displayed on the cluster members list.

5. From the cluster members list, edit the primary unit (master) and verify the cluster configuration is as expected.

 

To troubleshoot the cluster configuration – web-based manager

1. Connect to each cluster unit web-based manager and verify that the HA configurations are the same.

2. To connect to each web-based manager, you may need to disconnect some units from the network if the units have the same IP address.

3. If the configurations are the same, try re-entering the cluster Password on each cluster unit in case you made an error typing the password when configuring one of the cluster units.

4. Check that the correct interfaces of each cluster unit are connected.

Check the cables and interface LEDs.

Use the Unit Operation dashboard widget, system network interface list, or cluster members list to verify that each interface that should be connected actually is connected.

If the link is down, re-verify the physical connection. Try replacing network cables or switches as required.
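From the CLI, the following commands can also help confirm that the cluster units agree on the configuration; on most builds a checksum mismatch between units indicates a setting that has not been synchronized:

get system ha status
diagnose sys ha checksum show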