SLBC – Session-aware Load Balancing Cluster

The Session-aware Load Balancing Cluster (SLBC) protocol is used for clusters consisting of FortiControllers that perform load balancing of both TCP and UDP sessions. As session-aware load balancers, FortiControllers with FortiASIC DP processors can direct any TCP or UDP session to any worker installed in the same chassis. This also means that more complex networking features such as NAT, fragmented packets, complex UDP protocols, and protocols such as the Session Initiation Protocol (SIP), a communications protocol for signaling and controlling multimedia communication sessions, can be load balanced by the cluster.
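Session-aware load balancing can be illustrated with a toy model (plain Python; this is not the FortiASIC DP algorithm, and the worker slot numbers are illustrative): a new session's 5-tuple is hashed to pick a worker, and a session table pins all later packets of that session to the same worker.

```python
import zlib

class SessionAwareBalancer:
    """Toy session-aware load balancer: any session can go to any worker,
    but every packet of a session goes to the same worker."""

    def __init__(self, workers):
        self.workers = list(workers)   # e.g. chassis slot numbers
        self.sessions = {}             # 5-tuple -> worker

    def dispatch(self, five_tuple):
        if five_tuple not in self.sessions:
            # Hash the 5-tuple to choose a worker for the new session
            key = zlib.crc32(repr(five_tuple).encode())
            self.sessions[five_tuple] = self.workers[key % len(self.workers)]
        return self.sessions[five_tuple]

lb = SessionAwareBalancer([3, 4, 5])   # workers in slots 3, 4, 5
flow = ("udp", "10.0.0.1", 5060, "8.8.8.8", 5060)
worker = lb.dispatch(flow)             # later packets map to the same worker
```

Because state is kept per session rather than per packet, protocols whose packets cannot be matched statelessly (fragments, SIP, NAT'd flows) still land on a single worker.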

Currently, only three FortiController models are available for SLBC: the FortiController-5103B, FortiController-5903C, and FortiController-5913C. Supported workers include the FortiGate-5001B, 5001C, 5101C, and 5001D.

FortiGate-7000 series products also support SLBC.

An SLBC with two FortiControllers can operate in active-passive mode or dual mode. In active-passive mode, if the active FortiController fails, traffic is transferred to the backup FortiController. In dual mode, both FortiControllers load balance traffic and twice as many network interfaces are available.

SLBC clusters consisting of more than one FortiController use the following types of communication between FortiControllers to operate normally:

  • Heartbeat: Allows the FortiControllers in the cluster to find each other and share status information. If a FortiController stops sending heartbeat packets, it is considered down by the other cluster members. By default, heartbeat traffic uses VLAN 999.
  • Base control: Communication between FortiControllers on subnet 10.101.11.0/255.255.255.0 using VLAN 301.
  • Base management: Communication between FortiControllers on subnet 10.101.10.0/255.255.255.0 using VLAN 101.
  • Session synchronization: If one FortiController fails, session synchronization allows another to take its place and maintain active communication sessions. FortiController-5103B session sync traffic uses VLAN 2000. FortiController-5903C and FortiController-5913C session sync traffic uses VLAN 1900 between the FortiControllers in slot 1 and VLAN 1901 between the FortiControllers in slot 2. You cannot change these VLANs.

Note that SLBC does not support session synchronization between workers in the same chassis. The FortiControllers in a cluster keep track of the status of the workers in their chassis and load balance sessions to the workers. If a worker fails the FortiController detects the failure and stops load balancing sessions to that worker. The sessions that the worker is processing when it fails are lost.

Changing the heartbeat VLAN

To change the VLAN from the FortiController GUI, go to the System Information dashboard widget and, beside HA Status, select Configure. Change the VLAN to use for HA heartbeat traffic (1-4094) setting.

You can also change the heartbeat VLAN ID from the FortiController CLI. For example, to change the heartbeat VLAN ID to 333, enter the following:

config system ha
  set hbdev-vlan-id 333
end

Setting the mgmt interface as a heartbeat interface

To add the mgmt interface to the list of heartbeat interfaces used, on the FortiController-5103B, enter the following:

config system ha
  set hbdev b1 b2 mgmt
end

This example adds the mgmt interface for heartbeats in addition to the B1 and B2 interfaces. The B1 and B2 ports are recommended because they are 10G ports, while the Mgmt interface is a 100Mb interface.

Changing the heartbeat interface mode

By default, only the first heartbeat interface (usually B1) is used for heartbeat traffic. If this interface fails on any of the FortiControllers in a cluster, then the second heartbeat interface is used (B2).

To simultaneously use all heartbeat interfaces for heartbeat traffic, enter the following command:

config load-balance setting
  set base-mgmt-interface-mode active-active
end

Changing the base control subnet and VLAN

You can change the base control subnet and VLAN from the FortiController CLI. For example, to change the base control subnet to 10.122.11.0/255.255.255.0 and the VLAN ID to 320, enter the following:

config load-balance setting
  set base-ctrl-network 10.122.11.0 255.255.255.0
  config base-ctrl-interfaces
    edit b1
      set vlan-id 320
    next
    edit b2
      set vlan-id 320
  end
end

Changing the base management subnet and VLAN

You can change the base management subnet from the FortiController GUI by going to Load Balance > Config and changing the Internal Management Network.

You can also change the base management subnet and VLAN ID from the FortiController CLI. For example, to change the base management subnet to 10.121.10.0/255.255.255.0 and the VLAN to 131, enter the following:

config load-balance setting
  set base-mgmt-internal-network 10.121.10.0 255.255.255.0
  config base-mgt-interfaces
    edit b1
      set vlan-id 131
    next
    edit b2
      set vlan-id 131
  end
end

If required, you can use different VLAN IDs for the B1 and B2 interfaces.

Changing this VLAN only changes the VLAN used for base management traffic between chassis. Within a chassis the default VLAN is used.

Enabling and configuring the session sync interface

To enable session synchronization in a two chassis configuration, enter the following command:

config load-balance setting
  set session-sync enable
end

You will then need to select the interface to use for session sync traffic. The following example sets the FortiController-5103B session sync interface to F4:

config system ha
  set session-sync-port f4
end

The FortiController-5903C and FortiController-5913C use b1 and b2 as the session sync interfaces so no configuration changes are required.

FGCP to SLBC migration

You can convert a FGCP virtual cluster (with VDOMs) to an SLBC cluster. The conversion involves replicating the VDOM, interface, and VLAN configuration of the FGCP cluster on the SLBC cluster primary worker, then backing up the configuration of each FGCP cluster VDOM. Each of the VDOM configuration files is manually edited to adjust interface names. These modified VDOM configuration files are then restored to the corresponding SLBC cluster primary worker VDOMs.

For this migration to work, the FGCP cluster and the SLBC workers must be running the same firmware version, VDOMs must be enabled on the FGCP cluster, and the SLBC workers must be registered and licensed. However, the FGCP cluster units do not have to be the same model as the SLBC cluster workers.

Only VDOM configurations are migrated. You have to manually configure primary worker management and global settings.

Conversion steps

  1. Add VDOM(s) to the SLBC primary worker with names that match those of the FGCP cluster.
  2. Map FGCP cluster interface names to SLBC primary worker interface names. For example, you can map the FGCP cluster port1 and port2 interfaces to the SLBC primary worker fctl/f1 and fctl/f2 interfaces. You can also map FGCP cluster interfaces to SLBC trunks, and include aggregate interfaces.
  3. Add interfaces to the SLBC primary worker VDOMs according to your mapping. This includes moving SLBC physical interfaces into the appropriate VDOMs, creating aggregate interfaces, and creating SLBC trunks if required.
  4. Add VLANs to the SLBC primary worker that match VLANs in the FGCP cluster. They should have the same names as the FGCP VLANs, be added to the corresponding SLBC VDOMs and interfaces, and have the same VLAN IDs.
  5. Add inter-VDOM links to the SLBC primary worker that match the FGCP cluster.
  6. Backup the configurations of each FGCP cluster VDOM, and SLBC primary worker VDOM.
  7. Use a text editor to replace the first four lines of each FGCP cluster VDOM configuration file with the first four lines of the corresponding SLBC primary worker VDOM configuration file. Here are example lines from an SLBC primary worker VDOM configuration file:

#config-version=FG-5KB-5.02-FW-build670-150318:opmode=0:vdom=1:user=admin

#conf_file_ver=2306222306838080295

#buildno=0670

#global_vdom=0:vd_name=VDOM1

  8. With the text editor, edit each FGCP cluster VDOM configuration file and replace all FGCP cluster interface names with the corresponding SLBC worker interface names, according to the mapping you created in step 2.
  9. Set up a console connection to the SLBC primary worker to check for errors during the following steps.
  10. From the SLBC primary worker, restore each FGCP cluster VDOM configuration file to each corresponding SLBC primary worker VDOM.
  11. Check the following on the SLBC primary worker:
    • Make sure set type fctrl-trunk is enabled for SLBC trunk interfaces.
    • Enable the global and management VDOM features that you need, including SNMP, logging, connections to FortiManager, FortiAnalyzer, and so on.
    • If there is a FortiController in chassis slot 2, make sure the worker base2 interface status is up.
    • Remove snmp-index entries for each interface.
    • Since you can manage the workers from the FortiController you can remove management-related configurations using the worker mgmt1 and mgmt2 interfaces (Logging, SNMP, admin access, etc.) if you are not going to use these interfaces for management.
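Steps 7 and 8 above can be scripted. The following Python sketch is a hypothetical helper, not a Fortinet tool; the interface mapping and sample lines are assumptions. It splices the SLBC header lines into an FGCP VDOM backup and rewrites interface names:

```python
def convert_vdom_config(fgcp_lines, slbc_lines, iface_map, header_len=4):
    """Replace the first `header_len` lines of an FGCP VDOM config backup
    with the SLBC worker's header lines, then rewrite interface names."""
    out = list(slbc_lines[:header_len])        # SLBC header block (step 7)
    for line in fgcp_lines[header_len:]:
        for old, new in iface_map.items():     # interface renames (step 8)
            # Naive substring replace; review the edited file by hand,
            # since e.g. "port1" is also a prefix of "port10".
            line = line.replace(old, new)
        out.append(line)
    return out

# Example mapping from step 2 (FGCP names -> SLBC worker names)
iface_map = {"port1": "fctl/f1", "port2": "fctl/f2"}
```

Run the result through a manual review before restoring it, exactly as the manual steps require.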

How to set up SLBC with one FortiController-5103B

This example describes the basics of setting up a Session-aware Load Balancing Cluster (SLBC) that consists of one FortiController-5103B, installed in chassis slot 1, and three FortiGate-5001C workers, installed in chassis slots 3, 4, and 5.

This SLBC configuration can have up to eight 10Gbit network connections.

Configuring the hardware

  1. Install a FortiGate-5000 series chassis and connect it to power. Install the FortiController in slot 1. Install the workers in slots 3, 4, and 5. Power on the chassis.
  2. Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally. (To check normal operation LED status see the FortiGate-5000 series documents available here.)
  3. Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA product.

Configuring the FortiController

To configure the FortiController, connect to the FortiController GUI or CLI using the default IP address http://192.168.1.99. Log in using the admin account (no password).

  1. Add a password for the admin account. Use the Administrators widget in the GUI, or enter the following CLI command:

config admin user
  edit admin
    set password <password>
end

  2. Change the FortiController mgmt interface IP address. Use the Management Port widget in the GUI, or enter the following CLI command:

config system interface
  edit mgmt
    set ip 172.20.120.151/24
end

  3. If you need to add a default route for the management IP address, enter the following command:

config route static
  edit route 1
    set gateway 172.20.121.2
end

  4. To set the chassis type that you are using, enter the following CLI command:

config system global
  set chassis-type fortigate-5140
end

  5. Go to Load Balance > Config and add workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list. The Config page shows the slots in which the cluster expects to find workers. Since the workers have not been configured yet, their status is Down.

Configure the External Management IP/Netmask. Once the workers are connected to the cluster, you can use this IP address to manage and configure them.

  6. You can also enter the following CLI command to add slots 3, 4, and 5 to the cluster:

config load-balance setting
  config slots
    edit 3
    next
    edit 4
    next
    edit 5
  end
end

  7. You can also enter the following command to configure the external management IP/Netmask and management access:

config load-balance setting
  set base-mgmt-external-ip 172.20.120.100 255.255.255.0
  set base-mgmt-allowaccess https ssh ping
end

Adding the workers

Before you begin adding workers to the cluster, make sure you enter the execute factoryreset command in the CLI so the workers are set to factory default settings. If the workers are going to run FortiOS Carrier, apply the FortiOS Carrier license instead; this also resets the worker to factory default settings.

Also make sure to register and apply licenses to each worker, including FortiClient licensing, FortiCloud activation, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time, which will also synchronize across all of the workers.

  1. Log in to each of the worker’s CLI and enter the following CLI command to set the worker to operate in FortiController mode:

config system elbc
  set mode forticontroller
end

Once the command is entered, the worker restarts and joins the cluster.

  2. On the FortiController, go to Load Balance > Status. You will see the workers appear in their appropriate slots. The worker in the lowest slot number usually becomes the primary unit.

You can now manage the workers in the same way as you would manage a standalone FortiGate. You can connect to the worker GUI or CLI using the External Management IP. If you had configured the worker mgmt1 or mgmt2 interfaces you can also connect to one of these addresses to manage the cluster.

To operate the cluster, connect networks to the FortiController front panel interfaces and connect to a worker GUI or CLI to configure the workers to process the traffic they receive. When you connect to the External Management IP you connect to the primary worker. When you make configuration changes they are synchronized to all workers in the cluster.

Managing the devices in an SLBC with the External Management IP

The External Management IP address is used to manage all of the individual devices in an SLBC by adding a special port number. This special port number begins with the standard port number for the protocol you are using and is followed by two digits that identify the chassis number and slot number. The port number can be calculated using the following formula:

service_port x 100 + (chassis_id – 1) x 20 + slot_id

Where:

  • service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, and so on).
  • chassis_id is the chassis ID specified as part of the FortiController HA configuration and can be 1 or 2.
  • slot_id is the number of the chassis slot.

By default, chassis 1 is the primary chassis and chassis 2 is the backup chassis. However, the actual primary chassis is the one with the primary FortiController, which can be changed independently of the chassis number. Additionally, the chassis_id is defined by the chassis number, not by whether the chassis contains the primary FortiController.

Some examples:

  • HTTPS, chassis 1, slot 2: 443 x 100 + (1 – 1) x 20 + 2 = 44300 + 0 + 2 = 44302; browse to https://172.20.120.100:44302
  • HTTP, chassis 2, slot 4: 80 x 100 + (2 – 1) x 20 + 4 = 8000 + 20 + 4 = 8024; browse to http://172.20.120.100:8024
  • HTTPS, chassis 1, slot 10: 443 x 100 + (1 – 1) x 20 + 10 = 44300 + 0 + 10 = 44310; browse to https://172.20.120.100:44310
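The formula can be expressed as a small Python helper (a hypothetical convenience function, not a Fortinet tool) for computing the special management port:

```python
def special_mgmt_port(service_port, chassis_id, slot_id):
    """Compute the SLBC special management port:
    service_port x 100 + (chassis_id - 1) x 20 + slot_id."""
    return service_port * 100 + (chassis_id - 1) * 20 + slot_id

# HTTPS, chassis 1, slot 2 -> 44302
print(special_mgmt_port(443, 1, 2))
# SSH, chassis 2, slot 14 -> 2234 (matches the chassis 2 table)
print(special_mgmt_port(22, 2, 14))
```

The tables that follow are exactly this formula evaluated for each slot and service.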

Single chassis or chassis 1 special management port numbers

Slot number   HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
Slot 1        8001        44301         2301          2201       16101
Slot 2        8002        44302         2302          2202       16102
Slot 3        8003        44303         2303          2203       16103
Slot 4        8004        44304         2304          2204       16104
Slot 5        8005        44305         2305          2205       16105
Slot 6        8006        44306         2306          2206       16106
Slot 7        8007        44307         2307          2207       16107
Slot 8        8008        44308         2308          2208       16108
Slot 9        8009        44309         2309          2209       16109
Slot 10       8010        44310         2310          2210       16110
Slot 11       8011        44311         2311          2211       16111
Slot 12       8012        44312         2312          2212       16112
Slot 13       8013        44313         2313          2213       16113
Slot 14       8014        44314         2314          2214       16114

Chassis 2 special management port numbers

Slot number   HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
Slot 1        8021        44321         2321          2221       16121
Slot 2        8022        44322         2322          2222       16122
Slot 3        8023        44323         2323          2223       16123
Slot 4        8024        44324         2324          2224       16124
Slot 5        8025        44325         2325          2225       16125
Slot 6        8026        44326         2326          2226       16126
Slot 7        8027        44327         2327          2227       16127
Slot 8        8028        44328         2328          2228       16128
Slot 9        8029        44329         2329          2229       16129
Slot 10       8030        44330         2330          2230       16130
Slot 11       8031        44331         2331          2231       16131
Slot 12       8032        44332         2332          2232       16132
Slot 13       8033        44333         2333          2233       16133
Slot 14       8034        44334         2334          2234       16134

For more detailed information regarding FortiController SLBC configurations, see the FortiController Session-Aware Load Balancing (SLBC) Guide.

 

FGFM – FortiGate to FortiManager protocol

The FortiGate to FortiManager (FGFM) protocol is designed for FortiGate and FortiManager deployment scenarios, especially where NAT is used. These scenarios include: FortiManager on the public internet while the FortiGate unit is behind NAT; the FortiGate unit on the public internet while FortiManager is behind NAT; or both FortiManager and the FortiGate unit with routable IP addresses.

The FortiManager unit’s Device Manager uses FGFM to create new device groups, provision and add devices, and install policy packages and device settings.

Port 541 is the default port used for FortiManager traffic on the internal management network.

Adding a FortiGate to the FortiManager

Adding a FortiGate unit to a FortiManager requires configuration on both devices. This section describes the basics to configure management using a FortiManager device.

FortiGate configuration

Adding a FortiGate unit to FortiManager will ensure that the unit will be able to receive antivirus and IPS updates and allow remote management through the FortiManager system, or FortiCloud service. The FortiGate unit can be in either NAT or transparent mode. The FortiManager unit provides remote management of a FortiGate unit over TCP port 541.

You must first enable Central Management on the FortiGate so management updates to firmware and FortiGuard services are available:

  1. Go to System > Settings.
  2. Set Central Management to FortiManager.
  3. Enter the FortiManager’s IP/Domain Name in the field provided, and select Send Request.

You can also select Registration Password and enter a password to connect to the FortiManager.

To configure the previous steps in the CLI, enter the following:

config system central-management
  set fmg <ip_address>
end

To use the registration password, enter the following:

execute central-mgmt register-device <fmg-serial-no> <fmg-register-password> <fgt-username> <fgt-password>

Configuring an SSL connection

The default encryption automatically sets high and medium encryption algorithms. Algorithms used for High, Medium, and Low follow the openssl definitions below:

Encryption level   Key strength                                            Algorithms used
High               Key lengths larger than 128 bits, and some cipher       DHE-RSA-AES256-SHA:AES256-SHA:
                   suites with 128-bit keys.                               EDH-RSA-DES-CBC3-SHA:DES-CBC3-SHA:
                                                                           DES-CBC3-MD5:DHE-RSA-AES128-SHA:AES128-SHA
Medium             Key strengths of 128-bit encryption.                    RC4-SHA:RC4-MD5:RC4-MD
Low                Key strengths of 64- or 56-bit encryption               EDH-RSA-DES-CBC-SHA:DES-CBC-SHA:DES-CBC-MD5
                   algorithms, excluding export cipher suites.

An SSL connection can be configured between the two devices and an encryption level selected. To configure the connection in the CLI, enter the following:

config system central-management
  set status enable
  set enc-algorithm {default | high | low}
end

The default option automatically sets high and medium encryption algorithms.

FortiManager configuration

Use the Device Manager pane to add, configure, and manage devices.

You can add existing operational devices, unregistered devices, provision new devices, and add multiple devices at a time.

Adding an operating FortiGate HA cluster to the Device Manager pane is similar to adding a standalone device. Type the IP address of the master device. The FortiManager will handle the cluster as a single managed device.

To confirm that a device model or firmware version is supported by the firmware version running on FortiManager, enter the following CLI command:

diagnose dvm supported-platforms list

See the FortiManager Administration Guide for full details on adding devices, under Device Manager.

FGFM is also used in ADOMs (Administrative Domains) set to Normal Mode. Normal Mode has Read/Write privileges, where the administrator is able to make changes to the ADOM and manage devices from the FortiManager. FortiGate units in the ADOM will query their own configuration every five seconds. If there has been a configuration change, the FortiGate unit will send a revision on the change to the FortiManager using the FGFM protocol.

To configure central management on the FortiGate unit, enter the following on the FortiGate:

config system central-management
  set mode backup
  set fortimanager-fds-override enable
  set fmg <FortiManager_IP_address>
end

Replacing a FortiGate in a FortiManager configuration

FGFM can be used to re-establish a connection between a FortiGate unit and a FortiManager configuration. This is useful if a FortiGate unit needs to be replaced following an RMA hardware replacement. This applies to a FortiGate running in HA as the primary unit; it does not apply to subordinate units.

When the FortiGate unit is replaced, perform a Device Manager Connectivity check or Refresh on the FortiManager to establish the FGFM management tunnel to the FortiGate. If it fails to establish, you can force the tunnel by executing the following command on the FortiManager:

exec fgfm reclaim-dev-tunnel <device_name>

Debugging FGFM on FortiManager

  • To display diagnostic information for troubleshooting, set the debug level of the FGFM daemon (enter a device name to only show messages related to that device):

diag debug application fgfmsd <integer> <device_name>

  • To view installation session, object, and session lists:

diag fgfm install-session
diag fgfm object-list
diag fgfm session-list <device_ID>

  • To reclaim a management tunnel (device name is optional):

execute fgfm reclaim-dev-tunnel <device_name>

  • To view the link-local address assigned to the FortiManager:

diag fmnetwork interface list

Debugging FGFM on FortiGate

  • To view information about the Central Management System configuration:

get system central-management

  • To produce realtime debugging information:

diag debug application fgfmd -1

  • To view the link-local address assigned to the FortiManager:

diag fmnetwork interface list

 

FGSP – FortiGate Session Life Support Protocol

FortiGate Session Life Support Protocol (FGSP) distributes sessions between two FortiGate units and performs session synchronization. If one of the peers fails, active sessions fail over to the peer that is still operating, without any loss of data. The external routers or load balancers detect the failover and redistribute all sessions to the peer that is still operating. The two FortiGate units must be the same model and must be running the same firmware.

You can also use the config system cluster-sync command to configure FGSP between two FortiGate units.

The FortiGate’s HA Heartbeat listens on ports TCP/703, TCP/23, or ETH Layer 2/8890.

In previous versions of FortiOS, FGSP was called TCP session synchronization or standalone session synchronization. However, FGSP has been expanded to include both IPv4 and IPv6 TCP, UDP, ICMP, expectation, NAT sessions, and IPsec tunnels.

Configuration synchronization

Configuration synchronization can also be performed, allowing you to make configuration changes once for both FortiGate units instead of requiring multiple configuration changes on each FortiGate unit. However, interface IP addresses, BGP neighbor settings, and other settings that identify the FortiGate unit on the network are not synchronized. You can enable configuration synchronization by entering the following command:

config system ha
  set standalone-config-sync enable
end

UDP and ICMP (connectionless) session synchronization

In many configurations, because they are not stateful, UDP and ICMP sessions do not need to be synchronized in order to fail over naturally. However, if it is required, you can configure FGSP to also synchronize UDP and ICMP sessions by entering the following command:

config system ha
  set session-pickup enable
  set session-pickup-connectionless enable
end

Expectation (asymmetric) session synchronization

Synchronizing asymmetric traffic can be very useful in situations where multiple Internet connections from different ISPs are spread across two FortiGates.

The FGSP enforces firewall policies for asymmetric traffic, including cases where the TCP 3-way handshake is split between two FortiGates. For example, FGT-A receives the TCP-SYN, FGT-B receives the TCP-SYN-ACK, and FGT-A receives the TCP-ACK. Under normal conditions a firewall will drop this connection since the 3-way handshake was not seen by the same firewall. However two FortiGates with FGSP configured will be able to properly pass this traffic since the firewall sessions are synchronized.
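The split-handshake behaviour can be illustrated with a toy model (plain Python, not Fortinet code; in a real deployment the session table is synchronized over the FGSP link rather than shared in memory):

```python
class Firewall:
    """Toy stateful firewall whose session table is shared with a peer,
    standing in for FGSP session synchronization."""

    def __init__(self, name, shared_table):
        self.name = name
        self.sessions = shared_table   # FGSP-style synchronized state

    def receive(self, flow, segment):
        state = self.sessions.get(flow)
        if segment == "SYN":
            self.sessions[flow] = "SYN_SEEN"
            return True
        if segment == "SYN-ACK" and state == "SYN_SEEN":
            self.sessions[flow] = "ESTABLISHING"
            return True
        if segment == "ACK" and state == "ESTABLISHING":
            self.sessions[flow] = "ESTABLISHED"
            return True
        return False                   # no known session: drop

shared = {}                            # synchronized via FGSP in reality
fgt_a = Firewall("FGT-A", shared)
fgt_b = Firewall("FGT-B", shared)
flow = ("10.0.0.1", 12345, "8.8.8.8", 80)
fgt_a.receive(flow, "SYN")             # FGT-A sees the SYN
fgt_b.receive(flow, "SYN-ACK")         # FGT-B accepts: session is in the shared table
fgt_a.receive(flow, "ACK")             # handshake completes across both units
```

A standalone firewall with its own empty table would drop the SYN-ACK at the second step, which is exactly the failure mode FGSP expectation sessions avoid.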

If traffic will be highly asymmetric, as described above, the following command must be enabled on both FortiGates:

config system ha
  set session-pickup enable
  set session-pickup-expectation enable
end

Security profile inspection with asymmetric and symmetric traffic

Security profile inspection, flow or proxy based, is not expected to work properly if the traffic in the session is load balanced across more than one FortiGate in either direction. However, flow-based inspection should be used in FGSP deployments.

For symmetric traffic, security profile inspection can be used but with the following limitations:

  • No session synchronization for the sessions inspected using proxy-based inspection. Sessions will drop and need to be reestablished after data path failover.
  • Sessions with flow-based inspection will failover, and inspection of sessions after a failover may not work.

Improving session synchronization performance

Two HA configuration options are available to reduce the performance impact of enabling session failover (also known as session pickup): reducing the number of sessions that are synchronized by adding a session pickup delay, and using more FortiGate interfaces for session synchronization.

Reducing the number of sessions that are synchronized

If session pickup is enabled, as soon as new sessions are added to the primary unit session table they are synchronized to the other cluster units. Enable the session-pickup-delay CLI option to reduce the number of sessions that are synchronized by synchronizing sessions only if they remain active for more than 30 seconds. Enabling this option can greatly reduce the number of sessions that are synchronized if the cluster typically processes many short-duration sessions, as is typical of HTTP traffic.

Use the following command to enable a 30 second session pickup delay:

config system ha
  set session-pickup-delay enable
end

Enabling session pickup delay means that more sessions may not be resumed after a failover. In most cases short-duration sessions can be restarted with only a minor traffic interruption. However, if you notice too many sessions failing to resume after a failover, you might want to disable this setting.
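The trade-off can be sketched numerically (the session durations below are hypothetical; the 30-second threshold is the one described above):

```python
def sessions_synchronized(session_durations, delay=30):
    """Count sessions that live longer than the pickup delay (seconds)
    and would therefore be synchronized to the peer."""
    return sum(1 for d in session_durations if d > delay)

# Mostly short HTTP-style flows plus a few long-lived sessions
durations = [2, 5, 1, 600, 45, 3, 0.5, 120]
print(sessions_synchronized(durations, delay=0))    # no delay: 8 sessions synced
print(sessions_synchronized(durations, delay=30))   # 30 s delay: only 3 synced
```

With short-lived traffic dominating, the delay cuts synchronization work sharply; the cost is that those short sessions are lost on failover.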

Using multiple FortiGate interfaces for session synchronization

Using the session-sync-dev option, you can select one or more FortiGate interfaces to use for synchronizing sessions as required for session pickup. Normally session synchronization occurs over the HA heartbeat link. Using this HA option means only the selected interfaces are used for session synchronization and not the HA heartbeat link. If you select more than one interface, session synchronization traffic is load balanced among the selected interfaces.

Moving session synchronization from the HA heartbeat interface reduces the bandwidth required for HA heartbeat traffic and may improve the efficiency and performance of the cluster, especially if the cluster is synchronizing a large number of sessions. Load balancing session synchronization among multiple interfaces can further improve performance and efficiency if the cluster is synchronizing a large number of sessions.

Use the following command to perform cluster session synchronization using the port10 and port12 interfaces:

config system ha
  set session-sync-dev port10 port12
end

Session synchronization packets use Ethertype 0x8892. The interfaces to use for session synchronization must be connected together either directly using the appropriate cable (possible if there are only two units in the cluster) or using switches. If one of the interfaces becomes disconnected the cluster uses the remaining interfaces for session synchronization. If all of the session synchronization interfaces become disconnected, session synchronization reverts back to using the HA heartbeat link. All session synchronization traffic is between the primary unit and each subordinate unit.
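The fallback behaviour described above can be sketched as selection logic (an illustrative model, not FortiOS source; interface names are the ones from the example):

```python
def sync_links(configured, link_up, heartbeat="ha-heartbeat"):
    """Return the interfaces session sync traffic would use: the configured
    session-sync interfaces that are up, else the HA heartbeat link."""
    alive = [i for i in configured if link_up.get(i)]
    return alive if alive else [heartbeat]

# port12 disconnected: traffic moves to the remaining sync interface
print(sync_links(["port10", "port12"], {"port10": True, "port12": False}))
# all sync interfaces down: revert to the HA heartbeat link
print(sync_links(["port10", "port12"], {}))
```

When more than one returned interface is alive, sync traffic is load balanced among them, as described above.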

Since large amounts of session synchronization traffic can increase network congestion, it is recommended that you keep this traffic off of your network by using dedicated connections for it.

NAT session synchronization

NAT sessions are not synchronized by default. You can enable NAT session synchronization by entering the following command:

config system ha
  set session-pickup enable
  set session-pickup-nat enable
end

Note that, after a failover with this configuration, all sessions that include the IP addresses of interfaces on the failed FortiGate unit will have nowhere to go since the IP addresses of the failed FortiGate unit will no longer be on the network. If you want NAT sessions to resume after a failover you should not configure NAT to use the destination interface IP address, since the FGSP FortiGate units have different IP addresses. To avoid this issue, you should use IP pools with the type set to overload (which is the default IP pool type), as shown in the example below:

config firewall ippool
  edit FGSP-pool
    set type overload
    set startip 172.20.120.10
    set endip 172.20.120.20
end

In NAT/Route mode, only sessions for route mode security policies are synchronized. FGSP HA is also available for FortiGate units or virtual domains operating in Transparent mode. Only sessions for normal Transparent mode policies are synchronized.

IPsec tunnel synchronization

When you use the config system cluster-sync command to enable FGSP, IPsec keys and other runtime data are synchronized between cluster units. This means that if one of the cluster units goes down the cluster unit that is still operating can quickly get IPsec tunnels re-established without re-negotiating them. However, after a failover, all existing tunnel sessions on the failed FortiGate have to be restarted on the still operating FortiGate.

IPsec tunnel sync only supports dialup IPsec. The interfaces on both FortiGates that are tunnel endpoints must have the same IP addresses and external routers must be configured to load balance IPsec tunnel sessions to the FortiGates in the cluster.

Standalone configuration synchronization uses a process very similar to FGCP's. There is a similar relationship between the two FortiGates, but only with regard to configuration synchronization, not session information. The primary unit is selected using priority and override, and the heartbeat is used to check the primary unit's health. Once heartbeat loss is detected, a new primary unit is selected.
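A minimal sketch of enabling standalone configuration synchronization, assuming the standalone-config-sync option under config system ha (the priority value is illustrative):

```
config system ha
    set standalone-config-sync enable
    set priority 200
    set override enable
end
```

Run a similar command on both FortiGates; the unit with the higher priority (with override enabled) is selected as the configuration-sync primary.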

Automatic session synchronization after peer reboot

The following command allows you to configure automatic session synchronization after a peer FGSP unit has rebooted. FGSP sends out heartbeat signals (every 1 – 10 seconds, as configured below) so that each FortiGate can detect whether its peer is up; once the rebooted peer comes back online, sessions are synchronized to it again.

To configure automatic session synchronization:

config system session-sync
    edit 1
        set down-intfs-before-sess-sync <interfaces>
        set hb-interval <integer>
        set hb-lost-threshold <integer>
    next
end

where:

  • down-intfs-before-sess-sync <interfaces>: the list of interfaces to bring down before session synchronization is complete.
  • hb-interval <integer>: the heartbeat interval (1 – 10 seconds).
  • hb-lost-threshold <integer>: the lost heartbeat threshold (1 – 10).

 

FGCP – FortiGate Clustering Protocol


In an active-passive HA configuration, the FortiGate Clustering Protocol (FGCP) provides failover protection, whereby the cluster can provide FortiGate services even when one of the cluster units loses connection. FGCP is also a Layer 2 heartbeat that specifies how FortiGate units communicate in an HA cluster and keeps the cluster operating.

The FortiGate’s HA Heartbeat listens on ports TCP/703, TCP/23, or ETH Layer 2/8890.

Virtual MAC addresses

FGCP assigns virtual MAC addresses to each primary unit interface in an HA cluster. Virtual MAC addresses are in place so that, if a failover occurs, the new primary unit interfaces will have the same MAC addresses as the failed primary unit interfaces. If the MAC addresses were to change after a failover, the network would take longer to recover because all attached network devices would have to learn the new MAC addresses before they could communicate with the cluster.

If a cluster is operating in Transparent mode, FGCP assigns a virtual MAC address for the primary unit management IP address. Since you can connect to the management IP address from any interface, all of the FortiGate interfaces appear to have the same virtual MAC address.

When a cluster starts up, after a failover, the primary unit sends gratuitous ARP packets to update the switches connected to the cluster interfaces with the virtual MAC address. The switches update their MAC forwarding tables with this MAC address. As a result, the switches direct all network traffic to the primary unit. Depending on the cluster configuration, the primary unit either processes this network traffic itself or load balances the network traffic among all of the cluster units.

You cannot disable sending gratuitous ARP packets, but you can change the number of packets that are sent (1 – 60 ARP packets) by entering the following command:

config system ha
    set arps <integer>
end

You can change the time between ARP packets (1-20 seconds) by entering the following command:

config system ha
    set arps-interval <integer>
end

Assigning virtual MAC addresses

Virtual MAC addresses are determined based on the following formula:

00-09-0f-09-<group-id_hex>-<vcluster_integer><idx>

where:

  • <group-id_hex>: The HA group ID for the cluster converted to hexadecimal. The table below lists some example virtual MAC addresses set for each group ID:
Integer Group ID    Hexadecimal Group ID
0                   00
1                   01
2                   02
3                   03
10                  0a
11                  0b
63                  3f
255                 ff
  • <vcluster_integer>: This value is 0 for virtual cluster 1 and 2 for virtual cluster 2. If virtual domains are not enabled, HA sets the virtual cluster to 1 and by default all interfaces are in the root virtual domain. Including virtual cluster and virtual domain factors in the virtual MAC address formula means that the same formula can be used whether or not virtual domains and virtual clustering is enabled.
  • <idx>: The index number of the interface. In NAT/Route mode, interfaces are numbered from 0 upward. The interfaces are listed in alphabetical order by name on the web-based manager and CLI; the interface at the top of the list has an index of 0, the second has an index of 1, and so on. In Transparent mode, the index number for the management IP address is 0.
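As an illustration of the formula, assume an HA group ID of 1, virtual domains disabled (so virtual cluster 1, value 0), and the first two interfaces in the alphabetical list:

```
group ID 1          →  <group-id_hex>     = 01
virtual cluster 1   →  <vcluster_integer> = 0
first interface     →  <idx> = 0  →  virtual MAC 00-09-0f-09-01-00
second interface    →  <idx> = 1  →  virtual MAC 00-09-0f-09-01-01
```

The group ID and interface order are assumptions for the example; your cluster's values will differ.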

Every FortiGate unit physical interface has two MAC addresses: the current hardware address and the permanent hardware address. The permanent hardware address cannot be changed, as it is the actual MAC address of the interface hardware. The current hardware address can be changed, but only when a FortiGate unit is not operating in HA. For an operating cluster, the current hardware address of each cluster unit interface is changed to the HA virtual MAC address by the FGCP.

You cannot change an interface MAC address, and you cannot view MAC addresses, using the config system interface CLI command.

You can use the get hardware nic <interface_name_str> (or diagnose hardware deviceinfo nic <interface_str>) command to display both MAC addresses for any FortiGate interface. This command displays hardware information for the specified interface, including the current hardware address (as Current_HWaddr) and the permanent hardware address (as Permanent_HWaddr). For some interfaces, the current hardware address is displayed as MAC.
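For example, assuming an interface named port1 (substitute one of your own interface names):

```
get hardware nic port1
```

The output includes the Current_HWaddr and Permanent_HWaddr lines described above.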

Failover protection

FGCP supports three kinds of failover protection:

  1. Device failover: Automatically replaces a failed device and restarts traffic flow with minimal impact on the network. All subordinate units in an active-passive HA cluster are constantly waiting to negotiate to become primary units. Only the heartbeat packets sent by the primary unit keep the subordinate units from becoming primary units. Each received heartbeat packet resets negotiation timers in the subordinate units. If this timer is allowed to run out because the subordinate units do not receive heartbeat packets from the primary unit, the subordinate units assume that the primary unit has failed, and negotiate to become primary units themselves. The default time interval between HA heartbeats is 200 ms.
  2. Link failover: Maintains traffic flow if a link fails. In this case, the primary unit does not stop operating, and therefore participates in the negotiation of selecting a new primary unit. The old primary unit then joins the cluster as a subordinate unit. Furthermore, any subordinate units with a link failure are unlikely to become the primary unit in future negotiations.
  3. Session failover: With session failover (also called session pickup) enabled, the primary unit informs the subordinate units of changes to the primary unit connection and state tables, keeping the subordinate units up-to-date with the traffic currently being processed by the cluster. This helps new primary units resume communication sessions with minimal loss of data, avoiding the need to restart active sessions.
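Session pickup is disabled by default. A minimal sketch of enabling it, using the session-pickup option under config system ha shown earlier:

```
config system ha
    set session-pickup enable
end
```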

Synchronization of configurations

The FGCP uses a combination of incremental and periodic synchronization to make sure that the configuration of all cluster units is synchronized to that of the primary unit. However, there are certain settings that are not synchronized between cluster units:

  • HA override
  • HA device priority
  • The virtual cluster priority
  • The FortiGate unit host name
  • The HA priority setting for a ping server (or dead gateway detection) configuration
  • The system interface settings of the HA reserved management interface
  • The HA default route for the reserved management interface, set using the ha-mgmt-interface-gateway option of the config system ha command
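As an example of the reserved management interface settings mentioned above (the interface name and gateway address are illustrative):

```
config system ha
    set ha-mgmt-status enable
    set ha-mgmt-interface mgmt
    set ha-mgmt-interface-gateway 192.168.1.254
end
```

Because these settings are not synchronized, each cluster unit can be given its own management IP address and default route.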

You can disable configuration synchronization by entering the following command:

config system ha
    set sync-config disable
end

The command execute ha synchronize can be used to perform a manual synchronization.

The FGCP heartbeat operates on TCP port 703 with an independent IP address not assigned to any FortiGate interface. You can create an FGCP cluster of up to four FortiGate units. Below is an example of FGCP used to create an HA cluster installed between an internal network and the Internet.

FGCP HA provides a solution for two key requirements of critical enterprise networking: enhanced reliability and increased performance, through device, link, and remote link failover protection. Extended FGCP features include full mesh HA and virtual clustering. You can also fine tune the performance of the FGCP to change how a cluster forms and shares information among cluster units and how the cluster responds to failures.

Before configuring an FGCP HA cluster, make sure your FortiGate interfaces are configured with static IP addresses. If any interface gets its address using DHCP or PPPoE you should temporarily switch it to a static address and enable DHCP or PPPoE after the cluster has been established.

Heartbeat traffic, such as FGCP, uses multicast on port number 6065 and uses linklocal IPv4 addresses in the 169.254.0.x range. HA heartbeat packets have an Ethertype field value of 0x8890.

Synchronization traffic, such as FGSP, uses multicast on port number 6066 and the IP address 239.0.0.2. HA sessions that synchronize the cluster have an Ethertype field value of 0x8893.

The HA IP addresses are hard-coded and cannot be configured.

How to set up FGCP clustering

This example describes how to enhance the reliability of a network protected by a FortiGate unit by adding a second FortiGate unit to create a FortiGate Clustering Protocol (FGCP) HA cluster. The FortiGate already on the network will be configured to become the primary unit by increasing its device priority and enabling override. The new FortiGate will be prepared by setting it to factory defaults to wipe any configuration changes. Then it will be licensed, configured for HA, and then connected to the FortiGate already on the network. The new FortiGate becomes the backup unit and its configuration is overwritten by the primary unit.

If you have not already done so, register the primary FortiGate and apply licenses to it before setting up the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary FortiGate before forming the cluster.

The FortiGates should be running the same FortiOS firmware version, and their interfaces should not be configured to get their addresses from DHCP or PPPoE.

Configuring the primary FortiGate

  1. Connect to the primary FortiGate and go to Dashboard > System Information. Change the unit’s Host Name to identify it as the primary FortiGate. You can also enter this CLI command:

config system global
    set hostname Primary_FortiGate
end

  2. Set the HA mode to active-passive. Enter the following CLI command to set the HA mode to active-passive, set a group name and password, increase the device priority to a higher value (for example, 250), and enable override:

config system ha
    set mode a-p
    set group-name My-HA-Cluster
    set password
    set priority 250
    set override enable
    set hbdev ha1 50 ha2 50
end

This command also selects ha1 and ha2 to be the heartbeat interfaces, with their priorities set to 50.

Enabling override and increasing the priority ensures that this FortiGate becomes the primary unit.

Configuring the backup FortiGate

  1. Enter the CLI command below to reset the new FortiGate to factory default settings (skip this step if the FortiGate is fresh from the factory); resetting to factory defaults reduces the chance of synchronization problems: execute factoryreset
  2. Make sure to change the firmware running on the new FortiGate to the same version running on the primary unit, register, and apply licenses to it before adding it to the cluster.
  3. Then go to Dashboard > System Information. Change the unit’s Host Name to identify it as the backup FortiGate.

You can also enter this CLI command:

config system global
    set hostname Backup_FortiGate
end

  4. Duplicate the primary unit’s HA settings, except set the backup device’s priority to a lower value and do not enable override.
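For example, if the primary unit was configured as shown earlier, the backup unit's HA settings might look like the following. The priority of 50 is an illustrative lower value, and set password is shown without a value as in the primary unit's example; supply the same group name and password as the primary:

```
config system ha
    set mode a-p
    set group-name My-HA-Cluster
    set password
    set priority 50
    set hbdev ha1 50 ha2 50
end
```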

Connecting the cluster

Connect the HA cluster as shown in the initial diagram above. Making these connections will disrupt network traffic as you disconnect and re-connect cables.

When connected, the primary and backup FortiGates find each other and negotiate to form an HA cluster. The primary unit synchronizes its configuration with the backup FortiGate. Forming the cluster happens automatically with minimal or no disruption to network traffic.

Heartbeat packet Ethertypes

Normal IP packets are 802.3 packets that have an Ethernet type (Ethertype) field value of 0x0800. Ethertype values other than 0x0800 are understood as level 2 frames rather than IP packets.

By default, HA heartbeat packets use the following Ethertypes:

  • HA heartbeat packets for NAT/Route mode clusters use Ethertype 0x8890. These packets are used by cluster units to find other cluster units and to verify the status of other cluster units while the cluster is operating. You can change the Ethertype of these packets using the ha-eth-type option of the config system ha command.
  • HA heartbeat packets for Transparent mode clusters use Ethertype 0x8891. These packets are used by cluster units to find other cluster units and to verify the status of other cluster units while the cluster is operating. You can change the Ethertype of these packets using the hc-eth-type option of the config system ha command.
  • HA telnet sessions between cluster units over HA heartbeat links use Ethertype 0x8893. The telnet sessions are used to synchronize the cluster configurations. Telnet sessions are also used when an administrator uses the execute ha manage command to connect from one cluster unit CLI to another. You can change the Ethertype of these packets using the l2ep-eth-type option of the config system ha command.

Because heartbeat packets are recognized as level 2 frames, the switches and routers on your heartbeat network that connect to heartbeat interfaces must be configured to allow them. If level 2 frames are dropped by these network devices, heartbeat traffic will not be allowed between the cluster units.

Some third-party network equipment may use packets with these Ethertypes for other purposes. For example, Cisco N5K/Nexus switches use Ethertype 0x8890 for some functions. When one of these switches receives Ethertype 0x8890 packets from an attached cluster unit, the switch generates CRC errors and the packets are not forwarded. As a result, FortiGate units connected with these switches cannot form a cluster.

In some cases, if the heartbeat interfaces are connected and configured so regular traffic flows but heartbeat traffic is not forwarded, you can change the configuration of the switch that connects the HA heartbeat interfaces to allow level 2 frames with Ethertypes 0x8890, 0x8891, and 0x8893 to pass.

Alternatively, you can use the following CLI options to change the Ethertypes of the HA heartbeat packets:

config system ha
    set ha-eth-type <ha_ethertype_4-digit_hex>
    set hc-eth-type <hc_ethertype_4-digit_hex>
    set l2ep-eth-type <l2ep_ethertype_4-digit_hex>
end

For example, use the following command to change the Ethertype of the HA heartbeat packets from 0x8890 to 0x8895 and to change the Ethertype of HA telnet session packets from 0x8893 to 0x889f:

config system ha
    set ha-eth-type 8895
    set l2ep-eth-type 889f
end

Enabling or Disabling HA heartbeat encryption and authentication

You can enable HA heartbeat encryption and authentication to encrypt and authenticate HA heartbeat packets. HA heartbeat packets should be encrypted and authenticated if the cluster interfaces that send HA heartbeat packets are also connected to your networks.

If HA heartbeat packets are not encrypted the cluster password and changes to the cluster configuration could be exposed and an attacker may be able to sniff HA packets to get cluster information. Enabling HA heartbeat message authentication prevents an attacker from creating false HA heartbeat messages. False HA heartbeat messages could affect the stability of the cluster.

HA heartbeat encryption and authentication are disabled by default. Enabling HA encryption and authentication could reduce cluster performance. Use the following CLI command to enable HA heartbeat encryption and authentication.

config system ha
    set authentication enable
    set encryption enable
end

HA heartbeat authentication and encryption use AES-128 for encryption and SHA-1 for authentication.

 

3rd-Party Servers Open Ports


Incoming Ports

Purpose                                             Protocol/Port
FortiAnalyzer LDAP & PKI Authentication TCP/389, UDP/389
Log & Report TCP/21, TCP/22
Configuration Backups TCP/22
Alert Emails TCP/25
DNS UDP/53
NTP UDP/123
SNMP Traps UDP/162
Report Query TCP/389
Syslog & OFTP TCP or UDP/514
RADIUS UDP/1812

FortiAuthenticator SMTP, Alerts, Virus Sample TCP/25
DNS UDP/53
Windows AD TCP/88
NTP UDP/123
LDAP TCP or UDP/389
Domain Control TCP/445
LDAPS TCP/636
FSSO & Tiers TCP/8002, TCP/8003
FortiManager DNS UDP/53
NTP UDP/123
SNMP Traps UDP/162
Proxied HTTPS Traffic TCP/443
RADIUS UDP/1812
Outgoing Ports

Purpose                                             Protocol/Port
FortiAuthenticator FSSO & Tiers TCP/8002, TCP/8003
FortiGate FSSO TCP/8000

FortiSandbox Open Ports


Incoming Ports

Purpose                                             Protocol/Port
FortiGate OFTP TCP/514
Others SSH CLI Management TCP/22
Telnet CLI Management TCP/23
Web Admin TCP/80, TCP/443
OFTP Communication with FortiGate & FortiMail TCP/514
Third-party proxy server for ICAP servers ICAP: TCP/1344

ICAPS: TCP/11344

Outgoing Ports

Purpose                                             Protocol/Port
FortiGuard (FortiSandbox will use a random port picked by the kernel):
FortiGuard Distribution Servers TCP/8890
FortiGuard Web Filtering Servers UDP/53, UDP/8888

FortiSandbox Community Cloud (FortiSandbox will use a random port picked by the kernel):
Upload detected malware information TCP/443, UDP/53

Services and port numbers required for FortiSandbox

The tables above show all the services required for FortiSandbox to function correctly. You can use the FortiSandbox diagnostic command test-network to verify that all the services are allowed by the upstream network. If the result is Passed, there is no issue. If there is an issue with a specific service, it will be shown in the command output, which indicates which port needs to be opened.

This command checks:

  • VM Internet access
  • Internet connection
  • System DNS resolve speed
  • VM DNS resolve speed
  • Ping speed
  • Wget speed
  • Web Filtering service
  • FortiSandbox Community Cloud service

FortiManager Open Ports


Incoming Ports

Purpose                                             Protocol/Port
FortiClient FortiGuard Queries UDP/53, UDP/8888
FortiGate Management TCP/541
IPv6 TCP/542
Log & Report TCP or UDP/514
Secure SNMP UDP/161, UDP/162
FortiGuard Queries TCP/8890, UDP/53
FortiGuard AV/IPS UDP/9443
FortiMail Reg, Config Backup, Config/Firmware Pull TCP/443
SNMP Traps UDP/162
FortiManager FortiClient Manager TCP/6028

Others SSH CLI Management TCP/22
Telnet CLI Management TCP/23
SNMP Traps UDP/162
Web Admin TCP/80, TCP/443
Outgoing Ports

Purpose                                             Protocol/Port
FortiAnalyzer Syslog & OFTP TCP/514, UDP/514
Registration TCP/541
FortiGate AV/IPS Push UDP/9443
SSH CLI Management TCP/22
Management TCP/541
SNMP Poll UDP/161, UDP/162
FortiGuard Queries TCP/443
FortiGuard AV/IPS Updates, URL/AS Update, Firmware, SMS, FTM, Licensing, Policy Override Authentication TCP/443
Registration TCP/80
FortiMail Config/Firmware Push TCP/22
SNMP Poll UDP/161
FortiManager FortiClient Manager TCP/6028
3rd-Party Servers DNS UDP/53
NTP UDP/123
SNMP Traps UDP/162
Proxied HTTPS Traffic TCP/443
RADIUS UDP/1812

 


FortiMail Open Ports


Incoming Ports

Purpose                                             Protocol/Port
Email Client Quarantine View/Retrieve TCP/80 or TCP/443 or TCP/110
SMTP or SMTPS TCP/25 or TCP/465
POP3 or POP3S TCP/110 or TCP/995 (server mode only)
IMAP or IMAPS TCP/143 or TCP/993 (server mode only)
FortiManager Config/Firmware Push TCP/22
SNMP Poll UDP/161
FortiGuard AV Push UDP/9443
External Email Server SMTP or SMTPS TCP/25 or 465
Protected Email Server SMTP or SMTPS TCP/25 or 465
Outgoing Ports

Purpose                                             Protocol/Port
FortiAnalyzer Syslog UDP/514
FortiManager Reg, Config Backup, Config/Firmware Pull TCP/443
SNMP Traps UDP/162
FortiGuard AS Rating UDP/53
AV/AS Update TCP/443
External Email Server SMTP or SMTPS TCP/25 or TCP/465
Protected Email Server SMTP or SMTPS TCP/25 or TCP/465
POP3 Auth TCP/110
IMAP Auth TCP/143

Others Dyn DNS TCP/80 etc.
DNS, RBL UDP/53
NTP UDP/123
Alert Email TCP/25
LDAP or LDAPS TCP/389 or TCP/636
RADIUS Auth TCP/1812
NAS TCP/21, TCP/22, TCP/2049

Note that FortiMail uses the following URLs to access the FortiGuard Distribution Network (FDN):

  • fortiguard.net
  • service.fortiguard.net
  • support.fortinet.com

Furthermore, FortiMail performs these queries and updates listed below using the following ports and protocols:

  • FortiGuard Antispam rating queries: UDP 53, 8888, 8889
  • FortiGuard AntiVirus Push updates: UDP 9443
  • FortiGuard Antispam or AntiVirus updates: TCP 443