FortiOS 5.4 Handbook

The complete handbook for FortiOS 5.4

Deployment example – KVM


Once you have downloaded the FORTINET.out.kvm.zip file and extracted virtual hard drive image file fortios.qcow2, you can create the virtual machine in your KVM environment.

The following topics are included in this section:

  • Create the FortiGate VM virtual machine
  • Configure FortiGate VM hardware settings
  • Start the FortiGate VM

 

Create the FortiGate VM virtual machine

To create the FortiGate VM virtual machine:

1. Launch Virtual Machine Manager (virt-manager) on your KVM host server.

The Virtual Machine Manager home page opens.

2. In the toolbar, select Create a new virtual machine.

3. Enter a Name for the VM, FGT-VM for example.

4. Ensure that Connection is localhost. (This is the default.)

5. Select Import existing disk image.

6. Select Forward.

7. In OS Type select Linux.

8. In Version, select a Generic version with virtio.

9. Select Browse.

10. If you copied the fortios.qcow2 file to /var/lib/libvirt/images, it will be visible on the right. If you saved it somewhere else on your server, select Browse Local and find it.

11. Select Choose Volume.

12. Select Forward.

13. Specify the amount of memory and number of CPUs to allocate to this virtual machine. The amounts must not exceed your license limits. See FortiGate VM Overview on page 2677.

14. Select Forward.

15. Expand Advanced options. A new virtual machine includes one network adapter by default. Select a network adapter on the host computer. Optionally, set a specific MAC address for the virtual network interface. Set Virt Type to virtio and Architecture to qcow2.

16. Select Finish.

Deployment example – MS Hyper-V


Once you have downloaded the FGT_VMxx_HV-v5-build0xxx-FORTINET.out.hyperv.zip file and extracted the package contents to a folder on your Microsoft server, you can deploy the VHD package to your Microsoft Hyper-V environment.

The following topics are included in this section:

  • Create the FortiGate VM virtual machine
  • Configure FortiGate VM hardware settings
  • High Availability Hyper-V configuration
  • Start the FortiGate VM

 

Create the FortiGate VM virtual machine

To create the FortiGate VM virtual machine:

1. Launch the Hyper-V Manager in your Microsoft server.

The Hyper-V Manager home page opens.

2. Select the server in the right-tree menu. The server details page is displayed.

3. Right-click the server and select New > Virtual Machine from the menu. Alternatively, in the Action menu, select New > Virtual Machine.

The New Virtual Machine Wizard opens.

4. Select Next to create a virtual machine with a custom configuration.

The Specify Name and Location page is displayed.

5. Enter a name for this virtual machine. The name is displayed in the Hyper-V Manager.

Select Next to continue. The Assign Memory page is displayed.

6. Specify the amount of memory to allocate to this virtual machine. The default memory for FortiGate VM is 1GB (1024MB).

Select Next to continue. The Configure Networking page is displayed.

7. Each new virtual machine includes a network adapter. You can configure the network adapter to use a virtual switch, or it can remain disconnected. FortiGate VM requires four network adapters. You must configure network adapters in the Settings page.

Select Next to continue. The Connect Virtual Hard Disk page is displayed.

8. Select to use an existing virtual hard disk and browse for the vhd file that you downloaded from the Fortinet Customer Service & Support portal.

Select Next to continue. The Summary page is displayed.

9. To create the virtual machine and close the wizard, select Finish.

Deployment example – VMware


Once you have downloaded the FGT_VMxx-v5-build0xxx-FORTINET.out.ovf.zip file from http://support.fortinet.com and extracted the package contents to a folder on your local computer, you can use the vSphere client to create the virtual machine from the deployment package OVF template.

The following topics are included in this section:

  • Open the FortiGate VM OVF file with the vSphere client
  • Configure FortiGate VM hardware settings
  • Transparent mode VMware configuration
  • High Availability VMware configuration
  • Power on your FortiGate VM

Open the FortiGate VM OVF file with the vSphere client

 

To deploy the FortiGate VM OVF template:

1. Launch the VMware vSphere client, enter the IP address or host name of your server, enter your user name and password and select Login.

The vSphere client home page opens.

2. Select File > Deploy OVF Template to launch the OVF Template wizard.

The OVF Template Source page opens.

3. Select the source location of the OVF file. Select Browse and locate the OVF file on your computer. Select Next to continue.

 

The OVF Template Details page opens.

4. Verify the OVF template details. This page details the product name, download size, size on disk, and description.

Select Next to continue.

 

The OVF Template End User License Agreement page opens.

5. Read the end user license agreement for FortiGate VM. Select Accept and then select Next to continue.

 

The OVF Template Name and Location page opens.

6. Enter a name for this OVF template. The name can contain up to 80 characters and it must be unique within the inventory folder. Select Next to continue.

 

The OVF Template Disk Format page opens.

7. Select one of the following:

  • Thick Provision Lazy Zeroed: Allocates the disk space statically (no other volumes can take the space), but does not write zeros to the blocks until the first write takes place to that block during runtime (which includes a full disk format).
  • Thick Provision Eager Zeroed: Allocates the disk space statically (no other volumes can take the space), and writes zeros to all the blocks.
  • Thin Provision: Allocates the disk space only when a write occurs to a block, but the total volume size is reported by VMFS to the OS. Other volumes can take the remaining space. This allows you to float space between your servers and expand your storage when your size monitoring indicates there is a problem. Note that once a thin-provisioned block is allocated, it remains on the volume even if the data in it has been deleted.

8. Select Next to continue.

 

The OVF Template Network Mapping page opens.

9. Map the networks used in this OVF template to networks in your inventory. Network 1 maps to port1 of the FortiGate VM. You must set the destination network for this entry to access the device console. Select Next to continue.

 

The OVF Template Ready to Complete page opens.

10. Review the template configuration. Make sure that Power on after deployment is not enabled. You might need to configure the FortiGate VM hardware settings prior to powering on the FortiGate VM.

11. Select Finish to deploy the OVF template. You will receive a Deployment Completed Successfully dialog box once the FortiGate VM OVF template wizard has finished.

 

Configure FortiGate VM hardware settings

Before powering on your FortiGate VM you must configure the virtual memory, virtual CPU, and virtual disk configuration to match your FortiGate VM license.

 

Transparent mode VMware configuration

If you want to use your FortiGate-VM in transparent mode, your VMware server’s virtual switches must operate in promiscuous mode. This permits these interfaces to receive traffic that will pass through the FortiGate unit but was not addressed to the FortiGate unit.

 

In VMware, promiscuous mode must be explicitly enabled:

1. In the vSphere client, select your VMware server in the left pane and then select the Configuration tab in the right pane.

2. In Hardware, select Networking.

3. Select Properties of vSwitch0.

4. In the Properties window left pane, select vSwitch and then select Edit.

5. Select the Security tab, set Promiscuous Mode to Accept, then select OK.

6. Select Close.

7. Repeat steps 3 through 6 for other vSwitches that your transparent mode FortiGate-VM uses.

 

High Availability VMware configuration

If you want to combine two or more FortiGate-VM instances into a FortiGate Clustering Protocol (FGCP) High Availability (HA) cluster, the VMware server's virtual switches used to connect the heartbeat interfaces must operate in promiscuous mode. This permits HA heartbeat communication between the heartbeat interfaces. HA heartbeat packets are non-TCP packets that use Ethertype values 0x8890, 0x8891, and 0x8893. The FGCP uses link-local IPv4 addresses in the 169.254.0.x range for HA heartbeat interface IP addresses.
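When filtering a packet capture for heartbeat traffic, the values above can be checked programmatically. The following is an illustrative sketch only (not a Fortinet tool); it assumes the FGCP heartbeat Ethertypes 0x8890, 0x8891, and 0x8893 and the 169.254.0.x link-local range described in this section:

```python
import ipaddress

# Ethertype values assumed for FGCP HA heartbeat frames.
HA_HEARTBEAT_ETHERTYPES = {0x8890, 0x8891, 0x8893}

# Link-local range the FGCP uses for heartbeat interface addresses.
HA_HEARTBEAT_NET = ipaddress.ip_network("169.254.0.0/24")

def is_ha_heartbeat_frame(ethertype: int) -> bool:
    """Return True if a frame's Ethertype marks it as FGCP heartbeat traffic."""
    return ethertype in HA_HEARTBEAT_ETHERTYPES

def is_heartbeat_address(ip: str) -> bool:
    """Return True if an IP falls in the FGCP heartbeat link-local range."""
    return ipaddress.ip_address(ip) in HA_HEARTBEAT_NET
```

Because heartbeat frames are identified by Ethertype rather than by IP protocol, a display filter on these values is often the quickest way to confirm that heartbeats are passing through a promiscuous-mode vSwitch.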

 

To enable promiscuous mode in VMware:

1. In the vSphere client, select your VMware server in the left pane and then select the Configuration tab in the right pane.

2. In Hardware, select Networking.

3. Select Properties of a virtual switch used to connect heartbeat interfaces.

4. In the Properties window left pane, select vSwitch and then select Edit.

5. Select the Security tab, set Promiscuous Mode to Accept, then select OK.

6. Select Close.

 

You must also set the virtual switches connected to other FortiGate interfaces to allow MAC address changes and to accept forged transmits. This is required because the FGCP sets virtual MAC addresses for all FortiGate interfaces and the same interfaces on the different VM instances in the cluster will have the same virtual MAC addresses.

To make the required changes in VMware:

1. In the vSphere client, select your VMware server in the left pane and then select the Configuration tab in the right pane.

2. In Hardware, select Networking.

3. Select Properties of a virtual switch used to connect FortiGate VM interfaces.

4. Set MAC Address Changes to Accept.

5. Set Forged Transmits to Accept.

 

Power on your FortiGate VM

You can now proceed to power on your FortiGate VM. There are several ways to do this:

  • Select the name of the FortiGate VM you deployed in the inventory list and select Power on the virtual machine in the Getting Started tab.
  • In the inventory list, right-click the name of the FortiGate VM you deployed, and select Power > Power On.
  • Select the name of the FortiGate VM you deployed in the inventory list. Click the Power On button on the toolbar.

Select the Console tab to view the console. To enter text, you must click in the console pane. The mouse is then captured and cannot leave the console screen. As the FortiGate console is text-only, no mouse pointer is visible. To release the mouse, press Ctrl-Alt.

Chapter 28 – VM Installation


This document describes how to deploy a FortiGate virtual appliance in several virtualization server environments. This includes how to configure the virtual hardware settings of the virtual appliance.

 

This document assumes:

  • You have already successfully installed the virtualization server on the physical machine.
  • You have installed appropriate VM management software on either the physical server or a computer to be used for VM management.

This document does not cover configuration and operation of the virtual appliance after it has been successfully installed and started. For these issues, see the FortiGate 5.2 Handbook.

 

This document includes the following sections:

  • FortiGate VM Overview
  • Deployment example – VMware
  • Deployment example – MS Hyper-V
  • Deployment example – KVM
  • Deployment example – OpenXen
  • Deployment example – Citrix XenServer

 


 

FortiGate VM Overview

The following topics are included in this section:

  • FortiGate VM models and licensing
  • Registering FortiGate VM with Customer Service & Support
  • Downloading the FortiGate VM deployment package
  • Deployment package contents
  • Deploying the FortiGate VM appliance

 

FortiGate VM models and licensing

Fortinet offers the FortiGate VM in five virtual appliance models determined by license. When configuring your FortiGate VM, be sure to configure hardware settings within the ranges outlined below. Contact your Fortinet Authorized Reseller for more information.

 

FortiGate VM model information

Technical Specification                    FG-VM00    FG-VM01    FG-VM02    FG-VM04    FG-VM08

Virtual CPUs (min / max)                     1 / 1      1 / 1      1 / 2      1 / 4      1 / 8

Virtual Network Interfaces (min / max)                     2 / 10 (all models)

Virtual Memory (min / max)                 1GB/1GB    1GB/2GB    1GB/4GB    1GB/6GB    1GB/12GB

Virtual Storage (min / max)                                30GB / 2TB (all models)

Managed Wireless APs
(tunnel mode / global)                     32 / 32    32 / 64   256 / 512  256 / 512  1024 / 4096

Virtual Domains (default / max)              1 / 1    10 / 10    10 / 25    10 / 50    10 / 250
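When planning virtual hardware settings, the per-model limits can be checked with a short script. This is an illustrative sketch, not a Fortinet tool; the dictionary simply transcribes the vCPU and memory maximums from the table above (minimums are 1 vCPU / 1GB for every model):

```python
# License limits per FortiGate VM model: (max vCPUs, max memory in GB).
VM_LIMITS = {
    "FG-VM00": (1, 1),
    "FG-VM01": (1, 2),
    "FG-VM02": (2, 4),
    "FG-VM04": (4, 6),
    "FG-VM08": (8, 12),
}

def within_license(model: str, vcpus: int, mem_gb: int) -> bool:
    """Check a proposed VM hardware configuration against its license limits."""
    vcpu_max, mem_max = VM_LIMITS[model]
    return 1 <= vcpus <= vcpu_max and 1 <= mem_gb <= mem_max
```

For example, 2 vCPUs with 4GB of memory fits an FG-VM02 license, but 4 vCPUs does not.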

 

After placing an order for FortiGate VM, a license registration code is sent to the email address used on the order form. Use the registration number provided to register the FortiGate VM with Customer Service & Support and then download the license file. Once the license file is uploaded to the FortiGate VM and validated, your FortiGate VM appliance is fully functional.

 

FortiGate VM evaluation license

FortiGate VM includes a limited embedded 15-day trial license that supports:

  • 1 CPU maximum
  • 1024 MB memory maximum
  • low encryption only (no HTTPS administrative access)
  • all features except FortiGuard updates

Firmware upgrades are not supported under the trial license; attempting an upgrade will lock the Web-based Manager until a license is uploaded. Technical support is not included. The trial period begins the first time you start FortiGate VM. After the trial license expires, functionality is disabled until you upload a license file.

 

Registering FortiGate VM with Customer Service & Support

To obtain the FortiGate VM license file you must first register your FortiGate VM with Customer Service & Support.

 

To register your FortiGate VM:

1. Log in to the Customer Service & Support portal using an existing support account or select Sign Up to create a new account.

2. In the main page, under Asset, select Register/Renew.

 

The Registration page opens.

3. Enter the registration code that was emailed to you and select Register. A registration form will display.

4. Complete the registration form. A registration acknowledgement page will appear.

5. Select the License File Download link.

6. You will be prompted to save the license file (.lic) to your local computer. See “Upload the license file” for instructions on uploading the license file to your FortiGate VM via the Web-based Manager.

 

 

Downloading the FortiGate VM deployment package

FortiGate VM deployment packages are included with FortiGate firmware images on the Customer Service & Support site. First, see the following table to determine the appropriate VM deployment package for your VM platform.

 

Selecting the correct FortiGate VM deployment package for your VM platform

VM Platform                                      FortiGate VM Deployment File

Citrix XenServer v5.6sp2, 6.0 and later          FGT_VM64-v500-buildnnnn-FORTINET.out.CitrixXen.zip

OpenXen v3.4.3, 4.1                              FGT_VM64-v500-buildnnnn-FORTINET.out.OpenXen.zip

Microsoft Hyper-V Server 2008R2 and 2012         FGT_VM64-v500-buildnnnn-FORTINET.out.hyperv.zip

KVM (qemu 0.12.1)                                FGT_VM64-v500-buildnnnn-FORTINET.out.kvm.zip

VMware ESX 4.0, 4.1                              FGT_VM32-v500-buildnnnn-FORTINET.out.ovf.zip (32-bit)
ESXi 4.0/4.1/5.0/5.1/5.5                         FGT_VM64-v500-buildnnnn-FORTINET.out.ovf.zip (64-bit)
 

For more information see the FortiGate product datasheet available on the Fortinet web site, http://www.fortinet.com/products/fortigate/virtualappliances.html.

The firmware images FTP directory is organized by firmware version, major release, and patch release. The firmware images in the directories follow a specific naming convention and each firmware image is specific to the device model. For example, the FGT_VM32-v500-build0151-FORTINET.out.ovf.zip image found in the v5.0 Patch Release 2 directory is specific to the FortiGate VM 32-bit environment.
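The naming convention above can be captured with a small parser. This is an illustrative sketch (not a Fortinet tool); the pattern mirrors the example file names in this section:

```python
import re

# Pattern for FortiGate VM deployment package names, e.g.
# FGT_VM32-v500-build0151-FORTINET.out.ovf.zip
PKG_RE = re.compile(
    r"FGT_VM(?P<arch>32|64)"                   # 32- or 64-bit image
    r"-v(?P<version>\d+)"                      # firmware major version, e.g. 500
    r"-build(?P<build>\d+)"                    # patch-release build number
    r"-FORTINET\.out\.(?P<platform>\w+)\.zip"  # VM platform suffix
)

def parse_package_name(name: str) -> dict:
    """Split a deployment package file name into its components."""
    m = PKG_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"not a FortiGate VM package name: {name}")
    return m.groupdict()
```

Parsing the example above yields architecture "32", version "500", build "0151", and platform "ovf".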

You can also download the FortiOS Release Notes, FORTINET-FORTIGATE MIB file, FSSO images, and SSL VPN client in this directory. The Fortinet Core MIB file is located in the main FortiGate v5.00 directory.

 

To download the FortiGate VM deployment package:

1. In the main page of the Customer Service & Support site, select Download > Firmware Images.

 

The Firmware Images page opens.

2. In the Firmware Images page, select FortiGate.

3. Browse to the appropriate directory on the FTP site for the version that you would like to download.

4. Download the appropriate .zip file for your VM server platform.

 

You can also download the FortiGate Release Notes.

5. Extract the contents of the deployment package to a new file folder.

 

Deployment package contents

 

Citrix XenServer

The FORTINET.out.CitrixXen.zip file contains:

  • fortios.vhd: the FortiGate VM system hard disk in VHD format
  • fortios.xva: binary file containing virtual hardware configuration settings
  • in the ovf folder:
      • FortiGate-VM64.ovf: Open Virtualization Format (OVF) template file, containing virtual hardware settings for Xen
      • fortios.vmdk: the FortiGate VM system hard disk in VMDK format
      • datadrive.vmdk: the FortiGate VM log disk in VMDK format

The ovf folder and its contents are an alternative method of installation to the .xva and VHD disk image.

 

OpenXEN

The FORTINET.out.OpenXen.zip file contains only fortios.qcow2, the FortiGate VM system hard disk in qcow2 format. You will need to manually:

  • create a 30GB log disk
  • specify the virtual hardware settings

 

Microsoft Hyper-V

The FORTINET.out.hyperv.zip file contains:

  • in the Virtual Hard Disks folder:
      • fortios.vhd: the FortiGate VM system hard disk in VHD format
      • DATADRIVE.vhd: the FortiGate VM log disk in VHD format
  • in the Virtual Machines folder:
      • fortios.xml: XML file containing virtual hardware configuration settings for Hyper-V. This is compatible with Windows Server 2012.
  • Snapshots folder: optionally, Hyper-V stores snapshots of the FortiGate VM state here

 

KVM

The FORTINET.out.kvm.zip contains only fortios.qcow2, the FortiGate VM system hard disk in qcow2 format. You will need to manually:

  • create a 30GB log disk
  • specify the virtual hardware settings

 

VMware ESX/ESXi

The FORTINET.out.ovf.zip file contains:

  • fortios.vmdk: the FortiGate VM system hard disk in VMDK format
  • datadrive.vmdk: the FortiGate VM log disk in VMDK format
  • Open Virtualization Format (OVF) template files:
      • FortiGate-VM64.ovf: OVF template based on the Intel e1000 NIC driver
      • FortiGate-VM64.hw04.ovf: OVF template file for older (v3.5) VMware ESX servers
      • FortiGate-VMxx.hw07_vmxnet2.ovf: OVF template file for the VMware vmxnet2 driver
      • FortiGate-VMxx.hw07_vmxnet3.ovf: OVF template file for the VMware vmxnet3 driver

 

Use the VMXNET3 interface (FortiGate-VMxx.hw07_vmxnet3.ovf template) if the virtual appliance will distribute workload to multiple processor cores.

 

Deploying the FortiGate VM appliance

Prior to deploying the FortiGate VM appliance, the VM platform must be installed and configured so that it is ready to create virtual machines. The installation instructions for FortiGate VM assume that:

  • You are familiar with the management software and terminology of your VM platform.
  • An Internet connection is available for FortiGate VM to contact FortiGuard to validate its license or, for closed environments, a FortiManager can be contacted to validate the FortiGate VM license. See “Validate the FortiGate VM license with FortiManager”.

For assistance in deploying FortiGate VM, refer to the deployment chapter in this guide that corresponds to your VM environment. You might also need to refer to the documentation provided with your VM server. The deployment chapters are presented as examples because, for any particular VM server, there are multiple ways to create a virtual machine: command line tools, APIs, and even alternative graphical user interface tools.

Before you start your FortiGate VM appliance for the first time, you might need to adjust virtual disk sizes and networking settings. The first time you start FortiGate VM, you will have access only through the console window of your VM server environment. After you configure one FortiGate network interface with an IP address and administrative access, you can access the FortiGate VM web-based manager.

After deployment and license validation, you can upgrade your FortiGate VM appliance’s firmware by downloading either FGT_VM32-v500-buildnnnn-FORTINET.out (32-bit) or FGT_VM64-v500-buildnnnn- FORTINET.out (64-bit) firmware. Firmware upgrading on a VM is very similar to upgrading firmware on a hardware FortiGate unit.

Troubleshooting Virtual Domains


When you are configuring VDOMs, you may run into issues with your VDOM configuration, your network configuration, or your device setup. This section addresses common problems and specific concerns that an administrator of a VDOM network may have.

 

This section includes:

  • VDOM admin having problems gaining access
  • FortiGate unit running very slowly
  • General VDOM tips and troubleshooting

 

VDOM admin having problems gaining access

With VDOMs configured, administrators have an extra layer of permissions and may have problems accessing their information.

 

Confirm the admin’s VDOM

Each administrator account, other than the super_admin account, is tied to one specific VDOM. That administrator is not able to access any other VDOM. They may simply be trying to access the wrong VDOM.

 

Confirm the VDOM’s interfaces

An administrator can only access their VDOM through interfaces that are assigned to that VDOM. If interfaces on that VDOM are disabled or unavailable there will be no method of accessing that VDOM by its local administrator. The super_admin will be required to either bring up the interfaces, fix the interfaces, or move another interface to that VDOM to restore access.

 

Confirm the VDOMs admin access

As with all FortiGate units, administration access on the VDOM's interfaces must be enabled for that VDOM's administrators to gain access. For example, if SSH is not enabled on an interface, administrators cannot use SSH to access the VDOM through that interface.

To enable admin access, the super_admin will go to the Global > Network > Interfaces page, and for the interface in question enable the admin access.

 

FortiGate unit running very slowly

You may experience a number of problems resulting from your FortiGate unit being overloaded. These problems may appear as:

  • CPU and memory threshold limits exceeded on a continual basis
  • AV failopen happening on a regular basis
  • dropped traffic or sessions due to lack of resources

These problems are caused by a lack of system resources. There are a number of possible reasons for this.

 

Too many VDOMs

If you have configured many VDOMs on your system, past the default ten VDOMs, this could easily be your problem.

Each VDOM you create on your FortiGate unit requires system resources to function – CPU cycles, memory, and disk space. When there are too many VDOMs configured there are not enough resources for operation. This may be a lack of memory in the session table, or no CPU cycles for processing incoming IPS traffic, or even a full disk drive.

Go to Global > System > VDOM and see the number of configured VDOMs on your system. If you are running 500 or more VDOMs, you must have a FortiGate 5000 chassis. Otherwise you need to reduce the number of VDOMs on your system to fix the problem. Even if you have the proper hardware, you may encounter noticeably slow throughput if you are using advanced features such as security profiles or deep content inspection with many configured VDOMs.

 

One or more VDOMs are consuming all the resources

If you have sufficient hardware to support the number of VDOMs you are running, check the global resources on your FortiGate unit. At a glance it will tell you if you are running out of a particular resource such as sessions, or users. If this is the case, you can then check your VDOMs to see if one particular VDOM is using more than its share of resources. If that is the case you can change the resource settings to allow that VDOM (or those VDOMs) fewer resources and in turn allow the other VDOMs access to those resources.

 

Too many Security Features in use

It is likely that reducing the Security Features in use, regardless of the number of VDOMs, will greatly improve overall system performance and should be considered as an option.

Finally it is possible that your FortiGate unit configuration is incorrect in some other area, which is using up all your resources. For example, forgetting that you are running a network sniffer on an interface will create significant amounts of traffic that may prevent normal operation.

 

General VDOM tips and troubleshooting

Besides ping and traceroute, there are additional tools for troubleshooting your VDOM configurations. These include packet sniffing and debugging the packet flow.

 

Perform a sniffer trace

When troubleshooting networks, it helps to look inside the headers of packets to determine if they are traveling along the route you expect. Packet sniffing is also called a network tap, packet capture, or logic analyzing.

If your FortiGate unit has NP interfaces that are offloading traffic, this will change the sniffer trace. Before performing a trace on any NP interfaces, you should disable offloading on those interfaces.

 

What sniffing packets can tell you

If you are running a constant traffic application such as ping, packet sniffing can tell you if the traffic is reaching the destination, what the port of entry is on the FortiGate unit, if the ARP resolution is correct, and if the traffic is being sent back to the source as expected.

Sniffing packets can also tell you if the FortiGate unit is silently dropping packets for reasons such as RPF (Reverse Path Forwarding), also called anti-spoofing, which prevents an IP packet from being forwarded if its source IP does not either belong to a locally attached subnet (local interface) or match a route back toward the source (static route, RIP, OSPF, BGP). Note that RPF can be disabled by enabling asymmetric routing in the CLI (config system settings, set asymroute enable); however, this disables stateful inspection on the FortiGate unit and causes many features to be turned off.
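The RPF logic described above can be sketched as follows. The route table here is a deliberately simplified stand-in (prefix mapped to interface name) that is assumed purely for illustration; it is not how FortiOS stores routes:

```python
import ipaddress

def rpf_check(src_ip: str, in_interface: str, routes: dict) -> bool:
    """Return True if src_ip is reachable back through the arrival interface.

    routes maps prefix strings (e.g. "10.0.0.0/24") to interface names --
    a simplified stand-in for the real routing table.
    """
    src = ipaddress.ip_address(src_ip)
    # Collect all routes covering the source address, keyed by prefix length.
    matches = [
        (net.prefixlen, iface)
        for prefix, iface in routes.items()
        for net in [ipaddress.ip_network(prefix)]
        if src in net
    ]
    if not matches:
        return False  # no route back to the source at all: drop
    # The best (most specific) route must point out the arrival interface.
    return max(matches)[1] == in_interface
```

With a default route on port3 and a more specific route to the source subnet on port2, a packet from that subnet arriving on port1 fails the check and is silently dropped, which is exactly the symptom a sniffer trace can reveal.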

If you configure virtual IP addresses on your FortiGate unit, it will use those addresses in preference to the physical IP addresses. You will notice this when you are sniffing packets because all the traffic will be using the virtual IP addresses. This is due to the ARP update that is sent out when the VIP address is configured.

 

How to sniff packets

When you are using VDOMs, you must be in a VDOM to access the diag sniffer command; it is not available at the global level. This limits the capture to packets on your VDOM and protects the privacy of other VDOM clients.

The general form of the internal FortiOS packet sniffer command is:

diag sniffer packet <interface_name> <'filter'> <verbose> <count>

To stop the sniffer, type CTRL+C.

<interface_name>   The name of the interface to sniff, such as "port1" or "internal". This can also be "any" to sniff all interfaces.

<'filter'>         What to look for in the information the sniffer reads. none indicates no filtering, and all packets will be displayed as the other arguments indicate. The filter must be inside single quotes (').

<verbose>          The level of verbosity as one of:
                   1 – print header of packets
                   2 – print header and data from IP of packets
                   3 – print header and data from Ethernet of packets

<count>            The number of packets the sniffer reads before stopping. If you do not put a number here, the sniffer runs until you stop it with Ctrl+C.

For a simple sniffing example, enter the CLI command diag sniffer packet port1 none 1 3. This will display the next 3 packets on the port1 interface using no filtering, and using verbose level 1. At this verbosity level you can see the source IP and port, the destination IP and port, action (such as ack), and sequence numbers.

In the output below, port 443 indicates these are HTTPS packets, and 172.20.120.17 is both sending and receiving traffic.

 

Head_Office_620b # diag sniffer packet port1 none 1 3
interfaces=[port1]
filters=[none]
0.545306 172.20.120.17.52989 -> 172.20.120.141.443: psh 3177924955 ack 1854307757
0.545963 172.20.120.141.443 -> 172.20.120.17.52989: psh 1854307757 ack 3177925808
0.562409 172.20.120.17.52988 -> 172.20.120.141.443: psh 4225311614 ack 3314279933
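Because verbose-level-1 lines follow a fixed layout (timestamp, source IP and port, destination IP and port, then flags and sequence numbers), they are easy to post-process. The following is an illustrative sketch, not a FortiOS tool:

```python
import re

# One verbose-level-1 sniffer line: time, src.port -> dst.port: flags/seq.
SNIFF_RE = re.compile(
    r"(?P<time>[\d.]+)\s+"
    r"(?P<src>[\d.]+)\.(?P<sport>\d+)\s+->\s+"
    r"(?P<dst>[\d.]+)\.(?P<dport>\d+):\s+"
    r"(?P<rest>.*)"
)

def parse_sniffer_line(line: str) -> dict:
    """Split one diag sniffer verbose-1 output line into its fields."""
    m = SNIFF_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized sniffer line: {line!r}")
    return m.groupdict()
```

Applied to the first output line above, this yields source 172.20.120.17 port 52989 and destination port 443, confirming at a glance that the traffic is HTTPS.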

For a more advanced example of packet sniffing, the following commands will report packets on any interface traveling between a computer with the host name of PC1 and a computer with the host name of PC2. With verbosity 4 and above, the sniffer trace displays the interface names where traffic enters or leaves the FortiGate unit. Remember to stop the sniffer by pressing Ctrl+C. Note that PC1 and PC2 may be VDOMs.

FGT# diagnose sniffer packet any "host <PC1> or host <PC2>" 4

or

FGT# diagnose sniffer packet any "(host <PC1> or host <PC2>) and icmp" 4

The following sniffer CLI command includes the ARP protocol in the filter which may be useful to troubleshoot a failure in the ARP resolution (for instance PC2 may be down and not responding to the FortiGate ARP requests).

FGT# diagnose sniffer packet any "host <PC1> or host <PC2> or arp" 4

 

Debugging the packet flow

Traffic should come in and leave the VDOM. If you have determined that network traffic is not entering and leaving the VDOM as expected, debug the packet flow.

Debugging can only be performed using CLI commands. Debugging the packet flow requires a number of debug commands to be entered as each one configures part of the debug action, with the final command starting the debug.

If your FortiGate unit has NP interfaces that are offloading traffic, this will change the packet flow. Before performing the debug on any NP interfaces, you should disable offloading on those interfaces.

The following configuration assumes that PC1 is connected to the internal interface of the FortiGate unit and has an IP address of 10.11.101.200. PC1 is the host name of the computer.

 

To debug the packet flow in the CLI, enter the following commands:

FGT# diag debug enable

FGT# diag debug flow filter add <PC1>

FGT# diag debug flow show console enable

FGT# diag debug flow trace start 100

 

The start 100 argument in the above list of commands will limit the output to 100 packets from the flow. This is useful for looking at the flow without flooding your log or your display with too much information.

 

To stop the debug flow trace, enter the command:

FGT# diag debug flow trace stop

 

The following is an example of debug flow output for traffic that has no matching Firewall Policy, and is in turn blocked by the FortiGate unit. The denied message indicates the traffic was blocked. Note that even with VDOMs not enabled, vd-root is still shown.

id=20085 trace_id=319 func=resolve_ip_tuple_fast line=2825 msg="vd-root received a packet(proto=6, 192.168.129.136:2854->192.168.96.153:1863) from port3."

id=20085 trace_id=319 func=resolve_ip_tuple line=2924 msg="allocate a new session-013004ac"

id=20085 trace_id=319 func=vf_ip4_route_input line=1597 msg="find a route: gw-192.168.150.129 via port1"

id=20085 trace_id=319 func=fw_forward_handler line=248 msg="Denied by forward policy check"

HA virtual clusters and VDOM links

FortiGate HA is implemented by configuring two or more FortiGate units to operate as an HA cluster. To the network, the HA cluster appears to function as a single FortiGate unit, processing network traffic and providing normal security services such as firewall, VPN, IPS, virus scanning, web filtering, and spam filtering.

Virtual clustering extends HA features to provide failover protection and load balancing for a FortiGate unit operating with virtual domains. A virtual cluster consists of a cluster of two FortiGate units operating with virtual domains. Traffic on different virtual domains can be load balanced between the cluster units.

With virtual clusters (vclusters) configured, inter-VDOM links must be entirely within one vcluster. You cannot create links between vclusters, and you cannot move a linked VDOM into another virtual cluster. If your FortiGate units are operating in HA mode with multiple vclusters when you create the vdom-link, the CLI command config system vdom-link includes an option to set which vcluster the link belongs to.
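As a sketch of that option, assuming an existing link named vlink1 that should belong to virtual cluster 2 (the link name is an example):

config global
  config system vdom-link
    edit vlink1
      set vcluster vcluster2
  end
end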

 

What is virtual clustering?

Virtual clustering is an extension of the FGCP for FortiGate units operating with multiple VDOMs enabled. Virtual clustering operates in active-passive mode to provide failover protection between two instances of a VDOM operating on two different cluster units. You can also operate virtual clustering in active-active mode to use HA load balancing to load balance sessions between cluster units. Alternatively, by distributing VDOM processing between the two cluster units, you can configure virtual clustering to provide load balancing by distributing sessions for different VDOMs to each cluster unit.

 

Virtual clustering and failover protection

Virtual clustering operates on a cluster of two (and only two) FortiGate units with VDOMs enabled. Each VDOM creates a cluster between instances of the VDOMs on the two FortiGate units in the virtual cluster. All traffic to and from the VDOM stays within the VDOM and is processed by the VDOM. One cluster unit is the primary unit for each VDOM and one cluster unit is the subordinate unit for each VDOM. The primary unit processes all traffic for the VDOM. The subordinate unit does not process traffic for the VDOM. If a cluster unit fails, all traffic fails over to the cluster unit that is still operating.

 

Virtual clustering and heartbeat interfaces

The HA heartbeat provides the same HA services in a virtual clustering configuration as in a standard HA configuration. One set of HA heartbeat interfaces provides HA heartbeat services for all of the VDOMs in the cluster. You do not have to add a heartbeat interface for each VDOM.

 

Virtual clustering and HA override

For a virtual cluster configuration, override is enabled by default for both virtual clusters when you:

  • Enable VDOM partitioning from the web-based manager by moving virtual domains to virtual cluster 2
  • Enter set vcluster2 enable from the CLI config system ha command to enable virtual cluster 2.

Usually you would enable virtual cluster 2 and expect one cluster unit to be the primary unit for virtual cluster 1 and the other cluster unit to be the primary unit for virtual cluster 2. For this distribution to occur override must be enabled for both virtual clusters. Otherwise you will need to restart the cluster to force it to renegotiate.
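A minimal CLI sketch of enabling override for both virtual clusters might look like the following (this assumes a two-unit cluster already operating in active-passive mode):

config global
  config system ha
    set override enable
    set vcluster2 enable
    config secondary-vcluster
      set override enable
    end
  end
end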

 

Virtual clustering and load balancing or VDOM partitioning

There are two ways to configure load balancing for virtual clustering. The first is to set the HA mode to active-active. The second is to configure VDOM partitioning. For virtual clustering, setting the HA mode to active-active has the same result as active-active HA for a cluster without virtual domains. The primary unit receives all sessions and load balances them among the cluster units according to the load balancing schedule. All cluster units process traffic for all virtual domains.

Note: If override is enabled the cluster may renegotiate too often. You can choose to disable override at any time. If you decide to disable override, for best results, you should disable it for both cluster units.

In a VDOM partitioning virtual clustering configuration, the HA mode is set to active-passive. Even though virtual clustering operates in active-passive mode you can configure a form of load balancing by using VDOM partitioning to distribute traffic between both cluster units. To configure VDOM partitioning you set one cluster unit as the primary unit for some virtual domains and you set the other cluster unit as the primary unit for other virtual domains. All traffic for a virtual domain is processed by the primary unit for that virtual domain. You can control the distribution of traffic between the cluster units by adjusting which cluster unit is the primary unit for each virtual domain.

For example, you could have 4 VDOMs, two of which have a high traffic volume and two of which have a low traffic volume. You can configure each cluster unit to be the primary unit for one of the high volume VDOMs and one of the low volume VDOMs. As a result each cluster unit will be processing traffic for a high volume VDOM and a low volume VDOM, resulting in an even distribution of traffic between the cluster units. You can adjust the distribution at any time. For example, if a low volume VDOM becomes a high volume VDOM you can move it from one cluster unit to another until the best balance is achieved. From the web-based manager you configure VDOM partitioning by setting the HA mode to active-passive and distributing virtual domains between Virtual Cluster 1 and Virtual Cluster 2. You can also configure different device priorities, port monitoring, and remote link failover, for Virtual Cluster 1 and Virtual Cluster 2.

From the CLI you configure VDOM partitioning by setting the HA mode to a-p. Then you configure device priority, port monitoring, and remote link failover and specify the VDOMs to include in virtual cluster 1. You do the same for virtual cluster 2 by entering the config secondary-vcluster command.
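The steps above can be sketched in the CLI as follows, assuming two VDOMs named customer1 and customer2, with customer2 moved to virtual cluster 2. The group name, heartbeat interfaces, and priorities are examples only, not values from this document:

config global
  config system ha
    set mode a-p
    set group-name "vcluster-example"
    set hbdev port3 50 port4 50
    set priority 200
    set override enable
    set vdom customer1
    set vcluster2 enable
    config secondary-vcluster
      set priority 100
      set override enable
      set vdom customer2
    end
  end
end

With this sketch, the unit with the higher priority in each vcluster becomes the primary unit for that vcluster's VDOMs, giving each cluster unit an active share of the traffic.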

Failover protection does not change. If one cluster unit fails, all sessions are processed by the remaining cluster unit. No traffic interruption occurs for the virtual domains for which the still functioning cluster unit was the primary unit. Traffic may be interrupted temporarily for virtual domains for which the failed unit was the primary unit while processing fails over to the still functioning cluster unit. If the failed cluster unit restarts and rejoins the virtual cluster, VDOM partitioning load balancing is restored.

Dynamic routing over inter-VDOM links


BGP is supported over inter-VDOM links. Unless otherwise indicated, routing works as expected over inter-VDOM links.

If an inter-VDOM link has no IP addresses assigned to it, it may be difficult to use that interface in dynamic routing configurations. For example, BGP requires an IP address to define any BGP router added to the network.

In OSPF, you can configure a router using a router ID rather than its IP address. In fact, having no IP address avoids possible confusion between which value is the router ID and which is the IP address. However, for that router to become adjacent with another OSPF router, it must share the same subnet, which is technically impossible without an IP address. For this reason, while you can configure an OSPF router using an IP-less inter-VDOM link, it will likely be of limited value to you.
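For illustration, an OSPF instance inside a VDOM can be given an explicit router ID even when the inter-VDOM link itself is unnumbered. The VDOM name and router ID below are hypothetical:

config vdom
  edit customer1
    config router ospf
      set router-id 1.1.1.1
    end
  next
end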

In RIP the metric used is hop count. If the inter-VDOM link can reach other nodes on the network, such as through a default route, it may be possible to configure a RIP router on an inter-VDOM link. Once again, however, it may be of limited value because of these limitations.

As stated earlier, BGP requires an IP address to define a router — an IP-less inter-VDOM link will not work with BGP.

In multicast routing, you can configure an interface without an IP address. However, that interface will be unable to become an RP candidate, which limits the roles available to it.

Configuring VDOM links


Once VDOMs are configured on your FortiGate unit, configuring inter-VDOM routing and VDOM-links is very much like creating a VLAN interface. VDOM-links are managed through the web-based manager or CLI. In the web-based manager, VDOM link interfaces are managed in the network interface list.

This section includes the following topics:

  • Creating VDOM links
  • IP addresses and inter-VDOM links
  • Deleting VDOM links
  • NAT to Transparent VDOM links

 

Creating VDOM links

VDOM links connect VDOMs together to allow traffic to pass between VDOMs as per firewall policies. Inter-VDOM links are virtual interfaces that are very similar to VPN tunnel interfaces, except that inter-VDOM links do not require IP addresses.

To create a VDOM link, you first create the point-to-point interface, and then bind the two interface objects associated with it to the virtual domains.

In creating the point-to-point interface, you also create two additional interface objects by default. They are called vlink10 and vlink11 – the interface name you chose with a 0 or a 1 appended to designate the two ends of the link.

Once the interface objects are bound, they are treated like normal FortiGate interfaces and need to be configured just like regular interfaces.

The assumptions for this example are as follows:

  • Your FortiGate unit has VDOMs enabled and you have 2 VDOMs called customer1 and customer2 already configured. For more information on configuring VDOMs see Configuring Virtual Domains.
  • You are using a super_admin account.

 

To configure an inter-VDOM link – web-based manager:

1. Go to Global > Network > Interfaces.

2. Select Create New > VDOM link, enter the following information, and select OK.

 

Name                                           vlink1

(The name can be up to 11 characters long. Valid characters are letters, numbers, “-”, and “_”. No spaces are allowed.)

Interface #0

Virtual Domain                  customer1
IP/Netmask                        10.11.12.13/255.255.255.0
Administrative Access    HTTPS, SSH
Interface #1
Virtual Domain                  customer2
IP/Netmask                        172.120.100.13/255.255.255.0
Administrative Access    HTTPS, SSH

 

To configure an inter-VDOM link – CLI:

config global
  config system vdom-link
    edit vlink1
  end
  config system interface
    edit vlink10
      set vdom customer1
    next
    edit vlink11
      set vdom customer2
  end
end

 

Once you have created and bound the interface ends to VDOMs, configure the appropriate firewall policies and other settings that you require. To confirm the inter-VDOM link was created, find the VDOM link pair and use the expand arrow to view the two VDOM link interfaces. You can select edit to change any information.
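For example, a firewall policy allowing traffic out of customer1 through its end of the link might look like the following sketch. The source interface and the use of the all address and ALL service are assumptions for illustration, not part of this example's requirements:

config vdom
  edit customer1
    config firewall policy
      edit 0
        set srcintf internal
        set dstintf vlink10
        set srcaddr all
        set dstaddr all
        set action accept
        set schedule always
        set service ALL
      next
    end
  next
end

A matching policy would normally also be needed in customer2, referencing vlink11, for traffic in the other direction.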

 

IP addresses and inter-VDOM links

Besides being virtual interfaces, there is one main difference between inter-VDOM links and regular interfaces: by default, inter-VDOM links do not require IP addresses. IP addresses are not required because an inter-VDOM link is an internal connection that can be referred to by interface name in firewall policies and other system references. This introduces three possible situations with inter-VDOM links:

  • unnumbered – an inter-VDOM link with no IP addresses for either end of the tunnel
  • half numbered – an inter-VDOM link with one IP address for one end and none for the other end
  • full numbered – an inter-VDOM link with two IP addresses, one for each end.

Not using an IP address in the configuration can speed up and simplify configuration. Also, you will not use up the IP addresses in your subnets if you have many inter-VDOM links.

Half or full numbered interfaces are required if you are doing NAT, either SNAT or DNAT, as you need an IP address to translate between.

You can use unnumbered interfaces in static routing by naming the interface and using 0.0.0.0 for the gateway. Running traceroute will not show the interface in the list of hops. However, you can see the interface when sniffing packets, which is useful for troubleshooting.
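As a sketch, a static route through an unnumbered link end names the interface as the device and leaves the gateway at its 0.0.0.0 default. The VDOM name and destination subnet here are hypothetical:

config vdom
  edit customer1
    config router static
      edit 0
        set dst 192.168.2.0 255.255.255.0
        set device vlink10
      next
    end
  next
end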

 

Deleting VDOM links

When you delete the VDOM link, the two link objects associated with it will also be deleted. You cannot delete the objects by themselves. The example uses a VDOM routing connection called “vlink1”. Removing vlink1 will also remove its two link objects vlink10 and vlink11.

Before deleting the VDOM link, ensure all firewall policies, routes, and other configurations that include the VDOM link are deleted, removed, or changed to no longer include the VDOM link.

 

To remove a VDOM link – web-based manager:

1. Go to Global > Network > Interfaces.

2. Select Delete for the VDOM link vlink1.

 

To remove a VDOM link – CLI:

config global
  config system vdom-link
    delete vlink1
  end
end

 

NAT to Transparent VDOM links

Inter-VDOM links can be created between VDOMs in NAT mode and VDOMs in Transparent mode, but this must be done through the CLI, as the VDOM link type must be changed from the default PPP to Ethernet for the two VDOMs to communicate. The example below assumes one VDOM is in NAT mode and the other is in Transparent mode.

An IP address must be assigned to the NAT VDOM’s interface, but no IP address should be assigned to the Transparent VDOM’s interface.

 

To configure a NAT to Transparent VDOM link – CLI:

config global
  config system vdom-link
    edit vlink1
      set type ethernet
  end
  config system interface
    edit vlink10
      set vdom (interface 1 name)
      set ip (interface 1 ip)
    next
    edit vlink11
      set vdom (interface 2 name)
  end
end

 

Ethernet-type links are not recommended for standard NAT-to-NAT inter-VDOM links: the default PPP-type link does not require the VDOM link ends to have addresses, while an Ethernet-type link does. VDOM link addresses are explained in IP addresses and inter-VDOM links.