Assembly Data System NFV Lab

The ADS NFV Lab testbed is located in Rome, at the ADS Software Factory, which now provides a platform for customer demonstrations and PoCs. The platform is made available to SoftFIRE with the aim of adding NFV capabilities to the project. The ADS component testbed is also well suited to the development and testing of own-brand VNFs and to benchmarking cloud virtual network performance.

The ADS NFV infrastructure is based on RedHat OpenStack 10 (Newton), released in December 2016. This release is recommended for production infrastructures of telecom operators, who need stable and supported virtualisation environments.

Currently, the ADS component testbed consists of five nodes with the following roles:


Table 1: Network equipment provided by the ADS testbed.

The storage backend of the infrastructure is based on Ceph software-defined storage, which provides highly available and scalable object and block storage on general-purpose hardware. The compute nodes rely on the Ceph node (see Table 1); VM volumes therefore do not reside on the compute nodes themselves but are fully decoupled from the compute resources.
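
As a minimal sketch of this decoupling, the following Python fragment uses the openstacksdk client to create a Cinder volume (which, with Ceph as the backend, is stored as an RBD image in the Ceph cluster) and attach it to a running instance. The cloud entry name "ads-testbed" and the server name "demo-vnf" are hypothetical placeholders, not part of the actual testbed configuration.

```python
import openstack

# Connect with credentials taken from clouds.yaml; the cloud entry
# name "ads-testbed" is a hypothetical placeholder.
conn = openstack.connect(cloud="ads-testbed")

# Create a 10 GiB Cinder volume. With Ceph as the Cinder backend,
# the volume lives as an RBD image in the Ceph cluster rather than
# on any compute node's local disk.
volume = conn.create_volume(size=10, name="demo-volume")

# Attach the volume to a running instance: only the attachment
# record involves the compute node; the data itself stays in Ceph,
# so the instance can be moved without moving its volume.
server = conn.get_server("demo-vnf")
conn.attach_volume(server, volume)
```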

Another purpose of the ADS component testbed is to provide a platform for SDN experiments; the OpenStack installation is therefore fully integrated with OpenDaylight as its networking backend, providing more advanced SDN functionality than the OpenStack Neutron module offers by default.
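
Since OpenDaylight plugs in underneath Neutron, experiments keep using the standard Neutron API. The sketch below, again assuming a hypothetical "ads-testbed" cloud entry, creates a tenant network and subnet whose data-plane realization is then handled by OpenDaylight rather than by the default Neutron agents; the names and CIDR are illustrative only.

```python
import openstack

conn = openstack.connect(cloud="ads-testbed")  # hypothetical cloud entry

# Plain Neutron API calls; with OpenDaylight as the networking
# backend, ODL (rather than the default Neutron agents) programs
# the switches that realize this network in the data plane.
network = conn.network.create_network(name="sdn-experiment-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="sdn-experiment-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",  # illustrative documentation range
)
print(f"Created {network.name} with subnet {subnet.cidr}")
```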

The OpenStack installation follows the RedHat reference design, as illustrated in Figure 1. Control and service networks are isolated on separate VLANs, while interface bonding ensures high availability.


Figure 1: The ADS testbed network architecture.

RedHat OpenStack is based on the Triple-O project (OpenStack on OpenStack), which installs two OpenStack instances, named Undercloud and Overcloud; the architecture is shown in Figure 2. The Undercloud is a single-system OpenStack installation (the Director node) that provisions and manages the Overcloud nodes through Heat templates, which define the Overcloud platform, and through the Ironic module, which manages bare-metal instances. The Director node is not part of the workload cloud; it is used only to provision and manage the nodes of the infrastructure.

Figure 2: Triple-O Architecture.
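
To illustrate this division of labour, the sketch below queries the Undercloud with openstacksdk, listing the bare-metal nodes registered in Ironic and the Heat stacks that describe the deployed Overcloud. The cloud entry name "undercloud" is an assumption; by TripleO convention the deployed stack is named "overcloud".

```python
import openstack

# Connect to the Undercloud (Director) node; the clouds.yaml entry
# name "undercloud" is an assumed placeholder.
conn = openstack.connect(cloud="undercloud")

# Ironic manages the bare-metal machines that become Overcloud nodes.
for node in conn.baremetal.nodes():
    print(f"{node.name}: provision_state={node.provision_state}")

# Heat holds the stack, built from the Overcloud templates, that
# describes the deployed Overcloud (conventionally named "overcloud").
for stack in conn.orchestration.stacks():
    print(f"{stack.name}: {stack.status}")
```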